[ { "msg_contents": "Hi Robert,\n\nI was rebasing my patch to convert RELSEG_SIZE into an initdb-time\nsetting, and thus runtime variable, and I noticed this stack variable\nin src/backend/backup/basebackup_incremental.c:\n\nGetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,\n BlockNumber *relative_block_numbers,\n unsigned *truncation_block_length)\n{\n BlockNumber absolute_block_numbers[RELSEG_SIZE];\n\nI'll have to move that sucker onto the heap (we banned C99 variable\nlength arrays and we don't use nonstandard alloca()), but I think the\ncoding in master is already a bit dodgy: that's 131072 *\nsizeof(BlockNumber) = 512kB with default configure options, but\n--with-segsize X could create a stack variable up to 16GB,\ncorresponding to segment size 32TB (meaning no segmentation at all).\nThat wouldn't work. Shouldn't we move it to the stack? See attached\ndraft patch.\n\nEven on the heap, 16GB is too much to assume we can allocate during a\nbase backup. I don't claim that's a real-world problem for\nincremental backup right now in master, because I don't have any\nevidence that anyone ever really uses --with-segsize (do they?), but\nif we make it an initdb option it will be more popular and this will\nbecome a problem. Hmm.", "msg_date": "Wed, 6 Mar 2024 18:43:36 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Potential stack overflow in incremental base backup" }, { "msg_contents": "On 2024-Mar-06, Thomas Munro wrote:\n\n> Even on the heap, 16GB is too much to assume we can allocate during a\n> base backup. I don't claim that's a real-world problem for\n> incremental backup right now in master, because I don't have any\n> evidence that anyone ever really uses --with-segsize (do they?), but\n> if we make it an initdb option it will be more popular and this will\n> become a problem. Hmm.\n\nWould it work to use a radix tree from the patchset at\nhttps://postgr.es/m/CANWCAZb43ZNRK03bzftnVRAfHzNGzH26sjc0Ep-sj8+w20VzSg@mail.gmail.com\n?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"E pur si muove\" (Galileo Galilei)\n\n\n", "msg_date": "Wed, 6 Mar 2024 12:29:18 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Wed, Mar 6, 2024 at 12:44 AM Thomas Munro <[email protected]> wrote:\n> I'll have to move that sucker onto the heap (we banned C99 variable\n> length arrays and we don't use nonstandard alloca()), but I think the\n> coding in master is already a bit dodgy: that's 131072 *\n> sizeof(BlockNumber) = 512kB with default configure options, but\n> --with-segsize X could create a stack variable up to 16GB,\n> corresponding to segment size 32TB (meaning no segmentation at all).\n> That wouldn't work. Shouldn't we move it to the stack? See attached\n> draft patch.\n>\n> Even on the heap, 16GB is too much to assume we can allocate during a\n> base backup. I don't claim that's a real-world problem for\n> incremental backup right now in master, because I don't have any\n> evidence that anyone ever really uses --with-segsize (do they?), but\n> if we make it an initdb option it will be more popular and this will\n> become a problem. Hmm.\n\nI don't see a problem with moving it from the stack to the heap. I\ndon't believe I had any particular reason for wanting it on the stack\nspecifically.\n\nI'm not sure what to do about really large segment sizes. 
As long as\nthe allocation here is < ~100MB, it's probably fine, both because (1)\n100MB isn't an enormously large allocation these days and (2) if you\nhave a good reason to increase the segment size by a factor of 256 or\nmore, you're probably running on a big machine, and then you should\ndefinitely have 100MB to burn.\n\nHowever, a 32TB segment size is another thing altogether. We could\navoid transposing relative block numbers to absolute block numbers\nwhenever start_blkno is 0, but that doesn't do a whole lot when the\nsegment size is 8TB or 16TB rather than 32TB; plus, in such cases, the\nrelative block number array is also going to be gigantic. Maybe what\nwe need to do is figure out how to dynamically size the arrays in such\ncases, so that you only make them as big as required for the file you\nactually have, rather than as big as the file could theoretically be.\nBut I think that's really only necessary if we're actually going to\nget rid of the idea of segmented relations altogether, which I don't\nthink is happening at least for v17, and maybe not ever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Mar 2024 12:53:01 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Wed, Mar 6, 2024 at 6:29 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Mar-06, Thomas Munro wrote:\n> > Even on the heap, 16GB is too much to assume we can allocate during a\n> > base backup. I don't claim that's a real-world problem for\n> > incremental backup right now in master, because I don't have any\n> > evidence that anyone ever really uses --with-segsize (do they?), but\n> > if we make it an initdb option it will be more popular and this will\n> > become a problem. Hmm.\n>\n> Would it work to use a radix tree from the patchset at\n> https://postgr.es/m/CANWCAZb43ZNRK03bzftnVRAfHzNGzH26sjc0Ep-sj8+w20VzSg@mail.gmail.com\n> ?\n\nProbably not that much, because we actually send the array to the\nclient very soon after we construct it:\n\n push_to_sink(sink, &checksum_ctx, &header_bytes_done,\n incremental_blocks,\n sizeof(BlockNumber) * num_incremental_blocks);\n\nThis is hard to do without materializing the array somewhere, so I\ndon't think an alternate representation is the way to go in this\ninstance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Mar 2024 12:54:07 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Fri, Mar 8, 2024 at 6:53 AM Robert Haas <[email protected]> wrote:\n> But I think that's really only necessary if we're actually going to\n> get rid of the idea of segmented relations altogether, which I don't\n> think is happening at least for v17, and maybe not ever.\n\nYeah, I consider the feedback on ext4's size limitations to have\ncompletely killed the idea of getting rid of segments for the\nforeseeable future, at least in standard md.c (though who knows what\npeople will do with pluggable smgr?). As for initdb --rel-segsize (CF\n#4305) for md.c, I abandoned plans to do that for 17 because I\ncouldn't see what to do about this issue. 
Incremental backup\neffectively relies on smaller segments, by using them as\nproblem-dividing granularity for checksumming and memory usage.\nThat'll take some research...\n\n\n", "msg_date": "Sat, 23 Mar 2024 15:27:16 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Fri, Mar 8, 2024 at 6:53 AM Robert Haas <[email protected]> wrote:\n> ... We could\n> avoid transposing relative block numbers to absolute block numbers\n> whenever start_blkno is 0, ...\n\nCould we just write the blocks directly into the output array, and\nthen transpose them directly in place if start_blkno > 0? See\nattached. I may be missing something, but the only downside I can\nthink of is that the output array is still clobbered even if we decide\nto return BACK_UP_FILE_FULLY because of the 90% rule, but that just\nrequires a warning in the comment at the top.", "msg_date": "Wed, 10 Apr 2024 22:21:04 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Wed, Apr 10, 2024 at 6:21 AM Thomas Munro <[email protected]> wrote:\n> Could we just write the blocks directly into the output array, and\n> then transpose them directly in place if start_blkno > 0? See\n> attached. I may be missing something, but the only downside I can\n> think of is that the output array is still clobbered even if we decide\n> to return BACK_UP_FILE_FULLY because of the 90% rule, but that just\n> requires a warning in the comment at the top.\n\nYeah. This approach makes the name \"relative_block_numbers\" a bit\nconfusing, but not running out of memory is nice, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 08:10:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Thu, Apr 11, 2024 at 12:11 AM Robert Haas <[email protected]> wrote:\n> On Wed, Apr 10, 2024 at 6:21 AM Thomas Munro <[email protected]> wrote:\n> > Could we just write the blocks directly into the output array, and\n> > then transpose them directly in place if start_blkno > 0? See\n> > attached. I may be missing something, but the only downside I can\n> > think of is that the output array is still clobbered even if we decide\n> > to return BACK_UP_FILE_FULLY because of the 90% rule, but that just\n> > requires a warning in the comment at the top.\n>\n> Yeah. This approach makes the name \"relative_block_numbers\" a bit\n> confusing, but not running out of memory is nice, too.\n\nPushed. That fixes the stack problem.\n\nOf course we still have this:\n\n /*\n * Since this array is relatively large, avoid putting it on the stack.\n * But we don't need it at all if this is not an incremental backup.\n */\n if (ib != NULL)\n relative_block_numbers = palloc(sizeof(BlockNumber) * RELSEG_SIZE);\n\nTo rescue my initdb --rel-segsize project[1] for v18, I will have a go\nat making that dynamic. It looks like we don't actually need to\nallocate that until we get to the GetFileBackupMethod() call, and at\nthat point we have the file size. If I understand correctly,\nstatbuf.st_size / BLCKSZ would be enough, so we could embiggen our\nblock number buffer there if required, right? 
That way, you would\nonly need shedloads of virtual memory if you used initdb\n--rel-segsize=shedloads and you actually have shedloads of data in a\ntable/segment. For example, with initdb --rel-segsize=1TB and a table\nthat contains 1TB+ of data, that'd allocate 512MB. It sounds\nborderline OK when put that way. It sounds not OK with\n--rel-segsize=infinite and 32TB of data -> palloc(16GB). I suppose\none weasel-out would be to say we only support segments up to (say)\n1TB, until eventually we figure out how to break this code's\ndependency on segments. I guess we'll have to do that one day to\nsupport incremental backups of other smgr implementations that don't\neven have segments (segments are a private detail of md.c after all,\nnot part of the smgr abstraction).\n\n[1] https://commitfest.postgresql.org/48/4305/\n\n\n", "msg_date": "Thu, 11 Apr 2024 13:54:49 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Wed, Apr 10, 2024 at 9:55 PM Thomas Munro <[email protected]> wrote:\n> Pushed. That fixes the stack problem.\n\nCool.\n\n> Of course we still have this:\n>\n> /*\n> * Since this array is relatively large, avoid putting it on the stack.\n> * But we don't need it at all if this is not an incremental backup.\n> */\n> if (ib != NULL)\n> relative_block_numbers = palloc(sizeof(BlockNumber) * RELSEG_SIZE);\n>\n> To rescue my initdb --rel-segsize project[1] for v18, I will have a go\n> at making that dynamic. It looks like we don't actually need to\n> allocate that until we get to the GetFileBackupMethod() call, and at\n> that point we have the file size. If I understand correctly,\n> statbuf.st_size / BLCKSZ would be enough, so we could embiggen our\n> block number buffer there if required, right?\n\nYes.\n\n> That way, you would\n> only need shedloads of virtual memory if you used initdb\n> --rel-segsize=shedloads and you actually have shedloads of data in a\n> table/segment. For example, with initdb --rel-segsize=1TB and a table\n> that contains 1TB+ of data, that'd allocate 512MB. It sounds\n> borderline OK when put that way. It sounds not OK with\n> --rel-segsize=infinite and 32TB of data -> palloc(16GB). I suppose\n> one weasel-out would be to say we only support segments up to (say)\n> 1TB, until eventually we figure out how to break this code's\n> dependency on segments. I guess we'll have to do that one day to\n> support incremental backups of other smgr implementations that don't\n> even have segments (segments are a private detail of md.c after all,\n> not part of the smgr abstraction).\n\nI think the thing to do would be to fetch and send the array of block\nnumbers in chunks. Instead of calling BlockRefTableEntryGetBlocks()\njust once, you'd call it in a loop with a buffer that you know might\nnot be big enough and then iterate until you've got everything,\nsending the partial block array to the client each time. Then you\nrepeat that loop a second time as you work your way through the giant\nfile so that you always know what the next block you need to include\nin the incremental file is. The only problem here is: exactly how do\nyou iterate? If BlockRefTableEntryGetBlocks() were guaranteed to\nreturn blocks in order, then you could just keep track of the highest\nblock number you received from the previous call to that function and\npass the starting block number for the next call as that value plus\none. But as it is, that doesn't work. Whoops. 
Bad API design on my\npart, but fixable.\n\nThe other problem here is on the pg_combinebackup side, where we have\na lot of the same issues. reconstruct.c wants to build a map of where\nit should get each block before it actually does any I/O. If the whole\nfile is potentially terabytes in size, I guess that would need to use\nsome other approach there, too. That's probably doable with a bunch of\nrejiggering, but not trivial.\n\nI have to admit that I'm still not at all enthusiastic about 32TB\nsegments. I think we're going to find that incremental backup is only\nthe tip of the iceberg. I bet we'll find that there are all kinds of\nweird annoyances that pop up with 10TB+ files. It could be outright\nlack of support, like, oh, this archive format doesn't support files\nlarger than X size. But it could also be subtler things like, hey,\nthis directory copy routine updates the progress bar after each file,\nso when some of the files are gigantic, you can no longer count on\nhaving an accurate idea of how much of the directory has been copied.\nI suspect that a lot of the code that turns out to have problems will\nnot be PostgreSQL code (although I expect us to find more problems in\nour code, too) so we'll end up telling people \"hey, it's not OUR fault\nyour stuff doesn't work, you just need to file a bug report with\n${MAINTAINER_WHO_MAY_OR_MAY_NOT_CARE_ABOUT_30TB_FILES}\". And we won't\ncatch any of the bugs in our own code or anyone else's in the\nregression tests, buildfarm, or cfbot, because none of that stuff is\ngoing to test with multi-terabyte files.\n\nI do understand that a 1GB segment size is not that big in 2024, and\nthat filesystems with a 2GB limit are thought to have died out a long\ntime ago, and I'm not against using larger segments. I do think,\nthough, that increasing the segment size by 32768x in one shot is\nlikely to be overdoing it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 12:10:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential stack overflow in incremental base backup" }, { "msg_contents": "On Tue, Apr 16, 2024 at 4:10 AM Robert Haas <[email protected]> wrote:\n> On Wed, Apr 10, 2024 at 9:55 PM Thomas Munro <[email protected]> wrote:\n> > To rescue my initdb --rel-segsize project[1] for v18, I will have a go\n> > at making that dynamic. It looks like we don't actually need to\n> > allocate that until we get to the GetFileBackupMethod() call, and at\n> > that point we have the file size. If I understand correctly,\n> > statbuf.st_size / BLCKSZ would be enough, so we could embiggen our\n> > block number buffer there if required, right?\n>\n> Yes.\n\nHere is a first attempt at that. Better factoring welcome. New\nobservations made along the way: the current coding can exceed\nMaxAllocSize and error out, or overflow 32 bit size_t and allocate\nnonsense. Do you think it'd be better to raise an error explaining\nthat, or silently fall back to full backup (tried that way in the\nattached), or that + log messages? Another option would be to use\nhuge allocations, so we only have to deal with that sort of question\nfor 32 bit systems (i.e. effectively hypothetical/non-relevant\nscenarios), but I don't love that idea.\n\n> ...\n> I do understand that a 1GB segment size is not that big in 2024, and\n> that filesystems with a 2GB limit are thought to have died out a long\n> time ago, and I'm not against using larger segments. 
I do think,\n> though, that increasing the segment size by 32768x in one shot is\n> likely to be overdoing it.\n\nMy intuition is that the primary interesting lines to cross are at 2GB\nand 4GB due to data type stuff. I defend against the most basic\nproblem in my proposal: I don't let you exceed your off_t type, but\nthat doesn't mean we don't have off_t/ssize_t/size_t/long snafus\nlurking in our code that could bite a 32 bit system with large files.\nIf you make it past those magic numbers and your tools are all happy,\nI think you should be home free until you hit file system limits,\nwhich are effectively unhittable on most systems except ext4's already\nbemoaned 16TB limit AFAIK. But all the same, I'm contemplating\nlimiting the range to 1TB in the first version, not because of general\nfear of unknown unknowns, but just because it means we don't need to\nuse \"huge\" allocations for this known place, maybe until we can\naddress that.", "msg_date": "Fri, 17 May 2024 13:56:44 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential stack overflow in incremental base backup" } ]
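[Editorial note between threads] To make the idea at the end of the first thread concrete, here is a minimal C sketch of sizing the incremental-backup block-number buffer from the segment file actually being sent (statbuf.st_size / BLCKSZ, as suggested above) instead of from the compile-time RELSEG_SIZE. This is not the committed PostgreSQL code: the helper name, its placement, and the elog() error handling are assumptions for illustration (the thread itself debates falling back to a full backup instead of erroring when the allocation would exceed MaxAllocSize).

/*
 * Editorial sketch only: allocate a block-number buffer sized for the
 * file we actually have, rather than for the largest possible segment.
 */
#include "postgres.h"

#include "storage/block.h"
#include "utils/memutils.h"

static BlockNumber *
alloc_block_number_buffer(off_t file_size)
{
    uint64      nblocks = (uint64) file_size / BLCKSZ;

    /* Nothing to allocate for an empty file. */
    if (nblocks == 0)
        return NULL;

    /*
     * Guard against exceeding palloc's MaxAllocSize on huge segments; the
     * thread discusses falling back to a full file backup here instead.
     */
    if (nblocks > MaxAllocSize / sizeof(BlockNumber))
        elog(ERROR, "segment too large for incremental backup block map");

    return (BlockNumber *) palloc(sizeof(BlockNumber) * nblocks);
}
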
[ { "msg_contents": "Hi\n\nafter today update, the build with --with-llvm produces broken code, and\nmake check fails with crash\n\n Upgrade hwdata-0.380-1.fc40.noarch\n@updates-testing\n Upgraded hwdata-0.379-1.fc40.noarch @@System\n Upgrade llvm-18.1.0~rc4-1.fc40.x86_64\n @updates-testing\n Upgraded llvm-17.0.6-6.fc40.x86_64 @@System\n Upgrade llvm-devel-18.1.0~rc4-1.fc40.x86_64\n @updates-testing\n Upgraded llvm-devel-17.0.6-6.fc40.x86_64 @@System\n Upgrade llvm-googletest-18.1.0~rc4-1.fc40.x86_64\n@updates-testing\n Upgraded llvm-googletest-17.0.6-6.fc40.x86_64 @@System\n Upgrade llvm-libs-18.1.0~rc4-1.fc40.i686\n@updates-testing\n Upgraded llvm-libs-17.0.6-6.fc40.i686 @@System\n Upgrade llvm-libs-18.1.0~rc4-1.fc40.x86_64\n@updates-testing\n Upgraded llvm-libs-17.0.6-6.fc40.x86_64 @@System\n Upgrade llvm-static-18.1.0~rc4-1.fc40.x86_64\n@updates-testing\n Upgraded llvm-static-17.0.6-6.fc40.x86_64 @@System\n Upgrade llvm-test-18.1.0~rc4-1.fc40.x86_64\n@updates-testing\n Instalovat llvm17-libs-17.0.6-7.fc40.i686\n@updates-testing\n Instalovat llvm17-libs-17.0.6-7.fc40.x86_64\n@updates-testing\n\nRegards\n\nPavel\n\nHiafter today update, the build with --with-llvm produces broken code, and make check fails with crash    Upgrade    hwdata-0.380-1.fc40.noarch                           @updates-testing    Upgraded   hwdata-0.379-1.fc40.noarch                           @@System    Upgrade    llvm-18.1.0~rc4-1.fc40.x86_64                        @updates-testing    Upgraded   llvm-17.0.6-6.fc40.x86_64                            @@System    Upgrade    llvm-devel-18.1.0~rc4-1.fc40.x86_64                  @updates-testing    Upgraded   llvm-devel-17.0.6-6.fc40.x86_64                      @@System    Upgrade    llvm-googletest-18.1.0~rc4-1.fc40.x86_64             @updates-testing    Upgraded   llvm-googletest-17.0.6-6.fc40.x86_64                 @@System    Upgrade    llvm-libs-18.1.0~rc4-1.fc40.i686                     @updates-testing    Upgraded   llvm-libs-17.0.6-6.fc40.i686                         @@System    Upgrade    llvm-libs-18.1.0~rc4-1.fc40.x86_64                   @updates-testing    Upgraded   llvm-libs-17.0.6-6.fc40.x86_64                       @@System    Upgrade    llvm-static-18.1.0~rc4-1.fc40.x86_64                 @updates-testing    Upgraded   llvm-static-17.0.6-6.fc40.x86_64                     @@System    Upgrade    llvm-test-18.1.0~rc4-1.fc40.x86_64                   @updates-testing    Instalovat llvm17-libs-17.0.6-7.fc40.i686                       @updates-testing    Instalovat llvm17-libs-17.0.6-7.fc40.x86_64                     @updates-testingRegardsPavel", "msg_date": "Wed, 6 Mar 2024 07:53:53 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "broken JIT support on Fedora 40" }, { "msg_contents": "On Wed, Mar 6, 2024 at 1:54 AM Pavel Stehule <[email protected]> wrote:\n> after today update, the build with --with-llvm produces broken code, and make check fails with crash\n>\n> Upgrade hwdata-0.380-1.fc40.noarch @updates-testing\n> Upgraded hwdata-0.379-1.fc40.noarch @@System\n> Upgrade llvm-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Upgraded llvm-17.0.6-6.fc40.x86_64 @@System\n> Upgrade llvm-devel-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Upgraded llvm-devel-17.0.6-6.fc40.x86_64 @@System\n> Upgrade llvm-googletest-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Upgraded llvm-googletest-17.0.6-6.fc40.x86_64 @@System\n> Upgrade llvm-libs-18.1.0~rc4-1.fc40.i686 @updates-testing\n> Upgraded llvm-libs-17.0.6-6.fc40.i686 
@@System\n> Upgrade llvm-libs-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Upgraded llvm-libs-17.0.6-6.fc40.x86_64 @@System\n> Upgrade llvm-static-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Upgraded llvm-static-17.0.6-6.fc40.x86_64 @@System\n> Upgrade llvm-test-18.1.0~rc4-1.fc40.x86_64 @updates-testing\n> Instalovat llvm17-libs-17.0.6-7.fc40.i686 @updates-testing\n> Instalovat llvm17-libs-17.0.6-7.fc40.x86_64 @updates-testing\n\nI don't know anything about LLVM, but somehow I'm guessing that people\nwho do will need more detail than this in order to be able to fix the\nproblem. I see that support for LLVM 18 was added in commit\nd282e88e50521a457fa1b36e55f43bac02a3167f on January 18th; perhaps you\nwere building from an older commit?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 14:20:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "Hi\n\nčt 14. 3. 2024 v 19:20 odesílatel Robert Haas <[email protected]>\nnapsal:\n\n> On Wed, Mar 6, 2024 at 1:54 AM Pavel Stehule <[email protected]>\n> wrote:\n> > after today update, the build with --with-llvm produces broken code, and\n> make check fails with crash\n> >\n> > Upgrade hwdata-0.380-1.fc40.noarch\n> @updates-testing\n> > Upgraded hwdata-0.379-1.fc40.noarch\n> @@System\n> > Upgrade llvm-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Upgraded llvm-17.0.6-6.fc40.x86_64\n> @@System\n> > Upgrade llvm-devel-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Upgraded llvm-devel-17.0.6-6.fc40.x86_64\n> @@System\n> > Upgrade llvm-googletest-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Upgraded llvm-googletest-17.0.6-6.fc40.x86_64\n> @@System\n> > Upgrade llvm-libs-18.1.0~rc4-1.fc40.i686\n> @updates-testing\n> > Upgraded llvm-libs-17.0.6-6.fc40.i686\n> @@System\n> > Upgrade llvm-libs-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Upgraded llvm-libs-17.0.6-6.fc40.x86_64\n> @@System\n> > Upgrade llvm-static-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Upgraded llvm-static-17.0.6-6.fc40.x86_64\n> @@System\n> > Upgrade llvm-test-18.1.0~rc4-1.fc40.x86_64\n> @updates-testing\n> > Instalovat llvm17-libs-17.0.6-7.fc40.i686\n> @updates-testing\n> > Instalovat llvm17-libs-17.0.6-7.fc40.x86_64\n> @updates-testing\n>\n> I don't know anything about LLVM, but somehow I'm guessing that people\n> who do will need more detail than this in order to be able to fix the\n> problem. I see that support for LLVM 18 was added in commit\n> d282e88e50521a457fa1b36e55f43bac02a3167f on January 18th; perhaps you\n> were building from an older commit?\n>\n\nI repeated build and check today, fresh master, today fedora update with\nsame result\n\nbuild is ok, but regress tests fails with crash (it works without\n-with-llvm)\n\nRegards\n\nPavel\n\n\n\n\n\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nHičt 14. 3. 
2024 v 19:20 odesílatel Robert Haas <[email protected]> napsal:On Wed, Mar 6, 2024 at 1:54 AM Pavel Stehule <[email protected]> wrote:\n> after today update, the build with --with-llvm produces broken code, and make check fails with crash\n>\n>     Upgrade    hwdata-0.380-1.fc40.noarch                           @updates-testing\n>     Upgraded   hwdata-0.379-1.fc40.noarch                           @@System\n>     Upgrade    llvm-18.1.0~rc4-1.fc40.x86_64                        @updates-testing\n>     Upgraded   llvm-17.0.6-6.fc40.x86_64                            @@System\n>     Upgrade    llvm-devel-18.1.0~rc4-1.fc40.x86_64                  @updates-testing\n>     Upgraded   llvm-devel-17.0.6-6.fc40.x86_64                      @@System\n>     Upgrade    llvm-googletest-18.1.0~rc4-1.fc40.x86_64             @updates-testing\n>     Upgraded   llvm-googletest-17.0.6-6.fc40.x86_64                 @@System\n>     Upgrade    llvm-libs-18.1.0~rc4-1.fc40.i686                     @updates-testing\n>     Upgraded   llvm-libs-17.0.6-6.fc40.i686                         @@System\n>     Upgrade    llvm-libs-18.1.0~rc4-1.fc40.x86_64                   @updates-testing\n>     Upgraded   llvm-libs-17.0.6-6.fc40.x86_64                       @@System\n>     Upgrade    llvm-static-18.1.0~rc4-1.fc40.x86_64                 @updates-testing\n>     Upgraded   llvm-static-17.0.6-6.fc40.x86_64                     @@System\n>     Upgrade    llvm-test-18.1.0~rc4-1.fc40.x86_64                   @updates-testing\n>     Instalovat llvm17-libs-17.0.6-7.fc40.i686                       @updates-testing\n>     Instalovat llvm17-libs-17.0.6-7.fc40.x86_64                     @updates-testing\n\nI don't know anything about LLVM, but somehow I'm guessing that people\nwho do will need more detail than this in order to be able to fix the\nproblem. I see that support for LLVM 18 was added in commit\nd282e88e50521a457fa1b36e55f43bac02a3167f on January 18th; perhaps you\nwere building from an older commit?I repeated build and check today, fresh master, today fedora update with same resultbuild is ok, but regress tests fails with crash (it works without -with-llvm)RegardsPavel \n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Mar 2024 20:15:01 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On 14 Mar 2024, at 20:15, Pavel Stehule <[email protected]> wrote:\n\n> build is ok, but regress tests fails with crash (it works without -with-llvm)\n\nCan you post some details around this crash? It doesn't seem to be a\ncombination we have covered in the buildfarm.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 00:26:49 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Fri, Mar 15, 2024 at 12:27 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 14 Mar 2024, at 20:15, Pavel Stehule <[email protected]> wrote:\n>\n> > build is ok, but regress tests fails with crash (it works without -with-llvm)\n>\n> Can you post some details around this crash? It doesn't seem to be a\n> combination we have covered in the buildfarm.\n\nYeah, 18.1 (note they switched to 1-based minor numbers, there was no\n18.0) just came out a week or so ago. 
Despite testing their 18 branch\njust before their \"RC1\" tag, as recently as\n\ncommit d282e88e50521a457fa1b36e55f43bac02a3167f\nAuthor: Thomas Munro <[email protected]>\nDate: Thu Jan 25 10:37:35 2024 +1300\n\n Track LLVM 18 changes.\n\nat which point everything worked, it seems that something changed\nbefore they released. I haven't looked into why yet but it's crashing\non my FreeBSD box too.\n\n\n", "msg_date": "Fri, 15 Mar 2024 12:44:45 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "For me it seems that the LLVMRunPasses() call, new in\n\ncommit 76200e5ee469e4a9db5f9514b9d0c6a31b496bff\nAuthor: Thomas Munro <[email protected]>\nDate: Wed Oct 18 22:15:54 2023 +1300\n\n jit: Changes for LLVM 17.\n\nis reaching code that segfaults inside libLLVM, specifically in\nllvm::InlineFunction(llvm::CallBase&, llvm::InlineFunctionInfo&, bool,\nllvm::AAResults*, bool, llvm::Function*). First obvious question\nwould be: is that NULL argument still acceptable? Perhaps it wants\nour LLVMTargetMachineRef there:\n\n err = LLVMRunPasses(module, passes, NULL, options);\n\nBut then when we see what is does with that argument, it arrives at a\nplace that apparently accepts nullptr.\n\nhttps://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/lib/Passes/PassBuilderBindings.cpp#L56\nhttps://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/include/llvm/Passes/PassBuilder.h#L124\n\nHrmph. Might need an assertion build to learn more. I'll try to look\nagain next week or so.\n\n\n", "msg_date": "Fri, 15 Mar 2024 13:54:38 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Fri, Mar 15, 2024 at 01:54:38PM +1300, Thomas Munro wrote:\n> For me it seems that the LLVMRunPasses() call, new in\n>\n> commit 76200e5ee469e4a9db5f9514b9d0c6a31b496bff\n> Author: Thomas Munro <[email protected]>\n> Date: Wed Oct 18 22:15:54 2023 +1300\n>\n> jit: Changes for LLVM 17.\n>\n> is reaching code that segfaults inside libLLVM, specifically in\n> llvm::InlineFunction(llvm::CallBase&, llvm::InlineFunctionInfo&, bool,\n> llvm::AAResults*, bool, llvm::Function*). First obvious question\n> would be: is that NULL argument still acceptable? Perhaps it wants\n> our LLVMTargetMachineRef there:\n>\n> err = LLVMRunPasses(module, passes, NULL, options);\n>\n> But then when we see what is does with that argument, it arrives at a\n> place that apparently accepts nullptr.\n>\n> https://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/lib/Passes/PassBuilderBindings.cpp#L56\n> https://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/include/llvm/Passes/PassBuilder.h#L124\n>\n> Hrmph. Might need an assertion build to learn more. I'll try to look\n> again next week or so.\n\nLooks like I can reproduce this as well, libLLVM crashes when reaching\nAddReturnAttributes inside InlineFunction, when trying to access\noperands of the return instruction. I think, for whatever reason, the\nlatest LLVM doesn't like (i.e. 
do not expect this when performing\ninlining pass) return instructions that do not have a return value, and\nthis is what happens in the outblock of deform function we generate\n(slot_compile_deform).\n\nFor verification, I've modified the deform.outblock to call LLVMBuildRet\ninstead of LLVMBuildRetVoid and this seems to help -- inline and deform\nstages are still performed as before, but nothing crashes. But of course\nit doesn't sound right that inlining pass cannot process such code.\nUnfortunately I don't see any obvious change in the recent LLVM history\nthat would justify this outcome, might be a genuine bug, will\ninvestigate further.\n\n\n", "msg_date": "Sun, 17 Mar 2024 21:02:08 +0100", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Sun, Mar 17, 2024 at 09:02:08PM +0100, Dmitry Dolgov wrote:\n> > On Fri, Mar 15, 2024 at 01:54:38PM +1300, Thomas Munro wrote:\n> > For me it seems that the LLVMRunPasses() call, new in\n> >\n> > commit 76200e5ee469e4a9db5f9514b9d0c6a31b496bff\n> > Author: Thomas Munro <[email protected]>\n> > Date: Wed Oct 18 22:15:54 2023 +1300\n> >\n> > jit: Changes for LLVM 17.\n> >\n> > is reaching code that segfaults inside libLLVM, specifically in\n> > llvm::InlineFunction(llvm::CallBase&, llvm::InlineFunctionInfo&, bool,\n> > llvm::AAResults*, bool, llvm::Function*). First obvious question\n> > would be: is that NULL argument still acceptable? Perhaps it wants\n> > our LLVMTargetMachineRef there:\n> >\n> > err = LLVMRunPasses(module, passes, NULL, options);\n> >\n> > But then when we see what is does with that argument, it arrives at a\n> > place that apparently accepts nullptr.\n> >\n> > https://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/lib/Passes/PassBuilderBindings.cpp#L56\n> > https://github.com/llvm/llvm-project/blob/6b2bab2839c7a379556a10287034bd55906d7094/llvm/include/llvm/Passes/PassBuilder.h#L124\n> >\n> > Hrmph. Might need an assertion build to learn more. I'll try to look\n> > again next week or so.\n>\n> Looks like I can reproduce this as well, libLLVM crashes when reaching\n> AddReturnAttributes inside InlineFunction, when trying to access\n> operands of the return instruction. I think, for whatever reason, the\n> latest LLVM doesn't like (i.e. do not expect this when performing\n> inlining pass) return instructions that do not have a return value, and\n> this is what happens in the outblock of deform function we generate\n> (slot_compile_deform).\n>\n> For verification, I've modified the deform.outblock to call LLVMBuildRet\n> instead of LLVMBuildRetVoid and this seems to help -- inline and deform\n> stages are still performed as before, but nothing crashes. But of course\n> it doesn't sound right that inlining pass cannot process such code.\n> Unfortunately I don't see any obvious change in the recent LLVM history\n> that would justify this outcome, might be a genuine bug, will\n> investigate further.\n\nI think I found the change that got it all started [1], the commit has a\nset of tags like 18.1.0-rc1 and is relatively recent. The message\ndoesn't say anything related to the crash that we see, so I assume it's\nindeed a bug. 
I've opened an issue to confirm this understanding [2]\n(wow, issues were indeed moved to github since the last time I was\ntouching LLVM), let's see what would be the response.\n\n[1]: https://github.com/llvm/llvm-project/commit/2da4960f20f7e5d88a68ce25636a895284dc66d8\n[2]: https://github.com/llvm/llvm-project/issues/86162\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:15:08 +0100", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Fri, Mar 22, 2024 at 7:15 AM Dmitry Dolgov <[email protected]> wrote:\n> > For verification, I've modified the deform.outblock to call LLVMBuildRet\n> > instead of LLVMBuildRetVoid and this seems to help -- inline and deform\n> > stages are still performed as before, but nothing crashes. But of course\n> > it doesn't sound right that inlining pass cannot process such code.\n\nThanks for investigating and filing the issue. It doesn't seem to be\nmoving yet. Do you want to share the LLVMBuildRet() workaround?\nMaybe we need to consider shipping something like that in the\nmeantime?\n\n\n", "msg_date": "Sat, 30 Mar 2024 16:38:11 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Sat, Mar 30, 2024 at 04:38:11PM +1300, Thomas Munro wrote:\n> On Fri, Mar 22, 2024 at 7:15 AM Dmitry Dolgov <[email protected]> wrote:\n> > > For verification, I've modified the deform.outblock to call LLVMBuildRet\n> > > instead of LLVMBuildRetVoid and this seems to help -- inline and deform\n> > > stages are still performed as before, but nothing crashes. But of course\n> > > it doesn't sound right that inlining pass cannot process such code.\n>\n> Thanks for investigating and filing the issue. It doesn't seem to be\n> moving yet. Do you want to share the LLVMBuildRet() workaround?\n> Maybe we need to consider shipping something like that in the\n> meantime?\n\nYeah, sorry, I'm a bit baffled about this situation myself. Yesterday\nI've opened a one-line PR fix that should address the issue, maybe this\nwould help. In the meantime I've attached what did work for me as a\nworkaround -- it essentially just makes the deform function to return\nsome value. It's ugly, but since call site will ignore that, and it's\nonly one occasion where LLVMBuildRetVoid is used, maybe it's acceptable.\nGive me a moment, I'm going to test this change more (waiting on\nrebuilding LLVM, it takes quire a while on my machine :( ), then can\nconfirm that it works as expected on the latest version.", "msg_date": "Sat, 30 Mar 2024 17:58:29 +0100", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Sun, Mar 31, 2024 at 5:59 AM Dmitry Dolgov <[email protected]> wrote:\n> Yeah, sorry, I'm a bit baffled about this situation myself. Yesterday\n> I've opened a one-line PR fix that should address the issue, maybe this\n> would help. In the meantime I've attached what did work for me as a\n> workaround -- it essentially just makes the deform function to return\n> some value. It's ugly, but since call site will ignore that, and it's\n> only one occasion where LLVMBuildRetVoid is used, maybe it's acceptable.\n> Give me a moment, I'm going to test this change more (waiting on\n> rebuilding LLVM, it takes quire a while on my machine :( ), then can\n> confirm that it works as expected on the latest version.\n\nGreat news. 
I see your PR:\n\nhttps://github.com/llvm/llvm-project/pull/87093\n\nChecking their release schedule, they have:\n\nMar 5th: 18.1.0 was released\nMar 8th: 18.1.1 was released\nMar 19th: 18.1.2 was released\nApr 16th: 18.1.3\nApr 30th: 18.1.4\nMay 14th: 18.1.5\nMay 28th: 18.1.6 (if necessary)\n\nOur next release is May 9th. So assuming your PR goes in in the next\ncouple of weeks and makes it into their 18.1.3 or even .4, there is no\npoint in pushing a work-around on our side.\n\n\n", "msg_date": "Sun, 31 Mar 2024 12:49:36 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Sun, Mar 31, 2024 at 12:49 PM Thomas Munro <[email protected]> wrote:\n> https://github.com/llvm/llvm-project/pull/87093\n\nOh, with those clues, I think I might see... It is a bit strange that\nwe copy attributes from AttributeTemplate(), a function that returns\nDatum, to our void deform function. It works (I mean doesn't crash)\nif you just comment this line out:\n\n llvm_copy_attributes(AttributeTemplate, v_deform_fn);\n\n... but I guess that disables inlining of the deform function? So\nperhaps we just need to teach that thing not to try to copy the return\nvalue's attributes, which also seems to work here:\n\ndiff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c\nindex ec0fdd49324..92b4993a98a 100644\n--- a/src/backend/jit/llvm/llvmjit.c\n+++ b/src/backend/jit/llvm/llvmjit.c\n@@ -552,8 +552,11 @@ llvm_copy_attributes(LLVMValueRef v_from,\nLLVMValueRef v_to)\n /* copy function attributes */\n llvm_copy_attributes_at_index(v_from, v_to, LLVMAttributeFunctionIndex);\n\n- /* and the return value attributes */\n- llvm_copy_attributes_at_index(v_from, v_to, LLVMAttributeReturnIndex);\n+ if (LLVMGetTypeKind(LLVMGetFunctionReturnType(v_to)) !=\nLLVMVoidTypeKind)\n+ {\n+ /* and the return value attributes */\n+ llvm_copy_attributes_at_index(v_from, v_to,\nLLVMAttributeReturnIndex);\n+ }\n\n\n", "msg_date": "Sat, 6 Apr 2024 02:00:38 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Sat, Apr 06, 2024 at 02:00:38AM +1300, Thomas Munro wrote:\n> On Sun, Mar 31, 2024 at 12:49 PM Thomas Munro <[email protected]> wrote:\n> > https://github.com/llvm/llvm-project/pull/87093\n>\n> Oh, with those clues, I think I might see... It is a bit strange that\n> we copy attributes from AttributeTemplate(), a function that returns\n> Datum, to our void deform function. It works (I mean doesn't crash)\n> if you just comment this line out:\n>\n> llvm_copy_attributes(AttributeTemplate, v_deform_fn);\n>\n> ... but I guess that disables inlining of the deform function? So\n> perhaps we just need to teach that thing not to try to copy the return\n> value's attributes, which also seems to work here:\n\nYep, I think this is it. I've spent some hours trying to understand why\nsuddenly deform function has noundef ret attribute, when it shouldn't --\nthis explains it and the proposed change fixes the crash. 
One thing that\nis still not clear to me though is why this copied attribute doesn't\nshow up in the bitcode dumped right before running inline pass (I've\nadded this to troubleshoot the issue).\n\n\n", "msg_date": "Fri, 5 Apr 2024 15:21:06 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Fri, Apr 05, 2024 at 03:21:06PM +0200, Dmitry Dolgov wrote:\n> > On Sat, Apr 06, 2024 at 02:00:38AM +1300, Thomas Munro wrote:\n> > On Sun, Mar 31, 2024 at 12:49 PM Thomas Munro <[email protected]> wrote:\n> > > https://github.com/llvm/llvm-project/pull/87093\n> >\n> > Oh, with those clues, I think I might see... It is a bit strange that\n> > we copy attributes from AttributeTemplate(), a function that returns\n> > Datum, to our void deform function. It works (I mean doesn't crash)\n> > if you just comment this line out:\n> >\n> > llvm_copy_attributes(AttributeTemplate, v_deform_fn);\n> >\n> > ... but I guess that disables inlining of the deform function? So\n> > perhaps we just need to teach that thing not to try to copy the return\n> > value's attributes, which also seems to work here:\n>\n> Yep, I think this is it. I've spent some hours trying to understand why\n> suddenly deform function has noundef ret attribute, when it shouldn't --\n> this explains it and the proposed change fixes the crash. One thing that\n> is still not clear to me though is why this copied attribute doesn't\n> show up in the bitcode dumped right before running inline pass (I've\n> added this to troubleshoot the issue).\n\nOne thing to consider in this context is indeed adding \"verify\" pass as\nsuggested in the PR, at least for the debugging configuration. Without the fix\nit immediately returns:\n\n\tRunning analysis: VerifierAnalysis on deform_0_1\n\tAttribute 'noundef' applied to incompatible type!\n\n\tllvm error: Broken function found, compilation aborted!\n\n\n", "msg_date": "Fri, 5 Apr 2024 15:50:50 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Fri, Apr 05, 2024 at 03:50:50PM +0200, Dmitry Dolgov wrote:\n> > On Fri, Apr 05, 2024 at 03:21:06PM +0200, Dmitry Dolgov wrote:\n> > > On Sat, Apr 06, 2024 at 02:00:38AM +1300, Thomas Munro wrote:\n> > > On Sun, Mar 31, 2024 at 12:49 PM Thomas Munro <[email protected]> wrote:\n> > > > https://github.com/llvm/llvm-project/pull/87093\n> > >\n> > > Oh, with those clues, I think I might see... It is a bit strange that\n> > > we copy attributes from AttributeTemplate(), a function that returns\n> > > Datum, to our void deform function. It works (I mean doesn't crash)\n> > > if you just comment this line out:\n> > >\n> > > llvm_copy_attributes(AttributeTemplate, v_deform_fn);\n> > >\n> > > ... but I guess that disables inlining of the deform function? So\n> > > perhaps we just need to teach that thing not to try to copy the return\n> > > value's attributes, which also seems to work here:\n> >\n> > Yep, I think this is it. I've spent some hours trying to understand why\n> > suddenly deform function has noundef ret attribute, when it shouldn't --\n> > this explains it and the proposed change fixes the crash. 
One thing that\n> > is still not clear to me though is why this copied attribute doesn't\n> > show up in the bitcode dumped right before running inline pass (I've\n> > added this to troubleshoot the issue).\n>\n> One thing to consider in this context is indeed adding \"verify\" pass as\n> suggested in the PR, at least for the debugging configuration. Without the fix\n> it immediately returns:\n>\n> \tRunning analysis: VerifierAnalysis on deform_0_1\n> \tAttribute 'noundef' applied to incompatible type!\n>\n> \tllvm error: Broken function found, compilation aborted!\n\nHere is what I have in mind. Interestingly enough, it also shows few\nmore errors besides \"noundef\":\n\n Intrinsic name not mangled correctly for type arguments! Should be: llvm.lifetime.end.p0\n ptr @llvm.lifetime.end.p0i8\n\nIt refers to the function from create_LifetimeEnd, not sure how\nimportant is this.", "msg_date": "Fri, 5 Apr 2024 18:01:09 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Sat, Apr 6, 2024 at 5:01 AM Dmitry Dolgov <[email protected]> wrote:\n> > > Yep, I think this is it. I've spent some hours trying to understand why\n> > > suddenly deform function has noundef ret attribute, when it shouldn't --\n> > > this explains it and the proposed change fixes the crash. One thing that\n> > > is still not clear to me though is why this copied attribute doesn't\n> > > show up in the bitcode dumped right before running inline pass (I've\n> > > added this to troubleshoot the issue).\n> >\n> > One thing to consider in this context is indeed adding \"verify\" pass as\n> > suggested in the PR, at least for the debugging configuration. Without the fix\n> > it immediately returns:\n> >\n> > Running analysis: VerifierAnalysis on deform_0_1\n> > Attribute 'noundef' applied to incompatible type!\n> >\n> > llvm error: Broken function found, compilation aborted!\n>\n> Here is what I have in mind. Interestingly enough, it also shows few\n> more errors besides \"noundef\":\n>\n> Intrinsic name not mangled correctly for type arguments! Should be: llvm.lifetime.end.p0\n> ptr @llvm.lifetime.end.p0i8\n>\n> It refers to the function from create_LifetimeEnd, not sure how\n> important is this.\n\nWould it be too slow to run the verify pass always, in assertion\nbuilds? Here's a patch for the original issue, and a patch to try\nthat idea + a fix for that other complaint it spits out. The latter\nwould only run for LLVM 17+, but that seems OK.", "msg_date": "Tue, 9 Apr 2024 19:07:58 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Tue, Apr 09, 2024 at 07:07:58PM +1200, Thomas Munro wrote:\n> On Sat, Apr 6, 2024 at 5:01 AM Dmitry Dolgov <[email protected]> wrote:\n> > > > Yep, I think this is it. I've spent some hours trying to understand why\n> > > > suddenly deform function has noundef ret attribute, when it shouldn't --\n> > > > this explains it and the proposed change fixes the crash. One thing that\n> > > > is still not clear to me though is why this copied attribute doesn't\n> > > > show up in the bitcode dumped right before running inline pass (I've\n> > > > added this to troubleshoot the issue).\n> > >\n> > > One thing to consider in this context is indeed adding \"verify\" pass as\n> > > suggested in the PR, at least for the debugging configuration. 
Without the fix\n> > > it immediately returns:\n> > >\n> > > Running analysis: VerifierAnalysis on deform_0_1\n> > > Attribute 'noundef' applied to incompatible type!\n> > >\n> > > llvm error: Broken function found, compilation aborted!\n> >\n> > Here is what I have in mind. Interestingly enough, it also shows few\n> > more errors besides \"noundef\":\n> >\n> > Intrinsic name not mangled correctly for type arguments! Should be: llvm.lifetime.end.p0\n> > ptr @llvm.lifetime.end.p0i8\n> >\n> > It refers to the function from create_LifetimeEnd, not sure how\n> > important is this.\n>\n> Would it be too slow to run the verify pass always, in assertion\n> builds? Here's a patch for the original issue, and a patch to try\n> that idea + a fix for that other complaint it spits out. The latter\n> would only run for LLVM 17+, but that seems OK.\n\nSounds like a good idea. About the overhead, I've done a quick test on the\nreproducer at hands, doing explain analyze in a tight loop and fetching\n\"optimization\" timinigs. It gives quite visible difference 96ms p99 with verify\nvs 46ms p99 without verify (and a rather low stddev, ~1.5ms). But I can\nimagine it's acceptable for a build with assertions.\n\nBtw, I've found there is a C-api for this exposed, which produces the same\nwarnings for me. Maybe it would be even better this way:\n\n\t/**\n\t * Toggle adding the VerifierPass for the PassBuilder, ensuring all functions\n\t * inside the module is valid.\n\t */\n\tvoid LLVMPassBuilderOptionsSetVerifyEach(LLVMPassBuilderOptionsRef Options,\n\t\t\t\t\t\t\t\t\t\t\t LLVMBool VerifyEach);\n\n\n\t+ /* In assertion builds, run the LLVM verify pass. */\n\t+#ifdef USE_ASSERT_CHECKING\n\t+ LLVMPassBuilderOptionsSetVerifyEach(options, true);\n\t+#endif\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 12:05:21 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Tue, Apr 9, 2024 at 10:05 PM Dmitry Dolgov <[email protected]> wrote:\n> + /* In assertion builds, run the LLVM verify pass. */\n> +#ifdef USE_ASSERT_CHECKING\n> + LLVMPassBuilderOptionsSetVerifyEach(options, true);\n> +#endif\n\nThanks, that seems nicer. I think the question is whether it will\nslow down build farm/CI/local meson test runs to a degree that exceeds\nits value. Another option would be to have some other opt-in macro,\nlike the existing #ifdef LLVM_PASS_DEBUG, for people who maintain\nJIT-related stuff to turn on.\n\nSupposing we go with USE_ASSERT_CHECKING, I have another question:\n\n- const char *nm = \"llvm.lifetime.end.p0i8\";\n+ const char *nm = \"llvm.lifetime.end.p0\";\n\nWas that a mistake, or did the mangling rules change in some version?\nI don't currently feel inclined to go and test this on the ancient\nversions we claim to support in back-branches. Perhaps we should just\ndo this in master, and then it'd be limited to worrying about LLVM\nversions 10-18 (see 820b5af7), which have the distinct advantage of\nbeing available in package repositories for testing. Or I suppose we\ncould back-patch, but only do it if LLVM_VERSION_MAJOR >= 10. Or we\ncould do it unconditionally, and wait for ancient-LLVM build farm\nanimals to break if they're going to.\n\nI pushed the illegal attribute fix though. 
Thanks for the detective work!\n\n(It crossed my mind that perhaps deform functions should have their\nown template function, but if someone figures out that that's a good\nidea, I think we'll *still* need that change just pushed.)\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:43:23 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "st 10. 4. 2024 v 2:44 odesílatel Thomas Munro <[email protected]>\nnapsal:\n\n> On Tue, Apr 9, 2024 at 10:05 PM Dmitry Dolgov <[email protected]>\n> wrote:\n> > + /* In assertion builds, run the LLVM verify pass. */\n> > +#ifdef USE_ASSERT_CHECKING\n> > + LLVMPassBuilderOptionsSetVerifyEach(options, true);\n> > +#endif\n>\n> Thanks, that seems nicer. I think the question is whether it will\n> slow down build farm/CI/local meson test runs to a degree that exceeds\n> its value. Another option would be to have some other opt-in macro,\n> like the existing #ifdef LLVM_PASS_DEBUG, for people who maintain\n> JIT-related stuff to turn on.\n>\n> Supposing we go with USE_ASSERT_CHECKING, I have another question:\n>\n> - const char *nm = \"llvm.lifetime.end.p0i8\";\n> + const char *nm = \"llvm.lifetime.end.p0\";\n>\n> Was that a mistake, or did the mangling rules change in some version?\n> I don't currently feel inclined to go and test this on the ancient\n> versions we claim to support in back-branches. Perhaps we should just\n> do this in master, and then it'd be limited to worrying about LLVM\n> versions 10-18 (see 820b5af7), which have the distinct advantage of\n> being available in package repositories for testing. Or I suppose we\n> could back-patch, but only do it if LLVM_VERSION_MAJOR >= 10. Or we\n> could do it unconditionally, and wait for ancient-LLVM build farm\n> animals to break if they're going to.\n>\n> I pushed the illegal attribute fix though. Thanks for the detective work!\n>\n> (It crossed my mind that perhaps deform functions should have their\n> own template function, but if someone figures out that that's a good\n> idea, I think we'll *still* need that change just pushed.)\n>\n\nall tests passed on fc 40 without problems\n\nThank you\n\nPavel\n\nst 10. 4. 2024 v 2:44 odesílatel Thomas Munro <[email protected]> napsal:On Tue, Apr 9, 2024 at 10:05 PM Dmitry Dolgov <[email protected]> wrote:\n>         +       /* In assertion builds, run the LLVM verify pass. */\n>         +#ifdef USE_ASSERT_CHECKING\n>         +       LLVMPassBuilderOptionsSetVerifyEach(options, true);\n>         +#endif\n\nThanks, that seems nicer.  I think the question is whether it will\nslow down build farm/CI/local meson test runs to a degree that exceeds\nits value.  Another option would be to have some other opt-in macro,\nlike the existing #ifdef LLVM_PASS_DEBUG, for people who maintain\nJIT-related stuff to turn on.\n\nSupposing we go with USE_ASSERT_CHECKING, I have another question:\n\n-       const char *nm = \"llvm.lifetime.end.p0i8\";\n+       const char *nm = \"llvm.lifetime.end.p0\";\n\nWas that a mistake, or did the mangling rules change in some version?\nI don't currently feel inclined to go and test this on the ancient\nversions we claim to support in back-branches.  Perhaps we should just\ndo this in master, and then it'd be limited to worrying about LLVM\nversions 10-18 (see 820b5af7), which have the distinct advantage of\nbeing available in package repositories for testing.  Or I suppose we\ncould back-patch, but only do it if LLVM_VERSION_MAJOR >= 10.  
Or we\ncould do it unconditionally, and wait for ancient-LLVM build farm\nanimals to break if they're going to.\n\nI pushed the illegal attribute fix though.  Thanks for the detective work!\n\n(It crossed my mind that perhaps deform functions should have their\nown template function, but if someone figures out that that's a good\nidea, I think we'll *still* need that change just pushed.)all tests passed on fc 40 without problemsThank youPavel", "msg_date": "Wed, 10 Apr 2024 20:54:46 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Wed, Apr 10, 2024 at 12:43:23PM +1200, Thomas Munro wrote:\n> On Tue, Apr 9, 2024 at 10:05 PM Dmitry Dolgov <[email protected]> wrote:\n> > + /* In assertion builds, run the LLVM verify pass. */\n> > +#ifdef USE_ASSERT_CHECKING\n> > + LLVMPassBuilderOptionsSetVerifyEach(options, true);\n> > +#endif\n>\n> Thanks, that seems nicer. I think the question is whether it will\n> slow down build farm/CI/local meson test runs to a degree that exceeds\n> its value. Another option would be to have some other opt-in macro,\n> like the existing #ifdef LLVM_PASS_DEBUG, for people who maintain\n> JIT-related stuff to turn on.\n\nOh, I see. I'm afraid I don't have enough knowledge of the CI pipeline,\nbut at least locally for me installcheck became only few percent slower\nwith the verify pass.\n\n> Supposing we go with USE_ASSERT_CHECKING, I have another question:\n>\n> - const char *nm = \"llvm.lifetime.end.p0i8\";\n> + const char *nm = \"llvm.lifetime.end.p0\";\n>\n> Was that a mistake, or did the mangling rules change in some version?\n\nI'm not sure, but inclined to think it's the same as with noundef -- a\nmistake, which was revealed in some recent version of LLVM. From what I\nunderstand the suffix i8 indicates an overloaded argument of that type,\nwhich is probably not needed. At the same time I can't get this error\nfrom the verify pass with LLVM-12 or LLVM-15 (I have those at hand by\naccident). To make it even more confusing I've found a few similar\nexamples in other projects, where this was really triggered by an issue\nin LLVM [1].\n\n[1]: https://github.com/rust-lang/rust/issues/102738\n\n\n", "msg_date": "Wed, 10 Apr 2024 22:15:27 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 22:15:27 +0200, Dmitry Dolgov wrote:\n> > On Wed, Apr 10, 2024 at 12:43:23PM +1200, Thomas Munro wrote:\n> > On Tue, Apr 9, 2024 at 10:05 PM Dmitry Dolgov <[email protected]> wrote:\n> > > + /* In assertion builds, run the LLVM verify pass. */\n> > > +#ifdef USE_ASSERT_CHECKING\n> > > + LLVMPassBuilderOptionsSetVerifyEach(options, true);\n> > > +#endif\n> >\n> > Thanks, that seems nicer. I think the question is whether it will\n> > slow down build farm/CI/local meson test runs to a degree that exceeds\n> > its value. Another option would be to have some other opt-in macro,\n> > like the existing #ifdef LLVM_PASS_DEBUG, for people who maintain\n> > JIT-related stuff to turn on.\n> \n> Oh, I see. I'm afraid I don't have enough knowledge of the CI pipeline,\n> but at least locally for me installcheck became only few percent slower\n> with the verify pass.\n\nI think it's worthwhile to add. It makes some problems so much easier to\nfind. 
And if you're building with debug enabled llvm, performance is so\natrocious anyway, that this isn't going to change the big picture...\n\n\n> > Supposing we go with USE_ASSERT_CHECKING, I have another question:\n> >\n> > - const char *nm = \"llvm.lifetime.end.p0i8\";\n> > + const char *nm = \"llvm.lifetime.end.p0\";\n> >\n> > Was that a mistake, or did the mangling rules change in some version?\n> \n> I'm not sure, but inclined to think it's the same as with noundef -- a\n> mistake, which was revealed in some recent version of LLVM. From what I\n> understand the suffix i8 indicates an overloaded argument of that type,\n> which is probably not needed. At the same time I can't get this error\n> from the verify pass with LLVM-12 or LLVM-15 (I have those at hand by\n> accident). To make it even more confusing I've found a few similar\n> examples in other projects, where this was really triggered by an issue\n> in LLVM [1].\n\nI'm afraid that it actually has changed over time, I'm fairly sure that it was\nrequired at some point.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 13:23:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Tue, Apr 9, 2024 at 8:44 PM Thomas Munro <[email protected]> wrote:\n> I pushed the illegal attribute fix though. Thanks for the detective work!\n\nThis was commit 53c8d6c9f157f2bc8211b8de02417e55fefddbc7 and as I\nunderstand it that fixed the issue originally reported on this thread.\n\nTherefore, I have marked https://commitfest.postgresql.org/48/4917/ as\nCommitted.\n\nIf that's not correct, please feel free to fix. If there are other\nissues that need to be patched separately, please consider opening a\nnew CF entry for those issues once a patch is available.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 11:09:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "> On Tue, May 14, 2024 at 11:09:39AM -0400, Robert Haas wrote:\n> On Tue, Apr 9, 2024 at 8:44 PM Thomas Munro <[email protected]> wrote:\n> > I pushed the illegal attribute fix though. Thanks for the detective work!\n>\n> This was commit 53c8d6c9f157f2bc8211b8de02417e55fefddbc7 and as I\n> understand it that fixed the issue originally reported on this thread.\n>\n> Therefore, I have marked https://commitfest.postgresql.org/48/4917/ as\n> Committed.\n>\n> If that's not correct, please feel free to fix. If there are other\n> issues that need to be patched separately, please consider opening a\n> new CF entry for those issues once a patch is available.\n\nThanks, that's correct. I think the only thing left is to add a verifier\npass, which everybody seems to be agreed is nice to have. The plan is to\nadd it only to master without back-patching. I assume Thomas did not\nhave time for that yet, so I've added the latest suggestions into his\npatch, and going to open a CF item to not forget about it.", "msg_date": "Thu, 16 May 2024 18:25:33 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" }, { "msg_contents": "On Fri, May 17, 2024 at 4:26 AM Dmitry Dolgov <[email protected]> wrote:\n> Thanks, that's correct. I think the only thing left is to add a verifier\n> pass, which everybody seems to be agreed is nice to have. 
The plan is to\n> add it only to master without back-patching. I assume Thomas did not\n> have time for that yet, so I've added the latest suggestions into his\n> patch, and going to open a CF item to not forget about it.\n\nPushed, and closed. Thanks!\n\n\n", "msg_date": "Mon, 15 Jul 2024 21:52:28 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken JIT support on Fedora 40" } ]
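For reference, a minimal sketch of how the verifier pass discussed in the thread above is wired up through the LLVM C API's new pass manager. The function below, its arguments and its error reporting are illustrative rather than the actual llvmjit.c code; LLVMCreatePassBuilderOptions(), LLVMPassBuilderOptionsSetVerifyEach() and LLVMRunPasses() are real LLVM-C entry points (LLVM 13 and later), and USE_ASSERT_CHECKING is PostgreSQL's assertion-build macro.

#include <stdbool.h>
#include <stdio.h>

#include <llvm-c/Core.h>
#include <llvm-c/Error.h>
#include <llvm-c/TargetMachine.h>
#include <llvm-c/Transforms/PassBuilder.h>

/*
 * Run an optimization pipeline (e.g. "default<O2>") over a module with the
 * new pass manager, verifying the IR after every pass in assertion builds.
 */
static void
optimize_module(LLVMModuleRef mod, LLVMTargetMachineRef tm, const char *passes)
{
	LLVMPassBuilderOptionsRef options = LLVMCreatePassBuilderOptions();
	LLVMErrorRef err;

#ifdef USE_ASSERT_CHECKING
	/* In assertion builds, run the LLVM verify pass after each pass. */
	LLVMPassBuilderOptionsSetVerifyEach(options, true);
#endif

	err = LLVMRunPasses(mod, passes, tm, options);
	if (err != NULL)
	{
		char	   *msg = LLVMGetErrorMessage(err);

		/* the backend would report this through elog/ereport instead */
		fprintf(stderr, "running passes failed: %s\n", msg);
		LLVMDisposeErrorMessage(msg);
	}

	LLVMDisposePassBuilderOptions(options);
}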
[ { "msg_contents": "I think we generally follow the rule that we include 'postgres.h' or\n'postgres_fe.h' first, followed by system header files, and then\npostgres header files ordered in ASCII value. I noticed that in some C\nfiles we fail to follow this rule strictly. Attached is a patch to fix\nthis.\n\nBack in 2019, we performed the same operation in commits 7e735035f2,\ndddf4cdc33, and 14aec03502. It appears that the code has deviated from\nthat point onwards.\n\nPlease note that this patch only addresses the order of header file\nincludes in backend modules (and might not be thorough). It is possible\nthat other modules may have a similar issue, but I have not evaluated\nthem yet.\n\nThanks\nRichard", "msg_date": "Wed, 6 Mar 2024 17:32:31 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding the order of the header file includes" }, { "msg_contents": "On Wed, Mar 6, 2024 at 3:02 PM Richard Guo <[email protected]> wrote:\n>\n> I think we generally follow the rule that we include 'postgres.h' or\n> 'postgres_fe.h' first, followed by system header files, and then\n> postgres header files ordered in ASCII value. I noticed that in some C\n> files we fail to follow this rule strictly. Attached is a patch to fix\n> this.\n>\n> Back in 2019, we performed the same operation in commits 7e735035f2,\n> dddf4cdc33, and 14aec03502. It appears that the code has deviated from\n> that point onwards.\n>\n> Please note that this patch only addresses the order of header file\n> includes in backend modules (and might not be thorough). It is possible\n> that other modules may have a similar issue, but I have not evaluated\n> them yet.\n\n+1. I'm just curious to know if you've leveraged any tool from\nsrc/tools/pginclude or any script or such.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 15:55:29 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On Wed, Mar 6, 2024 at 6:25 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Wed, Mar 6, 2024 at 3:02 PM Richard Guo <[email protected]> wrote:\n> >\n> > I think we generally follow the rule that we include 'postgres.h' or\n> > 'postgres_fe.h' first, followed by system header files, and then\n> > postgres header files ordered in ASCII value. I noticed that in some C\n> > files we fail to follow this rule strictly. Attached is a patch to fix\n> > this.\n> >\n> > Back in 2019, we performed the same operation in commits 7e735035f2,\n> > dddf4cdc33, and 14aec03502. It appears that the code has deviated from\n> > that point onwards.\n> >\n> > Please note that this patch only addresses the order of header file\n> > includes in backend modules (and might not be thorough). It is possible\n> > that other modules may have a similar issue, but I have not evaluated\n> > them yet.\n>\n> +1. I'm just curious to know if you've leveraged any tool from\n> src/tools/pginclude or any script or such.\n\n\nThanks for looking.\n\nWhile rebasing one of my patches I noticed that the header file includes\nin relnode.c are not sorted in order. So I wrote a naive script to see\nif any other C files have the same issue. The script is:\n\n#!/bin/bash\n\nfind . 
-name \"*.c\" | while read -r file; do\n  headers=$(grep -o '#include \"[^>]*\"' \"$file\" |\n            grep -v \"postgres.h\" | grep -v \"postgres_fe.h\" |\n            sed 's/\\.h\"//g')\n\n  sorted_headers=$(echo \"$headers\" | sort)\n\n  results=$(diff <(echo \"$headers\") <(echo \"$sorted_headers\"))\n\n  if [[ $? != 0 ]]; then\n    echo \"Headers in '$file' are out of order\"\n    echo $results\n    echo\n  fi\ndone\n\nThanks\nRichard\n\n", "msg_date": "Thu, 7 Mar 2024 15:09:35 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On Thu, Mar 7, 2024 at 12:39 PM Richard Guo <[email protected]> wrote:\n>\n> While rebasing one of my patches I noticed that the header file includes\n> in relnode.c are not sorted in order.  So I wrote a naive script to see\n> if any other C files have the same issue.  The script is:\n>\n> #!/bin/bash\n>\n> find . -name \"*.c\" | while read -r file; do\n>   headers=$(grep -o '#include \"[^>]*\"' \"$file\" |\n>             grep -v \"postgres.h\" | grep -v \"postgres_fe.h\" |\n>             sed 's/\\.h\"//g')\n>\n>   sorted_headers=$(echo \"$headers\" | sort)\n>\n>   results=$(diff <(echo \"$headers\") <(echo \"$sorted_headers\"))\n>\n>   if [[ $? != 0 ]]; then\n>     echo \"Headers in '$file' are out of order\"\n>     echo $results\n>     echo\n>   fi\n> done\n\nCool. Isn't it a better idea to improve this script to auto-order the\nheader files and land it under src/tools/pginclude/headerssort?
It can\nthen be reusable and be another code beautification weapon one can use\nbefore the code release.\n\nFWIW, I'm getting the syntax error when ran the above shell script:\n\nheaderssort.sh: 10: Syntax error: \"(\" unexpected\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Mar 2024 16:28:25 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On Fri, Mar 8, 2024 at 6:58 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Thu, Mar 7, 2024 at 12:39 PM Richard Guo <[email protected]>\n> wrote:\n> >\n> > While rebasing one of my patches I noticed that the header file includes\n> > in relnode.c are not sorted in order. So I wrote a naive script to see\n> > if any other C files have the same issue. The script is:\n> >\n> > #!/bin/bash\n> >\n> > find . -name \"*.c\" | while read -r file; do\n> > headers=$(grep -o '#include \"[^>]*\"' \"$file\" |\n> > grep -v \"postgres.h\" | grep -v \"postgres_fe.h\" |\n> > sed 's/\\.h\"//g')\n> >\n> > sorted_headers=$(echo \"$headers\" | sort)\n> >\n> > results=$(diff <(echo \"$headers\") <(echo \"$sorted_headers\"))\n> >\n> > if [[ $? != 0 ]]; then\n> > echo \"Headers in '$file' are out of order\"\n> > echo $results\n> > echo\n> > fi\n> > done\n>\n> Cool. Isn't it a better idea to improve this script to auto-order the\n> header files and land it under src/tools/pginclude/headerssort? It can\n> then be reusable and be another code beautification weapon one can use\n> before the code release.\n\n\nYeah, perhaps. However the current script is quite unrefined and would\nrequire a lot of effort to make it a reusable tool. I will add it to my\nto-do list and hopefully one day I can get back to it. Feel free to\nmess around with it if someone is interested.\n\n\n> FWIW, I'm getting the syntax error when ran the above shell script:\n>\n> headerssort.sh: 10: Syntax error: \"(\" unexpected\n\n\nI think the error is due to line 10 containing bash-style syntax. Hmm,\nhave you tried to use 'bash' instead of 'sh' to run this script?\n\nThanks\nRichard\n\nOn Fri, Mar 8, 2024 at 6:58 PM Bharath Rupireddy <[email protected]> wrote:On Thu, Mar 7, 2024 at 12:39 PM Richard Guo <[email protected]> wrote:\n>\n> While rebasing one of my patches I noticed that the header file includes\n> in relnode.c are not sorted in order.  So I wrote a naive script to see\n> if any other C files have the same issue.  The script is:\n>\n> #!/bin/bash\n>\n> find . -name \"*.c\" | while read -r file; do\n>   headers=$(grep -o '#include \"[^>]*\"' \"$file\" |\n>             grep -v \"postgres.h\" | grep -v \"postgres_fe.h\" |\n>             sed 's/\\.h\"//g')\n>\n>   sorted_headers=$(echo \"$headers\" | sort)\n>\n>   results=$(diff <(echo \"$headers\") <(echo \"$sorted_headers\"))\n>\n>   if [[ $? != 0 ]]; then\n>     echo \"Headers in '$file' are out of order\"\n>     echo $results\n>     echo\n>   fi\n> done\n\nCool. Isn't it a better idea to improve this script to auto-order the\nheader files and land it under src/tools/pginclude/headerssort? It can\nthen be reusable and be another code beautification weapon one can use\nbefore the code release.Yeah, perhaps.  However the current script is quite unrefined and wouldrequire a lot of effort to make it a reusable tool.  I will add it to myto-do list and hopefully one day I can get back to it.  
Feel free tomess around with it if someone is interested. \nFWIW, I'm getting the syntax error when ran the above shell script:\n\nheaderssort.sh: 10: Syntax error: \"(\" unexpectedI think the error is due to line 10 containing bash-style syntax.  Hmm,have you tried to use 'bash' instead of 'sh' to run this script?ThanksRichard", "msg_date": "Tue, 12 Mar 2024 19:39:16 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On Wed, Mar 6, 2024 at 5:32 PM Richard Guo <[email protected]> wrote:\n\n> Please note that this patch only addresses the order of header file\n> includes in backend modules (and might not be thorough). It is possible\n> that other modules may have a similar issue, but I have not evaluated\n> them yet.\n>\n\nAttached is v2, which also includes the 0002 patch that addresses the\norder of header file includes in non-backend modules.\n\nThanks\nRichard", "msg_date": "Tue, 12 Mar 2024 19:47:01 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On 12.03.24 12:47, Richard Guo wrote:\n> \n> On Wed, Mar 6, 2024 at 5:32 PM Richard Guo <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Please note that this patch only addresses the order of header file\n> includes in backend modules (and might not be thorough).  It is possible\n> that other modules may have a similar issue, but I have not evaluated\n> them yet.\n> \n> \n> Attached is v2, which also includes the 0002 patch that addresses the\n> order of header file includes in non-backend modules.\n\ncommitted (as one patch)\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 15:07:55 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding the order of the header file includes" }, { "msg_contents": "On Wed, Mar 13, 2024 at 10:07 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 12.03.24 12:47, Richard Guo wrote:\n> >\n> > On Wed, Mar 6, 2024 at 5:32 PM Richard Guo <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Please note that this patch only addresses the order of header file\n> > includes in backend modules (and might not be thorough). It is\n> possible\n> > that other modules may have a similar issue, but I have not evaluated\n> > them yet.\n> >\n> >\n> > Attached is v2, which also includes the 0002 patch that addresses the\n> > order of header file includes in non-backend modules.\n>\n> committed (as one patch)\n\n\nThanks for pushing!\n\nThanks\nRichard\n\nOn Wed, Mar 13, 2024 at 10:07 PM Peter Eisentraut <[email protected]> wrote:On 12.03.24 12:47, Richard Guo wrote:\n> \n> On Wed, Mar 6, 2024 at 5:32 PM Richard Guo <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n>     Please note that this patch only addresses the order of header file\n>     includes in backend modules (and might not be thorough).  It is possible\n>     that other modules may have a similar issue, but I have not evaluated\n>     them yet.\n> \n> \n> Attached is v2, which also includes the 0002 patch that addresses the\n> order of header file includes in non-backend modules.\n\ncommitted (as one patch)Thanks for pushing!ThanksRichard", "msg_date": "Mon, 18 Mar 2024 08:39:25 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding the order of the header file includes" } ]
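As a concrete illustration of the ordering convention enforced in this thread, a backend C file is expected to look like the following. The file and the particular headers are only examples, but the grouping — postgres.h (or postgres_fe.h in frontend code) first, then system headers, then PostgreSQL headers sorted by ASCII value — is the rule the committed patch restores.

/* hypothetical backend source file, showing only the include section */
#include "postgres.h"

#include <limits.h>
#include <sys/stat.h>
#include <unistd.h>

#include "access/xact.h"
#include "storage/lwlock.h"
#include "utils/builtins.h"
#include "utils/memutils.h"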
[ { "msg_contents": "Someone on -general was asking about this, as they are listening on\nmultiple IPs and would like to know which exact one clients were hitting. I\ntook a quick look and we already have that information, so I grabbed some\nstuff from inet_server_addr and added it as part of a \"%L\" (for 'local\ninterface'). Quick patch / POC attached.\n\nCheers,\nGreg", "msg_date": "Wed, 6 Mar 2024 10:59:52 -0500", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Logging which interface was connected to in log_line_prefix" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi \r\n\r\nI did a quick test on this patch and it seems to work as expected. Originally I thought the patch would add the name of \"local interface\" such as \"eth0\", \"eth1\", \"lo\"... etc as %L log prefix format. Instead, it formats the local interface IP addresses , but I think it is fine too. \r\n\r\nI have tested this new addition with various types of IPs including IPv4, IPv4 and IPv6 local loop back addresses, global IPv6 address, linked local IPv6 address with interface specifier, it seems to format these IPs correctly\r\n\r\nThere is a comment in the patch that states:\r\n\r\n/* We do not need clean_ipv6_addr here: just report verbatim */\r\n\r\nI am not quite sure what it means, but I am guessing it means that the patch does not need to format the IPv6 addresses in any specific way. For example, removing leading zeros or compressing consecutive zeros to make a IPv6 address shorter. It may not be necessary to indicate this in a comment because In my test, if any of my interface's IPv6 address have consecutive zeroes like this: 2000:0000:0000:0000:0000:0000:200:cafe/64, my network driver (Ubuntu 18.04) will format it as 2000::200:cafe, and the patch of course will read it as 2000::200:cafe, which is ... correct and clean.\r\n\r\nthank you\r\nCary Huang\r\nwww.highgo.ca", "msg_date": "Mon, 29 Apr 2024 23:29:29 +0000", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging which interface was connected to in log_line_prefix" }, { "msg_contents": "Thank you for taking the time to review this. I've attached a new rebased\nversion, which has no significant changes.\n\n\n> There is a comment in the patch that states:\n>\n> /* We do not need clean_ipv6_addr here: just report verbatim */\n>\n> I am not quite sure what it means, but I am guessing it means that the\n> patch does not need to format the IPv6 addresses in any specific way.\n\n\nYes, basically correct. There is a kluge (their word, not mine) in\nutils/adt/network.c to strip the zone - see the comment for the\nclean_ipv6_addr() function in that file. I added the patch comment in case\nsome future person wonders why we don't \"clean up\" the ipv6 address, like\nother places in the code base do. 
We don't need to pass it back to anything\nelse, so we can simply output the correct version, zone and all.\n\nCheers,\nGreg", "msg_date": "Wed, 1 May 2024 13:04:22 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logging which interface was connected to in log_line_prefix" }, { "msg_contents": "On 06.03.24 16:59, Greg Sabino Mullane wrote:\n> Someone on -general was asking about this, as they are listening on \n> multiple IPs and would like to know which exact one clients were \n> hitting. I took a quick look and we already have that information, so I \n> grabbed some stuff from inet_server_addr and added it as part of a \"%L\" \n> (for 'local interface'). Quick patch / POC attached.\n\nI was confused by this patch title. This feature does not log the \ninterface (like \"eth0\" or \"lo\"), but the local address. Please adjust \nthe terminology.\n\nI noticed that for Unix-domain socket connections, %r and %h write \n\"[local]\". I think that should be done for this new placeholder as well.\n\n\n\n", "msg_date": "Sun, 12 May 2024 14:21:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging which interface was connected to in log_line_prefix" }, { "msg_contents": "On 01.05.24 19:04, Greg Sabino Mullane wrote:\n> Thank you for taking the time to review this. I've attached a new \n> rebased version, which has no significant changes.\n> \n> There is a comment in the patch that states:\n> \n> /* We do not need clean_ipv6_addr here: just report verbatim */\n> \n> I am not quite sure what it means, but I am guessing it means that\n> the patch does not need to format the IPv6 addresses in any specific\n> way.\n> \n> \n> Yes, basically correct. There is a kluge (their word, not mine) in \n> utils/adt/network.c to strip the zone - see the comment for the \n> clean_ipv6_addr() function in that file. I added the patch comment in \n> case some future person wonders why we don't \"clean up\" the ipv6 \n> address, like other places in the code base do. We don't need to pass it \n> back to anything else, so we can simply output the correct version, zone \n> and all.\n\nclean_ipv6_addr() needs to be called before trying to convert a string \nrepresentation into inet/cidr types. This is not what is happening \nhere. So the comment is not applicable.\n\n\n\n", "msg_date": "Sun, 12 May 2024 14:25:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging which interface was connected to in log_line_prefix" }, { "msg_contents": "Peter, thank you for the feedback. Attached is a new patch with \"address\"\nrather than \"interface\", plus a new default of \"local\" if there is no\naddress. I also removed the questionable comment, and updated the\ncommitfest title.\n\nCheers,\nGreg", "msg_date": "Fri, 24 May 2024 11:33:58 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logging which local address was connected to in log_line_prefix" }, { "msg_contents": "On 5/24/24 22:33, Greg Sabino Mullane wrote:\n> Peter, thank you for the feedback. Attached is a new patch with \n> \"address\" rather than \"interface\", plus a new default of \"local\" if \n> there is no address. 
I also removed the questionable comment, and \n> updated the commitfest title.\n\nI tried the updated patch and it behaved as expected with [local] being \nlogged for peer connections and an IP being logged for host connections.\n\nOne thing -- the changes in postgresql.conf.sample should use tabs to \nmatch the other lines. The patch uses spaces.\n\nI also find the formatting in log_status_format() pretty awkward but I \nguess that will be handled by pgindent.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:07:16 +0700", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging which local address was connected to in log_line_prefix" }, { "msg_contents": "Thanks for the review. Please find attached a new version with proper tabs\nand indenting.\n\nCheers,\nGreg", "msg_date": "Thu, 11 Jul 2024 12:09:23 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logging which local address was connected to in log_line_prefix" }, { "msg_contents": "On 7/11/24 23:09, Greg Sabino Mullane wrote:\n> Thanks for the review. Please find attached a new version with proper \n> tabs and indenting.\n\nThis looks good to me now. +1 overall for the feature.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 12 Jul 2024 10:00:25 +0700", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging which local address was connected to in log_line_prefix" } ]
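A sketch of the idea behind the escape added in this thread, roughly in the style of the log_status_format() switch in elog.c. This is not necessarily the committed code and the surrounding switch statement is omitted, but MyProcPort, SockAddr and pg_getnameinfo_all() are existing backend facilities.

case 'L':			/* local (server-side) address */
	if (MyProcPort != NULL)
	{
		if (MyProcPort->laddr.addr.ss_family == AF_UNIX)
			appendStringInfoString(buf, "[local]");
		else
		{
			char		local_host[NI_MAXHOST];

			local_host[0] = '\0';
			if (pg_getnameinfo_all(&MyProcPort->laddr.addr,
								   MyProcPort->laddr.salen,
								   local_host, sizeof(local_host),
								   NULL, 0,
								   NI_NUMERICHOST | NI_NUMERICSERV) == 0)
				appendStringInfoString(buf, local_host);
		}
	}
	break;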
[ { "msg_contents": "Hi hackers,\n\nI just noticed that commit d93627bc added a bunch of pg_fatal() calls\nusing %s and strerror(errno), which could be written more concisely as\n%m. I'm assuming this was done because the surrounding code also uses\nthis pattern, and hadn't been changed to use %m when support for that\nwas added to snprintf.c to avoid backporting hazards. However, that\nsupport was in v12, which is now the oldest still-supported back branch,\nso we can safely make that change.\n\nThe attached patch does so everywhere appropriate. One place where it's\nnot appropriate is the TAP-emitting functions in pg_regress, since those\ncall fprintf() and other potentially errno-modifying functions before\nevaluating the format string.\n\n- ilmari", "msg_date": "Wed, 06 Mar 2024 19:11:19 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Using the %m printf format more" }, { "msg_contents": "On Wed, Mar 06, 2024 at 07:11:19PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> I just noticed that commit d93627bc added a bunch of pg_fatal() calls\n> using %s and strerror(errno), which could be written more concisely as\n> %m. I'm assuming this was done because the surrounding code also uses\n> this pattern, and hadn't been changed to use %m when support for that\n> was added to snprintf.c to avoid backporting hazards. However, that\n> support was in v12, which is now the oldest still-supported back branch,\n> so we can safely make that change.\n\nRight. This may still create some spurious conflicts, but that's\nmanageable for error strings. The changes in your patch look OK.\n\n> The attached patch does so everywhere appropriate. One place where it's\n> not appropriate is the TAP-emitting functions in pg_regress, since those\n> call fprintf()\n\nI am not really following your argument with pg_regress.c and\nfprintf(). d6c55de1f99a should make that possible even in the case of\nemit_tap_output_v(), no? \n\n> and other potentially errno-modifying functions before\n> evaluating the format string.\n\nSure.\n--\nMichael", "msg_date": "Mon, 11 Mar 2024 15:30:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Wed, Mar 06, 2024 at 07:11:19PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>\n>> The attached patch does so everywhere appropriate. One place where it's\n>> not appropriate is the TAP-emitting functions in pg_regress, since those\n>> call fprintf()\n>\n> I am not really following your argument with pg_regress.c and\n> fprintf(). d6c55de1f99a should make that possible even in the case of\n> emit_tap_output_v(), no? \n\nThe problem isn't that emit_tap_output_v() doesn't support %m, which it\ndoes, but that it potentially calls fprintf() to output TAP protocol\nelements such as \"\\n\" and \"# \" before it calls vprintf(…, fmt, …), and\nthose calls might clobber errno. 
An option is to make it save errno at\nthe start and restore it before the vprintf() calls, as in the second\nattached patch.\n\n>> and other potentially errno-modifying functions before\n>> evaluating the format string.\n>\n> Sure.\n\nOn closer look, fprintf() is actually the only errno-clobbering function\nit calls, I was just hedging my bets in that statement.\n\n- ilmari", "msg_date": "Mon, 11 Mar 2024 11:19:16 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "On Mon, Mar 11, 2024 at 11:19:16AM +0000, Dagfinn Ilmari Mannsåker wrote:\n> On closer look, fprintf() is actually the only errno-clobbering function\n> it calls, I was just hedging my bets in that statement.\n\nThis makes the whole simpler, so I'd be OK with that. I am wondering\nif others have opinions to offer about that.\n\nI've applied v2-0001 for now, as it is worth on its own and it shaves\na bit of code.\n--\nMichael", "msg_date": "Tue, 12 Mar 2024 10:22:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Mon, Mar 11, 2024 at 11:19:16AM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> On closer look, fprintf() is actually the only errno-clobbering function\n>> it calls, I was just hedging my bets in that statement.\n>\n> This makes the whole simpler, so I'd be OK with that. I am wondering\n> if others have opinions to offer about that.\n\nIf no one chimes in in the next couple of days I'll add it to the\ncommitfest so it doesn't get lost.\n\n> I've applied v2-0001 for now, as it is worth on its own and it shaves\n> a bit of code.\n\nThanks!\n\n- ilmari\n\n\n", "msg_date": "Wed, 13 Mar 2024 12:24:08 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "On 12.03.24 02:22, Michael Paquier wrote:\n> On Mon, Mar 11, 2024 at 11:19:16AM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> On closer look, fprintf() is actually the only errno-clobbering function\n>> it calls, I was just hedging my bets in that statement.\n> \n> This makes the whole simpler, so I'd be OK with that. I am wondering\n> if others have opinions to offer about that.\n\nThe 0002 patch looks sensible. It would be good to fix that, otherwise \nit could have some confusing outcomes in the future.\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:33:52 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "On Wed, Mar 13, 2024 at 02:33:52PM +0100, Peter Eisentraut wrote:\n> The 0002 patch looks sensible. It would be good to fix that, otherwise it\n> could have some confusing outcomes in the future.\n\nYou mean if we begin to use %m in future callers of\nemit_tap_output_v(), hypothetically? That's a fair argument.\n--\nMichael", "msg_date": "Thu, 14 Mar 2024 16:04:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Wed, Mar 13, 2024 at 02:33:52PM +0100, Peter Eisentraut wrote:\n>> The 0002 patch looks sensible. 
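For illustration, a self-contained toy version of the errno-preservation idea described above — this is not the pg_regress code, just a minimal diag()-style wrapper showing why saving errno before the prefix fprintf() and restoring it before the format string is evaluated keeps %m meaningful. (Plain vfprintf() only understands %m on glibc; PostgreSQL's own snprintf.c supports it everywhere.)

#include <errno.h>
#include <stdarg.h>
#include <stdio.h>

static void
diag(const char *fmt, ...)
{
	int			save_errno = errno; /* so callers can use %m */
	va_list		ap;

	fprintf(stderr, "# ");			/* may clobber errno */

	errno = save_errno;				/* restore before %m is expanded */
	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
	fputc('\n', stderr);
}

int
main(void)
{
	if (fopen("/nonexistent/file", "r") == NULL)
		diag("could not open file: %m");
	return 0;
}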
It would be good to fix that, otherwise it\n>> could have some confusing outcomes in the future.\n>\n> You mean if we begin to use %m in future callers of\n> emit_tap_output_v(), hypothetically? That's a fair argument.\n\nYeah, developers would rightfully expect to be able to use %m with\nanything that takes a printf format string. Case in point: when I was\nfirst doing the conversion I did change the bail() and diag() calls in\npg_regress to %m, and only while I was preparing the patch for\nsubmission did I think to check the actual implementation to see if it\nwas safe to do so.\n\nThe alternative would be to document that you can't use %m with these\nfunctions, which is silly IMO, given how simple the fix is.\n\nOne minor improvement I can think of is to add a comment by the\nsave_errno declaration noting that it's needed in order to support %m.\n\n- ilmari\n\n\n", "msg_date": "Thu, 14 Mar 2024 11:25:30 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <[email protected]> writes:\n\n> Michael Paquier <[email protected]> writes:\n>\n>> On Wed, Mar 13, 2024 at 02:33:52PM +0100, Peter Eisentraut wrote:\n>>> The 0002 patch looks sensible. It would be good to fix that, otherwise it\n>>> could have some confusing outcomes in the future.\n>>\n>> You mean if we begin to use %m in future callers of\n>> emit_tap_output_v(), hypothetically? That's a fair argument.\n>\n> Yeah, developers would rightfully expect to be able to use %m with\n> anything that takes a printf format string. Case in point: when I was\n> first doing the conversion I did change the bail() and diag() calls in\n> pg_regress to %m, and only while I was preparing the patch for\n> submission did I think to check the actual implementation to see if it\n> was safe to do so.\n>\n> The alternative would be to document that you can't use %m with these\n> functions, which is silly IMO, given how simple the fix is.\n>\n> One minor improvement I can think of is to add a comment by the\n> save_errno declaration noting that it's needed in order to support %m.\n\nHere's an updated patch that adds such a comment. I'll add it to the\ncommitfest later today unless someone commits it before then.\n\n> - ilmari", "msg_date": "Fri, 22 Mar 2024 13:58:24 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "On Fri, Mar 22, 2024 at 01:58:24PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> Here's an updated patch that adds such a comment. I'll add it to the\n> commitfest later today unless someone commits it before then.\n\nI see no problem to do that now rather than later. So, done to make\npg_regress able to use %m.\n--\nMichael", "msg_date": "Thu, 4 Apr 2024 11:35:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" }, { "msg_contents": "On Thu, 4 Apr 2024, at 03:35, Michael Paquier wrote:\n> On Fri, Mar 22, 2024 at 01:58:24PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> Here's an updated patch that adds such a comment. I'll add it to the\n>> commitfest later today unless someone commits it before then.\n>\n> I see no problem to do that now rather than later. 
So, done to make\n> pg_regress able to use %m.\n\nThanks!\n\n-- \n- ilmari\n\n\n", "msg_date": "Thu, 04 Apr 2024 08:44:25 +0100", "msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using the %m printf format more" } ]
[ { "msg_contents": "Hi Hackers,\n\nWhen configuring SSL on the Postgres server side with the following \ninformation:\n\nssl = on\nssl_ca_file = 'root_ca.crt'\nssl_cert_file = 'server-cn-only.crt'\nssl_key_file = 'server-cn-only.key'\n\nIf a user makes a mistake, for example, accidentally using 'root_ca.crl' \ninstead of 'root_ca.crt', Postgres will report an error like the one below:\nFATAL:  could not load root certificate file \"root_ca.crl\": SSL error \ncode 2147483650\n\nHere, the error code 2147483650 is not very helpful for the user. The \nreason is that Postgres is missing the initial SSL file check before \npassing it to the OpenSSL API. For example:\n\n     if (ssl_ca_file[0])\n     {\n         STACK_OF(X509_NAME) * root_cert_list;\n\n         if (SSL_CTX_load_verify_locations(context, ssl_ca_file, NULL) \n!= 1 ||\n             (root_cert_list = SSL_load_client_CA_file(ssl_ca_file)) == \nNULL)\n         {\n             ereport(isServerStart ? FATAL : LOG,\n                     (errcode(ERRCODE_CONFIG_FILE_ERROR),\n                      errmsg(\"could not load root certificate file \n\\\"%s\\\": %s\",\n                             ssl_ca_file, SSLerrmessage(ERR_get_error()))));\n             goto error;\n         }\n\nThe SSL_CTX_load_verify_locations function in OpenSSL will return NULL \nif there is a system error, such as \"No such file or directory\" in this \ncase:\n\nconst char *ERR_reason_error_string(unsigned long e)\n{\n     ERR_STRING_DATA d, *p = NULL;\n     unsigned long l, r;\n\n     if (!RUN_ONCE(&err_string_init, do_err_strings_init)) {\n         return NULL;\n     }\n\n     /*\n      * ERR_reason_error_string() can't safely return system error strings,\n      * since openssl_strerror_r() needs a buffer for thread safety, and we\n      * haven't got one that would serve any sensible purpose.\n      */\n     if (ERR_SYSTEM_ERROR(e))\n         return NULL;\n\nIt would be better to perform a simple SSL file check before passing the \nSSL file to OpenSSL APIs so that the system error can be captured and a \nmeaningful message provided to the end user.\n\nAttached is a simple patch to help address this issue for ssl_ca_file, \nssl_cert_file, and ssl_crl_file. With this patch, a similar test will \nreturn something like the message below:\nFATAL:  could not access certificate file \"root_ca.crl\": No such file or \ndirectory\n\nI believe this can help end users quickly realize the mistake.\n\n\nThank you,\nDavid", "msg_date": "Wed, 6 Mar 2024 16:12:23 -0800", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "improve ssl error code, 2147483650" }, { "msg_contents": "On 07/03/2024 02:12, David Zhang wrote:\n> The SSL_CTX_load_verify_locations function in OpenSSL will return NULL\n> if there is a system error, such as \"No such file or directory\" in this\n> case:\n> \n> const char *ERR_reason_error_string(unsigned long e)\n> {\n>     ERR_STRING_DATA d, *p = NULL;\n>     unsigned long l, r;\n> \n>     if (!RUN_ONCE(&err_string_init, do_err_strings_init)) {\n>         return NULL;\n>     }\n> \n>     /*\n>      * ERR_reason_error_string() can't safely return system error strings,\n>      * since openssl_strerror_r() needs a buffer for thread safety, and we\n>      * haven't got one that would serve any sensible purpose.\n>      */\n>     if (ERR_SYSTEM_ERROR(e))\n>         return NULL;\n\nThat's pretty unfortunate. 
As typical with OpenSSL, this stuff is not \nvery well documented, but I think we could do something like this in \nSSLerrmessage():\n\nif (ERR_SYSTEM_ERROR(e))\n errreason = strerror(ERR_GET_REASON(e));\n\nERR_SYSTEM_ERROR only exists in OpenSSL 3.0 and above, and the only \ndocumentation I could find was in this one obscure place in the man \npages: \nhttps://www.openssl.org/docs/man3.2/man3/BIO_dgram_get_local_addr_enable.html. \nBut as a best-effort thing, it would still be better than \"SSL error \ncode 2147483650\".\n\n> It would be better to perform a simple SSL file check before passing the\n> SSL file to OpenSSL APIs so that the system error can be captured and a\n> meaningful message provided to the end user.\n\nThat feels pretty ugly. I agree it would catch most of the common \nmistakes in practice, so maybe we should just hold our noses and do it \nanyway, if the above ERR_SYSTEM_ERROR() method doesn't work.\n\nIt's sad that we cannot pass a file descriptor or in-memory copy of the \nfile contents to those functions.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 7 Mar 2024 18:08:35 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "Greetings,\n\n* Heikki Linnakangas ([email protected]) wrote:\n> On 07/03/2024 02:12, David Zhang wrote:\n> > The SSL_CTX_load_verify_locations function in OpenSSL will return NULL\n> > if there is a system error, such as \"No such file or directory\" in this\n> > case:\n> > \n> > const char *ERR_reason_error_string(unsigned long e)\n> > {\n> >     ERR_STRING_DATA d, *p = NULL;\n> >     unsigned long l, r;\n> > \n> >     if (!RUN_ONCE(&err_string_init, do_err_strings_init)) {\n> >         return NULL;\n> >     }\n> > \n> >     /*\n> >      * ERR_reason_error_string() can't safely return system error strings,\n> >      * since openssl_strerror_r() needs a buffer for thread safety, and we\n> >      * haven't got one that would serve any sensible purpose.\n> >      */\n> >     if (ERR_SYSTEM_ERROR(e))\n> >         return NULL;\n> \n> That's pretty unfortunate. As typical with OpenSSL, this stuff is not very\n> well documented, but I think we could do something like this in\n> SSLerrmessage():\n> \n> if (ERR_SYSTEM_ERROR(e))\n> errreason = strerror(ERR_GET_REASON(e));\n> \n> ERR_SYSTEM_ERROR only exists in OpenSSL 3.0 and above, and the only\n> documentation I could find was in this one obscure place in the man pages: https://www.openssl.org/docs/man3.2/man3/BIO_dgram_get_local_addr_enable.html.\n> But as a best-effort thing, it would still be better than \"SSL error code\n> 2147483650\".\n\nAgreed that it doesn't seem well documented. I was trying to figure out\nwhat the 'right' answer here was myself and not having much success. If\nthe above works, then +1 to that.\n\n> > It would be better to perform a simple SSL file check before passing the\n> > SSL file to OpenSSL APIs so that the system error can be captured and a\n> > meaningful message provided to the end user.\n> \n> That feels pretty ugly. I agree it would catch most of the common mistakes\n> in practice, so maybe we should just hold our noses and do it anyway, if the\n> above ERR_SYSTEM_ERROR() method doesn't work.\n\nYeah, seems better to try and handle this the OpenSSL way ... 
if that's\npossible to do.\n\n> It's sad that we cannot pass a file descriptor or in-memory copy of the file\n> contents to those functions.\n\nAgreed.\n\nThanks!\n\nStephen", "msg_date": "Thu, 7 Mar 2024 12:27:06 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> * Heikki Linnakangas ([email protected]) wrote:\n>> That's pretty unfortunate. As typical with OpenSSL, this stuff is not very\n>> well documented, but I think we could do something like this in\n>> SSLerrmessage():\n>> \n>> if (ERR_SYSTEM_ERROR(e))\n>> errreason = strerror(ERR_GET_REASON(e));\n>> \n>> ERR_SYSTEM_ERROR only exists in OpenSSL 3.0 and above, and the only\n>> documentation I could find was in this one obscure place in the man pages: https://www.openssl.org/docs/man3.2/man3/BIO_dgram_get_local_addr_enable.html.\n>> But as a best-effort thing, it would still be better than \"SSL error code\n>> 2147483650\".\n\n> Agreed that it doesn't seem well documented. I was trying to figure out\n> what the 'right' answer here was myself and not having much success. If\n> the above works, then +1 to that.\n\nMy reaction as well --- I was just gearing up to test this idea,\nunless one of you are already on it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Mar 2024 12:52:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "David Zhang <[email protected]> writes:\n> When configuring SSL on the Postgres server side with the following \n> information:\n\n> ssl = on\n> ssl_ca_file = 'root_ca.crt'\n> ssl_cert_file = 'server-cn-only.crt'\n> ssl_key_file = 'server-cn-only.key'\n\n> If a user makes a mistake, for example, accidentally using 'root_ca.crl' \n> instead of 'root_ca.crt', Postgres will report an error like the one below:\n> FATAL:  could not load root certificate file \"root_ca.crl\": SSL error \n> code 2147483650\n\nInterestingly, this works fine for me on RHEL8 (with openssl-1.1.1k):\n\n2024-03-07 12:57:53.432 EST [547522] FATAL: F0000: could not load root certificate file \"foo.bar\": No such file or directory\n2024-03-07 12:57:53.432 EST [547522] LOCATION: be_tls_init, be-secure-openssl.c:306\n\nI do reproduce your problem on Fedora 39 with openssl-3.1.1.\nSo this seems to be a regression on OpenSSL's part. Maybe\nthey'll figure out how to fix it sometime; that seems to be\nanother good argument for not pre-empting their error handling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Mar 2024 13:10:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "I wrote:\n> Stephen Frost <[email protected]> writes:\n>> Agreed that it doesn't seem well documented. I was trying to figure out\n>> what the 'right' answer here was myself and not having much success. 
If\n>> the above works, then +1 to that.\n\n> My reaction as well --- I was just gearing up to test this idea,\n> unless one of you are already on it?\n\nI've confirmed that this:\n\ndiff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c\nindex e12b1cc9e3..47eee4b59d 100644\n--- a/src/backend/libpq/be-secure-openssl.c\n+++ b/src/backend/libpq/be-secure-openssl.c\n@@ -1363,6 +1363,10 @@ SSLerrmessage(unsigned long ecode)\n \terrreason = ERR_reason_error_string(ecode);\n \tif (errreason != NULL)\n \t\treturn errreason;\n+#ifdef ERR_SYSTEM_ERROR\n+\tif (ERR_SYSTEM_ERROR(ecode))\n+\t\treturn strerror(ERR_GET_REASON(ecode));\n+#endif\n \tsnprintf(errbuf, sizeof(errbuf), _(\"SSL error code %lu\"), ecode);\n \treturn errbuf;\n }\n\nseems to be enough to fix the problem on OpenSSL 3.1.1. The #ifdef\nis needed to avoid compile failure against OpenSSL 1.1.1 --- but that\nversion doesn't have the problem, so we don't need to sweat.\n\nThis could probably do with a comment, and we need to propagate\nthe fix into libpq's copy of the function too. Barring objections,\nI'll take care of that and push it later today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Mar 2024 14:58:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "> On 7 Mar 2024, at 20:58, Tom Lane <[email protected]> wrote:\n> \n> I wrote:\n>> Stephen Frost <[email protected]> writes:\n>>> Agreed that it doesn't seem well documented. I was trying to figure out\n>>> what the 'right' answer here was myself and not having much success. If\n>>> the above works, then +1 to that.\n> \n>> My reaction as well --- I was just gearing up to test this idea,\n>> unless one of you are already on it?\n> \n> I've confirmed that this:\n> \n> diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c\n> index e12b1cc9e3..47eee4b59d 100644\n> --- a/src/backend/libpq/be-secure-openssl.c\n> +++ b/src/backend/libpq/be-secure-openssl.c\n> @@ -1363,6 +1363,10 @@ SSLerrmessage(unsigned long ecode)\n> \terrreason = ERR_reason_error_string(ecode);\n> \tif (errreason != NULL)\n> \t\treturn errreason;\n> +#ifdef ERR_SYSTEM_ERROR\n> +\tif (ERR_SYSTEM_ERROR(ecode))\n> +\t\treturn strerror(ERR_GET_REASON(ecode));\n> +#endif\n> \tsnprintf(errbuf, sizeof(errbuf), _(\"SSL error code %lu\"), ecode);\n> \treturn errbuf;\n> }\n> \n> seems to be enough to fix the problem on OpenSSL 3.1.1. The #ifdef\n> is needed to avoid compile failure against OpenSSL 1.1.1 --- but that\n> version doesn't have the problem, so we don't need to sweat.\n\nThis was introduced in OpenSSL 3.0.0 so that makes sense. Pre-3.0.0 versions\ntruncates system errorcodes that was outside of the range 1..127 reserving the\nrest for OpenSSL specific errors. To capture the full range possible of system\nerrors the code is no longer truncated and the ERR_SYSTEM_FLAG flag is set,\nwhich can be tested for with the macro used here.\n\n> This could probably do with a comment, and we need to propagate\n> the fix into libpq's copy of the function too. 
Barring objections,\n> I'll take care of that and push it later today.\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 7 Mar 2024 21:08:56 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 7 Mar 2024, at 20:58, Tom Lane <[email protected]> wrote:\n>> This could probably do with a comment, and we need to propagate\n>> the fix into libpq's copy of the function too. Barring objections,\n>> I'll take care of that and push it later today.\n\n> LGTM.\n\nDone so far as be-secure-openssl.c and fe-secure-openssl.c are\nconcerned. But I noticed that src/common/cryptohash_openssl.c and\nsrc/common/hmac_openssl.c have their own, rather half-baked versions\nof SSLerrmessage. I didn't do anything about that in the initial\npatch, because it's not clear to me that those routines would ever\nsee system-errno-based errors, plus their comments claim that\nreturning NULL isn't terribly bad. But if we want to do something\nabout it, I don't think that maintaining 3 copies of the code is the\nway to go. Maybe push be-secure-openssl.c's version into src/common?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Mar 2024 19:46:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "On 08.03.24 01:46, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>> On 7 Mar 2024, at 20:58, Tom Lane <[email protected]> wrote:\n>>> This could probably do with a comment, and we need to propagate\n>>> the fix into libpq's copy of the function too. Barring objections,\n>>> I'll take care of that and push it later today.\n> \n>> LGTM.\n> \n> Done so far as be-secure-openssl.c and fe-secure-openssl.c are\n> concerned.\n\nI noticed that this change uses not-thread-safe strerror() in libpq code.\n\nPerhaps something like this would be better (and simpler):\n\ndiff --git a/src/interfaces/libpq/fe-secure-openssl.c \nb/src/interfaces/libpq/fe-secure-openssl.c\nindex 5c867106fb0..14cd9ce404d 100644\n--- a/src/interfaces/libpq/fe-secure-openssl.c\n+++ b/src/interfaces/libpq/fe-secure-openssl.c\n@@ -1767,7 +1767,7 @@ SSLerrmessage(unsigned long ecode)\n #ifdef ERR_SYSTEM_ERROR\n if (ERR_SYSTEM_ERROR(ecode))\n {\n- strlcpy(errbuf, strerror(ERR_GET_REASON(ecode)), SSL_ERR_LEN);\n+ strerror_r(ERR_GET_REASON(ecode), errbuf, SSL_ERR_LEN);\n return errbuf;\n }\n #endif\n\n\n\n", "msg_date": "Fri, 21 Jun 2024 13:15:58 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "> On 21 Jun 2024, at 13:15, Peter Eisentraut <[email protected]> wrote:\n\n> I noticed that this change uses not-thread-safe strerror() in libpq code.\n> \n> Perhaps something like this would be better (and simpler):\n\nNice catch, LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 21 Jun 2024 15:31:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> - strlcpy(errbuf, strerror(ERR_GET_REASON(ecode)), SSL_ERR_LEN);\n> + strerror_r(ERR_GET_REASON(ecode), errbuf, SSL_ERR_LEN);\n\nMost of libpq gets at strerror_r via SOCK_STRERROR for Windows\nportability. 
Is that relevant here?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2024 10:53:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "On 21.06.24 16:53, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> - strlcpy(errbuf, strerror(ERR_GET_REASON(ecode)), SSL_ERR_LEN);\n>> + strerror_r(ERR_GET_REASON(ecode), errbuf, SSL_ERR_LEN);\n> \n> Most of libpq gets at strerror_r via SOCK_STRERROR for Windows\n> portability. Is that relevant here?\n\nLooking inside the OpenSSL code, it makes no efforts to translate \nbetween winsock error codes and standard error codes, so I don't think \nour workaround/replacement code needs to do that either.\n\nBtw., our source code comments say something like \n\"ERR_reason_error_string randomly refuses to map system errno values.\" \nThe reason it doesn't is exactly that it can't do it while maintaining \nthread-safety. Here is the relevant commit: \nhttps://github.com/openssl/openssl/commit/71f2994b15\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 14:41:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 21.06.24 16:53, Tom Lane wrote:\n>> Most of libpq gets at strerror_r via SOCK_STRERROR for Windows\n>> portability. Is that relevant here?\n\n> Looking inside the OpenSSL code, it makes no efforts to translate \n> between winsock error codes and standard error codes, so I don't think \n> our workaround/replacement code needs to do that either.\n\nFair enough.\n\n> Btw., our source code comments say something like \n> \"ERR_reason_error_string randomly refuses to map system errno values.\" \n> The reason it doesn't is exactly that it can't do it while maintaining \n> thread-safety.\n\nAh. Do you want to improve that comment while you're at it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:21:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "On 25.06.24 16:21, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 21.06.24 16:53, Tom Lane wrote:\n>>> Most of libpq gets at strerror_r via SOCK_STRERROR for Windows\n>>> portability. Is that relevant here?\n> \n>> Looking inside the OpenSSL code, it makes no efforts to translate\n>> between winsock error codes and standard error codes, so I don't think\n>> our workaround/replacement code needs to do that either.\n> \n> Fair enough.\n> \n>> Btw., our source code comments say something like\n>> \"ERR_reason_error_string randomly refuses to map system errno values.\"\n>> The reason it doesn't is exactly that it can't do it while maintaining\n>> thread-safety.\n> \n> Ah. 
Do you want to improve that comment while you're at it?\n\nHere is a patch that fixes the strerror() call and updates the comments \na bit.\n\nThis ought to be backpatched like the original fix; ideally for the next \nminor releases in about two weeks.", "msg_date": "Wed, 24 Jul 2024 15:32:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "> On 24 Jul 2024, at 15:32, Peter Eisentraut <[email protected]> wrote:\n> \n> On 25.06.24 16:21, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> On 21.06.24 16:53, Tom Lane wrote:\n>>>> Most of libpq gets at strerror_r via SOCK_STRERROR for Windows\n>>>> portability. Is that relevant here?\n>>> Looking inside the OpenSSL code, it makes no efforts to translate\n>>> between winsock error codes and standard error codes, so I don't think\n>>> our workaround/replacement code needs to do that either.\n>> Fair enough.\n>>> Btw., our source code comments say something like\n>>> \"ERR_reason_error_string randomly refuses to map system errno values.\"\n>>> The reason it doesn't is exactly that it can't do it while maintaining\n>>> thread-safety.\n>> Ah. Do you want to improve that comment while you're at it?\n> \n> Here is a patch that fixes the strerror() call and updates the comments a bit.\n\nLGTM.\n\n> This ought to be backpatched like the original fix; ideally for the next minor releases in about two weeks.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 11:36:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" }, { "msg_contents": "On 25.07.24 11:36, Daniel Gustafsson wrote:\n>> On 24 Jul 2024, at 15:32, Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 25.06.24 16:21, Tom Lane wrote:\n>>> Peter Eisentraut <[email protected]> writes:\n>>>> On 21.06.24 16:53, Tom Lane wrote:\n>>>>> Most of libpq gets at strerror_r via SOCK_STRERROR for Windows\n>>>>> portability. Is that relevant here?\n>>>> Looking inside the OpenSSL code, it makes no efforts to translate\n>>>> between winsock error codes and standard error codes, so I don't think\n>>>> our workaround/replacement code needs to do that either.\n>>> Fair enough.\n>>>> Btw., our source code comments say something like\n>>>> \"ERR_reason_error_string randomly refuses to map system errno values.\"\n>>>> The reason it doesn't is exactly that it can't do it while maintaining\n>>>> thread-safety.\n>>> Ah. Do you want to improve that comment while you're at it?\n>>\n>> Here is a patch that fixes the strerror() call and updates the comments a bit.\n> \n> LGTM.\n> \n>> This ought to be backpatched like the original fix; ideally for the next minor releases in about two weeks.\n> \n> Agreed.\n\ndone\n\n\n\n", "msg_date": "Sun, 28 Jul 2024 10:36:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve ssl error code, 2147483650" } ]
[ { "msg_contents": "The following query:\n\nSELECT * FROM (\n SELECT 2023 AS year, * FROM remote_table_1\n UNION ALL\n SELECT 2022 AS year, * FROM remote_table_2\n)\nORDER BY year DESC;\n\nyields the following remote query:\n\nSELECT [columns] FROM remote_table_1 ORDER BY 2023 DESC\n\nand subsequently fails remote execution.\n\n\nNot really sure where the problem is - the planner or postgres_fdw.\nI guess it is postgres_fdw not filtering out ordering keys.\n\nThis filtering would also be pretty useful in the following scenario (which is also why I went through UNION ALL route and discovered this issue):\n\nI have a table partitioned by year, partitions are remote tables.\nOn remote servers I have a GIST index that does not support ordering ([1]) so I would like to avoid sending ORDER BY year to remote servers.\nIdeally redundant ordering should be filtered out.\n\n[1] https://www.postgresql.org/message-id/B2AC13F9-6655-4E27-BFD3-068844E5DC91%40kleczek.org\n\n\n—\nKind regards,\nMichal\nThe following query:SELECT * FROM (  SELECT 2023 AS year, * FROM remote_table_1  UNION ALL  SELECT 2022 AS year, * FROM remote_table_2)ORDER BY year DESC;yields the following remote query:SELECT [columns] FROM remote_table_1 ORDER BY 2023 DESCand subsequently fails remote execution.Not really sure where the problem is - the planner or postgres_fdw.I guess it is postgres_fdw not filtering out ordering keys.This filtering would also be pretty useful in the following scenario (which is also why I went through UNION ALL route and discovered this issue):I have a table partitioned by year, partitions are remote tables.On remote servers I have a GIST index that does not support ordering ([1]) so I would like to avoid sending ORDER BY year to remote servers.Ideally redundant ordering should be filtered out.[1] https://www.postgresql.org/message-id/B2AC13F9-6655-4E27-BFD3-068844E5DC91%40kleczek.org—Kind regards,Michal", "msg_date": "Thu, 7 Mar 2024 07:08:40 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Invalid query generated by postgres_fdw with UNION ALL and ORDER BY" }, { "msg_contents": "On Thu, 7 Mar 2024 at 19:09, Michał Kłeczek <[email protected]> wrote:\n>\n> The following query:\n>\n> SELECT * FROM (\n> SELECT 2023 AS year, * FROM remote_table_1\n> UNION ALL\n> SELECT 2022 AS year, * FROM remote_table_2\n> )\n> ORDER BY year DESC;\n>\n> yields the following remote query:\n>\n> SELECT [columns] FROM remote_table_1 ORDER BY 2023 DESC\n>\n> and subsequently fails remote execution.\n>\n>\n> Not really sure where the problem is - the planner or postgres_fdw.\n> I guess it is postgres_fdw not filtering out ordering keys.\n\nInteresting. I've attached a self-contained recreator for the casual passerby.\n\nI think the fix should go in appendOrderByClause(). It's at that\npoint we look for the EquivalenceMember for the relation and can\neasily discover if the em_expr is a Const. 
I think we can safely just\nskip doing any ORDER BY <const> stuff and not worry about if the\nliteral format of the const will appear as a reference to an ordinal\ncolumn position in the ORDER BY clause.\n\nSomething like the attached patch I think should work.\n\nI wonder if we need a test...\n\nDavid", "msg_date": "Fri, 8 Mar 2024 00:08:42 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" }, { "msg_contents": "On Thu, Mar 7, 2024 at 4:39 PM David Rowley <[email protected]> wrote:\n\n> On Thu, 7 Mar 2024 at 19:09, Michał Kłeczek <[email protected]> wrote:\n> >\n> > The following query:\n> >\n> > SELECT * FROM (\n> > SELECT 2023 AS year, * FROM remote_table_1\n> > UNION ALL\n> > SELECT 2022 AS year, * FROM remote_table_2\n> > )\n> > ORDER BY year DESC;\n> >\n> > yields the following remote query:\n> >\n> > SELECT [columns] FROM remote_table_1 ORDER BY 2023 DESC\n> >\n> > and subsequently fails remote execution.\n> >\n> >\n> > Not really sure where the problem is - the planner or postgres_fdw.\n> > I guess it is postgres_fdw not filtering out ordering keys.\n>\n> Interesting. I've attached a self-contained recreator for the casual\n> passerby.\n>\n> I think the fix should go in appendOrderByClause(). It's at that\n> point we look for the EquivalenceMember for the relation and can\n> easily discover if the em_expr is a Const. I think we can safely just\n> skip doing any ORDER BY <const> stuff and not worry about if the\n> literal format of the const will appear as a reference to an ordinal\n> column position in the ORDER BY clause.\n>\n\ndeparseSortGroupClause() calls deparseConst() with showtype = 1.\nappendOrderByClause() may want to do something similar for consistency. Or\nremove it from deparseSortGroupClause() as well?\n\n\n>\n> Something like the attached patch I think should work.\n>\n> I wonder if we need a test...\n>\n\nYes.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Mar 7, 2024 at 4:39 PM David Rowley <[email protected]> wrote:On Thu, 7 Mar 2024 at 19:09, Michał Kłeczek <[email protected]> wrote:\n>\n> The following query:\n>\n> SELECT * FROM (\n>   SELECT 2023 AS year, * FROM remote_table_1\n>   UNION ALL\n>   SELECT 2022 AS year, * FROM remote_table_2\n> )\n> ORDER BY year DESC;\n>\n> yields the following remote query:\n>\n> SELECT [columns] FROM remote_table_1 ORDER BY 2023 DESC\n>\n> and subsequently fails remote execution.\n>\n>\n> Not really sure where the problem is - the planner or postgres_fdw.\n> I guess it is postgres_fdw not filtering out ordering keys.\n\nInteresting.  I've attached a self-contained recreator for the casual passerby.\n\nI think the fix should go in appendOrderByClause().  It's at that\npoint we look for the EquivalenceMember for the relation and can\neasily discover if the em_expr is a Const.  I think we can safely just\nskip doing any ORDER BY <const> stuff and not worry about if the\nliteral format of the const will appear as a reference to an ordinal\ncolumn position in the ORDER BY clause.deparseSortGroupClause() calls deparseConst() with showtype = 1. appendOrderByClause() may want to do something similar for consistency. Or remove it from deparseSortGroupClause() as well? \n\nSomething like the attached patch I think should work.\n\nI wonder if we need a test...Yes. 
-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 7 Mar 2024 17:24:40 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" }, { "msg_contents": "On Fri, 8 Mar 2024 at 00:54, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Mar 7, 2024 at 4:39 PM David Rowley <[email protected]> wrote:\n>> I think the fix should go in appendOrderByClause(). It's at that\n>> point we look for the EquivalenceMember for the relation and can\n>> easily discover if the em_expr is a Const. I think we can safely just\n>> skip doing any ORDER BY <const> stuff and not worry about if the\n>> literal format of the const will appear as a reference to an ordinal\n>> column position in the ORDER BY clause.\n>\n> deparseSortGroupClause() calls deparseConst() with showtype = 1. appendOrderByClause() may want to do something similar for consistency. Or remove it from deparseSortGroupClause() as well?\n\nThe fix could also be to use deparseConst() in appendOrderByClause()\nand have that handle Const EquivalenceMember instead. I'd rather just\nskip them. To me, that seems less risky than ensuring deparseConst()\nhandles all Const types correctly.\n\nAlso, as far as adjusting GROUP BY to eliminate Consts, I don't think\nthat's material for a bug fix. If we want to consider doing that,\nthat's for master only.\n\n>> I wonder if we need a test...\n>\n> Yes.\n\nI've added two of those in the attached.\n\nI also changed the way the delimiter stuff works as the exiting code\nseems to want to avoid having a bool flag to record if we're adding\nthe first item. The change I'm making means the bool flag is now\nrequired, so we may as well use that flag to deal with the delimiter\nappend too.\n\nDavid", "msg_date": "Fri, 8 Mar 2024 15:12:46 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" }, { "msg_contents": "On Fri, Mar 8, 2024 at 10:13 AM David Rowley <[email protected]> wrote:\n\n> The fix could also be to use deparseConst() in appendOrderByClause()\n> and have that handle Const EquivalenceMember instead. I'd rather just\n> skip them. To me, that seems less risky than ensuring deparseConst()\n> handles all Const types correctly.\n\n\nI've looked at this patch a bit. I once wondered why we don't check\npathkey->pk_eclass->ec_has_const with EC_MUST_BE_REDUNDANT to see if the\npathkey is not needed. Then I realized that a child member would not be\nmarked as constant even if the child expr is a Const, as explained in\nadd_child_rel_equivalences().\n\nBTW, I wonder if it is possible that we have a pseudoconstant expression\nthat is not of type Const. In such cases we would need to check\n'bms_is_empty(pull_varnos(em_expr))' instead of 'IsA(em_expr, Const)'.\nHowever, I'm unable to think of such an expression in this context.\n\nThe patch looks good to me otherwise.\n\nThanks\nRichard\n\nOn Fri, Mar 8, 2024 at 10:13 AM David Rowley <[email protected]> wrote:\nThe fix could also be to use deparseConst() in appendOrderByClause()\nand have that handle Const EquivalenceMember instead.  I'd rather just\nskip them. To me, that seems less risky than ensuring deparseConst()\nhandles all Const types correctly.I've looked at this patch a bit.  I once wondered why we don't checkpathkey->pk_eclass->ec_has_const with EC_MUST_BE_REDUNDANT to see if thepathkey is not needed.  
Then I realized that a child member would not bemarked as constant even if the child expr is a Const, as explained inadd_child_rel_equivalences().BTW, I wonder if it is possible that we have a pseudoconstant expressionthat is not of type Const.  In such cases we would need to check'bms_is_empty(pull_varnos(em_expr))' instead of 'IsA(em_expr, Const)'.However, I'm unable to think of such an expression in this context.The patch looks good to me otherwise.ThanksRichard", "msg_date": "Fri, 8 Mar 2024 18:14:00 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" }, { "msg_contents": "On Fri, 8 Mar 2024 at 23:14, Richard Guo <[email protected]> wrote:\n> I've looked at this patch a bit. I once wondered why we don't check\n> pathkey->pk_eclass->ec_has_const with EC_MUST_BE_REDUNDANT to see if the\n> pathkey is not needed. Then I realized that a child member would not be\n> marked as constant even if the child expr is a Const, as explained in\n> add_child_rel_equivalences().\n\nThis situation where the child member is a Const but the parent isn't\nis unique to UNION ALL queries. The only other cases where we have\nchild members are with partitioned and inheritance tables. In those\ncases, the parent member just maps to the equivalent child member\nreplacing parent Vars with the corresponding child Var according to\nthe column mapping between the parent and child. It might be nice if\npartitioning supported mapping to a Const as in many cases that could\nsave storing the same value in the table every time, but we don't\nsupport that, so this can only happen with UNION ALL queries.\n\n> BTW, I wonder if it is possible that we have a pseudoconstant expression\n> that is not of type Const. In such cases we would need to check\n> 'bms_is_empty(pull_varnos(em_expr))' instead of 'IsA(em_expr, Const)'.\n> However, I'm unable to think of such an expression in this context.\n\nI can't see how there'd be any problems with a misinterpretation of a\npseudoconstant value as an ordinal column position on the remote\nserver. Surely it's only actual \"Const\" node types that we're just\ngoing to call the type's output function which risks it yielding a\nstring of digits and the remote server thinking that we must mean an\nordinal column position.\n\n> The patch looks good to me otherwise.\n\nThanks\n\nDavid\n\n\n", "msg_date": "Mon, 11 Mar 2024 10:56:35 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" }, { "msg_contents": "On Fri, Mar 8, 2024 at 7:43 AM David Rowley <[email protected]> wrote:\n\n> On Fri, 8 Mar 2024 at 00:54, Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Thu, Mar 7, 2024 at 4:39 PM David Rowley <[email protected]>\n> wrote:\n> >> I think the fix should go in appendOrderByClause(). It's at that\n> >> point we look for the EquivalenceMember for the relation and can\n> >> easily discover if the em_expr is a Const. I think we can safely just\n> >> skip doing any ORDER BY <const> stuff and not worry about if the\n> >> literal format of the const will appear as a reference to an ordinal\n> >> column position in the ORDER BY clause.\n> >\n> > deparseSortGroupClause() calls deparseConst() with showtype = 1.\n> appendOrderByClause() may want to do something similar for consistency. 
Or\n> remove it from deparseSortGroupClause() as well?\n>\n> The fix could also be to use deparseConst() in appendOrderByClause()\n> and have that handle Const EquivalenceMember instead. I'd rather just\n> skip them. To me, that seems less risky than ensuring deparseConst()\n> handles all Const types correctly.\n>\n> Also, as far as adjusting GROUP BY to eliminate Consts, I don't think\n> that's material for a bug fix. If we want to consider doing that,\n> that's for master only.\n>\n\nIf appendOrderByClause() would have been using deparseConst() since the\nbeginning this bug would not be there. Instead of maintaining two different\nways of deparsing ORDER BY clause, we could maintain just one. I think we\nshould unify those. If we should do it in only master be it so. I am fine\nto leave back branches with two methods.\n\n\n>\n> >> I wonder if we need a test...\n> >\n> > Yes.\n>\n> I've added two of those in the attached.\n>\n> Thanks. They look fine to me.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Mar 8, 2024 at 7:43 AM David Rowley <[email protected]> wrote:On Fri, 8 Mar 2024 at 00:54, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Mar 7, 2024 at 4:39 PM David Rowley <[email protected]> wrote:\n>> I think the fix should go in appendOrderByClause().  It's at that\n>> point we look for the EquivalenceMember for the relation and can\n>> easily discover if the em_expr is a Const.  I think we can safely just\n>> skip doing any ORDER BY <const> stuff and not worry about if the\n>> literal format of the const will appear as a reference to an ordinal\n>> column position in the ORDER BY clause.\n>\n> deparseSortGroupClause() calls deparseConst() with showtype = 1. appendOrderByClause() may want to do something similar for consistency. Or remove it from deparseSortGroupClause() as well?\n\nThe fix could also be to use deparseConst() in appendOrderByClause()\nand have that handle Const EquivalenceMember instead.  I'd rather just\nskip them. To me, that seems less risky than ensuring deparseConst()\nhandles all Const types correctly.\n\nAlso, as far as adjusting GROUP BY to eliminate Consts, I don't think\nthat's material for a bug fix. If we want to consider doing that,\nthat's for master only.If appendOrderByClause() would have been using deparseConst() since the beginning this bug would not be there. Instead of maintaining two different ways of deparsing ORDER BY clause, we could maintain just one. I think we should unify those. If we should do it in only master be it so. I am fine to leave back branches with two methods. \n\n>> I wonder if we need a test...\n>\n> Yes.\n\nI've added two of those in the attached.\nThanks. They look fine to me. -- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 11 Mar 2024 11:33:24 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid query generated by postgres_fdw with UNION ALL and ORDER\n BY" } ]
[ { "msg_contents": "This patch adds a link to the \"attach partition\" command section\n(similar to the detach partition link above it) as well as a link to\n\"create table like\" as both commands contain additional information\nthat users should review beyond what is laid out in this section.\nThere's also a couple of wordsmiths in nearby areas to improve\nreadability.\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Thu, 7 Mar 2024 12:19:01 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "DOCS: add helpful partitioning links" }, { "msg_contents": "Hi Robert,\n\nOn Thu, Mar 7, 2024 at 10:49 PM Robert Treat <[email protected]> wrote:\n\n> This patch adds a link to the \"attach partition\" command section\n> (similar to the detach partition link above it) as well as a link to\n> \"create table like\" as both commands contain additional information\n> that users should review beyond what is laid out in this section.\n> There's also a couple of wordsmiths in nearby areas to improve\n> readability.\n>\n\nThanks.\n\nThe patch gives error when building html\n\nddl.sgml:4300: element link: validity error : No declaration for attribute\nlinked of element link\n <link linked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ...\nLIKE</l\n ^\nddl.sgml:4300: element link: validity error : Element link does not carry\nattribute linkend\nnked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ...\nLIKE</literal></link\n\n ^\nmake[1]: *** [Makefile:72: postgres-full.xml] Error 4\nmake[1]: *** Deleting file 'postgres-full.xml'\nmake[1]: Leaving directory\n'/home/ashutosh/work/units/pg_review/coderoot/pg/doc/src/sgml'\nmake: *** [Makefile:8: html] Error 2\n\nI have fixed in the attached.\n\n\n- As an alternative, it is sometimes more convenient to create the\n- new table outside the partition structure, and attach it as a\n+ As an alternative, it is sometimes more convenient to create a\n+ new table outside of the partition structure, and attach it as a\n\nit uses article \"the\" for \"new table\" since it's referring to the partition\nmentioned in the earlier example. I don't think using \"a\" is correct.\n\n\"outside\" seems better than \"outside of\". See\nhttps://english.stackexchange.com/questions/9700/outside-or-outside-of. But\nI think the meaning of the sentence will be more clear if we rephrase it as\nin the attached patch.\n\n- convenient, as not only will the existing partitions become indexed,\nbut\n- also any partitions that are created in the future will. One\nlimitation is\n+ convenient as not only will the existing partitions become indexed,\nbut\n+ any partitions created in the future will as well. One limitation is\n\nI am finding the current construct hard to read. The comma is misplaced as\nyou have pointed out. The pair of commas break the \"not only\" ... \"but\nalso\" construct. I have tried to simplify the sentence in the attached.\nPlease review.\n\n- the partitioned table; such an index is marked invalid, and the\npartitions\n- do not get the index applied automatically. The indexes on\npartitions can\n- be created individually using <literal>CONCURRENTLY</literal>, and\nthen\n+ the partitioned table; such an index is marked invalid and the\npartitions\n+ do not get the index applied automatically. The partition indexes can\n\n\"indexes on partition\" is clearer than \"partition index\". 
Fixed in the\nattached patch.\n\nPlease review.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 14 Mar 2024 21:45:28 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Thu, Mar 14, 2024 at 12:15 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi Robert,\n>\n> On Thu, Mar 7, 2024 at 10:49 PM Robert Treat <[email protected]> wrote:\n>>\n>> This patch adds a link to the \"attach partition\" command section\n>> (similar to the detach partition link above it) as well as a link to\n>> \"create table like\" as both commands contain additional information\n>> that users should review beyond what is laid out in this section.\n>> There's also a couple of wordsmiths in nearby areas to improve\n>> readability.\n>\n>\n> Thanks.\n>\n> The patch gives error when building html\n>\n> ddl.sgml:4300: element link: validity error : No declaration for attribute linked of element link\n> <link linked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</l\n> ^\n> ddl.sgml:4300: element link: validity error : Element link does not carry attribute linkend\n> nked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</literal></link\n> ^\n> make[1]: *** [Makefile:72: postgres-full.xml] Error 4\n> make[1]: *** Deleting file 'postgres-full.xml'\n> make[1]: Leaving directory '/home/ashutosh/work/units/pg_review/coderoot/pg/doc/src/sgml'\n> make: *** [Makefile:8: html] Error 2\n>\n> I have fixed in the attached.\n>\n\nDoh! Thanks!\n\n>\n> - As an alternative, it is sometimes more convenient to create the\n> - new table outside the partition structure, and attach it as a\n> + As an alternative, it is sometimes more convenient to create a\n> + new table outside of the partition structure, and attach it as a\n>\n> it uses article \"the\" for \"new table\" since it's referring to the partition mentioned in the earlier example. I don't think using \"a\" is correct.\n>\n\nI think this section has a general problem of mingling the terms table\nand partition in a way they can be confusing.\n\nIn this case, you have to infer that the term \"the new table\" is\nreferring not to the only table mentioned in the previous paragraph\n(the partitioned table), but actually to the partition mentioned in\nthe previous paragraph. For long term postgres folks the idea that\npartitions are tables isn't a big deal, but that isn't how folks\ncoming from other databases see things. So I lean towards \"a new\ntable\" because we are specifically talking about an alternative to the\nabove paragraph... ie. don't make a new partition, make a new table.\nAnd tbh I think that wording (create a new table and attach it as a\npartition) is still better than the wording in your patch, because the\n\"new partition\" you are creating isn't a partition until it is\nattached; it is just a new table.\n\n> \"outside\" seems better than \"outside of\". See https://english.stackexchange.com/questions/9700/outside-or-outside-of. 
But I think the meaning of the sentence will be more clear if we rephrase it as in the attached patch.\n>\n\nThis didn't really clarify anything for me, as the discussion in that\nlink seems to be around the usage of the term wrt physical location,\nand it is much less clear about the context of a logical construct.\nGranted, your patch removes that, though I think now I'd lean toward\nusing the phrase \"separate from\".\n\n> - convenient, as not only will the existing partitions become indexed, but\n> - also any partitions that are created in the future will. One limitation is\n> + convenient as not only will the existing partitions become indexed, but\n> + any partitions created in the future will as well. One limitation is\n>\n> I am finding the current construct hard to read. The comma is misplaced as you have pointed out. The pair of commas break the \"not only\" ... \"but also\" construct. I have tried to simplify the sentence in the attached. Please review.\n>\n> - the partitioned table; such an index is marked invalid, and the partitions\n> - do not get the index applied automatically. The indexes on partitions can\n> - be created individually using <literal>CONCURRENTLY</literal>, and then\n> + the partitioned table; such an index is marked invalid and the partitions\n> + do not get the index applied automatically. The partition indexes can\n>\n> \"indexes on partition\" is clearer than \"partition index\". Fixed in the attached patch.\n>\n> Please review.\n\nThe language around all this is certainly tricky (like, what is a\npartitioned index vs parent index?), and one thing I'd certainly try\nto avoid is using any words like \"inherited\" which is also overloaded\nin this context. In any case, I took in all the above and had a stab\nat a v3\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Mon, 18 Mar 2024 13:22:29 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "Hi Robert,\n\n\nOn Mon, Mar 18, 2024 at 10:52 PM Robert Treat <[email protected]> wrote:\n\n> On Thu, Mar 14, 2024 at 12:15 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > Hi Robert,\n> >\n> > On Thu, Mar 7, 2024 at 10:49 PM Robert Treat <[email protected]> wrote:\n> >>\n> >> This patch adds a link to the \"attach partition\" command section\n> >> (similar to the detach partition link above it) as well as a link to\n> >> \"create table like\" as both commands contain additional information\n> >> that users should review beyond what is laid out in this section.\n> >> There's also a couple of wordsmiths in nearby areas to improve\n> >> readability.\n> >\n> >\n> > Thanks.\n> >\n> > The patch gives error when building html\n> >\n> > ddl.sgml:4300: element link: validity error : No declaration for\n> attribute linked of element link\n> > <link linked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ...\n> LIKE</l\n> > ^\n> > ddl.sgml:4300: element link: validity error : Element link does not\n> carry attribute linkend\n> > nked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ...\n> LIKE</literal></link\n> >\n> ^\n> > make[1]: *** [Makefile:72: postgres-full.xml] Error 4\n> > make[1]: *** Deleting file 'postgres-full.xml'\n> > make[1]: Leaving directory\n> '/home/ashutosh/work/units/pg_review/coderoot/pg/doc/src/sgml'\n> > make: *** [Makefile:8: html] Error 2\n> >\n> > I have fixed in the attached.\n> >\n>\n> Doh! 
Thanks!\n>\n> >\n> > - As an alternative, it is sometimes more convenient to create the\n> > - new table outside the partition structure, and attach it as a\n> > + As an alternative, it is sometimes more convenient to create a\n> > + new table outside of the partition structure, and attach it as a\n> >\n> > it uses article \"the\" for \"new table\" since it's referring to the\n> partition mentioned in the earlier example. I don't think using \"a\" is\n> correct.\n> >\n>\n> I think this section has a general problem of mingling the terms table\n> and partition in a way they can be confusing.\n>\n> In this case, you have to infer that the term \"the new table\" is\n> referring not to the only table mentioned in the previous paragraph\n> (the partitioned table), but actually to the partition mentioned in\n> the previous paragraph. For long term postgres folks the idea that\n> partitions are tables isn't a big deal, but that isn't how folks\n> coming from other databases see things. So I lean towards \"a new\n> table\" because we are specifically talking about an alternative to the\n> above paragraph... ie. don't make a new partition, make a new table.\n> And tbh I think that wording (create a new table and attach it as a\n> partition) is still better than the wording in your patch, because the\n> \"new partition\" you are creating isn't a partition until it is\n> attached; it is just a new table.\n>\n> > \"outside\" seems better than \"outside of\". See\n> https://english.stackexchange.com/questions/9700/outside-or-outside-of.\n> But I think the meaning of the sentence will be more clear if we rephrase\n> it as in the attached patch.\n> >\n>\n> This didn't really clarify anything for me, as the discussion in that\n> link seems to be around the usage of the term wrt physical location,\n> and it is much less clear about the context of a logical construct.\n> Granted, your patch removes that, though I think now I'd lean toward\n> using the phrase \"separate from\".\n>\n> > - convenient, as not only will the existing partitions become\n> indexed, but\n> > - also any partitions that are created in the future will. One\n> limitation is\n> > + convenient as not only will the existing partitions become\n> indexed, but\n> > + any partitions created in the future will as well. One limitation\n> is\n> >\n> > I am finding the current construct hard to read. The comma is misplaced\n> as you have pointed out. The pair of commas break the \"not only\" ... \"but\n> also\" construct. I have tried to simplify the sentence in the attached.\n> Please review.\n> >\n> > - the partitioned table; such an index is marked invalid, and the\n> partitions\n> > - do not get the index applied automatically. The indexes on\n> partitions can\n> > - be created individually using <literal>CONCURRENTLY</literal>, and\n> then\n> > + the partitioned table; such an index is marked invalid and the\n> partitions\n> > + do not get the index applied automatically. The partition indexes\n> can\n> >\n> > \"indexes on partition\" is clearer than \"partition index\". Fixed in the\n> attached patch.\n> >\n> > Please review.\n>\n> The language around all this is certainly tricky (like, what is a\n> partitioned index vs parent index?), and one thing I'd certainly try\n> to avoid is using any words like \"inherited\" which is also overloaded\n> in this context. 
In any case, I took in all the above and had a stab\n> at a v3\n>\n>\nThe patch doesn't apply cleanly\n$ git apply /tmp/improve-partition-links_v3.patch\nerror: patch failed: doc/src/sgml/ddl.sgml:4266\nerror: doc/src/sgml/ddl.sgml: patch does not apply\n\n$ patch -p1 < /tmp/improve-partition-links_v3.patch\npatching file doc/src/sgml/ddl.sgml\nHunk #1 FAILED at 4266.\nHunk #2 succeeded at 4333 (offset 12 lines).\nHunk #3 FAILED at 4332.\n2 out of 3 hunks FAILED -- saving rejects to file doc/src/sgml/ddl.sgml.rej\n\n+ As an alternative to creating new partitions, it is sometimes more\n\nedit: creating a new partition .. rest of the sentence is in singular.\n\n+ convenient to create a new table seperate from the partition structure\n+ and attach it as a partition later. This allows new data to be loaded,\n+ checked, and transformed prior to it appearing in the partitioned\ntable.\n\nRest of it looks good to me.\n\nPlease add it to the next commitfest. Most likely the patch will be\nconsidered for PG 17 itself, but we won't forget it if it's in CF.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Robert,On Mon, Mar 18, 2024 at 10:52 PM Robert Treat <[email protected]> wrote:On Thu, Mar 14, 2024 at 12:15 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi Robert,\n>\n> On Thu, Mar 7, 2024 at 10:49 PM Robert Treat <[email protected]> wrote:\n>>\n>> This patch adds a link to the \"attach partition\" command section\n>> (similar to the detach partition link above it) as well as a link to\n>> \"create table like\" as both commands contain additional information\n>> that users should review beyond what is laid out in this section.\n>> There's also a couple of wordsmiths in nearby areas to improve\n>> readability.\n>\n>\n> Thanks.\n>\n> The patch gives error when building html\n>\n> ddl.sgml:4300: element link: validity error : No declaration for attribute linked of element link\n>      <link linked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</l\n>                                               ^\n> ddl.sgml:4300: element link: validity error : Element link does not carry attribute linkend\n> nked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</literal></link\n>                                                                                ^\n> make[1]: *** [Makefile:72: postgres-full.xml] Error 4\n> make[1]: *** Deleting file 'postgres-full.xml'\n> make[1]: Leaving directory '/home/ashutosh/work/units/pg_review/coderoot/pg/doc/src/sgml'\n> make: *** [Makefile:8: html] Error 2\n>\n> I have fixed in the attached.\n>\n\nDoh! Thanks!\n\n>\n> -     As an alternative, it is sometimes more convenient to create the\n> -     new table outside the partition structure, and attach it as a\n> +     As an alternative, it is sometimes more convenient to create a\n> +     new table outside of the partition structure, and attach it as a\n>\n> it uses article \"the\" for \"new table\" since it's referring to the partition mentioned in the earlier example. I don't think using \"a\" is correct.\n>\n\nI think this section has a general problem of mingling the terms table\nand partition in a way they can be confusing.\n\nIn this case, you have to infer that the term \"the new table\" is\nreferring not to the only table mentioned in the previous paragraph\n(the partitioned table), but actually to the partition mentioned in\nthe previous paragraph. 
For long term postgres folks the idea that\npartitions are tables isn't a big deal, but that isn't how folks\ncoming from other databases see things. So I lean towards \"a new\ntable\" because we are specifically talking about an alternative to the\nabove paragraph... ie. don't make a new partition, make a new table.\nAnd tbh I think that wording (create a new table and attach it as a\npartition) is still better than the wording in your patch, because the\n\"new partition\" you are creating isn't a partition until it is\nattached; it is just a new table.\n\n> \"outside\" seems better than \"outside of\". See https://english.stackexchange.com/questions/9700/outside-or-outside-of. But I think the meaning of the sentence will be more clear if we rephrase it as in the attached patch.\n>\n\nThis didn't really clarify anything for me, as the discussion in that\nlink seems to be around the usage of the term wrt physical location,\nand it is much less clear about the context of a logical construct.\nGranted, your patch removes that, though I think now I'd lean toward\nusing the phrase \"separate from\".\n\n> -     convenient, as not only will the existing partitions become indexed, but\n> -     also any partitions that are created in the future will.  One limitation is\n> +     convenient as not only will the existing partitions become indexed, but\n> +     any partitions created in the future will as well.  One limitation is\n>\n> I am finding the current construct hard to read. The comma is misplaced as you have pointed out. The pair of commas break the \"not only\" ... \"but also\" construct. I have tried to simplify the sentence in the attached. Please review.\n>\n> -     the partitioned table; such an index is marked invalid, and the partitions\n> -     do not get the index applied automatically.  The indexes on partitions can\n> -     be created individually using <literal>CONCURRENTLY</literal>, and then\n> +     the partitioned table; such an index is marked invalid and the partitions\n> +     do not get the index applied automatically.  The partition indexes can\n>\n> \"indexes on partition\" is clearer than \"partition index\". Fixed in the attached patch.\n>\n> Please review.\n\nThe language around all this is certainly tricky (like, what is a\npartitioned index vs parent index?), and one thing I'd certainly try\nto avoid is using any words like \"inherited\" which is also overloaded\nin this context. In any case, I took in all the above and had a stab\nat a v3\nThe patch doesn't apply cleanly$ git apply /tmp/improve-partition-links_v3.patcherror: patch failed: doc/src/sgml/ddl.sgml:4266error: doc/src/sgml/ddl.sgml: patch does not apply$ patch -p1 < /tmp/improve-partition-links_v3.patch patching file doc/src/sgml/ddl.sgmlHunk #1 FAILED at 4266.Hunk #2 succeeded at 4333 (offset 12 lines).Hunk #3 FAILED at 4332.2 out of 3 hunks FAILED -- saving rejects to file doc/src/sgml/ddl.sgml.rej+     As an alternative to creating new partitions, it is sometimes moreedit: creating a new partition .. rest of the sentence is in singular.+     convenient to create a new table seperate from the partition structure+     and attach it as a partition later. This allows new data to be loaded,+     checked, and transformed prior to it appearing in the partitioned table.Rest of it looks good to me.Please add it to the next commitfest. 
Most likely the patch will be considered for PG 17 itself, but we won't forget it if it's in CF.-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 19 Mar 2024 12:38:21 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Tue, Mar 19, 2024 at 3:08 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi Robert,\n>\n>\n> On Mon, Mar 18, 2024 at 10:52 PM Robert Treat <[email protected]> wrote:\n>>\n>> On Thu, Mar 14, 2024 at 12:15 PM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> >\n>> > Hi Robert,\n>> >\n>> > On Thu, Mar 7, 2024 at 10:49 PM Robert Treat <[email protected]> wrote:\n>> >>\n>> >> This patch adds a link to the \"attach partition\" command section\n>> >> (similar to the detach partition link above it) as well as a link to\n>> >> \"create table like\" as both commands contain additional information\n>> >> that users should review beyond what is laid out in this section.\n>> >> There's also a couple of wordsmiths in nearby areas to improve\n>> >> readability.\n>> >\n>> >\n>> > Thanks.\n>> >\n>> > The patch gives error when building html\n>> >\n>> > ddl.sgml:4300: element link: validity error : No declaration for attribute linked of element link\n>> > <link linked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</l\n>> > ^\n>> > ddl.sgml:4300: element link: validity error : Element link does not carry attribute linkend\n>> > nked=\"sql-createtable-parms-like\"><literal>CREATE TABLE ... LIKE</literal></link\n>> > ^\n>> > make[1]: *** [Makefile:72: postgres-full.xml] Error 4\n>> > make[1]: *** Deleting file 'postgres-full.xml'\n>> > make[1]: Leaving directory '/home/ashutosh/work/units/pg_review/coderoot/pg/doc/src/sgml'\n>> > make: *** [Makefile:8: html] Error 2\n>> >\n>> > I have fixed in the attached.\n>> >\n>>\n>> Doh! Thanks!\n>>\n>> >\n>> > - As an alternative, it is sometimes more convenient to create the\n>> > - new table outside the partition structure, and attach it as a\n>> > + As an alternative, it is sometimes more convenient to create a\n>> > + new table outside of the partition structure, and attach it as a\n>> >\n>> > it uses article \"the\" for \"new table\" since it's referring to the partition mentioned in the earlier example. I don't think using \"a\" is correct.\n>> >\n>>\n>> I think this section has a general problem of mingling the terms table\n>> and partition in a way they can be confusing.\n>>\n>> In this case, you have to infer that the term \"the new table\" is\n>> referring not to the only table mentioned in the previous paragraph\n>> (the partitioned table), but actually to the partition mentioned in\n>> the previous paragraph. For long term postgres folks the idea that\n>> partitions are tables isn't a big deal, but that isn't how folks\n>> coming from other databases see things. So I lean towards \"a new\n>> table\" because we are specifically talking about an alternative to the\n>> above paragraph... ie. don't make a new partition, make a new table.\n>> And tbh I think that wording (create a new table and attach it as a\n>> partition) is still better than the wording in your patch, because the\n>> \"new partition\" you are creating isn't a partition until it is\n>> attached; it is just a new table.\n>>\n>> > \"outside\" seems better than \"outside of\". See https://english.stackexchange.com/questions/9700/outside-or-outside-of. 
But I think the meaning of the sentence will be more clear if we rephrase it as in the attached patch.\n>> >\n>>\n>> This didn't really clarify anything for me, as the discussion in that\n>> link seems to be around the usage of the term wrt physical location,\n>> and it is much less clear about the context of a logical construct.\n>> Granted, your patch removes that, though I think now I'd lean toward\n>> using the phrase \"separate from\".\n>>\n>> > - convenient, as not only will the existing partitions become indexed, but\n>> > - also any partitions that are created in the future will. One limitation is\n>> > + convenient as not only will the existing partitions become indexed, but\n>> > + any partitions created in the future will as well. One limitation is\n>> >\n>> > I am finding the current construct hard to read. The comma is misplaced as you have pointed out. The pair of commas break the \"not only\" ... \"but also\" construct. I have tried to simplify the sentence in the attached. Please review.\n>> >\n>> > - the partitioned table; such an index is marked invalid, and the partitions\n>> > - do not get the index applied automatically. The indexes on partitions can\n>> > - be created individually using <literal>CONCURRENTLY</literal>, and then\n>> > + the partitioned table; such an index is marked invalid and the partitions\n>> > + do not get the index applied automatically. The partition indexes can\n>> >\n>> > \"indexes on partition\" is clearer than \"partition index\". Fixed in the attached patch.\n>> >\n>> > Please review.\n>>\n>> The language around all this is certainly tricky (like, what is a\n>> partitioned index vs parent index?), and one thing I'd certainly try\n>> to avoid is using any words like \"inherited\" which is also overloaded\n>> in this context. In any case, I took in all the above and had a stab\n>> at a v3\n>>\n>\n> The patch doesn't apply cleanly\n> $ git apply /tmp/improve-partition-links_v3.patch\n> error: patch failed: doc/src/sgml/ddl.sgml:4266\n> error: doc/src/sgml/ddl.sgml: patch does not apply\n>\n> $ patch -p1 < /tmp/improve-partition-links_v3.patch\n> patching file doc/src/sgml/ddl.sgml\n> Hunk #1 FAILED at 4266.\n> Hunk #2 succeeded at 4333 (offset 12 lines).\n> Hunk #3 FAILED at 4332.\n> 2 out of 3 hunks FAILED -- saving rejects to file doc/src/sgml/ddl.sgml.rej\n>\n> + As an alternative to creating new partitions, it is sometimes more\n>\n> edit: creating a new partition .. rest of the sentence is in singular.\n>\n> + convenient to create a new table seperate from the partition structure\n> + and attach it as a partition later. This allows new data to be loaded,\n> + checked, and transformed prior to it appearing in the partitioned table.\n>\n> Rest of it looks good to me.\n>\n> Please add it to the next commitfest. 
Most likely the patch will be considered for PG 17 itself, but we won't forget it if it's in CF.\n>\n\nI've put it in the next commitfest with target version of 17, and I've\nadded you as a reviewer :-)\n\nAlso, attached is an updated patch with your change above which should\napply cleanly to the current git master.\n\nThanks again,\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Tue, 19 Mar 2024 09:08:40 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Tue, Mar 19, 2024 at 6:38 PM Robert Treat <[email protected]> wrote:\n\n>\n> I've put it in the next commitfest with target version of 17, and I've\n> added you as a reviewer :-)\n>\n>\nThanks.\n\n\n> Also, attached is an updated patch with your change above which should\n> apply cleanly to the current git master.\n>\n\nIt did apply for me now.\n\nThe HTML renders good, the links work as expected.\n\nThe CREATE TABLE ... LIKE command\nI think the original word \"option\" instead of \"command\" is better since we\nare talking about LIKE as an option to CREATE TABLE instead of CREATE TABLE\ncommand.\n\n+ but any future attached or created partitions will be indexed as well.\n\nI think just \"any future partitions will be indexed as well\" would suffice,\nno need to mention whether they were created or attached.\n\n+ One limitation when creating new indexes on partitioned tables is\nthat it\n+ is not possible to use the <literal>CONCURRENTLY</literal>\n+ qualifier when creating such a partitioned index. To avoid long\n\nThe sentence uses two different phrases, \"indexes on partitioned tables\"\nand \"partitioned index\", for the same thing in the same sentence. Probably\nit is better to leave original sentence as is.\n\nBut I think it's time for a committer to take a look at this. Please feel\nfree to address the above comments if you agree with them. Marking this as\nready for committer.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Tue, Mar 19, 2024 at 6:38 PM Robert Treat <[email protected]> wrote:\n\nI've put it in the next commitfest with target version of 17, and I've\nadded you as a reviewer :-)\nThanks. \nAlso, attached is an updated patch with your change above which should\napply cleanly to the current git master.It did apply for me now.The HTML renders good, the links work as expected.\n The\n CREATE TABLE ... LIKE\n commandI think the original word \"option\" instead of \"command\" is better since we are talking about LIKE as an option to CREATE TABLE instead of CREATE TABLE command.+     but any future attached or created partitions will be indexed as well.I think just \"any future partitions will be indexed as well\" would suffice, no need to mention whether they were created or attached.+     One limitation when creating new indexes on partitioned tables is that it+     is not possible to use the <literal>CONCURRENTLY</literal>+     qualifier when creating such a partitioned index.  To avoid longThe sentence uses two different phrases, \"indexes on partitioned tables\" and \"partitioned index\", for the same thing in the same sentence. Probably it is better to leave original sentence as is.But I think it's time for a committer to take a look at this. Please feel free to address the above comments if you agree with them. Marking this as ready for committer. 
-- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 20 Mar 2024 17:22:47 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Wed, Mar 20, 2024 at 5:22 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n>\n>\n> On Tue, Mar 19, 2024 at 6:38 PM Robert Treat <[email protected]> wrote:\n>\n>>\n>> I've put it in the next commitfest with target version of 17, and I've\n>> added you as a reviewer :-)\n>>\n>>\n> Thanks.\n>\n>\n>> Also, attached is an updated patch with your change above which should\n>> apply cleanly to the current git master.\n>>\n>\n> It did apply for me now.\n>\n> The HTML renders good, the links work as expected.\n>\n> The CREATE TABLE ... LIKE command\n> I think the original word \"option\" instead of \"command\" is better since we\n> are talking about LIKE as an option to CREATE TABLE instead of CREATE TABLE\n> command.\n>\n> + but any future attached or created partitions will be indexed as\n> well.\n>\n> I think just \"any future partitions will be indexed as well\" would\n> suffice, no need to mention whether they were created or attached.\n>\n> + One limitation when creating new indexes on partitioned tables is\n> that it\n> + is not possible to use the <literal>CONCURRENTLY</literal>\n> + qualifier when creating such a partitioned index. To avoid long\n>\n> The sentence uses two different phrases, \"indexes on partitioned tables\"\n> and \"partitioned index\", for the same thing in the same sentence. Probably\n> it is better to leave original sentence as is.\n>\n> But I think it's time for a committer to take a look at this. Please feel\n> free to address the above comments if you agree with them. Marking this as\n> ready for committer.\n>\n>\nThe patch changes things not directly related to $Subject. It will be good\nto add a commit message to the patch describing what are those changes\nabout. I observe that all of them are in section \"partition maintenance\".\nhttps://www.postgresql.org/docs/16/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-MAINTENANCE.\nDo you see any more edits required in that section?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Wed, Mar 20, 2024 at 5:22 PM Ashutosh Bapat <[email protected]> wrote:On Tue, Mar 19, 2024 at 6:38 PM Robert Treat <[email protected]> wrote:\n\nI've put it in the next commitfest with target version of 17, and I've\nadded you as a reviewer :-)\nThanks. \nAlso, attached is an updated patch with your change above which should\napply cleanly to the current git master.It did apply for me now.The HTML renders good, the links work as expected.\n The\n CREATE TABLE ... LIKE\n commandI think the original word \"option\" instead of \"command\" is better since we are talking about LIKE as an option to CREATE TABLE instead of CREATE TABLE command.+     but any future attached or created partitions will be indexed as well.I think just \"any future partitions will be indexed as well\" would suffice, no need to mention whether they were created or attached.+     One limitation when creating new indexes on partitioned tables is that it+     is not possible to use the <literal>CONCURRENTLY</literal>+     qualifier when creating such a partitioned index.  To avoid longThe sentence uses two different phrases, \"indexes on partitioned tables\" and \"partitioned index\", for the same thing in the same sentence. Probably it is better to leave original sentence as is.But I think it's time for a committer to take a look at this. 
Please feel free to address the above comments if you agree with them. Marking this as ready for committer.The patch changes things not directly related to $Subject. It will be good to add a commit message to the patch describing what are those changes about. I observe that all of them are in section \"partition maintenance\". https://www.postgresql.org/docs/16/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-MAINTENANCE. Do you see any more edits required in that section?-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 21 Mar 2024 16:57:08 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Thu, Mar 21, 2024 at 7:27 AM Ashutosh Bapat\n<[email protected]> wrote:\n> On Wed, Mar 20, 2024 at 5:22 PM Ashutosh Bapat <[email protected]> wrote:\n>> On Tue, Mar 19, 2024 at 6:38 PM Robert Treat <[email protected]> wrote:\n>>>\n>>>\n>>> I've put it in the next commitfest with target version of 17, and I've\n>>> added you as a reviewer :-)\n>>>\n>>\n>> Thanks.\n>>\n>>>\n>>> Also, attached is an updated patch with your change above which should\n>>> apply cleanly to the current git master.\n>>\n>>\n>> It did apply for me now.\n>>\n>> The HTML renders good, the links work as expected.\n>>\n>> The CREATE TABLE ... LIKE command\n>> I think the original word \"option\" instead of \"command\" is better since we are talking about LIKE as an option to CREATE TABLE instead of CREATE TABLE command.\n>>\n>> + but any future attached or created partitions will be indexed as well.\n>>\n>> I think just \"any future partitions will be indexed as well\" would suffice, no need to mention whether they were created or attached.\n>>\n>> + One limitation when creating new indexes on partitioned tables is that it\n>> + is not possible to use the <literal>CONCURRENTLY</literal>\n>> + qualifier when creating such a partitioned index. To avoid long\n>>\n>> The sentence uses two different phrases, \"indexes on partitioned tables\" and \"partitioned index\", for the same thing in the same sentence. Probably it is better to leave original sentence as is.\n>>\n>> But I think it's time for a committer to take a look at this. Please feel free to address the above comments if you agree with them. Marking this as ready for committer.\n>>\n>\n> The patch changes things not directly related to $Subject. It will be good to add a commit message to the patch describing what are those changes about. I observe that all of them are in section \"partition maintenance\". https://www.postgresql.org/docs/16/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-MAINTENANCE. Do you see any more edits required in that section?\n>\n\nHeh, well, I had thought about some other possible improvements to\nthat section but hadn't quite worked them out, but you inspired me to\nhave another go of it ;-)\n\nv5 patch attached which I think further improves clarity/brevity of\nthis section. 
I've left the patch name the same for simplicity, but\nI'd agree that the commit would now be more along the lines of editing\n/ improvements / copyrighting of \"Partition Maintenance\" docs.\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Fri, 22 Mar 2024 13:28:04 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Fri, Mar 22, 2024 at 10:58 PM Robert Treat <[email protected]> wrote:\n\n>\n> v5 patch attached which I think further improves clarity/brevity of\n> this section. I've left the patch name the same for simplicity, but\n> I'd agree that the commit would now be more along the lines of editing\n> / improvements / copyrighting of \"Partition Maintenance\" docs.\n>\n\nRight. Minor suggestions.\n\n- It is recommended to drop the now-redundant <literal>CHECK</literal>\n- constraint after the <command>ATTACH PARTITION</command> is\ncomplete. If\n- the table being attached is itself a partitioned table, then each of\nits\n+ As illustrated above, it is recommended to avoid this scan by\ncreating a\n+ <literal>CHECK</literal> constraint on the to be attached table that\n\nInstead of \"to be attached table\", \"table to be attached\" reads better. You\nmay want to add \"as a partition\" after that.\n\n Similarly, if the partitioned table has a <literal>DEFAULT</literal>\n partition, it is recommended to create a <literal>CHECK</literal>\n constraint which excludes the to-be-attached partition's constraint.\nIf\n- this is not done then the <literal>DEFAULT</literal> partition will be\n+ this is not done, the <literal>DEFAULT</literal> partition must be\n\nI am not sure whether replacing \"will\" by \"must\" is correct. Usually I have\nseen \"will\" being used in such sentences, \"must\" seems appropriate given\nthe necessity.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Mar 22, 2024 at 10:58 PM Robert Treat <[email protected]> wrote:\n\nv5 patch attached which I think further improves clarity/brevity of\nthis section. I've left the patch name the same for simplicity, but\nI'd agree that the commit would now be more along the lines of editing\n/ improvements / copyrighting of \"Partition Maintenance\" docs.Right. Minor suggestions.-     It is recommended to drop the now-redundant <literal>CHECK</literal>-     constraint after the <command>ATTACH PARTITION</command> is complete.  If-     the table being attached is itself a partitioned table, then each of its+     As illustrated above, it is recommended to avoid this scan by creating a+     <literal>CHECK</literal> constraint on the to be attached table thatInstead of \"to be attached table\", \"table to be attached\" reads better. You may want to add \"as a partition\" after that.      Similarly, if the partitioned table has a <literal>DEFAULT</literal>      partition, it is recommended to create a <literal>CHECK</literal>      constraint which excludes the to-be-attached partition's constraint.  If-     this is not done then the <literal>DEFAULT</literal> partition will be+     this is not done, the <literal>DEFAULT</literal> partition must beI am not sure whether replacing \"will\" by \"must\" is correct. 
Usually I have seen \"will\" being used in such sentences, \"must\" seems appropriate given the necessity.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 25 Mar 2024 16:13:10 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Mon, Mar 25, 2024 at 6:43 AM Ashutosh Bapat\n<[email protected]> wrote:\n> On Fri, Mar 22, 2024 at 10:58 PM Robert Treat <[email protected]> wrote:\n>> v5 patch attached which I think further improves clarity/brevity of\n>> this section. I've left the patch name the same for simplicity, but\n>> I'd agree that the commit would now be more along the lines of editing\n>> / improvements / copyrighting of \"Partition Maintenance\" docs.\n>\n>\n> Right. Minor suggestions.\n>\n> - It is recommended to drop the now-redundant <literal>CHECK</literal>\n> - constraint after the <command>ATTACH PARTITION</command> is complete. If\n> - the table being attached is itself a partitioned table, then each of its\n> + As illustrated above, it is recommended to avoid this scan by creating a\n> + <literal>CHECK</literal> constraint on the to be attached table that\n>\n> Instead of \"to be attached table\", \"table to be attached\" reads better. You may want to add \"as a partition\" after that.\n>\n\nThat sounds more awkward to me, but I've done some rewording to avoid both.\n\n> Similarly, if the partitioned table has a <literal>DEFAULT</literal>\n> partition, it is recommended to create a <literal>CHECK</literal>\n> constraint which excludes the to-be-attached partition's constraint. If\n> - this is not done then the <literal>DEFAULT</literal> partition will be\n> + this is not done, the <literal>DEFAULT</literal> partition must be\n>\n> I am not sure whether replacing \"will\" by \"must\" is correct. Usually I have seen \"will\" being used in such sentences, \"must\" seems appropriate given the necessity.\n>\n\nOK\n\nUpdated patch attached.\n\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Wed, 27 Mar 2024 20:24:04 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "LGTM.\n\nThe commitfest entry is marked as RFC already.\n\nThanks for taking care of the comments.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Mar 28, 2024 at 5:54 AM Robert Treat <[email protected]> wrote:\n\n> On Mon, Mar 25, 2024 at 6:43 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > On Fri, Mar 22, 2024 at 10:58 PM Robert Treat <[email protected]> wrote:\n> >> v5 patch attached which I think further improves clarity/brevity of\n> >> this section. I've left the patch name the same for simplicity, but\n> >> I'd agree that the commit would now be more along the lines of editing\n> >> / improvements / copyrighting of \"Partition Maintenance\" docs.\n> >\n> >\n> > Right. Minor suggestions.\n> >\n> > - It is recommended to drop the now-redundant\n> <literal>CHECK</literal>\n> > - constraint after the <command>ATTACH PARTITION</command> is\n> complete. 
If\n> > -     the table being attached is itself a partitioned table, then each\n> of its\n> > +     As illustrated above, it is recommended to avoid this scan by\n> creating a\n> > +     <literal>CHECK</literal> constraint on the to be attached table\n> that\n> >\n> > Instead of \"to be attached table\", \"table to be attached\" reads better.\n> You may want to add \"as a partition\" after that.\n> >\n>\n> That sounds more awkward to me, but I've done some rewording to avoid both.\n>\n> >       Similarly, if the partitioned table has a\n> <literal>DEFAULT</literal>\n> >       partition, it is recommended to create a <literal>CHECK</literal>\n> >       constraint which excludes the to-be-attached partition's\n> constraint. If\n> > -     this is not done then the <literal>DEFAULT</literal> partition\n> will be\n> > +     this is not done, the <literal>DEFAULT</literal> partition must be\n> >\n> > I am not sure whether replacing \"will\" by \"must\" is correct. Usually I\n> have seen \"will\" being used in such sentences, \"must\" seems appropriate\n> given the necessity.\n> >\n>\n> OK\n>\n> Updated patch attached.\n>\n>\n> Robert Treat\n> https://xzilla.net\n>\n", "msg_date": "Thu, 28 Mar 2024 08:49:49 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On 2024-Mar-28, Ashutosh Bapat wrote:\n\n> LGTM.\n> \n> The commitfest entry is marked as RFC already.\n> \n> Thanks for taking care of the comments.\n\nThanks for reviewing.  I noticed a typo \"seperate\", fixed here. 
Also, I\nnoticed that Robert added an empty line which looks in the source like\nhe's breaking the paragraph -- but because he didn't add a closing </para> \nand an opening <para> to the next one, there's no actual new paragraph\nin the HTML output.\n\nMy first instinct was to add those. However, upon reading the text, I\nnoticed that the previous paragraph ends without offering an example,\nand then we attach the example to the paragraph that takes about CREATE\nTABLE LIKE showing both techniques, which seemed a bit odd. So instead\nI joined both paragraphs back together. I'm unsure which one looks\nbetter. Which one do you vote for?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)", "msg_date": "Thu, 28 Mar 2024 18:52:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Thu, Mar 28, 2024 at 11:22 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Mar-28, Ashutosh Bapat wrote:\n>\n> > LGTM.\n> >\n> > The commitfest entry is marked as RFC already.\n> >\n> > Thanks for taking care of the comments.\n>\n> Thanks for reviewing. I noticed a typo \"seperate\", fixed here.\n\n\nThanks for catching it.\n\n\n> Also, I\n> noticed that Robert added an empty line which looks in the source like\n> he's breaking the paragraph -- but because he didn't add a closing </para>\n> and an opening <para> to the next one, there's no actual new paragraph\n> in the HTML output.\n>\n\n> My first instinct was to add those. However, upon reading the text, I\n> noticed that the previous paragraph ends without offering an example,\n> and then we attach the example to the paragraph that takes about CREATE\n> TABLE LIKE showing both techniques, which seemed a bit odd. So instead\n> I joined both paragraphs back together. I'm unsure which one looks\n> better. Which one do you vote for?\n>\n\n\"CREATE TABLE ... LIKE\" is mentioned in a separate paragraph in HEAD as\nwell. The confused me too but I didn't find any reason. Robert just made\nthat explicit by adding a blank line. I thought that was ok. But it makes\nsense to not have a separate paragraph in the source code too. Thanks for\nfixing it. I think the intention of the current code as well as the patch\nis to have a single paragraph in HTML output, same as \"no-extra-para\"\noutput.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Mar 28, 2024 at 11:22 PM Alvaro Herrera <[email protected]> wrote:On 2024-Mar-28, Ashutosh Bapat wrote:\n\n> LGTM.\n> \n> The commitfest entry is marked as RFC already.\n> \n> Thanks for taking care of the comments.\n\nThanks for reviewing.  I noticed a typo \"seperate\", fixed here. Thanks for catching it.  Also, I\nnoticed that Robert added an empty line which looks in the source like\nhe's breaking the paragraph -- but because he didn't add a closing </para> \nand an opening <para> to the next one, there's no actual new paragraph\nin the HTML output.\n\nMy first instinct was to add those.  However, upon reading the text, I\nnoticed that the previous paragraph ends without offering an example,\nand then we attach the example to the paragraph that takes about CREATE\nTABLE LIKE showing both techniques, which seemed a bit odd.  So instead\nI joined both paragraphs back together.  I'm unsure which one looks\nbetter.  Which one do you vote for?\"CREATE TABLE ... LIKE\" is mentioned in a separate paragraph in HEAD as well. 
The confused me too but I didn't find any reason. Robert just made that explicit by adding a blank line. I thought that was ok. But it makes sense to not have a separate paragraph in the source code too. Thanks for fixing it. I think the intention of the current code as well as the patch is to have a single paragraph in HTML output, same as \"no-extra-para\" output. -- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 29 Mar 2024 09:13:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: add helpful partitioning links" }, { "msg_contents": "On Thu, Mar 28, 2024 at 11:43 PM Ashutosh Bapat\n<[email protected]> wrote:\n> On Thu, Mar 28, 2024 at 11:22 PM Alvaro Herrera <[email protected]> wrote:\n>>\n>> On 2024-Mar-28, Ashutosh Bapat wrote:\n>>\n>> > LGTM.\n>> >\n>> > The commitfest entry is marked as RFC already.\n>> >\n>> > Thanks for taking care of the comments.\n>>\n>> Thanks for reviewing. I noticed a typo \"seperate\", fixed here.\n>\n>\n> Thanks for catching it.\n>\n>>\n>> Also, I\n>> noticed that Robert added an empty line which looks in the source like\n>> he's breaking the paragraph -- but because he didn't add a closing </para>\n>> and an opening <para> to the next one, there's no actual new paragraph\n>> in the HTML output.\n>>\n>>\n>> My first instinct was to add those. However, upon reading the text, I\n>> noticed that the previous paragraph ends without offering an example,\n>> and then we attach the example to the paragraph that takes about CREATE\n>> TABLE LIKE showing both techniques, which seemed a bit odd. So instead\n>> I joined both paragraphs back together. I'm unsure which one looks\n>> better. Which one do you vote for?\n>\n>\n> \"CREATE TABLE ... LIKE\" is mentioned in a separate paragraph in HEAD as well. The confused me too but I didn't find any reason. Robert just made that explicit by adding a blank line. I thought that was ok. But it makes sense to not have a separate paragraph in the source code too. Thanks for fixing it. I think the intention of the current code as well as the patch is to have a single paragraph in HTML output, same as \"no-extra-para\" output.\n>\n\nIt does seem like the source and the html output ought to match, so +1 from me.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Fri, 29 Mar 2024 10:16:32 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: add helpful partitioning links" } ]
[ { "msg_contents": "I could not find any explanation of the following behaviour in docs -\n\n\nOur documentation for CREATE TABLE says:\n\nCREATE TABLE also automatically creates a data type that represents\nthe composite type corresponding to one row of the table. Therefore,\ntables cannot have the same name as any existing data type in the same\nschema.\n\n\nBut these composite tables are only sometimes there\n\nhannuk=# CREATE TABLE pair(a int, b int);\nCREATE TABLE\n\nhannuk=# INSERT INTO pair VALUES(1,2);\nINSERT 0 1\n\nhannuk=# select pg_typeof(p) from pair as p;\n pg_typeof\n-----------\n pair\n\nhannuk=# select pg_typeof(pg_typeof(p)) from pair as p;\n pg_typeof\n-----------\n regtype\n\n# first case where I can not use the table-defined type\n\nhannuk=# create table anoter_pair of pair;\nERROR: type pair is not a composite type\n\n# the type definitely is there as promised\n\nhannuk=# create type pair as (a int, b int);\nERROR: type \"pair\" already exists\n\n# and I can create similar type wit other name and use it to create table\n\nhannuk=# create type pair2 as (a int, b int);\nCREATE TYPE\n\nhannuk=# create table anoter_pair of pair2;\nCREATE TABLE\n\n# and i can even use it in LIKE\n\nhannuk=# CREATE TABLE pair3(like pair2);\nCREATE TABLE\n\n# the type is present in pg_type with type 'c' for Composite\n\nhannuk=# select typname, typtype from pg_type where typname = 'pair';\n typname | typtype\n---------+---------\n pair | c\n(1 row)\n\n# and I can add comment to the type\n\nhannuk=# COMMENT ON TYPE pair is 'A Shroedingers type';\nCOMMENT\n\n# but \\dT does not show it (second case)\n\nhannuk=# \\dT pair\n List of data types\n Schema | Name | Description\n--------+------+-------------\n(0 rows)\n\n---\nHannu\n\n\n", "msg_date": "Fri, 8 Mar 2024 01:12:40 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE creates a composite type corresponding to the table row,\n which is and is not there" }, { "msg_contents": "On 2024-03-08 01:12 +0100, Hannu Krosing wrote:\n> I could not find any explanation of the following behaviour in docs -\n> \n> \n> Our documentation for CREATE TABLE says:\n> \n> CREATE TABLE also automatically creates a data type that represents\n> the composite type corresponding to one row of the table. Therefore,\n> tables cannot have the same name as any existing data type in the same\n> schema.\n>\n> But these composite tables are only sometimes there\n\nThere's a distinction between stand-alone composite types created with CREATE\nTYPE and those created implicitly via CREATE TABLE. The former is also\ncalled \"free-standing\" in the docs for pg_type.typrelid[1].\n\n> hannuk=# CREATE TABLE pair(a int, b int);\n> CREATE TABLE\n> \n> hannuk=# INSERT INTO pair VALUES(1,2);\n> INSERT 0 1\n> \n> hannuk=# select pg_typeof(p) from pair as p;\n> pg_typeof\n> -----------\n> pair\n> \n> hannuk=# select pg_typeof(pg_typeof(p)) from pair as p;\n> pg_typeof\n> -----------\n> regtype\n> \n> # first case where I can not use the table-defined type\n> \n> hannuk=# create table anoter_pair of pair;\n> ERROR: type pair is not a composite type\n\nThat error message is simply misleading. What gets checked here is that\ntype \"pair\" was created with CREATE TYPE. The attached patch fixes the\nerror message and also documents that requirement.\n\ncheck_of_type() already addresses this limitation:\n\n\t/*\n\t * check_of_type\n\t *\n\t * Check whether a type is suitable for CREATE TABLE OF/ALTER TABLE OF. 
If it\n\t * isn't suitable, throw an error. Currently, we require that the type\n\t * originated with CREATE TYPE AS. We could support any row type, but doing so\n\t * would require handling a number of extra corner cases in the DDL commands.\n\t * (Also, allowing domain-over-composite would open up a can of worms about\n\t * whether and how the domain's constraints should apply to derived tables.)\n\t */\n\nNot sure what those corner cases are, but table inheritance is one of\nthem: I played around with typeOk in check_of_type() to also accept the\ncomposite types implicitly created by CREATE TABLE:\n\n\ttypeOk = (typeRelation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE ||\n\t typeRelation->rd_rel->relkind == RELKIND_RELATION);\n\nWith that creating typed tables of parent and child works as expected:\n\n\tCREATE TABLE parent (a int);\n\tCREATE TABLE child (b int) INHERITS (parent);\n\tCREATE TABLE of_parent OF parent;\n\tCREATE TABLE of_child OF child;\n\t\\d parent\n\t Table \"public.parent\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\tNumber of child tables: 1 (Use \\d+ to list them.)\n\t\n\t\\d of_parent\n\t Table \"public.of_parent\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\tTyped table of type: parent\n\t\n\t\\d child\n\t Table \"public.child\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\t b | integer | | | \n\tInherits: parent\n\t\n\t\\d of_child\n\t Table \"public.of_child\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\t b | integer | | | \n\tTyped table of type: child\n\nBut adding columns to parent does not change the typed tables:\n\n\tALTER TABLE parent ADD c int;\n\t\\d parent\n\t Table \"public.parent\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\t c | integer | | | \n\tNumber of child tables: 1 (Use \\d+ to list them.)\n\t\n\t\\d of_parent\n\t Table \"public.of_parent\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\tTyped table of type: parent\n\t\n\t\\d child\n\t Table \"public.child\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\t b | integer | | | \n\t c | integer | | | \n\tInherits: parent\n\t\n\t\\d of_child\n\t Table \"public.of_child\"\n\t Column | Type | Collation | Nullable | Default \n\t--------+---------+-----------+----------+---------\n\t a | integer | | | \n\t b | integer | | | \n\tTyped table of type: child\n\nWhereas changing a composite type and its typed tables is possible with\nALTER TYPE ... ADD ATTRIBUTE ... 
CASCADE.\n\n> # the type definitely is there as promised\n> \n> hannuk=# create type pair as (a int, b int);\n> ERROR: type \"pair\" already exists\n> \n> # and I can create similar type wit other name and use it to create table\n> \n> hannuk=# create type pair2 as (a int, b int);\n> CREATE TYPE\n> \n> hannuk=# create table anoter_pair of pair2;\n> CREATE TABLE\n> \n> # and i can even use it in LIKE\n> \n> hannuk=# CREATE TABLE pair3(like pair2);\n> CREATE TABLE\n> \n> # the type is present in pg_type with type 'c' for Composite\n> \n> hannuk=# select typname, typtype from pg_type where typname = 'pair';\n> typname | typtype\n> ---------+---------\n> pair | c\n> (1 row)\n> \n> # and I can add comment to the type\n> \n> hannuk=# COMMENT ON TYPE pair is 'A Shroedingers type';\n> COMMENT\n> \n> # but \\dT does not show it (second case)\n> \n> hannuk=# \\dT pair\n> List of data types\n> Schema | Name | Description\n> --------+------+-------------\n> (0 rows)\n\n\\dT ignores the composite types implicitly created by CREATE TABLE.\n\n[1] https://www.postgresql.org/docs/16/catalog-pg-type.html\n\n-- \nErik", "msg_date": "Fri, 8 Mar 2024 05:08:09 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "I wrote:\n> The attached patch fixes the error message and also documents that\n> requirement.\n\nOn second thought, adding a separate error message doesn't really make\nsense. The attached v2 is a simpler patch that instead modifies the\nexisting error message.\n\n-- \nErik\n\n\n", "msg_date": "Fri, 8 Mar 2024 05:24:19 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "I wrote:\n> The attached v2 is a simpler patch that instead modifies the existing\n> error message.\n\nForgot to attach v2.\n\n-- \nErik", "msg_date": "Fri, 8 Mar 2024 05:29:27 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On Thu, Mar 7, 2024 at 9:29 PM Erik Wienhold <[email protected]> wrote:\n\n> I wrote:\n> > The attached v2 is a simpler patch that instead modifies the existing\n> > error message.\n>\n> Forgot to attach v2.\n>\n>\nFor consideration for the doc portion. The existing wording is too\nimprecise for my liking and just tacking on \"expects...create type\" is\njarring.\n\n\"\"\"\nCreates a typed table, which takes it structure from an existing (name\noptionally schema-qualified) stand-alone composite type i.e., one created\nusing CREATE TYPE) though it still produces a new composite type as well.\nThe table will have a dependency to the referenced type such cascaded alter\nand drop actions on the type will propagate to the table.\n\nA typed table always has the same column names and data types as the type\nit is derived from, and you cannot specify additional columns. 
But the\nCREATE TABLE command can add defaults and constraints to the table, as well\nas specify storage parameters.\n\"\"\"\n\nWe do use the term \"stand-alone composite\" in create type so I'm inclined\nto use it instead of \"composite created with CREATE TYPE\"; especially in\nthe error messages; I'm a bit more willing to add the cross-reference to\ncreate type in the user docs.\n\nDavid J.\n", "msg_date": "Thu, 28 Mar 2024 18:42:12 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 2024-03-29 02:42 +0100, David G. Johnston wrote:\n> For consideration for the doc portion.  The existing wording is too\n> imprecise for my liking and just tacking on \"expects...create type\" is\n> jarring.\n> \n> \"\"\"\n> Creates a typed table, which takes it structure from an existing (name\n> optionally schema-qualified) stand-alone composite type i.e., one created\n> using CREATE TYPE) though it still produces a new composite type as well.\n> The table will have a dependency to the referenced type such cascaded alter\n> and drop actions on the type will propagate to the table.\n> \n> A typed table always has the same column names and data types as the type\n> it is derived from, and you cannot specify additional columns.  But the\n> CREATE TABLE command can add defaults and constraints to the table, as well\n> as specify storage parameters.\n> \"\"\"\n\nThanks, that sounds better.  I incorporated that with some minor edits\nin the attached v3.\n\n> We do use the term \"stand-alone composite\" in create type so I'm inclined\n> to use it instead of \"composite created with CREATE TYPE\"; especially in\n> the error messages; I'm a bit more willing to add the cross-reference to\n> create type in the user docs.\n\nOkay, changed in v3 as well.  I used \"created with CREATE TYPE\" in the\nerror message because I thought it's clearer to the user. 
But I see no\nreason for not using \"stand-alone\" here as well if it's the established\nterm.\n\n-- \nErik", "msg_date": "Fri, 29 Mar 2024 04:02:16 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On Thu, Mar 28, 2024 at 8:02 PM Erik Wienhold <[email protected]> wrote:\n\n> Thanks, that sounds better.  I incorporated that with some minor edits\n> in the attached v3.\n>\n\nLooks good.\n\nYou added my missing ( but dropped the comma after \"i.e.\"\n\ndiff --git a/doc/src/sgml/ref/create_table.sgml\nb/doc/src/sgml/ref/create_table.sgml\nindex dc69a3f5dc..b2e9e97b93 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -251,7 +251,7 @@ WITH ( MODULUS <replaceable\nclass=\"parameter\">numeric_literal</replaceable>, REM\n <para>\n Creates a <firstterm>typed table</firstterm>, which takes its\nstructure\n from an existing (name optionally schema-qualified) stand-alone\ncomposite\n- type (i.e. created using <xref linkend=\"sql-createtype\"/>) though it\n+ type (i.e., created using <xref linkend=\"sql-createtype\"/>) though it\n still produces a new composite type as well. The table will have\n a dependency on the referenced type such that cascaded alter and drop\n actions on the type will propagate to the table.\n\nDavid J.\n", "msg_date": "Wed, 3 Apr 2024 18:29:50 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 2024-04-04 03:29 +0200, David G. Johnston wrote:\n> On Thu, Mar 28, 2024 at 8:02 PM Erik Wienhold <[email protected]> wrote:\n> \n> > Thanks, that sounds better.  I incorporated that with some minor edits\n> > in the attached v3.\n> >\n> \n> You added my missing ( but dropped the comma after \"i.e.\"\n\nThanks, fixed in v4.  Looks like American English prefers that comma and\nit's also more common in our docs.\n\n-- \nErik", "msg_date": "Thu, 4 Apr 2024 06:41:19 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 29.03.24 02:42, David G. 
Johnston wrote:\n> We do use the term \"stand-alone composite\" in create type so I'm \n> inclined to use it instead of \"composite created with CREATE TYPE\"; \n> especially in the error messages; I'm a bit more willing to add the \n> cross-reference to create type in the user docs.\n\nI'm not sure this would have helped. If you see this in the error \nmessage, then there is no additional guidance what a \"stand-alone \ncomposite type\" and a not-\"stand-alone composite type\" are.\n\nMaybe it's possible to catch the forbidden cases more explicitly and \ncome up with more helpful error messages along the lines of \"cannot \ncreate a typed table based on the row type of another table\".\n\n\n\n", "msg_date": "Wed, 17 Apr 2024 14:53:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On Thu, Apr 4, 2024 at 12:41 AM Erik Wienhold <[email protected]> wrote:\n> Thanks, fixed in v4. Looks like American English prefers that comma and\n> it's also more common in our docs.\n\nReviewing this patch:\n\n- Creates a <firstterm>typed table</firstterm>, which takes its\n- structure from the specified composite type (name optionally\n- schema-qualified). A typed table is tied to its type; for\n- example the table will be dropped if the type is dropped\n- (with <literal>DROP TYPE ... CASCADE</literal>).\n+ Creates a <firstterm>typed table</firstterm>, which takes its structure\n+ from an existing (name optionally schema-qualified) stand-alone composite\n+ type (i.e., created using <xref linkend=\"sql-createtype\"/>) though it\n+ still produces a new composite type as well. The table will have\n+ a dependency on the referenced type such that cascaded alter and drop\n+ actions on the type will propagate to the table.\n\nIt would be better if this diff didn't reflow the unchanged portions\nof the paragraph.\n\nI agree that it's a good idea to mention that the table must have been\ncreated using CREATE TYPE .. AS here, but I disagree with the rest of\nthe rewording in this hunk. I think we could just add \"creating using\nCREATE TYPE\" to the end of the first sentence, with an xref, and leave\nthe rest as it is. I don't see a reason to mention that the typed\ntable also spawns a rowtype; that's just standard CREATE TABLE\nbehavior and not really relevant here. 
And I don't understand what the\nrest of the rewording does for us.\n\n <para>\n- When a typed table is created, then the data types of the\n- columns are determined by the underlying composite type and are\n- not specified by the <literal>CREATE TABLE</literal> command.\n+ A typed table always has the same column names and data types as the\n+ type it is derived from, and you cannot specify additional columns.\n But the <literal>CREATE TABLE</literal> command can add defaults\n- and constraints to the table and can specify storage parameters.\n+ and constraints to the table, as well as specify storage parameters.\n </para>\n\nI don't see how this is better.\n\n- errmsg(\"type %s is not a composite type\",\n+ errmsg(\"type %s is not a stand-alone composite type\",\n\nI agree with Peter's complaint that people aren't going to understand\nwhat a stand-alone composite type means when they see the revised\nerror message; to really help people, we're going to need to do better\nthan this, I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 May 2024 11:46:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On Wed, May 15, 2024 at 8:46 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Apr 4, 2024 at 12:41 AM Erik Wienhold <[email protected]> wrote:\n> > Thanks, fixed in v4. Looks like American English prefers that comma and\n> > it's also more common in our docs.\n>\n> Reviewing this patch:\n>\n> - Creates a <firstterm>typed table</firstterm>, which takes its\n> - structure from the specified composite type (name optionally\n> - schema-qualified). A typed table is tied to its type; for\n> - example the table will be dropped if the type is dropped\n> - (with <literal>DROP TYPE ... CASCADE</literal>).\n> + Creates a <firstterm>typed table</firstterm>, which takes its\n> structure\n> + from an existing (name optionally schema-qualified) stand-alone\n> composite\n> + type (i.e., created using <xref linkend=\"sql-createtype\"/>) though\n> it\n> + still produces a new composite type as well. The table will have\n> + a dependency on the referenced type such that cascaded alter and\n> drop\n> + actions on the type will propagate to the table.\n>\n> It would be better if this diff didn't reflow the unchanged portions\n> of the paragraph.\n>\n> I agree that it's a good idea to mention that the table must have been\n> created using CREATE TYPE .. AS here, but I disagree with the rest of\n> the rewording in this hunk. I think we could just add \"creating using\n> CREATE TYPE\" to the end of the first sentence, with an xref, and leave\n> the rest as it is.\n\n\n\n> I don't see a reason to mention that the typed\n> table also spawns a rowtype; that's just standard CREATE TABLE\n> behavior and not really relevant here.\n\n\nI figured it wouldn't be immediately obvious that the system would create a\nsecond type with identical structure. Of course, in order for SELECT tbl\nFROM tbl; to work it must indeed do so. I'm not married to pointing out\nthis dynamic explicitly though.\n\n\n> And I don't understand what the\n> rest of the rewording does for us.\n>\n\nIt calls out the explicit behavior that the table's columns can change due\nto actions on the underlying type. 
Mentioning this unique behavior seems\nworth a sentence.\n\n\n> <para>\n> - When a typed table is created, then the data types of the\n> - columns are determined by the underlying composite type and are\n> - not specified by the <literal>CREATE TABLE</literal> command.\n> + A typed table always has the same column names and data types as the\n> + type it is derived from, and you cannot specify additional columns.\n> But the <literal>CREATE TABLE</literal> command can add defaults\n> - and constraints to the table and can specify storage parameters.\n> + and constraints to the table, as well as specify storage parameters.\n> </para>\n>\n> I don't see how this is better.\n>\n\nI'll agree this is more of a stylistic change, but mainly because the talk\nabout data types reasonably implies the other items the patch explicitly\nmentions - names and additional columns.\n\n\n> - errmsg(\"type %s is not a composite type\",\n> + errmsg(\"type %s is not a stand-alone composite type\",\n>\n> I agree with Peter's complaint that people aren't going to understand\n> what a stand-alone composite type means when they see the revised\n> error message; to really help people, we're going to need to do better\n> than this, I think.\n>\n>\nWe have a glossary.\n\nThat said, leave the wording as-is and add a conditional hint: The\ncomposite type must not also be a table.\n\nDavid J.\n", "msg_date": "Thu, 16 May 2024 08:47:05 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 2024-05-16 17:47 +0200, David G. Johnston wrote:\n> On Wed, May 15, 2024 at 8:46 AM Robert Haas <[email protected]> wrote:\n> \n> > On Thu, Apr 4, 2024 at 12:41 AM Erik Wienhold <[email protected]> wrote:\n> > > Thanks, fixed in v4.  Looks like American English prefers that comma and\n> > > it's also more common in our docs.\n> >\n> > Reviewing this patch:\n> >\n> > -      Creates a <firstterm>typed table</firstterm>, which takes its\n> > -      structure from the specified composite type (name optionally\n> > -      schema-qualified).  A typed table is tied to its type; for\n> > -      example the table will be dropped if the type is dropped\n> > -      (with <literal>DROP TYPE ... CASCADE</literal>).\n> > +      Creates a <firstterm>typed table</firstterm>, which takes its\n> > structure\n> > +      from an existing (name optionally schema-qualified) stand-alone\n> > composite\n> > +      type (i.e., created using <xref linkend=\"sql-createtype\"/>) though\n> > it\n> > +      still produces a new composite type as well.  The table will have\n> > +      a dependency on the referenced type such that cascaded alter and\n> > drop\n> > +      actions on the type will propagate to the table.\n> >\n> > It would be better if this diff didn't reflow the unchanged portions\n> > of the paragraph.\n\nRight.  I now reformatted it so that first line remains unchanged.  But\nthe rest of the para is still a complete rewrite.\n\n> > I agree that it's a good idea to mention that the table must have been\n> > created using CREATE TYPE .. AS here, but I disagree with the rest of\n> > the rewording in this hunk. 
I think we could just add \"creating using\n> > CREATE TYPE\" to the end of the first sentence, with an xref, and leave\n> > the rest as it is.\n> \n> \n> \n> > I don't see a reason to mention that the typed\n> > table also spawns a rowtype; that's just standard CREATE TABLE\n> > behavior and not really relevant here.\n> \n> \n> I figured it wouldn't be immediately obvious that the system would create a\n> second type with identical structure. Of course, in order for SELECT tbl\n> FROM tbl; to work it must indeed do so. I'm not married to pointing out\n> this dynamic explicitly though.\n> \n> \n> > And I don't understand what the\n> > rest of the rewording does for us.\n> >\n> \n> It calls out the explicit behavior that the table's columns can change due\n> to actions on the underlying type. Mentioning this unique behavior seems\n> worth a sentence.\n> \n> \n> > <para>\n> > - When a typed table is created, then the data types of the\n> > - columns are determined by the underlying composite type and are\n> > - not specified by the <literal>CREATE TABLE</literal> command.\n> > + A typed table always has the same column names and data types as the\n> > + type it is derived from, and you cannot specify additional columns.\n> > But the <literal>CREATE TABLE</literal> command can add defaults\n> > - and constraints to the table and can specify storage parameters.\n> > + and constraints to the table, as well as specify storage parameters.\n> > </para>\n> >\n> > I don't see how this is better.\n> >\n> \n> I'll agree this is more of a stylistic change, but mainly because the talk\n> about data types reasonably implies the other items the patch explicitly\n> mentions - names and additional columns.\n\nI prefer David's changes to both paras because right now the details of\ntyped tables are spread over the respective CREATE and ALTER commands\nfor types and tables. Or maybe we should add those details to the\nexisting \"Typed Tables\" section at the very end of CREATE TABLE?\n\n> > - errmsg(\"type %s is not a composite type\",\n> > + errmsg(\"type %s is not a stand-alone composite type\",\n> >\n> > I agree with Peter's complaint that people aren't going to understand\n> > what a stand-alone composite type means when they see the revised\n> > error message; to really help people, we're going to need to do better\n> > than this, I think.\n> >\n> >\n> We have a glossary.\n> \n> That said, leave the wording as-is and add a conditional hint: The\n> composite type must not also be a table.\n\nIt's now a separate error message (like I already had in v1) which\nstates that the specified type must not be a row type of another table\n(based on Peter's feedback). And the hint directs the user to CREATE\nTYPE.\n\nIn passing, I also quoted the type name in the existing error message\nfor consistency. I saw that table names etc. are already quoted in\nother error messages.\n\n-- \nErik", "msg_date": "Sat, 18 May 2024 01:57:25 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On Fri, May 17, 2024 at 4:57 PM Erik Wienhold <[email protected]> wrote:\n\n> On 2024-05-16 17:47 +0200, David G. Johnston wrote:\n> > On Wed, May 15, 2024 at 8:46 AM Robert Haas <[email protected]>\n> wrote:\n> >\n> > > On Thu, Apr 4, 2024 at 12:41 AM Erik Wienhold <[email protected]> wrote:\n> > > > Thanks, fixed in v4. 
Looks like American English prefers that comma\n> and\n> > > > it's also more common in our docs.\n> > >\n> > > Reviewing this patch:\n> > >\n> > > - Creates a <firstterm>typed table</firstterm>, which takes its\n> > > - structure from the specified composite type (name optionally\n> > > - schema-qualified). A typed table is tied to its type; for\n> > > - example the table will be dropped if the type is dropped\n> > > - (with <literal>DROP TYPE ... CASCADE</literal>).\n> > > + Creates a <firstterm>typed table</firstterm>, which takes its\n> > > structure\n> > > + from an existing (name optionally schema-qualified) stand-alone\n> > > composite\n> > > + type (i.e., created using <xref linkend=\"sql-createtype\"/>)\n> though\n> > > it\n> > > + still produces a new composite type as well. The table will\n> have\n> > > + a dependency on the referenced type such that cascaded alter and\n> > > drop\n> > > + actions on the type will propagate to the table.\n> > >\n> > > It would be better if this diff didn't reflow the unchanged portions\n> > > of the paragraph.\n>\n> Right. I now reformatted it so that first line remains unchanged. But\n> the rest of the para is still a complete rewrite.\n>\n> > > I agree that it's a good idea to mention that the table must have been\n> > > created using CREATE TYPE .. AS here, but I disagree with the rest of\n> > > the rewording in this hunk. I think we could just add \"creating using\n> > > CREATE TYPE\" to the end of the first sentence, with an xref, and leave\n> > > the rest as it is.\n> >\n> >\n> >\n> > > I don't see a reason to mention that the typed\n> > > table also spawns a rowtype; that's just standard CREATE TABLE\n> > > behavior and not really relevant here.\n> >\n> >\n> > I figured it wouldn't be immediately obvious that the system would\n> create a\n> > second type with identical structure. Of course, in order for SELECT tbl\n> > FROM tbl; to work it must indeed do so. I'm not married to pointing out\n> > this dynamic explicitly though.\n> >\n> >\n> > > And I don't understand what the\n> > > rest of the rewording does for us.\n> > >\n> >\n> > It calls out the explicit behavior that the table's columns can change\n> due\n> > to actions on the underlying type. Mentioning this unique behavior seems\n> > worth a sentence.\n> >\n> >\n> > > <para>\n> > > - When a typed table is created, then the data types of the\n> > > - columns are determined by the underlying composite type and are\n> > > - not specified by the <literal>CREATE TABLE</literal> command.\n> > > + A typed table always has the same column names and data types\n> as the\n> > > + type it is derived from, and you cannot specify additional\n> columns.\n> > > But the <literal>CREATE TABLE</literal> command can add defaults\n> > > - and constraints to the table and can specify storage parameters.\n> > > + and constraints to the table, as well as specify storage\n> parameters.\n> > > </para>\n> > >\n> > > I don't see how this is better.\n> > >\n> >\n> > I'll agree this is more of a stylistic change, but mainly because the\n> talk\n> > about data types reasonably implies the other items the patch explicitly\n> > mentions - names and additional columns.\n>\n> I prefer David's changes to both paras because right now the details of\n> typed tables are spread over the respective CREATE and ALTER commands\n> for types and tables. 
Or maybe we should add those details to the\n> existing \"Typed Tables\" section at the very end of CREATE TABLE?\n>\n> > > - errmsg(\"type %s is not a composite type\",\n> > > + errmsg(\"type %s is not a stand-alone composite type\",\n> > >\n> > > I agree with Peter's complaint that people aren't going to understand\n> > > what a stand-alone composite type means when they see the revised\n> > > error message; to really help people, we're going to need to do better\n> > > than this, I think.\n> > >\n> > >\n> > We have a glossary.\n\nIf sticking with stand-alone composite type as a formal term we should\ndocument it in the glossary. It's unclear whether this will survive review\nthough. With the wording provided in this patch it doesn't really add\nenough to continue a strong defense of it.\n\n\n\n> It's now a separate error message (like I already had in v1) which\n> states that the specified type must not be a row type of another table\n> (based on Peter's feedback). And the hint directs the user to CREATE\n> TYPE.\n>\n> In passing, I also quoted the type name in the existing error message\n> for consistency. I saw that table names etc. are already quoted in\n> other error messages.\n>\n>\nThe hint and the quoting both violate the documented rules for these things:\n\nhttps://www.postgresql.org/docs/current/error-style-guide.html#ERROR-STYLE-GUIDE-QUOTES\n\nThere are functions in the backend that will double-quote their own output\nas needed (for example, format_type_be()). Do not put additional quotes\naround the output of such functions.\n\nhttps://www.postgresql.org/docs/current/error-style-guide.html#ERROR-STYLE-GUIDE-GRAMMAR-PUNCTUATION\n\nDetail and hint messages: Use complete sentences, and end each with a\nperiod. Capitalize the first word of sentences.\n\nDavid J.\n", "msg_date": "Fri, 17 May 2024 18:27:25 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 2024-05-18 03:27 +0200, David G. Johnston wrote:\n> > On 2024-05-16 17:47 +0200, David G. Johnston wrote:\n> > > We have a glossary.\n> \n> If sticking with stand-alone composite type as a formal term we should\n> document it in the glossary.  It's unclear whether this will survive review\n> though.  With the wording provided in this patch it doesn't really add\n> enough to continue a strong defense of it.\n\nOh, I thought you meant we already have that term in the glossary (I\nhaven't checked until now).  Let's see if we can convince Robert of the\nrewording.\n\n> > It's now a separate error message (like I already had in v1) which\n> > states that the specified type must not be a row type of another table\n> > (based on Peter's feedback).  And the hint directs the user to CREATE\n> > TYPE.\n> >\n> > In passing, I also quoted the type name in the existing error message\n> > for consistency.  I saw that table names etc. are already quoted in\n> > other error messages.\n> >\n> >\n> The hint and the quoting both violate the documented rules for these things:\n> \n> https://www.postgresql.org/docs/current/error-style-guide.html#ERROR-STYLE-GUIDE-QUOTES\n> \n> There are functions in the backend that will double-quote their own output\n> as needed (for example, format_type_be()). Do not put additional quotes\n> around the output of such functions.\n> \n> https://www.postgresql.org/docs/current/error-style-guide.html#ERROR-STYLE-GUIDE-GRAMMAR-PUNCTUATION\n> \n> Detail and hint messages: Use complete sentences, and end each with a\n> period. Capitalize the first word of sentences.\n\nThanks, I didn't know that guideline.  Both fixed in v6.\n\n-- \nErik", "msg_date": "Sat, 18 May 2024 03:56:36 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> Thanks, I didn't know that guideline. Both fixed in v6.\n\nThis still isn't following our usual message style IMO. Here's a\nproposed v7 that outputs\n\n-ERROR: type stuff is not a composite type\n+ERROR: type stuff is the row type of another table\n+DETAIL: A typed table must use a stand-alone composite type created with CREATE TYPE.\n\nI did a bit of copy-editing on the docs changes too. One notable\npoint is that I dropped the parenthetical bit about \"(name optionally\nschema-qualified)\". 
That struck me as quite unnecessary, and\nit definitely doesn't read well to have two parenthetical comments\nin a single four-line sentence.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 25 Jul 2024 16:29:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "On 2024-07-25 22:29 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > Thanks, I didn't know that guideline. Both fixed in v6.\n> \n> This still isn't following our usual message style IMO. Here's a\n> proposed v7 that outputs\n> \n> -ERROR: type stuff is not a composite type\n> +ERROR: type stuff is the row type of another table\n> +DETAIL: A typed table must use a stand-alone composite type created with CREATE TYPE.\n> \n> I did a bit of copy-editing on the docs changes too. One notable\n> point is that I dropped the parenthetical bit about \"(name optionally\n> schema-qualified)\". That struck me as quite unnecessary, and\n> it definitely doesn't read well to have two parenthetical comments\n> in a single four-line sentence.\n\nWorks for me. Thanks!\n\n-- \nErik\n\n\n", "msg_date": "Fri, 26 Jul 2024 04:11:22 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-07-25 22:29 +0200, Tom Lane wrote:\n>> This still isn't following our usual message style IMO. Here's a\n>> proposed v7 that outputs ...\n\n> Works for me. Thanks!\n\nPushed, then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2024 12:40:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE creates a composite type corresponding to the table\n row, which is and is not there" } ]
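The gatekeeping argued over in the thread above lives in check_of_type() in src/backend/commands/tablecmds.c: a type only qualifies for CREATE TABLE ... OF when its backing relation has relkind RELKIND_COMPOSITE_TYPE, i.e. it was made with CREATE TYPE ... AS rather than implicitly by CREATE TABLE. The sketch below is assembled from the discussion rather than copied from the committed patch, so the function name, structure, and exact wording are illustrative only; the error text follows Tom Lane's proposed v7 output quoted upthread.

    /* Sketch only -- not the committed code. */
    static void
    check_of_type_sketch(HeapTuple typetuple)
    {
        Form_pg_type typ = (Form_pg_type) GETSTRUCT(typetuple);
        bool        typeOk = false;

        if (typ->typtype == TYPTYPE_COMPOSITE)
        {
            Relation    typeRelation = relation_open(typ->typrelid, AccessShareLock);

            /* Only free-standing composite types (CREATE TYPE ... AS) qualify. */
            typeOk = (typeRelation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE);
            /* keep the lock until commit so the type cannot change under us */
            relation_close(typeRelation, NoLock);

            if (!typeOk)
                ereport(ERROR,
                        (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                         errmsg("type %s is the row type of another table",
                                format_type_be(typ->oid)),
                         errdetail("A typed table must use a stand-alone composite type created with CREATE TYPE.")));
        }
        else
            ereport(ERROR,
                    (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                     errmsg("type %s is not a composite type",
                            format_type_be(typ->oid))));
    }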
[ { "msg_contents": "Hi,\n\nWhile looking ExecQueryAndProcessResults, I found the following code.\n\n /* \n * If SIGINT is sent while the query is processing, the interrupt will be\n * consumed. The user's intention, though, is to cancel the entire watch\n * process, so detect a sent cancellation request and exit in this case.\n */\n if (is_watch && cancel_pressed)\n { \n ClearOrSaveAllResults();\n return 0;\n }\n\nI guess the intention is that when the query is cancelled during \\watch process,\nwe want to exit early before handling the error. However, we cannot detect the\ncancel at this timing because currently we use PQsendQuery which is asynchronous\nand does not wait the result. We have to check cancel_pressed after PQgetResult\ncall. I'm also attached a patch for this, with some comments fix. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Fri, 8 Mar 2024 14:24:12 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Fix cancellation check in ExecQueryAndProcessResults" } ]
[ { "msg_contents": "If popen fails in pipe_read_line we invoke perror for the error message, and\npipe_read_line is in turn called by find_other_exec which is used in both\nfrontend and backend code. This is an old codepath, and it seems like it ought\nto be replaced with the more common logging tools we now have like in the\nattached diff (which also makes the error translated as opposed to now). Any\nobjections to removing this last perror() call?\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 8 Mar 2024 11:05:01 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Call perror on popen failure" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> If popen fails in pipe_read_line we invoke perror for the error message, and\n> pipe_read_line is in turn called by find_other_exec which is used in both\n> frontend and backend code. This is an old codepath, and it seems like it ought\n> to be replaced with the more common logging tools we now have like in the\n> attached diff (which also makes the error translated as opposed to now). Any\n> objections to removing this last perror() call?\n\nI didn't check your replacement code in detail, but I think we have\na policy against using perror, mainly because we can't count on it\nto localize consistently with ereport et al. My grep confirms this\nis the only use, so +1 for removing it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Mar 2024 10:24:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call perror on popen failure" }, { "msg_contents": "On 08.03.24 11:05, Daniel Gustafsson wrote:\n> If popen fails in pipe_read_line we invoke perror for the error message, and\n> pipe_read_line is in turn called by find_other_exec which is used in both\n> frontend and backend code. This is an old codepath, and it seems like it ought\n> to be replaced with the more common logging tools we now have like in the\n> attached diff (which also makes the error translated as opposed to now). Any\n> objections to removing this last perror() call?\n\nThis change makes it consistent with other popen() calls, so it makes \nsense to me.\n\n\n\n", "msg_date": "Fri, 8 Mar 2024 18:08:38 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call perror on popen failure" }, { "msg_contents": "> On 8 Mar 2024, at 18:08, Peter Eisentraut <[email protected]> wrote:\n> \n> On 08.03.24 11:05, Daniel Gustafsson wrote:\n>> If popen fails in pipe_read_line we invoke perror for the error message, and\n>> pipe_read_line is in turn called by find_other_exec which is used in both\n>> frontend and backend code. This is an old codepath, and it seems like it ought\n>> to be replaced with the more common logging tools we now have like in the\n>> attached diff (which also makes the error translated as opposed to now). Any\n>> objections to removing this last perror() call?\n> \n> This change makes it consistent with other popen() calls, so it makes sense to me.\n\nThanks for review, committed that way.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 9 Mar 2024 00:05:34 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call perror on popen failure" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a tiny patch to remove a $SUBJECT.\n\nIndeed, it does not seem appropriate to remove stats during slot invalidation as\none could still be interested to look at them.\n\nThis spurious call has been introduced in be87200efd. I think that initially we\ndesigned to drop slots when a recovery conflict occurred during logical decoding\non a standby. But we changed our mind to invalidate such a slot instead.\n\nThe spurious call is probably due to the initial design but has not been removed.\n\nI don't think it's worth to add a test but can do if one feel the need.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Mar 2024 10:19:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Spurious pgstat_drop_replslot() call" }, { "msg_contents": "On Fri, Mar 08, 2024 at 10:19:11AM +0000, Bertrand Drouvot wrote:\n> Indeed, it does not seem appropriate to remove stats during slot invalidation as\n> one could still be interested to look at them.\n\nYeah, my take is that this can still be interesting to know, at least\nfor debugging. This would limit the stats to be dropped when the slot\nis dropped, and that looks like a sound idea.\n\n> This spurious call has been introduced in be87200efd. I think that initially we\n> designed to drop slots when a recovery conflict occurred during logical decoding\n> on a standby. But we changed our mind to invalidate such a slot instead.\n> \n> The spurious call is probably due to the initial design but has not been removed.\n\nThis is not a subject that has really been touched on the original\nthread mentioned on the commit, so it is a bit hard to be sure. The\nonly comment is that a previous version of the patch did the stats\ndrop in the slot invalidation path at an incorrect location.\n\n> I don't think it's worth to add a test but can do if one feel the need.\n\nWell, that would not be complicated while being cheap, no? We have a\nfew paths in the 035 test where we know that a slot has been\ninvalidated so it is just a matter of querying once\npg_stat_replication_slot and see if some data is still there.\n--\nMichael", "msg_date": "Fri, 8 Mar 2024 19:55:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 08, 2024 at 07:55:39PM +0900, Michael Paquier wrote:\n> On Fri, Mar 08, 2024 at 10:19:11AM +0000, Bertrand Drouvot wrote:\n> > Indeed, it does not seem appropriate to remove stats during slot invalidation as\n> > one could still be interested to look at them.\n> \n> Yeah, my take is that this can still be interesting to know, at least\n> for debugging. This would limit the stats to be dropped when the slot\n> is dropped, and that looks like a sound idea.\n\nThanks for looking at it!\n\n> > This spurious call has been introduced in be87200efd. I think that initially we\n> > designed to drop slots when a recovery conflict occurred during logical decoding\n> > on a standby. But we changed our mind to invalidate such a slot instead.\n> > \n> > The spurious call is probably due to the initial design but has not been removed.\n> \n> This is not a subject that has really been touched on the original\n> thread mentioned on the commit, so it is a bit hard to be sure. 
The\n> only comment is that a previous version of the patch did the stats\n> drop in the slot invalidation path at an incorrect location.\n\nThe switch in the patch from \"drop\" to \"invalidation\" is in [1], see:\n\n\"\nGiven the precedent of max_slot_wal_keep_size, I think it's wrong to
just drop\nthe logical slots. Instead we should just mark them as
invalid, \nlike InvalidateObsoleteReplicationSlots().\n\nMakes fully sense and done that way in the attached patch.\n“\n\nBut yeah, hard to be sure why this call is there, at least I don't remember...\n\n> > I don't think it's worth to add a test but can do if one feel the need.\n> \n> Well, that would not be complicated while being cheap, no? We have a\n> few paths in the 035 test where we know that a slot has been\n> invalidated so it is just a matter of querying once\n> pg_stat_replication_slot and see if some data is still there.\n\nWe can not be 100% sure that the stats are up to date when the process holding\nthe active slot is killed. \n\nSo v2 attached adds a test where we ensure first that we have non empty stats\nbefore triggering the invalidation.\n\n[1]: https://www.postgresql.org/message-id/26c6f320-98f0-253c-f8b5-df1e7c1f6315%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Mar 2024 15:04:10 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "On Fri, Mar 08, 2024 at 03:04:10PM +0000, Bertrand Drouvot wrote:\n> The switch in the patch from \"drop\" to \"invalidation\" is in [1], see:\n> \n> \"\n> Given the precedent of max_slot_wal_keep_size, I think it's wrong to
just drop\n> the logical slots. Instead we should just mark them as
invalid, \n> like InvalidateObsoleteReplicationSlots().\n> \n> Makes fully sense and done that way in the attached patch.\n> “\n> \n> But yeah, hard to be sure why this call is there, at least I don't remember...\n\nYep, noticed that on Friday.\n\n> We can not be 100% sure that the stats are up to date when the process holding\n> the active slot is killed. \n> \n> So v2 attached adds a test where we ensure first that we have non empty stats\n> before triggering the invalidation.\n\nAh, that explains the extra poll. What you have done here makes sense\nto me, and the new test fails immediately if I add back the stats drop\nin the invalidation code path.\n\nThat's a slight change in behavior, unfortunately, and it cannot be\ncalled a bug as this improves the visibility of the stats after an\ninvalidation, so this is not something that can be backpatched.\n--\nMichael", "msg_date": "Mon, 11 Mar 2024 16:15:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 11, 2024 at 04:15:40PM +0900, Michael Paquier wrote:\n> That's a slight change in behavior, unfortunately, and it cannot be\n> called a bug as this improves the visibility of the stats after an\n> invalidation, so this is not something that can be backpatched.\n\nYeah, makes sense to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Mar 2024 07:24:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "On Mon, Mar 11, 2024 at 04:15:40PM +0900, Michael Paquier wrote:\n> That's a slight change in behavior, unfortunately, and it cannot be\n> called a bug as this improves the visibility of the stats after an\n> invalidation, so this is not something that can be backpatched.\n\nI've looked again at that and that was OK on a second look. 
May I\nsuggest the attached additions with LWLockHeldByMeInMode to make sure\nthat the stats are dropped and created while we hold\nReplicationSlotAllocationLock?\n--\nMichael", "msg_date": "Tue, 12 Mar 2024 14:36:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 12, 2024 at 02:36:58PM +0900, Michael Paquier wrote:\n> On Mon, Mar 11, 2024 at 04:15:40PM +0900, Michael Paquier wrote:\n> > That's a slight change in behavior, unfortunately, and it cannot be\n> > called a bug as this improves the visibility of the stats after an\n> > invalidation, so this is not something that can be backpatched.\n> \n> I've looked again at that and that was OK on a second look.\n\nThanks!\n\n> May I\n> suggest the attached additions with LWLockHeldByMeInMode to make sure\n> that the stats are dropped and created while we hold\n> ReplicationSlotAllocationLock?\n\nYeah, makes fully sense and looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 06:17:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "Hello Bertrand and Michael,\n\n12.03.2024 09:17, Bertrand Drouvot wrote:\n>\n>> May I\n>> suggest the attached additions with LWLockHeldByMeInMode to make sure\n>> that the stats are dropped and created while we hold\n>> ReplicationSlotAllocationLock?\n> Yeah, makes fully sense and looks good to me.\n\nSorry for a bit off-topic, but I've remembered an anomaly with a similar\nassert:\nhttps://www.postgresql.org/message-id/17947-b9554521ad963c9c%40postgresql.org\n\nMaybe you would find it worth considering while working in this area...\n(I've just run that reproducer on b36fbd9f8 and confirmed that the\nassertion failure is still here.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 12 Mar 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" }, { "msg_contents": "On Tue, Mar 12, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> Sorry for a bit off-topic, but I've remembered an anomaly with a similar\n> assert:\n> https://www.postgresql.org/message-id/17947-b9554521ad963c9c%40postgresql.org\n\nThanks for the reminder. The invalidation path with the stats drop is\nonly in 16~.\n\n> Maybe you would find it worth considering while working in this area...\n> (I've just run that reproducer on b36fbd9f8 and confirmed that the\n> assertion failure is still here.)\n\nIndeed, something needs to happen. I am not surprised that it still\nreproduces; nothing has changed with the locking of the stats entries.\n:/\n--\nMichael", "msg_date": "Wed, 13 Mar 2024 07:37:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spurious pgstat_drop_replslot() call" } ]
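The suggested change boils down to one assertion in each of the two pgstat entry points. LWLockHeldByMeInMode() and ReplicationSlotAllocationLock are existing symbols; the assumption in this sketch is that both call sites hold the lock exclusively, as the surrounding slot create/drop code does.

    /* in pgstat_create_replslot() and pgstat_drop_replslot() */
    Assert(LWLockHeldByMeInMode(ReplicationSlotAllocationLock,
                                LW_EXCLUSIVE));

With that in place, any future caller that creates or drops slot statistics without holding the allocation lock trips the assertion immediately instead of racing against a concurrently created slot of the same name.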
[ { "msg_contents": "Hello Team,\n\nCan you help me with steps to identify transactions which caused wal\ngeneration to surge ?\n\nRegards,\nGayatri.\n\n\n", "msg_date": "Fri, 8 Mar 2024 20:20:30 +0530", "msg_from": "Gayatri Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Identify transactions causing highest wal generation" }, { "msg_contents": "On 3/8/24 15:50, Gayatri Singh wrote:\n> Hello Team,\n> \n> Can you help me with steps to identify transactions which caused wal\n> generation to surge ?\n> \n\nYou should probably take a look at pg_waldump, which prints information\nabout WAL contents, including which XID generated each record.\n\nI don't know what exactly is your goal, but sometimes it's not entirely\ndirect relationship. For example, a transaction may delete a record,\nwhich generates just a little bit of WAL. But then after a checkpoint a\nVACUUM comes around, vacuums the page to reclaim the space of the entry,\nand ends up writing FPI (which is much larger). You could argue this WAL\nis also attributable to the original transaction, but that's not what\npg_waldump will allow you to do. FPIs in general may inflate the numbers\nunpredictably, and it's not something the original transaction can\naffect very much.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 8 Mar 2024 16:40:39 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identify transactions causing highest wal generation" }, { "msg_contents": "On Fri, Mar 8, 2024, at 12:40 PM, Tomas Vondra wrote:\n> On 3/8/24 15:50, Gayatri Singh wrote:\n> > Hello Team,\n> > \n> > Can you help me with steps to identify transactions which caused wal\n> > generation to surge ?\n> > \n> \n> You should probably take a look at pg_waldump, which prints information\n> about WAL contents, including which XID generated each record.\n\nYou can also use pg_stat_statements to obtain this information.\n\npostgres=# select * from pg_stat_statements order by wal_bytes desc;\n-[ RECORD 1 ]----------+---------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------\nuserid | 10\ndbid | 16385\ntoplevel | t\nqueryid | -8403979585082616547\nquery | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\nplans | 0\ntotal_plan_time | 0\nmin_plan_time | 0\nmax_plan_time | 0\nmean_plan_time | 0\nstddev_plan_time | 0\ncalls | 238260\ntotal_exec_time | 4642.599296000018\nmin_exec_time | 0.011094999999999999\nmax_exec_time | 0.872748\nmean_exec_time | 0.01948543312347807\nstddev_exec_time | 0.006370786385582063\nrows | 238260\n.\n.\n.\nwal_records | 496659\nwal_fpi | 19417\nwal_bytes | 208501147\n.\n.\n.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Mar 8, 2024, at 12:40 PM, Tomas Vondra wrote:On 3/8/24 15:50, Gayatri Singh wrote:> Hello Team,> > Can you help me with steps to identify transactions which caused wal> generation to surge ?> You should probably take a look at pg_waldump, which prints informationabout WAL contents, including which XID generated each record.You can 
also use pg_stat_statements to obtain this information.postgres=# select * from pg_stat_statements order by wal_bytes desc;-[ RECORD 1 ]----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------userid                 | 10dbid                   | 16385toplevel               | tqueryid                | -8403979585082616547query                  | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2plans                  | 0total_plan_time        | 0min_plan_time          | 0max_plan_time          | 0mean_plan_time         | 0stddev_plan_time       | 0calls                  | 238260total_exec_time        | 4642.599296000018min_exec_time          | 0.011094999999999999max_exec_time          | 0.872748mean_exec_time         | 0.01948543312347807stddev_exec_time       | 0.006370786385582063rows                   | 238260...wal_records            | 496659wal_fpi                | 19417wal_bytes              | 208501147...--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Fri, 08 Mar 2024 13:32:47 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identify transactions causing highest wal generation" }, { "msg_contents": "On Fri, Mar 8, 2024 at 9:10 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/8/24 15:50, Gayatri Singh wrote:\n> > Hello Team,\n> >\n> > Can you help me with steps to identify transactions which caused wal\n> > generation to surge ?\n> >\n>\n> You should probably take a look at pg_waldump, which prints information\n> about WAL contents, including which XID generated each record.\n\nRight. pg_walinspect too can help get the same info for the available\nWAL if you are on a production database with PG15 without any access\nto the host instance.\n\n> I don't know what exactly is your goal,\n\nYeah, it's good to know the use-case if possible.\n\n> but sometimes it's not entirely\n> direct relationship.For example, a transaction may delete a record,\n> which generates just a little bit of WAL. But then after a checkpoint a\n> VACUUM comes around, vacuums the page to reclaim the space of the entry,\n> and ends up writing FPI (which is much larger). You could argue this WAL\n> is also attributable to the original transaction, but that's not what\n> pg_waldump will allow you to do. FPIs in general may inflate the numbers\n> unpredictably, and it's not something the original transaction can\n> affect very much.\n\nNice. 
If one knows the fact that there can be WAL generated without\nassociated transaction (no XID), there won't be surprises when the\namount of WAL generated by all transactions is compared against the\ntotal WAL on the database.\n\nAlternatively, one can get the correct amount of WAL generated\nincluding the WAL without XID before and after doing some operations\nas shown below:\n\npostgres=# SELECT pg_current_wal_lsn();\n pg_current_wal_lsn\n--------------------\n 0/52EB488\n(1 row)\n\npostgres=# create table foo as select i from generate_series(1, 1000000) i;\nSELECT 1000000\npostgres=# update foo set i = i +1 where i%2 = 0;\nUPDATE 500000\npostgres=# SELECT pg_current_wal_lsn();\n pg_current_wal_lsn\n--------------------\n 0/D2B8000\n(1 row)\n\npostgres=# SELECT pg_wal_lsn_diff('0/D2B8000', '0/52EB488');\n pg_wal_lsn_diff\n-----------------\n 134007672\n(1 row)\n\npostgres=# SELECT pg_size_pretty(pg_wal_lsn_diff('0/D2B8000', '0/52EB488'));\n pg_size_pretty\n----------------\n 128 MB\n(1 row)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Mar 2024 22:08:43 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identify transactions causing highest wal generation" } ]
[ { "msg_contents": "Hi,\n\nWe observed a slight lag in timestamp for a few logs from the emit_log_hook\nhook implementation when the log_line_prefix GUC has '%m'.\n\nUpon debugging, we found that the saved_timeval_set variable is set to\n'true' in get_formatted_log_time() but is not reset to 'false' until the\nnext call to send_message_to_server_log(). Due to this, saved_timeval_set\nwill be true during the execution of hook emit_log_hook() which prefixes\nthe saved timestamp 'saved_timeval' from the previous log line (our hook\nimplementation calls log_line_prefix()).\n\nAttached patch sets the saved_timeval_set to false before executing the\nemit_log_hook()\n\nThanks,\nVinay", "msg_date": "Sat, 9 Mar 2024 21:09:39 +0530", "msg_from": "Kambam Vinay <[email protected]>", "msg_from_op": true, "msg_subject": "Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" }, { "msg_contents": "On Sat, Mar 09, 2024 at 09:09:39PM +0530, Kambam Vinay wrote:\n> We observed a slight lag in timestamp for a few logs from the emit_log_hook\n> hook implementation when the log_line_prefix GUC has '%m'.\n> \n> Upon debugging, we found that the saved_timeval_set variable is set to\n> 'true' in get_formatted_log_time() but is not reset to 'false' until the\n> next call to send_message_to_server_log(). Due to this, saved_timeval_set\n> will be true during the execution of hook emit_log_hook() which prefixes\n> the saved timestamp 'saved_timeval' from the previous log line (our hook\n> implementation calls log_line_prefix()).\n> \n> Attached patch sets the saved_timeval_set to false before executing the\n> emit_log_hook()\n\nIndeed. If you rely on log_line_prefix() in your hook before the\nserver side elog, the saved timestamp would not match with what could\nbe computed in the follow-up send_message_to_server_log or\nsend_message_to_frontend.\n\nHmm. Shouldn't we remove the forced reset of formatted_log_time for\nthe 'm' case in log_status_format() and remove the reset done at the\nbeginning of send_message_to_server_log()? One problem with your\npatch is that we would still finish with a different saved_timeval in\nthe hook and in send_message_to_server_log(), but we should do things\nso as the timestamps are the same for the whole duration of\nEmitErrorReport(), no? It seems to me that a more correct solution\nwould be to reset saved_timeval_set and formatted_log_time[0] once\nbefore the hook, at the beginning of EmitErrorReport().\n--\nMichael", "msg_date": "Mon, 11 Mar 2024 16:43:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" }, { "msg_contents": "Thanks Michael for the review. Agree with your comment on the patch.\nupdated the patch with recommended change.\n\nThanks,\nVinay\n\nOn Mon, Mar 11, 2024 at 1:13 PM Michael Paquier <[email protected]> wrote:\n\n> On Sat, Mar 09, 2024 at 09:09:39PM +0530, Kambam Vinay wrote:\n> > We observed a slight lag in timestamp for a few logs from the\n> emit_log_hook\n> > hook implementation when the log_line_prefix GUC has '%m'.\n> >\n> > Upon debugging, we found that the saved_timeval_set variable is set to\n> > 'true' in get_formatted_log_time() but is not reset to 'false' until the\n> > next call to send_message_to_server_log(). 
Due to this, saved_timeval_set\n> > will be true during the execution of hook emit_log_hook() which prefixes\n> > the saved timestamp 'saved_timeval' from the previous log line (our hook\n> > implementation calls log_line_prefix()).\n> >\n> > Attached patch sets the saved_timeval_set to false before executing the\n> > emit_log_hook()\n>\n> Indeed. If you rely on log_line_prefix() in your hook before the\n> server side elog, the saved timestamp would not match with what could\n> be computed in the follow-up send_message_to_server_log or\n> send_message_to_frontend.\n>\n> Hmm. Shouldn't we remove the forced reset of formatted_log_time for\n> the 'm' case in log_status_format() and remove the reset done at the\n> beginning of send_message_to_server_log()? One problem with your\n> patch is that we would still finish with a different saved_timeval in\n> the hook and in send_message_to_server_log(), but we should do things\n> so as the timestamps are the same for the whole duration of\n> EmitErrorReport(), no? It seems to me that a more correct solution\n> would be to reset saved_timeval_set and formatted_log_time[0] once\n> before the hook, at the beginning of EmitErrorReport().\n> --\n> Michael\n>", "msg_date": "Sun, 17 Mar 2024 19:35:57 +0530", "msg_from": "Kambam Vinay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" }, { "msg_contents": "On Sun, Mar 17, 2024 at 07:35:57PM +0530, Kambam Vinay wrote:\n> Thanks Michael for the review. Agree with your comment on the patch.\n> updated the patch with recommended change.\n\nThat should be fine. I would suggest to document why the reset is\ndone at this location, aka this is to ensure that the same formatted\ntimestamp is used across the board, for all log destinations as well\nas hook callers that rely on log_line_prefix.\n\nWhile reviewing, I have noticed that a comment was not correct: JSON\nlogs also use the formatted timestamp via get_formatted_log_time().\n\nI may be able to get this one committed just before the feature freeze\nof v17, as timestamp consistency for hooks that call\nlog_status_format() is narrow. For now, I have added an entry in the\nCF app to keep track of it:\nhttps://commitfest.postgresql.org/48/4901/\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 14:12:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" }, { "msg_contents": "On Mon, Mar 18, 2024 at 02:12:57PM +0900, Michael Paquier wrote:\n> I may be able to get this one committed just before the feature freeze\n> of v17, as timestamp consistency for hooks that call\n> log_status_format() is narrow. For now, I have added an entry in the\n> CF app to keep track of it:\n> https://commitfest.postgresql.org/48/4901/\n\nWhile looking again at that. there were two more comments that missed\na refresh about JSON in get_formatted_log_time() and\nget_formatted_start_time(). It's been a few weeks since the last\nupdate, but I'll be able to wrap that tomorrow, updating these\ncomments on the way.\n--\nMichael", "msg_date": "Wed, 3 Apr 2024 15:13:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" }, { "msg_contents": "On Wed, Apr 03, 2024 at 03:13:13PM +0900, Michael Paquier wrote:\n> While looking again at that. 
there were two more comments that missed\n> a refresh about JSON in get_formatted_log_time() and\n> get_formatted_start_time(). It's been a few weeks since the last\n> update, but I'll be able to wrap that tomorrow, updating these\n> comments on the way.\n\nAnd done with 2a217c371799, before the feature freeze.\n--\nMichael", "msg_date": "Thu, 4 Apr 2024 14:26:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for timestamp lag issue from emit_log_hook when GUC\n log_line_prefix has '%m'" } ]
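A sketch of the approach that was settled on, with the caveat that the placement and comments are assumptions rather than a copy of commit 2a217c371799: the cached timestamp state in elog.c is reset once at the top of EmitErrorReport(), before emit_log_hook and the log destinations run, so every consumer of %m formats the same timestamp for a given report.

    /* at the top of EmitErrorReport(), before any output path runs */
    saved_timeval_set = false;
    formatted_log_time[0] = '\0';

    /* existing code: call hook before sending message to log */
    if (edata->output_to_server && emit_log_hook)
        (*emit_log_hook) (edata);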
[ { "msg_contents": "Hi,\n\nI was trying to learn enough about the new bulk_write.c to figure out\nwhat might be going wrong over at [1], and finished up doing this\nexercise, which is experiment quality but passes basic tests. It's a\nbit like v1-0013 and v1-0014's experimental vectored checkpointing\nfrom [2] (which themselves are not currently proposed, that too was in\nthe experiment category), but this usage is a lot simpler and might be\nworth considering. Presumably both things would eventually finish up\nbeing done by (not yet proposed) streaming write, but could also be\ndone directly in this simple case.\n\nThis way, CREATE INDEX generates 128kB pwritev() calls instead of 8kB\npwrite() calls. (There's a magic 16 in there, we'd probably need to\nthink harder about that.) It'd be better if bulk_write.c's memory\nmanagement were improved: if buffers were mostly contiguous neighbours\ninstead of being separately palloc'd objects, you'd probably often get\n128kB pwrite() instead of pwritev(), which might be marginally more\nefficient.\n\nThis made me wonder why smgrwritev() and smgrextendv() shouldn't be\nbacked by the same implementation, since they are essentially the same\noperation. The differences are some assertions which might as well be\nmoved up to the smgr.c level as they must surely apply to any future\nsmgr implementation too, right?, and the segment file creation policy\nwhich can be controlled with a new argument. I tried that here. An\nalternative would be for md.c to have a workhorse function that both\nmdextendv() and mdwritev() call, but I'm not sure if there's much\npoint in that.\n\nWhile thinking about that I realised that an existing write-or-extend\nassertion in master is wrong because it doesn't add nblocks.\n\nHmm, it's a bit weird that we have nblocks as int or BlockNumber in\nvarious places, which I think should probably be fixed.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK%2B5DOmLaBp3Z7C4S-Yv6yoROvr1UncjH2S1ZbPT8D%2BZg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com", "msg_date": "Sun, 10 Mar 2024 13:20:06 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Vectored I/O in bulk_write.c" }, { "msg_contents": "Slightly better version, adjusting a few obsoleted comments, adjusting\nerror message to distinguish write/extend, fixing a thinko in\nsmgr_cached_nblocks maintenance.", "msg_date": "Sun, 10 Mar 2024 14:17:34 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "Here also is a first attempt at improving the memory allocation and\nmemory layout.\n\nI wonder if bulk logging should trigger larger WAL writes in the \"Have\nto write it ourselves\" case in AdvanceXLInsertBuffer(), since writing\n8kB of WAL at a time seems like an unnecessarily degraded level of\nperformance, especially with wal_sync_method=open_datasync. Of course\nthe real answer is \"make sure wal_buffers is high enough for your\nworkload\" (usually indirectly by automatically scaling from\nshared_buffers), but this problem jumps out when tracing bulk_writes.c\nwith default settings. 
We write out the index 128kB at a time, but\nthe WAL 8kB at a time.", "msg_date": "Mon, 11 Mar 2024 12:42:26 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "One more observation while I'm thinking about bulk_write.c... hmm, it\nwrites the data out and asks the checkpointer to fsync it, but doesn't\ncall smgrwriteback(). I assume that means that on Linux the physical\nwriteback sometimes won't happen until the checkpointer eventually\ncalls fsync() sequentially, one segment file at a time. I see that\nit's difficult to decide how to do that though; unlike checkpoints,\nwhich have rate control/spreading, bulk writes could more easily flood\nthe I/O subsystem in a burst. I expect most non-Linux systems'\nwrite-behind heuristics to fire up for bulk sequential writes, but on\nLinux where most users live, there is no write-behind heuristic AFAIK\n(on the most common file systems anyway), so you have to crank that\nhandle if you want it to wake up and smell the coffee before it hits\ninternal limits, but then you have to decide how fast to crank it.\n\nThis problem will come into closer focus when we start talking about\nstreaming writes. For the current non-streaming bulk_write.c coding,\nI don't have any particular idea of what to do about that, so I'm just\nnoting the observation here.\n\nSorry for the sudden wall of text/monologue; this is all a sort of\nreaction to bulk_write.c that I should perhaps have sent to the\nbulk_write.c thread, triggered by a couple of debugging sessions.\n\n\n", "msg_date": "Wed, 13 Mar 2024 10:38:52 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "(Replying to all your messages in this thread together)\n\n> This made me wonder why smgrwritev() and smgrextendv() shouldn't be\n> backed by the same implementation, since they are essentially the same\n> operation.\n\n+1 to the general idea of merging the write and extend functions.\n\n> The differences are some assertions which might as well be\n> moved up to the smgr.c level as they must surely apply to any future\n> smgr implementation too, right?, and the segment file creation policy\n> which can be controlled with a new argument. I tried that here.\n\nCurrently, smgrwrite/smgrextend just calls through to mdwrite/mdextend. \nI'd like to keep it that way. Otherwise we need to guess what a \nhypothetical smgr implementation might look like.\n\nFor example this assertion:\n\n> \t/* This assert is too expensive to have on normally ... */\n> #ifdef CHECK_WRITE_VS_EXTEND\n> \tAssert(blocknum >= mdnblocks(reln, forknum));\n> #endif\n\nI think that should continue to be md.c's internal affair. For example, \nimagine that you had a syscall like write() but which always writes to \nthe end of the file and also returns the offset that the data was \nwritten to. You would want to assert the returned offset instead of the \nabove.\n\nSo -1 on moving up the assertions to smgr.c.\n\nLet's bite the bullet and merge the smgrwrite and smgrextend functions \nat the smgr level too. I propose the following signature:\n\n#define SWF_SKIP_FSYNC\t\t0x01\n#define SWF_EXTEND\t\t0x02\n#define SWF_ZERO\t\t0x04\n\nvoid smgrwritev(SMgrRelation reln, ForkNumber forknum,\n\t\tBlockNumber blocknum,\n\t\tconst void **buffer, BlockNumber nblocks,\n\t\tint flags);\n\nThis would replace smgwrite, smgrextend, and smgrzeroextend. 
The \nmdwritev() function would have the same signature. A single 'flags' arg \nlooks better in the callers than booleans, because you don't need to \nremember what 'true' or 'false' means. The callers would look like this:\n\n/* like old smgrwrite(reln, MAIN_FORKNUM, 123, buf, false) */\nsmgrwritev(reln, MAIN_FORKNUM, 123, buf, 1, 0);\n\n/* like old smgrwrite(reln, MAIN_FORKNUM, 123, buf, true) */\nsmgrwritev(reln, MAIN_FORKNUM, 123, buf, 1, SWF_SKIP_FSYNC);\n\n/* like old smgrextend(reln, MAIN_FORKNUM, 123, buf, true) */\nsmgrwritev(reln, MAIN_FORKNUM, 123, buf, 1,\n SWF_EXTEND | SWF_SKIP_FSYNC);\n\n/* like old smgrzeroextend(reln, MAIN_FORKNUM, 123, 10, true) */\nsmgrwritev(reln, MAIN_FORKNUM, 123, NULL, 10,\n SWF_EXTEND | SWF_ZERO | SWF_SKIP_FSYNC);\n\n> While thinking about that I realised that an existing write-or-extend\n> assertion in master is wrong because it doesn't add nblocks.\n> \n> Hmm, it's a bit weird that we have nblocks as int or BlockNumber in\n> various places, which I think should probably be fixed.\n\n+1\n\n> Here also is a first attempt at improving the memory allocation and\n> memory layout.\n> ...\n> +typedef union BufferSlot\n> +{\n> +\tPGIOAlignedBlock buffer;\n> +\tdlist_node\tfreelist_node;\n> +}\t\t\tBufferSlot;\n> +\n\nIf you allocated the buffers in one large contiguous chunk, you could \noften do one large write() instead of a gathered writev() of multiple \nblocks. That should be even better, although I don't know much of a \ndifference it makes. The above layout wastes a fair amount memory too, \nbecause 'buffer' is I/O aligned.\n\n> I wonder if bulk logging should trigger larger WAL writes in the \"Have\n> to write it ourselves\" case in AdvanceXLInsertBuffer(), since writing\n> 8kB of WAL at a time seems like an unnecessarily degraded level of\n> performance, especially with wal_sync_method=open_datasync. Of course\n> the real answer is \"make sure wal_buffers is high enough for your\n> workload\" (usually indirectly by automatically scaling from\n> shared_buffers), but this problem jumps out when tracing bulk_writes.c\n> with default settings. We write out the index 128kB at a time, but\n> the WAL 8kB at a time.\n\nMakes sense.\n\nOn 12/03/2024 23:38, Thomas Munro wrote:\n> One more observation while I'm thinking about bulk_write.c... hmm, it\n> writes the data out and asks the checkpointer to fsync it, but doesn't\n> call smgrwriteback(). I assume that means that on Linux the physical\n> writeback sometimes won't happen until the checkpointer eventually\n> calls fsync() sequentially, one segment file at a time. I see that\n> it's difficult to decide how to do that though; unlike checkpoints,\n> which have rate control/spreading, bulk writes could more easily flood\n> the I/O subsystem in a burst. I expect most non-Linux systems'\n> write-behind heuristics to fire up for bulk sequential writes, but on\n> Linux where most users live, there is no write-behind heuristic AFAIK\n> (on the most common file systems anyway), so you have to crank that\n> handle if you want it to wake up and smell the coffee before it hits\n> internal limits, but then you have to decide how fast to crank it.\n> \n> This problem will come into closer focus when we start talking about\n> streaming writes. For the current non-streaming bulk_write.c coding,\n> I don't have any particular idea of what to do about that, so I'm just\n> noting the observation here.\n\nIt would be straightforward to call smgrwriteback() from sgmr_bulk_flush \nevery 1 GB of written data for example. 
It would be nice to do it in the \nbackground though, and not stall the bulk writing for it. With the AIO \npatches, I presume we could easily start an asynchronous writeback and \nnot wait for it to finish.\n\n> Sorry for the sudden wall of text/monologue; this is all a sort of\n> reaction to bulk_write.c that I should perhaps have sent to the\n> bulk_write.c thread, triggered by a couple of debugging sessions.\n\nThanks for looking! This all makes sense. The idea of introducing the \nbulk write interface was that now we have a natural place to put all \nthese heuristics and optimizations in. That seems to be a success, \nAFAICS none of the things discussed here will change the bulk_write API, \njust the implementation.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 10:57:17 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On Wed, Mar 13, 2024 at 9:57 PM Heikki Linnakangas <[email protected]> wrote:\n> Let's bite the bullet and merge the smgrwrite and smgrextend functions\n> at the smgr level too. I propose the following signature:\n>\n> #define SWF_SKIP_FSYNC 0x01\n> #define SWF_EXTEND 0x02\n> #define SWF_ZERO 0x04\n>\n> void smgrwritev(SMgrRelation reln, ForkNumber forknum,\n> BlockNumber blocknum,\n> const void **buffer, BlockNumber nblocks,\n> int flags);\n>\n> This would replace smgwrite, smgrextend, and smgrzeroextend. The\n\nThat sounds pretty good to me.\n\n> > Here also is a first attempt at improving the memory allocation and\n> > memory layout.\n> > ...\n> > +typedef union BufferSlot\n> > +{\n> > + PGIOAlignedBlock buffer;\n> > + dlist_node freelist_node;\n> > +} BufferSlot;\n> > +\n>\n> If you allocated the buffers in one large contiguous chunk, you could\n> often do one large write() instead of a gathered writev() of multiple\n> blocks. That should be even better, although I don't know much of a\n> difference it makes. The above layout wastes a fair amount memory too,\n> because 'buffer' is I/O aligned.\n\nThe patch I posted has an array of buffers with the properties you\ndescribe, so you get a pwrite() (no 'v') sometimes, and a pwritev()\nwith a small iovcnt when it wraps around:\n\npwrite(...) = 131072 (0x20000)\npwritev(...,3,...) = 131072 (0x20000)\npwrite(...) = 131072 (0x20000)\npwritev(...,3,...) = 131072 (0x20000)\npwrite(...) = 131072 (0x20000)\n\nHmm, I expected pwrite() alternating with pwritev(iovcnt=2), the\nlatter for when it wraps around the buffer array, so I'm not sure why it's\n3. 
I guess the btree code isn't writing them strictly monotonically or\nsomething...\n\nI don't believe it wastes any memory on padding (except a few bytes\nwasted by palloc_aligned() before BulkWriteState):\n\n(gdb) p &bulkstate->buffer_slots[0]\n$4 = (BufferSlot *) 0x15c731cb4000\n(gdb) p &bulkstate->buffer_slots[1]\n$5 = (BufferSlot *) 0x15c731cb6000\n(gdb) p sizeof(bulkstate->buffer_slots[0])\n$6 = 8192\n\n\n", "msg_date": "Wed, 13 Mar 2024 23:18:33 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On 13/03/2024 12:18, Thomas Munro wrote:\n> On Wed, Mar 13, 2024 at 9:57 PM Heikki Linnakangas <[email protected]> wrote:\n>>> Here also is a first attempt at improving the memory allocation and\n>>> memory layout.\n>>> ...\n>>> +typedef union BufferSlot\n>>> +{\n>>> + PGIOAlignedBlock buffer;\n>>> + dlist_node freelist_node;\n>>> +} BufferSlot;\n>>> +\n>>\n>> If you allocated the buffers in one large contiguous chunk, you could\n>> often do one large write() instead of a gathered writev() of multiple\n>> blocks. That should be even better, although I don't know much of a\n>> difference it makes. The above layout wastes a fair amount memory too,\n>> because 'buffer' is I/O aligned.\n> \n> The patch I posted has an array of buffers with the properties you\n> describe, so you get a pwrite() (no 'v') sometimes, and a pwritev()\n> with a small iovcnt when it wraps around:\n\nOh I missed that it was \"union BufferSlot\". I thought it was a struct. \nNever mind then.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 12:22:20 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "Alright, here is a first attempt at merging all three interfaces as\nyou suggested. I like it! I especially like the way it removes lots\nof duplication.\n\nI don't understand your argument about the location of the\nwrite-vs-extent assertions. It seems to me that these are assertions\nabout what the *public* smgrnblocks() function returns. In other\nwords, we assert that the caller is aware of the current relation size\n(and has some kind of interlocking scheme for that to be possible),\naccording to the smgr implementation's public interface. That's not\nan assertion about internal details of the smgr implementation, it's\npart of the \"contract\" for the API.", "msg_date": "Thu, 14 Mar 2024 10:12:32 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On 13/03/2024 23:12, Thomas Munro wrote:\n> Alright, here is a first attempt at merging all three interfaces as\n> you suggested. I like it! I especially like the way it removes lots\n> of duplication.\n> \n> I don't understand your argument about the location of the\n> write-vs-extent assertions. It seems to me that these are assertions\n> about what the *public* smgrnblocks() function returns. In other\n> words, we assert that the caller is aware of the current relation size\n> (and has some kind of interlocking scheme for that to be possible),\n> according to the smgr implementation's public interface. 
That's not\n> an assertion about internal details of the smgr implementation, it's\n> part of the \"contract\" for the API.\n\nI tried to say that smgr implementation might have better ways to assert \nthat than calling smgrnblocks(), so it would be better to leave it to \nthe implementation. But what bothered me most was that smgrwrite() had a \ndifferent signature than mdwrite(). I'm happy with the way you have it \nin the v4 patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 23:49:49 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On Thu, Mar 14, 2024 at 10:49 AM Heikki Linnakangas <[email protected]> wrote:\n> I tried to say that smgr implementation might have better ways to assert\n> that than calling smgrnblocks(), so it would be better to leave it to\n> the implementation. But what bothered me most was that smgrwrite() had a\n> different signature than mdwrite(). I'm happy with the way you have it\n> in the v4 patch.\n\nLooking for other things that can be hoisted up to smgr.c level\nbecause they are part of the contract or would surely need to be\nduplicated by any implementation: I think the check that you don't\nexceed the maximum possible block number could be there too, no?\nHere's a version like that.\n\nDoes anyone else want to object to doing this for 17? Probably still\nneeds some cleanup eg comments etc that may be lurking around the\nplace and another round of testing. As for the overall idea, I'll\nleave it a few days and see if others have feedback. My take is that\nthis is what bulk_write.c was clearly intended to do, it's just that\nsmgr let it down by not allowing vectored extension yet. It's a\nfairly mechanical simplification, generalisation, and net code\nreduction to do so by merging, like this.", "msg_date": "Thu, 14 Mar 2024 12:35:27 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "I canvassed Andres off-list since smgrzeroextend() is his invention,\nand he wondered if it was a good idea to blur the distinction between\nthe different zero-extension strategies like that. Good question. My\ntake is that it's fine:\n\nmdzeroextend() already uses fallocate() only for nblocks > 8, but\notherwise writes out zeroes, because that was seen to interact better\nwith file system heuristics on common systems. That preserves current\nbehaviour for callers of plain-old sgmrextend(), which becomes a\nwrapper for smgrwrite(..., 1, _ZERO | _EXTEND). If some hypothetical\nfuture caller wants to be able to call smgrwritev(..., NULL, 9 blocks,\n_ZERO | _EXTEND) directly without triggering the fallocate() strategy\nfor some well researched reason, we could add a new flag to say so, ie\nadjust that gating.\n\nIn other words, we have already blurred the semantics. 
To me, it\nseems nicer to have a single high level interface for the same logical\noperation, and flags to select strategies more explicitly if that is\neventually necessary.\n\n\n", "msg_date": "Sat, 16 Mar 2024 12:27:15 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "Hi,\n\nOn 2024-03-16 12:27:15 +1300, Thomas Munro wrote:\n> I canvassed Andres off-list since smgrzeroextend() is his invention,\n> and he wondered if it was a good idea to blur the distinction between\n> the different zero-extension strategies like that. Good question. My\n> take is that it's fine:\n> \n> mdzeroextend() already uses fallocate() only for nblocks > 8, but\n> otherwise writes out zeroes, because that was seen to interact better\n> with file system heuristics on common systems. That preserves current\n> behaviour for callers of plain-old sgmrextend(), which becomes a\n> wrapper for smgrwrite(..., 1, _ZERO | _EXTEND). If some hypothetical\n> future caller wants to be able to call smgrwritev(..., NULL, 9 blocks,\n> _ZERO | _EXTEND) directly without triggering the fallocate() strategy\n> for some well researched reason, we could add a new flag to say so, ie\n> adjust that gating.\n> \n> In other words, we have already blurred the semantics. To me, it\n> seems nicer to have a single high level interface for the same logical\n> operation, and flags to select strategies more explicitly if that is\n> eventually necessary.\n\nI don't think zeroextend on the one hand and and on the other hand a normal\nwrite or extend are really the same operation. In the former case the content\nis hard-coded in the latter it's caller provided. Sure, we can deal with that\nby special-casing NULL content - but at that point, what's the benefit of\ncombinding the two operations? I'm not dead-set against this, just not really\nconvinced that it's a good idea to combine the operations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 16 Mar 2024 12:10:14 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On Sun, Mar 17, 2024 at 8:10 AM Andres Freund <[email protected]> wrote:\n> I don't think zeroextend on the one hand and and on the other hand a normal\n> write or extend are really the same operation. In the former case the content\n> is hard-coded in the latter it's caller provided. Sure, we can deal with that\n> by special-casing NULL content - but at that point, what's the benefit of\n> combinding the two operations? I'm not dead-set against this, just not really\n> convinced that it's a good idea to combine the operations.\n\nI liked some things about that, but I'm happy to drop that part.\nHere's a version that leaves smgrzeroextend() alone, and I also gave\nup on moving errors and assertions up into the smgr layer for now to\nminimise the change. So to summarise, this gives us smgrwritev(...,\nflags), where flags can include _SKIP_FSYNC and _EXTEND. This way we\ndon't have to invent smgrextendv(). 
The old single block smgrextend()\nstill exists as a static inline wrapper.", "msg_date": "Wed, 20 Mar 2024 09:10:02 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "Then I would make the trivial change to respect the new\nio_combine_limit GUC that I'm gearing up to commit in another thread.\nAs attached.", "msg_date": "Fri, 29 Mar 2024 00:32:41 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "Here's a rebase. I decided against committing this for v17 in the\nend. There's not much wrong with it AFAIK, except perhaps an\nunprincipled chopping up of writes with large io_combine_limit due to\nsimplistic flow control, and I liked the idea of having a decent user\nof smgrwritev() in the tree, and it probably makes CREATE INDEX a bit\nfaster, but... I'd like to try something more ambitious that\nstreamifies this and also the \"main\" writeback paths. I shared some\npatches for that that are counterparts to this, over at[1].\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK1in4FiWtisXZ%2BJo-cNSbWjmBcPww3w3DBM%2BwhJTABXA%40mail.gmail.com", "msg_date": "Tue, 9 Apr 2024 16:51:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored I/O in bulk_write.c" }, { "msg_contents": "On Tue, Apr 09, 2024 at 04:51:52PM +1200, Thomas Munro wrote:\n> Here's a rebase. I decided against committing this for v17 in the\n> end. There's not much wrong with it AFAIK, except perhaps an\n> unprincipled chopping up of writes with large io_combine_limit due to\n> simplistic flow control, and I liked the idea of having a decent user\n> of smgrwritev() in the tree, and it probably makes CREATE INDEX a bit\n> faster, but... I'd like to try something more ambitious that\n> streamifies this and also the \"main\" writeback paths.\n\nI see this in the CF as Needs Review since 2024-03-10, but this 2024-04-09\nmessage sounds like you were abandoning it. Are you still commissioning a\nreview of this thread's patches, or is the CF entry obsolete?\n\n\n", "msg_date": "Sat, 6 Jul 2024 15:46:47 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored I/O in bulk_write.c" } ]
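To illustrate the end state described above, a hedged sketch of a single flag-taking smgrwritev() with the old single-block smgrextend() kept as a thin wrapper. The flag names SMGR_WRITE_SKIP_FSYNC and SMGR_WRITE_EXTEND are placeholders, not names taken from a committed patch, and the signature follows the proposal quoted earlier in the thread; smgrzeroextend() stays separate, per the objection to folding zero-extension in.

    /* hypothetical flag bits, per the discussion above */
    #define SMGR_WRITE_SKIP_FSYNC   0x01
    #define SMGR_WRITE_EXTEND       0x02

    extern void smgrwritev(SMgrRelation reln, ForkNumber forknum,
                           BlockNumber blocknum,
                           const void **buffers, BlockNumber nblocks,
                           int flags);

    /* the old single-block smgrextend() survives as an inline wrapper */
    static inline void
    smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,
               const void *buffer, bool skipFsync)
    {
        smgrwritev(reln, forknum, blocknum, &buffer, 1,
                   SMGR_WRITE_EXTEND |
                   (skipFsync ? SMGR_WRITE_SKIP_FSYNC : 0));
    }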
[ { "msg_contents": "When including tables with the new pg_dump functionality, it fails to\nerror out if a table is missing, but only if more than one table is\nspecified.\n\nE.g., if table foo exist, but not bar:\n\npg_dump --table bar\npg_dump: error: no matching tables were found\n\nwith file \"myfilter\" containing just \"table bar\"\npg_dump --filter myfilter\npg_dump: error: no matching tables were found\n\nwith the file \"myfilter\" containing both \"table foo\" and \"table bar\"\n(order doesn't matter):\n<no error, but dump of course only contains foo>\n\nNot having looked into the code, but it looks to me like some variable\nisn't properly reset, or perhaps there is a check for existence rather\nthan count?\n\n//Magnus\n\n\n", "msg_date": "Sun, 10 Mar 2024 15:22:58 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump include/exclude fails to error" }, { "msg_contents": "> On 10 Mar 2024, at 15:22, Magnus Hagander <[email protected]> wrote:\n\n> Not having looked into the code, but it looks to me like some variable\n> isn't properly reset, or perhaps there is a check for existence rather\n> than count?\n\nThanks for the report, I'll have a look later today when back.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 10 Mar 2024 15:32:12 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump include/exclude fails to error" }, { "msg_contents": "Hi\r\n\r\nne 10. 3. 2024 v 15:23 odesílatel Magnus Hagander <[email protected]>\r\nnapsal:\r\n\r\n> When including tables with the new pg_dump functionality, it fails to\r\n> error out if a table is missing, but only if more than one table is\r\n> specified.\r\n>\r\n> E.g., if table foo exist, but not bar:\r\n>\r\n> pg_dump --table bar\r\n> pg_dump: error: no matching tables were found\r\n>\r\n> with file \"myfilter\" containing just \"table bar\"\r\n> pg_dump --filter myfilter\r\n> pg_dump: error: no matching tables were found\r\n>\r\n> with the file \"myfilter\" containing both \"table foo\" and \"table bar\"\r\n> (order doesn't matter):\r\n> <no error, but dump of course only contains foo>\r\n>\r\n\r\nis not this expected behaviour (consistent with -t option)?\r\n\r\n(2024-03-10 16:48:07) postgres=# \\dt\r\n List of relations\r\n┌────────┬──────┬───────┬───────┐\r\n│ Schema │ Name │ Type │ Owner │\r\n╞════════╪══════╪═══════╪═══════╡\r\n│ public │ foo │ table │ pavel │\r\n└────────┴──────┴───────┴───────┘\r\n(1 row)\r\n\r\npavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t\r\nboo > /dev/null\r\npavel@nemesis:~/src/orafce$\r\n\r\nif you want to raise error, you should to use option --strict-names.\r\n\r\npavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t\r\nboo --strict-names > /dev/null\r\npg_dump: error: no matching tables were found for pattern \"boo\"\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n>\r\n> Not having looked into the code, but it looks to me like some variable\r\n> isn't properly reset, or perhaps there is a check for existence rather\r\n> than count?\r\n>\r\n> //Magnus\r\n>\r\n\nHine 10. 3. 
2024 v 15:23 odesílatel Magnus Hagander <[email protected]> napsal:When including tables with the new pg_dump functionality, it fails to\nerror out if a table is missing, but only if more than one table is\nspecified.\n\nE.g., if table foo exist, but not bar:\n\npg_dump --table bar\npg_dump: error: no matching tables were found\n\nwith file \"myfilter\" containing just \"table bar\"\npg_dump --filter myfilter\npg_dump: error: no matching tables were found\n\nwith the file \"myfilter\" containing both \"table foo\" and \"table bar\"\n(order doesn't matter):\n<no error, but dump of course only contains foo>is not this expected behaviour (consistent with -t option)?(2024-03-10 16:48:07) postgres=# \\dt        List of relations┌────────┬──────┬───────┬───────┐│ Schema │ Name │ Type  │ Owner │╞════════╪══════╪═══════╪═══════╡│ public │ foo  │ table │ pavel │└────────┴──────┴───────┴───────┘(1 row)pavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t boo > /dev/nullpavel@nemesis:~/src/orafce$ if you want to raise error, you should to use option --strict-names.pavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t boo --strict-names > /dev/nullpg_dump: error: no matching tables were found for pattern \"boo\"RegardsPavel \n\nNot having looked into the code, but it looks to me like some variable\nisn't properly reset, or perhaps there is a check for existence rather\nthan count?\n\n//Magnus", "msg_date": "Sun, 10 Mar 2024 16:51:19 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump include/exclude fails to error" }, { "msg_contents": "On Sun, Mar 10, 2024 at 4:51 PM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> ne 10. 3. 2024 v 15:23 odesílatel Magnus Hagander <[email protected]> napsal:\n>>\n>> When including tables with the new pg_dump functionality, it fails to\n>> error out if a table is missing, but only if more than one table is\n>> specified.\n>>\n>> E.g., if table foo exist, but not bar:\n>>\n>> pg_dump --table bar\n>> pg_dump: error: no matching tables were found\n>>\n>> with file \"myfilter\" containing just \"table bar\"\n>> pg_dump --filter myfilter\n>> pg_dump: error: no matching tables were found\n>>\n>> with the file \"myfilter\" containing both \"table foo\" and \"table bar\"\n>> (order doesn't matter):\n>> <no error, but dump of course only contains foo>\n>\n>\n> is not this expected behaviour (consistent with -t option)?\n>\n> (2024-03-10 16:48:07) postgres=# \\dt\n> List of relations\n> ┌────────┬──────┬───────┬───────┐\n> │ Schema │ Name │ Type │ Owner │\n> ╞════════╪══════╪═══════╪═══════╡\n> │ public │ foo │ table │ pavel │\n> └────────┴──────┴───────┴───────┘\n> (1 row)\n>\n> pavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t boo > /dev/null\n> pavel@nemesis:~/src/orafce$\n>\n> if you want to raise error, you should to use option --strict-names.\n>\n> pavel@nemesis:~/src/orafce$ /usr/local/pgsql/master/bin/pg_dump -t foo -t boo --strict-names > /dev/null\n> pg_dump: error: no matching tables were found for pattern \"boo\"\n\nHmpf, you're right. I guess I don't use multiple-dash-t often enough\n:) So yeah, then I agree this is probably the right behaviour. 
Maybe\nthe docs for --filter deserves a specific mention about these rules\nthough, as it's going to be a lot more common to specify multiples\nwhen using --filter?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 10 Mar 2024 19:20:35 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump include/exclude fails to error" } ]
[ { "msg_contents": "Greetings, everyone!\n\nWhile running \"installchecks\" on databases with UTF-8 encoding the test\ncitext_utf8 fails because of Turkish dotted I like this:\n\n SELECT 'i'::citext = 'İ'::citext AS t;\n t\n ---\n- t\n+ f\n (1 row)\n\nI tried to replicate the test's results by hand and with any collation\nthat I tried (including --locale=\"Turkish\") this test failed\n\nAlso an interesing result of my tesing. If you initialize you DB\nwith -E utf-8 --locale=\"Turkish\" and then run select LOWER('İ');\nthe output will be this:\n lower\n-------\n İ\n(1 row)\n\nWhich I find strange since lower() uses collation that was passed\n(default in this case but still)\n\nMy PostgreSQL version is this:\npostgres=# select version();\n version\n----------------------------------------------------------------------\n PostgreSQL 17devel on x86_64-windows, compiled by gcc-13.1.0, 64-bit\n\nThe proposed patch for skipping test is attached\n\nOleg Tselebrovskiy, Postgres Pro", "msg_date": "Mon, 11 Mar 2024 15:21:11 +0700", "msg_from": "Oleg Tselebrovskiy <[email protected]>", "msg_from_op": true, "msg_subject": "[PROPOSAL] Skip test citext_utf8 on Windows" }, { "msg_contents": "On Mon, Mar 11, 2024 at 03:21:11PM +0700, Oleg Tselebrovskiy wrote:\n> The proposed patch for skipping test is attached\n\nYour attached patch seems to be in binary format.\n--\nMichael", "msg_date": "Tue, 12 Mar 2024 08:24:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Skip test citext_utf8 on Windows" }, { "msg_contents": "\nOn 2024-03-11 Mo 04:21, Oleg Tselebrovskiy wrote:\n> Greetings, everyone!\n>\n> While running \"installchecks\" on databases with UTF-8 encoding the test\n> citext_utf8 fails because of Turkish dotted I like this:\n>\n>  SELECT 'i'::citext = 'İ'::citext AS t;\n>   t\n>  ---\n> - t\n> + f\n>  (1 row)\n>\n> I tried to replicate the test's results by hand and with any collation\n> that I tried (including --locale=\"Turkish\") this test failed\n>\n> Also an interesing result of my tesing. If you initialize you DB\n> with -E utf-8 --locale=\"Turkish\" and then run select LOWER('İ');\n> the output will be this:\n>  lower\n> -------\n>  İ\n> (1 row)\n>\n> Which I find strange since lower() uses collation that was passed\n> (default in this case but still)\n\n\n\nWouldn't we be better off finding a Windows fix for this, instead of \nsweeping it under the rug?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 11 Mar 2024 21:55:53 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Skip test citext_utf8 on Windows" }, { "msg_contents": "On Tue, Mar 12, 2024 at 2:56 PM Andrew Dunstan <[email protected]> wrote:\n> On 2024-03-11 Mo 04:21, Oleg Tselebrovskiy wrote:\n> > Greetings, everyone!\n> >\n> > While running \"installchecks\" on databases with UTF-8 encoding the test\n> > citext_utf8 fails because of Turkish dotted I like this:\n> >\n> > SELECT 'i'::citext = 'İ'::citext AS t;\n> > t\n> > ---\n> > - t\n> > + f\n> > (1 row)\n> >\n> > I tried to replicate the test's results by hand and with any collation\n> > that I tried (including --locale=\"Turkish\") this test failed\n> >\n> > Also an interesing result of my tesing. 
If you initialize you DB\n> > with -E utf-8 --locale=\"Turkish\" and then run select LOWER('İ');\n> > the output will be this:\n> > lower\n> > -------\n> > İ\n> > (1 row)\n> >\n> > Which I find strange since lower() uses collation that was passed\n> > (default in this case but still)\n>\n> Wouldn't we be better off finding a Windows fix for this, instead of\n> sweeping it under the rug?\n\nGiven the sorry state of our Windows locale support, I've started\nwondering about deleting it and telling users to adopt our nascent\nbuilt-in support or ICU[1].\n\nThis other thread [2] says the sorting is intransitive so I don't\nthink it really meets our needs anyway.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJhV__g_TJ0jVqPbnTuqT%2B%2BM6KFv2wj%2B9AV-cABNCXN6Q%40mail.gmail.com#bc35c0b88962ff8c24c27aecc1bca72e\n[2] https://www.postgresql.org/message-id/flat/1407a2c0-062b-4e4c-b728-438fdff5cb07%40manitou-mail.org\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:50:20 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Skip test citext_utf8 on Windows" }, { "msg_contents": "Michael Paquier писал(а) 2024-03-12 06:24:\n> On Mon, Mar 11, 2024 at 03:21:11PM +0700, Oleg Tselebrovskiy wrote:\n>> The proposed patch for skipping test is attached\n> \n> Your attached patch seems to be in binary format.\n> --\n> Michael\nRight, I had it saved in not-UTF-8 encoding. Kind of ironic\n\nHere's a fixed version", "msg_date": "Tue, 12 Mar 2024 11:45:03 +0700", "msg_from": "Oleg Tselebrovskiy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Skip test citext_utf8 on Windows" }, { "msg_contents": "\nOn 2024-03-11 Mo 22:50, Thomas Munro wrote:\n> On Tue, Mar 12, 2024 at 2:56 PM Andrew Dunstan <[email protected]> wrote:\n>> On 2024-03-11 Mo 04:21, Oleg Tselebrovskiy wrote:\n>>> Greetings, everyone!\n>>>\n>>> While running \"installchecks\" on databases with UTF-8 encoding the test\n>>> citext_utf8 fails because of Turkish dotted I like this:\n>>>\n>>> SELECT 'i'::citext = 'İ'::citext AS t;\n>>> t\n>>> ---\n>>> - t\n>>> + f\n>>> (1 row)\n>>>\n>>> I tried to replicate the test's results by hand and with any collation\n>>> that I tried (including --locale=\"Turkish\") this test failed\n>>>\n>>> Also an interesing result of my tesing. If you initialize you DB\n>>> with -E utf-8 --locale=\"Turkish\" and then run select LOWER('İ');\n>>> the output will be this:\n>>> lower\n>>> -------\n>>> İ\n>>> (1 row)\n>>>\n>>> Which I find strange since lower() uses collation that was passed\n>>> (default in this case but still)\n>> Wouldn't we be better off finding a Windows fix for this, instead of\n>> sweeping it under the rug?\n> Given the sorry state of our Windows locale support, I've started\n> wondering about deleting it and telling users to adopt our nascent\n> built-in support or ICU[1].\n>\n> This other thread [2] says the sorting is intransitive so I don't\n> think it really meets our needs anyway.\n>\n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJhV__g_TJ0jVqPbnTuqT%2B%2BM6KFv2wj%2B9AV-cABNCXN6Q%40mail.gmail.com#bc35c0b88962ff8c24c27aecc1bca72e\n> [2] https://www.postgresql.org/message-id/flat/1407a2c0-062b-4e4c-b728-438fdff5cb07%40manitou-mail.org\n\n\nMakes more sense than just hacking the tests to avoid running them on \nWindows. 
(I also didn't much like doing it by parsing the version \nstring, although I know there's at least one precedent for doing that.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 20:26:38 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Skip test citext_utf8 on Windows" } ]
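A compact way to reproduce both observations from the thread above in one query, assuming the citext extension is installed and an ICU-enabled build in which the stock tr-TR-x-icu collation was imported at initdb time; no particular output is implied, since the output is precisely what differs between platforms and locale providers:

SELECT 'i'::citext = 'İ'::citext AS citext_eq,
       lower('İ') AS lower_default,
       lower('İ' COLLATE "tr-TR-x-icu") AS lower_turkish_icu;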
[ { "msg_contents": "Hi all!\n\npg_regress accepts the expecteddir argument. However, it is never used\nand outputdir is used instead to get the expected files paths.\n\nThis patch fixes this and uses the expecteddir argument as expected.\n\nRegards,\nAnthonin", "msg_date": "Mon, 11 Mar 2024 09:23:16 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Fix expecteddir argument in pg_regress" }, { "msg_contents": "> On 11 Mar 2024, at 09:23, Anthonin Bonnefoy <[email protected]> wrote:\n\n> pg_regress accepts the expecteddir argument. However, it is never used\n> and outputdir is used instead to get the expected files paths.\n\nNice catch, c855872074b5bf44ecea033674d22fac831cfc31 added --expecteddir\nsupport to pg_regress but only implemented it for the ECPG tests. Will have\nanother look at this before applying with a backpatch to v16 where\n--expecteddir was added.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 11:45:52 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix expecteddir argument in pg_regress" }, { "msg_contents": "> On 14 Mar 2024, at 11:45, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 11 Mar 2024, at 09:23, Anthonin Bonnefoy <[email protected]> wrote:\n> \n>> pg_regress accepts the expecteddir argument. However, it is never used\n>> and outputdir is used instead to get the expected files paths.\n> \n> Nice catch, c855872074b5bf44ecea033674d22fac831cfc31 added --expecteddir\n> support to pg_regress but only implemented it for the ECPG tests. Will have\n> another look at this before applying with a backpatch to v16 where\n> --expecteddir was added.\n\nPushed and backpatched, thanks for the submission!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 21:24:38 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix expecteddir argument in pg_regress" } ]
[ { "msg_contents": "While self-reviewing my \"Refactoring backend fork+exec code\" patches, I \nnoticed this in pq_init():\n\n> \t/*\n> \t * In backends (as soon as forked) we operate the underlying socket in\n> \t * nonblocking mode and use latches to implement blocking semantics if\n> \t * needed. That allows us to provide safely interruptible reads and\n> \t * writes.\n> \t *\n> \t * Use COMMERROR on failure, because ERROR would try to send the error to\n> \t * the client, which might require changing the mode again, leading to\n> \t * infinite recursion.\n> \t */\n> #ifndef WIN32\n> \tif (!pg_set_noblock(MyProcPort->sock))\n> \t\tereport(COMMERROR,\n> \t\t\t\t(errmsg(\"could not set socket to nonblocking mode: %m\")));\n> #endif\n> \n> #ifndef WIN32\n> \n> \t/* Don't give the socket to any subprograms we execute. */\n> \tif (fcntl(MyProcPort->sock, F_SETFD, FD_CLOEXEC) < 0)\n> \t\telog(FATAL, \"fcntl(F_SETFD) failed on socket: %m\");\n> #endif\n\nUsing COMMERROR here seems bogus. Firstly, if there was a problem with \nrecursion, surely the elog(FATAL) that follows would also be wrong. But \nmore seriously, it's not cool to continue using the connection as if \neverything is OK, if we fail to put it into non-blocking mode. We should \ndisconnect. (COMMERROR merely logs the error, it does not bail out like \nERROR does)\n\nBarring objections, I'll commit and backpatch the attached to fix that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 11 Mar 2024 16:44:32 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Disconnect if socket cannot be put into non-blocking mode" }, { "msg_contents": "On 11/03/2024 16:44, Heikki Linnakangas wrote:\n> While self-reviewing my \"Refactoring backend fork+exec code\" patches, I\n> noticed this in pq_init():\n> \n>> \t/*\n>> \t * In backends (as soon as forked) we operate the underlying socket in\n>> \t * nonblocking mode and use latches to implement blocking semantics if\n>> \t * needed. That allows us to provide safely interruptible reads and\n>> \t * writes.\n>> \t *\n>> \t * Use COMMERROR on failure, because ERROR would try to send the error to\n>> \t * the client, which might require changing the mode again, leading to\n>> \t * infinite recursion.\n>> \t */\n>> #ifndef WIN32\n>> \tif (!pg_set_noblock(MyProcPort->sock))\n>> \t\tereport(COMMERROR,\n>> \t\t\t\t(errmsg(\"could not set socket to nonblocking mode: %m\")));\n>> #endif\n>>\n>> #ifndef WIN32\n>>\n>> \t/* Don't give the socket to any subprograms we execute. */\n>> \tif (fcntl(MyProcPort->sock, F_SETFD, FD_CLOEXEC) < 0)\n>> \t\telog(FATAL, \"fcntl(F_SETFD) failed on socket: %m\");\n>> #endif\n> \n> Using COMMERROR here seems bogus. Firstly, if there was a problem with\n> recursion, surely the elog(FATAL) that follows would also be wrong. But\n> more seriously, it's not cool to continue using the connection as if\n> everything is OK, if we fail to put it into non-blocking mode. We should\n> disconnect. 
(COMMERROR merely logs the error, it does not bail out like\n> ERROR does)\n> \n> Barring objections, I'll commit and backpatch the attached to fix that.\n\nCommitted.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 10:49:31 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disconnect if socket cannot be put into non-blocking mode" }, { "msg_contents": "> On 12 Mar 2024, at 09:49, Heikki Linnakangas <[email protected]> wrote:\n\n>> Barring objections, I'll commit and backpatch the attached to fix that.\n> \n> Committed.\n\nSorry for being slow to review, but +1 on this change.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 12:09:52 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disconnect if socket cannot be put into non-blocking mode" } ]
[ { "msg_contents": "Hi All,\n\nDuring my journey of designing Pg based solution at my work I was severely hit by several shortcomings in GiST.\nThe most severe one is the lack of support for SAOP filters as it makes it difficult to have partition pruning and index (only) scans working together.\n\nTo overcome the difficulties I implemented a simple extension:\nhttps://github.com/mkleczek/btree_gist_extra\nmkleczek/btree_gist_extra: Extra operators support for PostgreSQL btree_gist\ngithub.com\n\nSince it provides a separate operator class from btree_gist it requires re-indexing the data.\nSo I thought maybe it would be a good idea to incorporate it into btree_gist.\n\nI am aware of two patches related to SAOP being discussed at the moment but I am not sure if SAOP support for GiST indexes is planned.\n\nLet me know if you think it is a good idea to work on a patch.\n\nMore general remark:\nI am wondering if SAOP support in core should not be implemented by mapping SAOP operations to specialised operators and delegating\nall the work to index AMs. That way core could be decoupled from particular index AMs implementation details.\n\n\nThanks!\n\n—\nMichal", "msg_date": "Mon, 11 Mar 2024 18:58:54 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Alternative SAOP support for GiST" }, { "msg_contents": "Hi All,\n\n> On 11 Mar 2024, at 18:58, Michał Kłeczek <[email protected]> wrote:\n> \n> Hi All,\n> \n> During my journey of designing Pg based solution at my work I was severely hit by several shortcomings in GiST.\n> The most severe one is the lack of support for SAOP filters as it makes it difficult to have partition pruning and index (only) scans working together.\n> \n> To overcome the difficulties I implemented a simple extension:\n> https://github.com/mkleczek/btree_gist_extra\n> \n> Since it provides a separate operator class from btree_gist it requires re-indexing the data.\n> So I thought maybe it would be a good idea to incorporate it into btree_gist.\n> \n\nWhile working on supporting (sort of) SAOP support for Gist I was stuck with the interplay between Gist consistent function and partition pruning.\nNot sure how it applies to SAOP handling in general though.\n\nI’ve implemented an operator class/family that supports Gist index scan for the following query:\n\nSELECT * FROM tbl WHERE col ||= ARRAY[‘a’, ‘b’, ‘c’];\n\nIt all works well except cases where tbl is partitioned by “col” column.\nIn this case index scan unnecessarily scans pages for values that are not in the partition.\n\nI am not sure if it works as expected (ie. 
no unnecessary scans) in case of ANY = (ARRAY[]) and nbtree.\nIn case of Gist the only place where pre-processing of queries can be performed is consistent function.\nBut right now there is no possibility to access any scan related information as it is not passed to consistent function.\nThe only thing available is GISTENTRY and it does not contain any metadata.\n\nAs a workaround I’ve added options to the op family that allows (redundantly) specifying MODULUS/REMAINDER for the index:\n\nCREATE INDEX idx ON tbl_partition_01 USING gist ( col gist_text_extra_ops (modulus=4, remainder=2) )\n\nand use the values to filter out query array passed to consistent function.\n\nThis is of course not ideal:\n- the information is redundant\n- changing partitioning scheme requires re-indexing\n\nI am wondering if it is possible to extend consistent function API so that either ScanKey or even the whole IndexScanDesc is passed as additional parameter.\n\n—\nMichal
", "msg_date": "Mon, 18 Mar 2024 14:31:41 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternative SAOP support for GiST" }, { "msg_contents": "I realised it might be enough to pass sk_attno to consistent function as that should be enough to lookup metadata if needed.\n\nAttached is a draft patch that does this.\n\n—\nMichal\n\n\n\n> On 18 Mar 2024, at 14:31, Michał Kłeczek <[email protected]> wrote:\n> \n> Hi All,\n> \n>> On 11 Mar 2024, at 18:58, Michał Kłeczek <[email protected]> wrote:\n>> \n>> Hi All,\n>> \n>> During my journey of designing Pg based solution at my work I was severely hit by several shortcomings in GiST.\n>> The most severe one is the lack of support for SAOP filters as it makes it difficult to have partition pruning and index (only) scans working together.\n>> \n>> To overcome the difficulties I implemented a simple extension:\n>> https://github.com/mkleczek/btree_gist_extra\n>> \n>> Since it provides a separate operator class from btree_gist it requires re-indexing the data.\n>> So I thought maybe it would be a good idea to incorporate it into btree_gist.\n>> \n> \n> While working on supporting (sort of) SAOP support for Gist I was stuck with the interplay between Gist consistent function and partition pruning.\n> Not sure how it applies to SAOP handling in general though.\n> \n> I’ve implemented an operator class/family that supports Gist index scan for the following query:\n> \n> SELECT * FROM tbl WHERE col ||= ARRAY[‘a’, ‘b’, ‘c’];\n> \n> It all works well except cases where tbl is partitioned by “col” column.\n> In this case index scan unnecessarily scans pages for values that are not in the partition.\n> \n> I am not sure if it works as expected (ie. 
no unnecessary scans) in case of ANY = (ARRAY[]) and nbtree.\n> In case of Gist the only place where pre-processing of queries can be performed is consistent function.\n> But right now there is no possibility to access any scan related information as it is not passed to consistent function.\n> The only thing available is GISTENTRY and it does not contain any metadata.\n> \n> As a workaround I’ve added options to the op family that allows (redundantly) specifying MODULUS/REMAINDER for the index:\n> \n> CREATE INDEX idx ON tbl_partition_01 USING gist ( col gist_text_extra_ops (modulus=4, remainder=2) )\n> \n> and use the values to filter out query array passed to consistent function.\n> \n> This is of course not ideal:\n> - the information is redundant\n> - changing partitioning scheme requires re-indexing\n> \n> I am wondering if it is possible to extend consistent function API so that either ScanKey or even the whole IndexScanDesc is passed as additional parameter.\n> \n> —\n> Michal", "msg_date": "Mon, 18 Mar 2024 15:14:03 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "Wrong file in the previous message - sorry for the noise.\n\nAttached is a fixed patch.\n\nThanks,\nMichal\n\n\n\n\n> On 18 Mar 2024, at 15:14, Michał Kłeczek <[email protected]> wrote:\n> \n> I realised it might be enough to pass sk_attno to consistent function as that should be enough to lookup metadata if needed.\n> \n> Attached is a draft patch that does this.\n> \n> —\n> Michal\n> \n> <0001-Pass-key-sk_attno-to-consistent-function.patch>\n> \n>> On 18 Mar 2024, at 14:31, Michał Kłeczek <[email protected]> wrote:\n>> \n>> Hi All,\n>> \n>>> On 11 Mar 2024, at 18:58, Michał Kłeczek <[email protected]> wrote:\n>>> \n>>> Hi All,\n>>> \n>>> During my journey of designing Pg based solution at my work I was severely hit by several shortcomings in GiST.\n>>> The most severe one is the lack of support for SAOP filters as it makes it difficult to have partition pruning and index (only) scans working together.\n>>> \n>>> To overcome the difficulties I implemented a simple extension:\n>>> https://github.com/mkleczek/btree_gist_extra\n>>> \n>>> Since it provides a separate operator class from btree_gist it requires re-indexing the data.\n>>> So I thought maybe it would be a good idea to incorporate it into btree_gist.\n>>> \n>> \n>> While working on supporting (sort of) SAOP support for Gist I was stuck with the interplay between Gist consistent function and partition pruning.\n>> Not sure how it applies to SAOP handling in general though.\n>> \n>> I’ve implemented an operator class/family that supports Gist index scan for the following query:\n>> \n>> SELECT * FROM tbl WHERE col ||= ARRAY[‘a’, ‘b’, ‘c’];\n>> \n>> It all works well except cases where tbl is partitioned by “col” column.\n>> In this case index scan unnecessarily scans pages for values that are not in the partition.\n>> \n>> I am not sure if it works as expected (ie. 
no unnecessary scans) in case of ANY = (ARRAY[]) and nbtree.\n>> In case of Gist the only place where pre-processing of queries can be performed is consistent function.\n>> But right now there is no possibility to access any scan related information as it is not passed to consistent function.\n>> The only thing available is GISTENTRY and it does not contain any metadata.\n>> \n>> As a workaround I’ve added options to the op family that allows (redundantly) specifying MODULUS/REMAINDER for the index:\n>> \n>> CREATE INDEX idx ON tbl_partition_01 USING gist ( col gist_text_extra_ops (modulus=4, remainder=2) )\n>> \n>> and use the values to filter out query array passed to consistent function.\n>> \n>> This is of course not ideal:\n>> - the information is redundant\n>> - changing partitioning scheme requires re-indexing\n>> \n>> I am wondering if it is possible to extend consistent function API so that either ScanKey or even the whole IndexScanDesc is passed as additional parameter.\n>> \n>> —\n>> Michal\n>", "msg_date": "Mon, 18 Mar 2024 15:17:54 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "Hi All,\n\nSince it looks like there is not much interest in the patch I will try to provide some background to explain why I think it is needed.\n\nWe are in the process of migration from an old db platform to PostgreSQL.\nOur database is around 10TB big and contains around 10 billion financial transactions in a single table.\nEach transaction is assigned to an account (column acc_number).\n\nWe have partitioned the table BY HASH (acc_number).\n\nA client can query transactions belonging to his accounts using several criteria - among them is te xt search.\nQueries are of type TOP N (ie ORDER BY … LIMIT ).\n\nThe list of accounts that we are querying is provided as a parameter to the query.\n\nWe have decided to use a single Gist index supporting all queries (reasons described in [1]).\n\nThere are several problems with Gist usage (but I still think we have no other choice) but the most important is\nthat we cannot use SAOP in our queries - since Gist does not support it the planner decides to perform Bitmap Scan\nwhich in turn does not support ORDER BY … LIMIT well because requires Sort.\n\nSo when we use “= ANY (array of account numbers) … ORDER BY ...” the plan requires reading all records meeting\nsearch criteria and then sort.\n\nAs a workaround we have to perform LATERAL joins:\n\nunnest(list of account numbers) AS a(n) LATERAL JOIN (SELECT * FROM … WHERE account_number = a.n ORDER BY … LIMIT …) ORDER BY … LIMIT …\n\nIt is still bad because requires multiple scans of the same partition if account number hashes are the same.\n\nWhat we really need is for Gist to support “= ANY (…)”.\nAs Gist index is extensible in terms of queries it supports it is quite easy to implement an operator class/family with operator:\n\n||= (text, text[])\n\nthat has semantics the same as “= ANY (…)”\n\nWith this operator we can write our queries like:\n\naccount_number ||= [list of account numbers] AND\naccount_number = ANY ([list of account numbers]) — redundant for partition pruning as it does not understand ||=\n\nand have optimal plans:\n\nLimit\n— Merge Append\n—— Index scan of relevant partitions\n\nThe problem is that now each partition scan is for the same list of accounts.\nThe “consistent” function cannot assume anything about contents of the table so it has to check all 
elements of the list\nand that in turn causes reading unnecessary index pages for accounts not in this partition.\n\nWhat we need is a way for the “consistent” function to be able to pre-process input query array and remove elements\nthat are not relevant for this scan. To do that it is necessary to have enough information to read necessary metadata from the catalog.\n\nThe proposed patch addresses this need and seems (to me) largely uncontroversial as it does not break any existing extensions.\n\nAttached is a patch with consolidated changes (I am pretty new to managing patches so previous two were partial and not something shareable, I am afraid).\n\n—\nMichal", "msg_date": "Tue, 19 Mar 2024 17:00:15 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "On Tue, 19 Mar 2024 at 17:00, Michał Kłeczek <[email protected]> wrote:\n>\n> Hi All,\n>\n> Since it looks like there is not much interest in the patch I will try to provide some background to explain why I think it is needed.\n>\n[...]\n> What we really need is for Gist to support “= ANY (…)”.\n> As Gist index is extensible in terms of queries it supports it is quite easy to implement an operator class/family with operator:\n>\n> ||= (text, text[])\n>\n> that has semantics the same as “= ANY (…)”\n\nI've had a similar idea while working on BRIN, and was planning on\noverloading `@>` for @>(anyarray, anyelement) or using a new\n`@>>(anyarray, anyelement)` operator. No implementation yet, though.\n\n> With this operator we can write our queries like:\n>\n> account_number ||= [list of account numbers] AND\n> account_number = ANY ([list of account numbers]) — redundant for partition pruning as it does not understand ||=\n>\n> and have optimal plans:\n>\n> Limit\n> — Merge Append\n> —— Index scan of relevant partitions\n>\n> The problem is that now each partition scan is for the same list of accounts.\n> The “consistent” function cannot assume anything about contents of the table so it has to check all elements of the list\n> and that in turn causes reading unnecessary index pages for accounts not in this partition.\n\nYou seem to already be using your own operator class, so you may want\nto look into CREATE FUNCTION's support_function parameter; which would\nhandle SupportRequestIndexCondition and/or SupportRequestSimplify.\nI suspect a support function that emits multiple clauses that each\napply to only a single partition at a time should get you quite far if\ncombined with per-partition constraints that filter all but one of\nthose. Also, at plan-time you can get away with much more than at\nruntime.\n\n> What we need is a way for the “consistent” function to be able to pre-process input query array and remove elements\n> that are not relevant for this scan. To do that it is necessary to have enough information to read necessary metadata from the catalog.\n\n> The proposed patch addresses this need and seems (to me) largely uncontroversial as it does not break any existing extensions.\n\nI don't think \"my index operator class will go into the table\ndefinition and check if parts of the scankey are consistent with the\ntable constraints\" is a good reason to expose the index column to\noperator classes.\nNote that operator classes were built specifically so that they're\nindependent from their column position. It doens't really make sense\nto expose this. 
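\n\nA declaration-level sketch of that support-function route, to make it concrete (all names here are invented, attaching a SUPPORT function requires superuser, and the support function itself would be C code handling SupportRequestSimplify and/or SupportRequestIndexCondition):\n\nCREATE FUNCTION text_eq_any_support(internal) RETURNS internal\n    AS 'MODULE_PATHNAME' LANGUAGE C STRICT;\n\nCREATE FUNCTION text_eq_any(text, text[]) RETURNS boolean\n    AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT\n    SUPPORT text_eq_any_support;\n\nCREATE OPERATOR ||= (\n    LEFTARG = text,\n    RIGHTARG = text[],\n    FUNCTION = text_eq_any\n);\n\nThe simplify hook could then rewrite \"col ||= ARRAY[...]\" into per-partition clauses at plan time.\n\n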
Maybe my suggestion up above helps?\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 21 Mar 2024 23:42:58 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "\n\n> On 21 Mar 2024, at 23:42, Matthias van de Meent <[email protected]> wrote:\n> \n> On Tue, 19 Mar 2024 at 17:00, Michał Kłeczek <[email protected]> wrote:\n> \n>> With this operator we can write our queries like:\n>> \n>> account_number ||= [list of account numbers] AND\n>> account_number = ANY ([list of account numbers]) — redundant for partition pruning as it does not understand ||=\n>> \n>> and have optimal plans:\n>> \n>> Limit\n>> — Merge Append\n>> —— Index scan of relevant partitions\n>> \n>> The problem is that now each partition scan is for the same list of accounts.\n>> The “consistent” function cannot assume anything about contents of the table so it has to check all elements of the list\n>> and that in turn causes reading unnecessary index pages for accounts not in this partition.\n> \n> You seem to already be using your own operator class, so you may want\n> to look into CREATE FUNCTION's support_function parameter; which would\n> handle SupportRequestIndexCondition and/or SupportRequestSimplify.\n> I suspect a support function that emits multiple clauses that each\n> apply to only a single partition at a time should get you quite far if\n> combined with per-partition constraints that filter all but one of\n> those. Also, at plan-time you can get away with much more than at\n> runtime.\n\nThanks for suggestion.\n\nI am afraid I don’t understand how it might actually work though:\n\n1) I think planning time is too early for us - we are heavily using planner_mode = force_generic_plan:\n- we have many partitions and planning times started to dominate execution time\n- generic plans are not worse than specialised\n\n2) I am not sure how I could transform\n\"col ||= [array]\" to multiple criteria to make sure it works well with partition pruning and planner.\n\nIt looks like what you are suggesting is to generate something like:\n(part_condition AND col ||= [subarray1]) OR (part_condition AND col ||= [subarray2])\nand hope the planner would generate proper Merge Append node (which I doubt would happen and planner would generate Bitmap scan due to lack of OR support in Gist).\nWhat’s more - there is no part_condition for hash partitions.\n\n> \n>> What we need is a way for the “consistent” function to be able to pre-process input query array and remove elements\n>> that are not relevant for this scan. 
To do that it is necessary to have enough information to read necessary metadata from the catalog.\n> \n>> The proposed patch addresses this need and seems (to me) largely uncontroversial as it does not break any existing extensions.\n> \n> I don't think \"my index operator class will go into the table\n> definition and check if parts of the scankey are consistent with the\n> table constraints\" is a good reason to expose the index column to\n> operator classes.\n\nQuite possibly but I still don’t see any other way to do that TBH.\n\n—\nMichal\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 01:28:55 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "\n> On 22 Mar 2024, at 01:29, Michał Kłeczek <[email protected]> wrote:\n> \n> \n> \n>> On 21 Mar 2024, at 23:42, Matthias van de Meent <[email protected]> wrote:\n>> \n>>> On Tue, 19 Mar 2024 at 17:00, Michał Kłeczek <[email protected]> wrote:\n>>> \n>>> With this operator we can write our queries like:\n>>> \n>>> account_number ||= [list of account numbers] AND\n>>> account_number = ANY ([list of account numbers]) — redundant for partition pruning as it does not understand ||=\n>>> \n>>> and have optimal plans:\n>>> \n>>> Limit\n>>> — Merge Append\n>>> —— Index scan of relevant partitions\n>>> \n>>> The problem is that now each partition scan is for the same list of accounts.\n>>> The “consistent” function cannot assume anything about contents of the table so it has to check all elements of the list\n>>> and that in turn causes reading unnecessary index pages for accounts not in this partition.\n>> \n>> You seem to already be using your own operator class, so you may want\n>> to look into CREATE FUNCTION's support_function parameter; which would\n>> handle SupportRequestIndexCondition and/or SupportRequestSimplify.\n>> I suspect a support function that emits multiple clauses that each\n>> apply to only a single partition at a time should get you quite far if\n>> combined with per-partition constraints that filter all but one of\n>> those. Also, at plan-time you can get away with much more than at\n>> runtime.\n\nThinking about it some more - the suggestion goes backwards - ie. there must have been some deep misunderstanding:\n\nIf I was able to write my query such that the planner generates optimal plan, I would not implement the custom operator in the first place.\n\nAnd this need of custom operator and operator class triggered the proposal in turn.\n\n—\nMichal\n\n", "msg_date": "Fri, 22 Mar 2024 10:11:51 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "> On 22 Mar 2024, at 10:11, Michał Kłeczek <[email protected]> wrote:\n> \n>> \n>> On 22 Mar 2024, at 01:29, Michał Kłeczek <[email protected]> wrote:\n>> \n>> \n>> \n>>> On 21 Mar 2024, at 23:42, Matthias van de Meent <[email protected]> wrote:\n>>> \n>>> \n>>> You seem to already be using your own operator class, so you may want\n>>> to look into CREATE FUNCTION's support_function parameter; which would\n>>> handle SupportRequestIndexCondition and/or SupportRequestSimplify.\n>>> I suspect a support function that emits multiple clauses that each\n>>> apply to only a single partition at a time should get you quite far if\n>>> combined with per-partition constraints that filter all but one of\n>>> those. 
Also, at plan-time you can get away with much more than at\n>>> runtime.\n> \n> Thinking about it some more - the suggestion goes backwards - ie. there must have been some deep misunderstanding:\n> \n> If I was able to write my query such that the planner generates optimal plan, I would not implement the custom operator in the first place.\n\nTo make it more concrete:\n\ncreate extension pg_trgm;\ncreate table t ( pk text not null primary key, data text not null ) partition by hash (pk);\ncreate table t_2_0 partition of t for values with ( modulus 2, remainder 0 );\ncreate table t_2_0 partition of t for values with ( modulus 2, remainder 1 );\ncreate index ti on t using gist ( pk, data gist_trgm_ops);\n\nexplain select * from t where pk = any (array['1', '2', '4', '5']) and data % 'what' order by data <-> 'what' limit 5;\n\nLimit (cost=41.32..41.33 rows=2 width=21)\n -> Sort (cost=41.32..41.33 rows=2 width=21)\n Sort Key: ((t.data <-> 'what'::text))\n -> Append (cost=16.63..41.31 rows=2 width=21)\n -> Bitmap Heap Scan on t_2_0 t_1 (cost=16.63..20.65 rows=1 width=21)\n Recheck Cond: ((pk = ANY ('{1,2,4,5}'::text[])) AND (data % 'what'::text))\n -> Bitmap Index Scan on t_2_0_pk_data_idx (cost=0.00..16.63 rows=1 width=0)\n Index Cond: ((pk = ANY ('{1,2,4,5}'::text[])) AND (data % 'what'::text))\n -> Bitmap Heap Scan on t_2_1 t_2 (cost=16.63..20.65 rows=1 width=21)\n Recheck Cond: ((pk = ANY ('{1,2,4,5}'::text[])) AND (data % 'what'::text))\n -> Bitmap Index Scan on t_2_1_pk_data_idx (cost=0.00..16.63 rows=1 width=0)\n Index Cond: ((pk = ANY ('{1,2,4,5}'::text[])) AND (data % 'what'::text))\n\n\nThat’s bad as the number of records to sort might be huge. So:\n\nset enable_bitmapscan to off;\n\nLimit (cost=0.30..242.85 rows=2 width=21)\n -> Merge Append (cost=0.30..242.85 rows=2 width=21)\n Sort Key: ((t.data <-> 'what'::text))\n -> Index Scan using t_2_0_pk_data_idx on t_2_0 t_1 (cost=0.15..121.40 rows=1 width=21)\n Index Cond: (data % 'what'::text)\n Order By: (data <-> 'what'::text)\n Filter: (pk = ANY ('{1,2,4,5}'::text[]))\n -> Index Scan using t_2_1_pk_data_idx on t_2_1 t_2 (cost=0.15..121.42 rows=1 width=21)\n Index Cond: (data % 'what'::text)\n Order By: (data <-> 'what'::text)\n Filter: (pk = ANY ('{1,2,4,5}'::text[]))\n\nThat’s bad as well since pk = ANY([…]) is not in Index Cond.\n\nLets use ||= operator then:\n\ndrop index ti;\ncreate index ti on t using gist ( pk gist_extra_text_ops, data gist_trgm_ops);\n\nexplain select * from t where pk = any (array['1', '2', '4', '5']) and pk ||= array['1', '2', '4', '5'] and data % 'what' order by data <-> 'what' limit 5;\n\nLimit (cost=0.30..153.70 rows=2 width=21)\n -> Merge Append (cost=0.30..153.70 rows=2 width=21)\n Sort Key: ((t.data <-> 'what'::text))\n -> Index Scan using t_2_0_pk_data_idx on t_2_0 t_1 (cost=0.15..76.84 rows=1 width=21)\n Index Cond: ((pk ||= '{1,2,4,5}'::text[]) AND (data % 'what'::text))\n Order By: (data <-> 'what'::text)\n Filter: (pk = ANY ('{1,2,4,5}'::text[]))\n -> Index Scan using t_2_1_pk_data_idx on t_2_1 t_2 (cost=0.15..76.84 rows=1 width=21)\n Index Cond: ((pk ||= '{1,2,4,5}'::text[]) AND (data % 'what'::text))\n Order By: (data <-> 'what'::text)\n Filter: (pk = ANY ('{1,2,4,5}'::text[]))\n\n\nThat’s much better. There is still Filter on SAOP but it is almost harmless: always true and does not require heap access.\n\n\nA few observations:\n1. 
I am not sure why SAOP can be Index Cond in case of Bitmap Index Scan but not in case of Index Scan - don’t know what the interplay between the planner and amsearcharray is.\n2. In all cases array passed to (Bitmap) Index Scan is NOT filtered by partition pruning.\n\nThe second point means useless index page reads (amplified by the number of elements in input array and the width of the index).\n\nIt is the _second_ point I would like to address with this patch - for us it makes a big difference as it means almost order of magnitude (around 5-10 times based on preliminary tests) fewer buffers read.\n\n\nI’ve done an experiment with adding partitioning bounds information as options when creating index. Table as above (so only two index columns) but with 128 partitions (modulus 128).\n\nWithout filtering array elements we get 103 buffers read:\n\nexplain (analyse, buffers) select * from t where pk ||= array['1', '5', '10', '2', '3', '4', '23', '5', '7', '0'] and pk = any (array['1', '5', '10', '2', '3', '4', '23', '5', '7', '0']) and data % 'ever 3' order by data <-> '3' limit 5;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2.82..43.07 rows=5 width=32) (actual time=8.093..9.390 rows=4 loops=1)\n Buffers: shared hit=103\n -> Merge Append (cost=2.82..75.28 rows=9 width=32) (actual time=8.091..9.387 rows=4 loops=1)\n Sort Key: ((t.data <-> '3'::text))\n Buffers: shared hit=103\n -> Index Scan using t_128_9_pk_data_idx on t_128_9 t_1 (cost=0.30..8.33 rows=1 width=32) (actual time=0.741..0.741 rows=0 loops=1)\n Index Cond: ((pk ||= '{1,5,10,2,3,4,23,5,7,0}'::text[]) AND (data % 'ever 3'::text))\n Order By: (data <-> '3'::text)\n Filter: (pk = ANY ('{1,5,10,2,3,4,23,5,7,0}'::text[]))\n Buffers: shared hit=10\n… more partitions\n\nAfter creating indexes for all partitions with some metadata used by consistent function to filter out array elements:\ncreate index ti_128_0 on t_128_0 (pk gist_extra_text_ops (modulus=128, remainder=0), data gist_trgm_ops (siglen=128))\n\nThe result is only 29 buffers read:\n\nexplain (analyse, buffers) select * from t where pk ||= array['1', '5', '10', '2', '3', '4', '23', '5', '7', '0'] and pk = any (array['1', '5', '10', '2', '3', '4', '23', '5', '7', '0']) and data % 'ever 3' order by data <-> '3' limit 5;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2.82..43.07 rows=5 width=32) (actual time=0.852..0.864 rows=4 loops=1)\n Buffers: shared hit=29\n -> Merge Append (cost=2.82..75.28 rows=9 width=32) (actual time=0.850..0.862 rows=4 loops=1)\n Sort Key: ((t.data <-> '3'::text))\n Buffers: shared hit=29\n -> Index Scan using ti_128_9 on t_128_9 t_1 (cost=0.30..8.33 rows=1 width=32) (actual time=0.045..0.045 rows=0 loops=1)\n Index Cond: ((pk ||= '{1,5,10,2,3,4,23,5,7,0}'::text[]) AND (data % 'ever 3'::text))\n Order By: (data <-> '3'::text)\n Filter: (pk = ANY ('{1,5,10,2,3,4,23,5,7,0}'::text[]))\n Buffers: shared hit=1\n…. 
more partitions\n\n\nThere is 3x fewer buffers read in this case (103 vs 29).\nIt makes a huge difference in memory pressure in our case (10 billion rows 10 TB data, wide index to support various search criteria - text/similiarity search among them).\n\nI understand extensibility of GIST makes many operators opaque to the planner and it is the “consistent” function that can perform optimisations (or we can come up with some additional mechanism during planning phase).\nProviding more information to “consistent” function would make it possible to implement optimisations not only for array scan keys but for ranges and others.\n\nWhat we can do is to provide the index attribute number (reduntantly) as option - it is going to work but is somewhat ugly - especially that this information is already available when calling consistent function.\n\n—\nMichal
", "msg_date": "Sat, 23 Mar 2024 17:32:35 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n> I understand extensibility of GIST makes many operators opaque to the planner and it is the “consistent” function that can perform optimisations (or we can come up with some additional mechanism during planning phase).\n> Providing more information to “consistent” function would make it possible to implement optimisations not only for array scan keys but for ranges and others.\n\n> What we can do is to provide the index attribute number (reduntantly) as option - it is going to work but is somewhat ugly - especially that this information is already available when calling consistent function.\n\nWhile the proposed change is simple enough and wouldn't break\nanything, I share Matthias' distaste for it, because the motivation\nyou've given for it is deeply unsatisfying and indeed troubling.\nA GIST consistent function is surely the wrong place to be optimizing\naway unnecessary partitions: that consideration is not unique to\nGIST indexes (or even to indexscans), much less to particular opclasses.\nMoreover, having a consistent function inspect catalog state seems\nlike a kluge of the first order, not least because there'd be no good\nplace to cache the lookup results, so you'd be doing them over and\nover again.\n\nI like Matthias' suggestion of seeing whether you could use a\nplanner support function to split up the query by partitions\nduring planning.\n\nThere are bits of what you mentioned that I would gladly take\npatches for --- for example, I think the reason GIST lacks SAOP\nsupport is mostly lack of round tuits, not that it'd be a bad\nthing. But it's not clear to me that that alone would help you\nmuch. The whole design as you've described it seems like multiple\nlayers of kluges, so that getting rid of any one of them doesn't\nreally make it not klugy. 
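\n\nFor reference, the LATERAL rewrite described upthread, spelled out against the toy table t from the earlier example (table, column and constant names are only illustrative):\n\nSELECT s.*\nFROM unnest(ARRAY['1', '2', '4', '5']) AS a(n)\nCROSS JOIN LATERAL (\n    SELECT *\n    FROM t\n    WHERE t.pk = a.n\n      AND t.data % 'what'\n    ORDER BY t.data <-> 'what'\n    LIMIT 5\n) AS s\nORDER BY s.data <-> 'what'\nLIMIT 5;\n\nEach inner lookup is an ordered single-key scan, at the cost of visiting the same partition once per key that hashes to it.\n\n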
(I also have serious doubts about how well\nit would perform for you, even with this additional kluge in place.\nI don't think that a multidimensional GIST over zillions of rows will\nperform very well: the upper tree layers are just not going to be very\nselective.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jul 2024 16:19:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "Hi Tom,\n\nThanks for looking at it.\n\n> On 24 Jul 2024, at 22:19, Tom Lane <[email protected]> wrote:\n> \n> =?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n>> I understand extensibility of GIST makes many operators opaque to the planner and it is the “consistent” function that can perform optimisations (or we can come up with some additional mechanism during planning phase).\n>> Providing more information to “consistent” function would make it possible to implement optimisations not only for array scan keys but for ranges and others.\n> \n>> What we can do is to provide the index attribute number (reduntantly) as option - it is going to work but is somewhat ugly - especially that this information is already available when calling consistent function.\n> \n> While the proposed change is simple enough and wouldn't break\n> anything, I share Matthias' distaste for it, because the motivation\n> you've given for it is deeply unsatisfying and indeed troubling.\n> A GIST consistent function is surely the wrong place to be optimizing\n> away unnecessary partitions: that consideration is not unique to\n> GIST indexes (or even to indexscans), much less to particular opclasses.\n> Moreover, having a consistent function inspect catalog state seems\n> like a kluge of the first order, not least because there'd be no good\n> place to cache the lookup results, so you'd be doing them over and\n> over again.\n\nIndeed - caching results of such lookups is clumsy.\n\nAnd I agree the whole thing is hackish.\nBut in reality I think this is due to fundamental mismatch between\nextensibility interface of GIST and the lack of it in partition pruning code.\n\n> \n> I like Matthias' suggestion of seeing whether you could use a\n> planner support function to split up the query by partitions\n> during planning.\n\nYou mean implement custom partition pruning algorithm using\nquery rewriting?\n\nAs I’ve written in another message in this thread:\nIf it was possible to write the query in such a way then the whole\ndiscussion would be moot and I wouldn’t propose this patch :)\n\n> \n> There are bits of what you mentioned that I would gladly take\n> patches for --- for example, I think the reason GIST lacks SAOP\n> support is mostly lack of round tuits, not that it'd be a bad\n> thing. But it's not clear to me that that alone would help you\n> much. 
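To make the SAOP point concrete: native ScalarArrayOpExpr support in GiST would let a query of the following shape use the = ANY clause as an index condition, whereas today a GiST index can only apply it as a filter (table and pg_trgm operators as in the earlier examples in this thread; the array literal is illustrative):

    SELECT *
    FROM t
    WHERE pk = ANY (ARRAY['1', '5', '10', '2', '3'])
      AND data % 'ever 3'
    ORDER BY data <-> '3'
    LIMIT 5;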
The whole design as you've described it seems like multiple\n> layers of kluges, so that getting rid of any one of them doesn't\n> really make it not klugy.\n\nKluges are workarounds for lack of two things in GIST:\n- missing SAOP support (which I try to simulate using custom operator)\n- missing ORDER BY support (for which I posted a draft idea in another patch)\n\n> (I also have serious doubts about how well\n> it would perform for you, even with this additional kluge in place.\n> I don't think that a multidimensional GIST over zillions of rows will\n> perform very well: the upper tree layers are just not going to be very\n> selective.)\n\nIt work surprisingly well in practice as our performance tests show.\nA single multicolumn GIST index is capable of handling most of our queries.\n\nRegards,\n\n\n—\nMichal\n\n", "msg_date": "Thu, 25 Jul 2024 18:58:00 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "On Fri, 22 Mar 2024, 01:29 Michał Kłeczek, <[email protected]> wrote:\n> On 21 Mar 2024, at 23:42, Matthias van de Meent <[email protected]> wrote:\n>> On Tue, 19 Mar 2024 at 17:00, Michał Kłeczek <[email protected]> wrote:\n>>> With this operator we can write our queries like:\n>>>\n>>> account_number ||= [list of account numbers] AND\n>>> account_number = ANY ([list of account numbers]) — redundant for partition pruning as it does not understand ||=\n>>>\n>>> and have optimal plans:\n>>>\n>>> Limit\n>>> — Merge Append\n>>> —— Index scan of relevant partitions\n>>>\n>>> The problem is that now each partition scan is for the same list of accounts.\n>>> The “consistent” function cannot assume anything about contents of the table so it has to check all elements of the list\n>>> and that in turn causes reading unnecessary index pages for accounts not in this partition.\n>>\n>> You seem to already be using your own operator class, so you may want\n>> to look into CREATE FUNCTION's support_function parameter; which would\n>> handle SupportRequestIndexCondition and/or SupportRequestSimplify.\n>> I suspect a support function that emits multiple clauses that each\n>> apply to only a single partition at a time should get you quite far if\n>> combined with per-partition constraints that filter all but one of\n>> those. Also, at plan-time you can get away with much more than at\n>> runtime.\n>\n> Thanks for suggestion.\n>\n> I am afraid I don’t understand how it might actually work though:\n[...]\n> 2) I am not sure how I could transform\n> \"col ||= [array]\" to multiple criteria to make sure it works well with partition pruning and planner.\n>\n> It looks like what you are suggesting is to generate something like:\n> (part_condition AND col ||= [subarray1]) OR (part_condition AND col ||= [subarray2])\n> and hope the planner would generate proper Merge Append node (which I doubt would happen and planner would generate Bitmap scan due to lack of OR support in Gist).\n> What’s more - there is no part_condition for hash partitions.\n\nI would probably (try to) implement something like the following:\n\n- Alter each partition by adding a constraint `CHECK\n(hash(partitioncol) % part_modulus = part_remainder)`, to give the\nplanner the tools to do partition pruning. This solves the partition\npruning part, in userspace. 
I woudln't be opposed to a fix in\nPostgreSQL if done well - hash partition pruning sounds like a niche,\nbut a valid niche nonetheless.\n- Add support function that translates e.g. ||=(array, elem) on the\nbase table into a list of OR-ed `(hash(partitioncol) % part_modulus =\npart_remainder AND col ||= [sublist])`-statements, one for each of the\npartitions.\n- Make sure there's a planner facility that pushes down OR branches\nand removes non-matched qualifiers. Presumably, partition pruning can\nalready take care of that.\n\nThe planner should be able to deduct that each partition has their own\n-unique- CHECK constraint, and that this check can't be satisfied by\nany other partition and thus is ignored for those other partitions'\nscans.\n\nAlternatively, partition the table not using HASH, but using LIST\n((hash(partitioncol) % modulus)) - this should enable partition\npruning without creating those manual CHECK constraints. However,\nthat'd be at the cost of access to hash partitioning-related features,\nlike different modulos at the same partitioning level.\n\nAll in all, this still seems like a very (very) specific optimization,\nof which I'm not sure that it is generalizable. However, array\nintrospection and filtering for SAOP equality checks feel like a\nrelatively easy (?) push-down optimization in (e.g.) runtime partition\npruning (or planning); isn't that even better patch potential here?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 26 Jul 2024 01:28:40 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "> On 26 Jul 2024, at 01:28, Matthias van de Meent <[email protected]> wrote:\n> \n> All in all, this still seems like a very (very) specific optimization,\n> of which I'm not sure that it is generalizable. However, array\n> introspection and filtering for SAOP equality checks feel like a\n> relatively easy (?) push-down optimization in (e.g.) runtime partition\n> pruning (or planning); isn't that even better patch potential here?\n\nThe issue is not partition pruning for SAOP - it works fine. The issue is lack of SAOP support in GIST.\n\nBecause I cannot use SAOP I have two options:\n\n1) LATERAL JOIN (ie. iterate through input array elements, SELECT rows for each, merge results)\n2) Implement a custom operator that emulates SAOP and provide consistent function for it. Additionally provide SAOP clause (redundantly) to enable partition pruning.\n\nIn case of 1):\n- the query becomes convoluted with multiple redundant ORDER BY and LIMIT clauses\n- unnecessary sort is performed (because we have to merge results of subqueries)\n- some partitions are scanned multiple times (per each element in input array that happens to land in the same partition)\n\nIn case of 2):\n- the whole input array is passed to consistent function for each partition so we unnecessarily search for non-existent rows\n\nTo fix the issue in 2) I need to somehow filter input array per partition - hence this patch.\n\nRegards,\n\n—\nMichal\nOn 26 Jul 2024, at 01:28, Matthias van de Meent <[email protected]> wrote:All in all, this still seems like a very (very) specific optimization,of which I'm not sure that it is generalizable. However, arrayintrospection and filtering for SAOP equality checks feel like arelatively easy (?) push-down optimization in (e.g.) 
runtime partitionpruning (or planning); isn't that even better patch potential here?The issue is not partition pruning for SAOP - it works fine. The issue is lack of SAOP support in GIST.Because I cannot use SAOP I have two options:1) LATERAL JOIN (ie. iterate through input array elements, SELECT rows for each, merge results)2) Implement a custom operator that emulates SAOP and provide consistent function for it. Additionally provide SAOP clause (redundantly) to enable partition pruning.In case of 1):- the query becomes convoluted with multiple redundant ORDER BY and LIMIT clauses- unnecessary sort is performed (because we have to merge results of subqueries)- some partitions are scanned multiple times (per each element in input array that happens to land in the same partition)In case of 2):- the whole input array is passed to consistent function for each partition so we unnecessarily search for non-existent rowsTo fix the issue in 2) I need to somehow filter input array per partition - hence this patch.Regards,—Michal", "msg_date": "Fri, 26 Jul 2024 10:10:38 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" }, { "msg_contents": "> On 26 Jul 2024, at 10:10, Michał Kłeczek <[email protected]> wrote:\n> \n> \n> \n>> On 26 Jul 2024, at 01:28, Matthias van de Meent <[email protected]> wrote:\n>> \n>> All in all, this still seems like a very (very) specific optimization,\n>> of which I'm not sure that it is generalizable. However, array\n>> introspection and filtering for SAOP equality checks feel like a\n>> relatively easy (?) push-down optimization in (e.g.) runtime partition\n>> pruning (or planning); isn't that even better patch potential here?\n> \n> The issue is not partition pruning for SAOP - it works fine. The issue is lack of SAOP support in GIST.\n> \n> Because I cannot use SAOP I have two options:\n> \n> 1) LATERAL JOIN (ie. iterate through input array elements, SELECT rows for each, merge results)\n> 2) Implement a custom operator that emulates SAOP and provide consistent function for it. Additionally provide SAOP clause (redundantly) to enable partition pruning.\n> \n> In case of 1):\n> - the query becomes convoluted with multiple redundant ORDER BY and LIMIT clauses\n> - unnecessary sort is performed (because we have to merge results of subqueries)\n> - some partitions are scanned multiple times (per each element in input array that happens to land in the same partition)\n> \n> In case of 2):\n> - the whole input array is passed to consistent function for each partition so we unnecessarily search for non-existent rows\n\nWhich is especially painful in case of sharding implementation based on partitioning and postgres_fdw as it requires multiple\nSELECTS from remote partition.\n\n> \n> To fix the issue in 2) I need to somehow filter input array per partition - hence this patch.\n> \n\nRegards,\n\n—\nMichal\n\n\nOn 26 Jul 2024, at 10:10, Michał Kłeczek <[email protected]> wrote:On 26 Jul 2024, at 01:28, Matthias van de Meent <[email protected]> wrote:All in all, this still seems like a very (very) specific optimization,of which I'm not sure that it is generalizable. However, arrayintrospection and filtering for SAOP equality checks feel like arelatively easy (?) push-down optimization in (e.g.) runtime partitionpruning (or planning); isn't that even better patch potential here?The issue is not partition pruning for SAOP - it works fine. 
The issue is lack of SAOP support in GIST.Because I cannot use SAOP I have two options:1) LATERAL JOIN (ie. iterate through input array elements, SELECT rows for each, merge results)2) Implement a custom operator that emulates SAOP and provide consistent function for it. Additionally provide SAOP clause (redundantly) to enable partition pruning.In case of 1):- the query becomes convoluted with multiple redundant ORDER BY and LIMIT clauses- unnecessary sort is performed (because we have to merge results of subqueries)- some partitions are scanned multiple times (per each element in input array that happens to land in the same partition)In case of 2):- the whole input array is passed to consistent function for each partition so we unnecessarily search for non-existent rowsWhich is especially painful in case of sharding implementation based on partitioning and postgres_fdw as it requires multipleSELECTS from remote partition.To fix the issue in 2) I need to somehow filter input array per partition - hence this patch.Regards,—Michal", "msg_date": "Fri, 26 Jul 2024 16:33:29 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT: Pass sk_attno to consistent function" } ]
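For reference, a sketch of the LATERAL workaround (option 1) described in the messages above, using the table and pg_trgm operators from the earlier examples in this thread; the array literal is illustrative:

    SELECT s.*
    FROM unnest(ARRAY['1', '5', '10', '2', '3']) AS k(pk)
    CROSS JOIN LATERAL (
        SELECT t.*
        FROM t
        WHERE t.pk = k.pk          -- one probe per array element
          AND t.data % 'ever 3'
        ORDER BY t.data <-> '3'
        LIMIT 5
    ) AS s
    ORDER BY s.data <-> '3'        -- merge the per-element results
    LIMIT 5;

Each element drives its own scan (possibly a remote one with postgres_fdw), which is where the duplicated ORDER BY/LIMIT clauses, the extra top-level sort, and the repeated visits to the same partition come from.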
[ { "msg_contents": "I am seeing an increasing number of bug/problem reports on obsolete\nPostgres versions, either not running a superseded minor version, or\nrunning an unsupported major version.\n\nWhat can we do to reduce such reports, or at least give a consistent\nresponse? It is very helpful that we have this web page, and I have\nmade a habit of pointing reporters to that page since it has all the\ninformation they need:\n\n\thttps://www.postgresql.org/support/versioning/\n\nThis web page should correct the idea that \"upgrades are more risky than\nstaying with existing versions\". Is there more we can do? Should we\nhave a more consistent response for such reporters?\n\nIt would be a crazy idea to report something in the logs if a major\nversion is run after a certain date, since we know the date when major\nversions will become unsupported.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 11 Mar 2024 16:37:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Reports on obsolete Postgres versions" }, { "msg_contents": "On Mon, Mar 11, 2024 at 04:37:49PM -0400, Bruce Momjian wrote:\n> \thttps://www.postgresql.org/support/versioning/\n> \n> This web page should correct the idea that \"upgrades are more risky than\n> staying with existing versions\". Is there more we can do? Should we\n> have a more consistent response for such reporters?\n\nI've read that the use of the term \"minor release\" can be confusing. While\nthe versioning page clearly describes what is eligible for a minor release,\nnot everyone reads it, so I suspect that many folks think there are new\nfeatures, etc. in minor releases. I think a \"minor release\" of Postgres is\nmore similar to what other projects would call a \"patch version.\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Mar 2024 16:12:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Mon, Mar 11, 2024 at 04:12:04PM -0500, Nathan Bossart wrote:\n> On Mon, Mar 11, 2024 at 04:37:49PM -0400, Bruce Momjian wrote:\n> > \thttps://www.postgresql.org/support/versioning/\n> > \n> > This web page should correct the idea that \"upgrades are more risky than\n> > staying with existing versions\". Is there more we can do? Should we\n> > have a more consistent response for such reporters?\n> \n> I've read that the use of the term \"minor release\" can be confusing. While\n> the versioning page clearly describes what is eligible for a minor release,\n> not everyone reads it, so I suspect that many folks think there are new\n> features, etc. in minor releases. I think a \"minor release\" of Postgres is\n> more similar to what other projects would call a \"patch version.\"\n\nWell, we do say:\n\n\tWhile upgrading will always contain some level of risk, PostgreSQL\n\tminor releases fix only frequently-encountered bugs, security issues,\n\tand data corruption problems to reduce the risk associated with\n\tupgrading. For minor releases, the community considers not upgrading to\n\tbe riskier than upgrading. \n\nbut that is far down the page. 
Do we need to improve this?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 11 Mar 2024 17:17:13 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Mon, Mar 11, 2024 at 4:38 PM Bruce Momjian <[email protected]> wrote:\n\n> https://www.postgresql.org/support/versioning/\n>\n> This web page should correct the idea that \"upgrades are more risky than\n> staying with existing versions\". Is there more we can do? Should we have\n> a more consistent response for such reporters?\n>\n\nIt could be helpful to remove this sentence:\n\n\"Upgrading to a minor release does not normally require a dump and restore\"\n\nWhile technically true, \"not normally\" is quite the understatement, as the\ntrue answer is \"never\" or at least \"not in the last few decades\". I have a\nhard time even imagining a scenario that would require a minor revision to\ndo a dump and restore - surely, that in itself would warrant a major\nrelease?\n\n\n> It would be a crazy idea to report something in the logs if a major\n> version is run after a certain date, since we know the date when major\n> versions will become unsupported.\n>\n\nCould indeed be useful to spit something out at startup. Heck, even minor\nversions are fairly regular now. Sure would be nice to be able to point a\nclient at the database and say \"See? Even Postgres itself thinks you should\nupgrade from 11.3!!\" (totally made up example, not at all related to an\nactual production system /sarcasm)\n\nCheers,\nGreg\n\nOn Mon, Mar 11, 2024 at 4:38 PM Bruce Momjian <[email protected]> wrote:        https://www.postgresql.org/support/versioning/\n\nThis web page should correct the idea that \"upgrades are more risky than staying with existing versions\".  Is there more we can do?  Should we have a more consistent response for such reporters?It could be helpful to remove this sentence:\"Upgrading to a minor release does not normally require a dump and restore\"While technically true, \"not normally\" is quite the understatement, as the true answer is \"never\" or at least \"not in the last few decades\". I have a hard time even imagining a scenario that would require a minor revision to do a dump and restore - surely, that in itself would warrant a major release? \nIt would be a crazy idea to report something in the logs if a major version is run after a certain date, since we know the date when major\nversions will become unsupported.Could indeed be useful to spit something out at startup. Heck, even minor versions are fairly regular now. Sure would be nice to be able to point a client at the database and say \"See? Even Postgres itself thinks you should upgrade from 11.3!!\" (totally made up example, not at all related to an actual production system /sarcasm)Cheers,Greg", "msg_date": "Mon, 11 Mar 2024 20:40:56 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Mon, Mar 11, 2024 at 05:17:13PM -0400, Bruce Momjian wrote:\n> On Mon, Mar 11, 2024 at 04:12:04PM -0500, Nathan Bossart wrote:\n>> I've read that the use of the term \"minor release\" can be confusing. 
While\n>> the versioning page clearly describes what is eligible for a minor release,\n>> not everyone reads it, so I suspect that many folks think there are new\n>> features, etc. in minor releases. I think a \"minor release\" of Postgres is\n>> more similar to what other projects would call a \"patch version.\"\n> \n> Well, we do say:\n> \n> \tWhile upgrading will always contain some level of risk, PostgreSQL\n> \tminor releases fix only frequently-encountered bugs, security issues,\n> \tand data corruption problems to reduce the risk associated with\n> \tupgrading. For minor releases, the community considers not upgrading to\n> \tbe riskier than upgrading. \n> \n> but that is far down the page. Do we need to improve this?\n\nI think making that note more visible would certainly be an improvement.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Mar 2024 20:37:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "> On 12 Mar 2024, at 02:37, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Mar 11, 2024 at 05:17:13PM -0400, Bruce Momjian wrote:\n>> On Mon, Mar 11, 2024 at 04:12:04PM -0500, Nathan Bossart wrote:\n>>> I've read that the use of the term \"minor release\" can be confusing. While\n>>> the versioning page clearly describes what is eligible for a minor release,\n>>> not everyone reads it, so I suspect that many folks think there are new\n>>> features, etc. in minor releases. I think a \"minor release\" of Postgres is\n>>> more similar to what other projects would call a \"patch version.\"\n>> \n>> Well, we do say:\n>> \n>> \tWhile upgrading will always contain some level of risk, PostgreSQL\n>> \tminor releases fix only frequently-encountered bugs, security issues,\n>> \tand data corruption problems to reduce the risk associated with\n>> \tupgrading. For minor releases, the community considers not upgrading to\n>> \tbe riskier than upgrading. \n>> \n>> but that is far down the page. Do we need to improve this?\n> \n> I think making that note more visible would certainly be an improvement.\n\nWe have this almost at the top of the page, which IMHO isn't a very good\ndescription about what a minor version is:\n\n\tEach major version receives bug fixes and, if need be, security fixes\n\tthat are released at least once every three months in what we call a\n\t\"minor release.\"\n\nMaybe we can rewrite that sentence to properly document what a minor is (bug\nfixes *and* security fixes) with a small blurb about the upgrade risk?\n\n(Adding Jonathan in CC: who is good at website copy).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 11:00:19 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 11, 2024 at 05:17:13PM -0400, Bruce Momjian wrote:\n> On Mon, Mar 11, 2024 at 04:12:04PM -0500, Nathan Bossart wrote:\n> > On Mon, Mar 11, 2024 at 04:37:49PM -0400, Bruce Momjian wrote:\n> > > \thttps://www.postgresql.org/support/versioning/\n> > > \n> > > This web page should correct the idea that \"upgrades are more risky than\n> > > staying with existing versions\". Is there more we can do? Should we\n> > > have a more consistent response for such reporters?\n> > \n> > I've read that the use of the term \"minor release\" can be confusing. 
While\n> > the versioning page clearly describes what is eligible for a minor release,\n> > not everyone reads it, so I suspect that many folks think there are new\n> > features, etc. in minor releases. I think a \"minor release\" of Postgres is\n> > more similar to what other projects would call a \"patch version.\"\n> \n> Well, we do say:\n> \n> \tWhile upgrading will always contain some level of risk, PostgreSQL\n> \tminor releases fix only frequently-encountered bugs, security issues,\n> \tand data corruption problems to reduce the risk associated with\n> \tupgrading. For minor releases, the community considers not upgrading to\n> \tbe riskier than upgrading. \n> \n> but that is far down the page. Do we need to improve this?\n\nI liked the statement from Laurenz a while ago on his blog\n(paraphrased): \"Upgrading to the latest patch release does not require\napplication testing or recertification\". I am not sure we want to put\nthat into the official page (or maybe tone down/qualify it a bit), but I\nthink a lot of users stay on older minor versions because they dread\ntheir internal testing policies.\n\nThe other thing that could maybe be made a bit better is the fantastic\npatch release schedule, which however is buried in the \"developer\nroadmap\". I can see how this was useful years ago, but I think this page\nshould be moved to the end-user part of the website, and maybe (also)\nintegrated into the support/versioning page?\n\n\nMichael\n\n\n", "msg_date": "Tue, 12 Mar 2024 11:12:24 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": ">> but that is far down the page. Do we need to improve this?\n\n> I liked the statement from Laurenz a while ago on his blog\n> (paraphrased): \"Upgrading to the latest patch release does not require\n> application testing or recertification\". I am not sure we want to put\n> that into the official page (or maybe tone down/qualify it a bit), but I\n> think a lot of users stay on older minor versions because they dread\n> their internal testing policies.\n\nI think we need a more conservative language since a minor release might fix a\nplanner bug that someone's app relied on and their plans will be worse after\nupgrading. While rare, it can for sure happen so the official wording should\nprobably avoid such bold claims.\n\n> The other thing that could maybe be made a bit better is the fantastic\n> patch release schedule, which however is buried in the \"developer\n> roadmap\". I can see how this was useful years ago, but I think this page\n> should be moved to the end-user part of the website, and maybe (also)\n> integrated into the support/versioning page?\n\nFair point.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 11:56:19 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On 3/12/24 3:56 AM, Daniel Gustafsson wrote:\n>>> but that is far down the page. Do we need to improve this?\n> \n>> I liked the statement from Laurenz a while ago on his blog\n>> (paraphrased): \"Upgrading to the latest patch release does not require\n>> application testing or recertification\". 
I am not sure we want to put\n>> that into the official page (or maybe tone down/qualify it a bit), but I\n>> think a lot of users stay on older minor versions because they dread\n>> their internal testing policies.\n> \n> I think we need a more conservative language since a minor release might fix a\n> planner bug that someone's app relied on and their plans will be worse after\n> upgrading. While rare, it can for sure happen so the official wording should\n> probably avoid such bold claims.\n> \n>> The other thing that could maybe be made a bit better is the fantastic\n>> patch release schedule, which however is buried in the \"developer\n>> roadmap\". I can see how this was useful years ago, but I think this page\n>> should be moved to the end-user part of the website, and maybe (also)\n>> integrated into the support/versioning page?\n> \n> Fair point.\n\nBoth of the above points show inconsistency in how PG uses the terms\n\"minor\" and \"patch\" today.\n\nIt's not just roadmaps and release pages where we mix up these terms\neither, it's even in user-facing SQL and libpq routines: both\nPQserverVersion and current_setting('server_version_num') return the\npatch release version in the numeric patch field, rather than the\nnumeric minor field (which is always 0).\n\nIn my view, the best thing would be to move toward consistently using\nthe word \"patch\" and moving away from the word \"minor\" for the\nPostgreSQL quarterly maintenance updates.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:21:27 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 09:21:27AM -0700, Jeremy Schneider wrote:\n> It's not just roadmaps and release pages where we mix up these terms\n> either, it's even in user-facing SQL and libpq routines: both\n> PQserverVersion and current_setting('server_version_num') return the\n> patch release version in the numeric patch field, rather than the\n> numeric minor field (which is always 0).\n> \n> In my view, the best thing would be to move toward consistently using\n> the word \"patch\" and moving away from the word \"minor\" for the\n> PostgreSQL quarterly maintenance updates.\n> \n\nI think \"minor\" is a better term since it contrasts with \"major\". 
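For reference, the routines mentioned above are easy to inspect from SQL; on a 16.2 server the numeric form is 160002, i.e. the last digits carry the quarterly update number and there is no separate third component:

    SELECT current_setting('server_version')     AS server_version,
           current_setting('server_version_num') AS server_version_num;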
We\ndon't actually supply patches to upgrade minor versions.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 13 Mar 2024 13:12:01 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 1:12 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 09:21:27AM -0700, Jeremy Schneider wrote:\n> > It's not just roadmaps and release pages where we mix up these terms\n> > either, it's even in user-facing SQL and libpq routines: both\n> > PQserverVersion and current_setting('server_version_num') return the\n> > patch release version in the numeric patch field, rather than the\n> > numeric minor field (which is always 0).\n> >\n> > In my view, the best thing would be to move toward consistently using\n> > the word \"patch\" and moving away from the word \"minor\" for the\n> > PostgreSQL quarterly maintenance updates.\n> >\n>\n> I think \"minor\" is a better term since it contrasts with \"major\". We\n> don't actually supply patches to upgrade minor versions.\n>\n\nI tend to agree with Bruce, and major/minor seems to be the more\ncommon usage within the industry; iirc, debian, ubuntu, gnome, suse,\nand mariadb all use that nomenclature; and ISTR some distro's who\nrelease packaged versions of postgres with custom patches applied (ie\n12.4-2 for postgres 12.4 patchlevel 2).\n\nBTW, as a reminder, we do have this statement, in bold, in the\n\"upgrading\" section of the versioning page:\n\"We always recommend that all users run the latest available minor\nrelease for whatever major version is in use.\" There is actually\nseveral other phrases and wording on that page that could probably be\npropagated as replacement language in some of these other areas.\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:04:16 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On Wed, Mar 13, 2024 at 1:12 PM Bruce Momjian <[email protected]> wrote:\n>> On Wed, Mar 13, 2024 at 09:21:27AM -0700, Jeremy Schneider wrote:\n>>> In my view, the best thing would be to move toward consistently using\n>>> the word \"patch\" and moving away from the word \"minor\" for the\n>>> PostgreSQL quarterly maintenance updates.\n\n>> I think \"minor\" is a better term since it contrasts with \"major\". We\n>> don't actually supply patches to upgrade minor versions.\n\n> I tend to agree with Bruce, and major/minor seems to be the more\n> common usage within the industry; iirc, debian, ubuntu, gnome, suse,\n> and mariadb all use that nomenclature; and ISTR some distro's who\n> release packaged versions of postgres with custom patches applied (ie\n> 12.4-2 for postgres 12.4 patchlevel 2).\n\nAgreed, we would probably add confusion not reduce it if we were to\nchange our longstanding nomenclature for this.\n\nI'm +1 on rewriting these documentation pages though. 
Seems like\nthey could do with a whole fresh start rather than just tweaks\naround the edges --- what we've got now is an accumulation of such\ntweaks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:21:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On 3/13/24 11:21 AM, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n>> On Wed, Mar 13, 2024 at 1:12 PM Bruce Momjian <[email protected]> wrote:\n>>> On Wed, Mar 13, 2024 at 09:21:27AM -0700, Jeremy Schneider wrote:\n>>>> In my view, the best thing would be to move toward consistently using\n>>>> the word \"patch\" and moving away from the word \"minor\" for the\n>>>> PostgreSQL quarterly maintenance updates.\n> \n>>> I think \"minor\" is a better term since it contrasts with \"major\". We\n>>> don't actually supply patches to upgrade minor versions.\n> \n>> I tend to agree with Bruce, and major/minor seems to be the more\n>> common usage within the industry; iirc, debian, ubuntu, gnome, suse,\n>> and mariadb all use that nomenclature; and ISTR some distro's who\n>> release packaged versions of postgres with custom patches applied (ie\n>> 12.4-2 for postgres 12.4 patchlevel 2).\n> \n> Agreed, we would probably add confusion not reduce it if we were to\n> change our longstanding nomenclature for this.\n\n\n\"Longstanding nomenclature\"??\n\nBefore v10, the quarterly maintenance updates were unambiguously and\nalways called patch releases\n\nI don't understand the line of thinking here\n\nBruce started this whole thread because of \"an increasing number of\nbug/problem reports on obsolete Postgres versions\"\n\nAcross the industry the word \"minor\" often implies a release that will\nbe maintained, and I'm trying to point out that the change in v10 to\nchange terminology from \"patch\" to \"minor\" actually might be part of\nwhat's responsible for the increasing number of bug reports on old patch\nreleases, because people don't understand that patch releases are the\nway those bugfixes were already delivered.\n\nJust taking MySQL as an example, it's clear that a \"minor\" like 5.7 is a\nfull blown release that gets separate patches from 5.6 - so I don't\nunderstand how we're making an argument it's the opposite?\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 11:29:57 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "Jeremy Schneider <[email protected]> writes:\n> On 3/13/24 11:21 AM, Tom Lane wrote:\n>> Agreed, we would probably add confusion not reduce it if we were to\n>> change our longstanding nomenclature for this.\n\n> Before v10, the quarterly maintenance updates were unambiguously and\n> always called patch releases\n\nI think that's highly revisionist history. I've always called them\nminor releases, and I don't recall other people using different\nterminology. 
I believe the leadoff text on\n\nhttps://www.postgresql.org/support/versioning/\n\nis much older than when we switched from two-part major version\nnumbers to one-part major version numbers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:39:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "\n> On Mar 13, 2024, at 11:39 AM, Tom Lane <[email protected]> wrote:\n> \n> Jeremy Schneider <[email protected]> writes:\n>>> On 3/13/24 11:21 AM, Tom Lane wrote:\n>>> Agreed, we would probably add confusion not reduce it if we were to\n>>> change our longstanding nomenclature for this.\n> \n>> Before v10, the quarterly maintenance updates were unambiguously and\n>> always called patch releases\n> \n> I think that's highly revisionist history. I've always called them\n> minor releases, and I don't recall other people using different\n> terminology. I believe the leadoff text on\n> \n> https://www.postgresql.org/support/versioning/\n> \n> is much older than when we switched from two-part major version\n> numbers to one-part major version numbers.\n\nHuh, that wasn’t what I expected. I only started (in depth) working with PG around 9.6 and I definitely thought of “6” as the minor version. This is an interesting mailing list thread.\n\n-Jeremy\n\n\nSent from my TI-83\n\n", "msg_date": "Wed, 13 Mar 2024 11:47:00 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 02:21:20PM -0400, Tom Lane wrote:\n> I'm +1 on rewriting these documentation pages though. Seems like\n> they could do with a whole fresh start rather than just tweaks\n> around the edges --- what we've got now is an accumulation of such\n> tweaks.\n\nIf no one else volunteers, I could probably give this a try once v17 is\nfrozen.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 13:52:45 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Tue, 2024-03-12 at 11:56 +0100, Daniel Gustafsson wrote:\n> > I liked the statement from Laurenz a while ago on his blog\n> > (paraphrased): \"Upgrading to the latest patch release does not require\n> > application testing or recertification\". I am not sure we want to put\n> > that into the official page (or maybe tone down/qualify it a bit), but I\n> > think a lot of users stay on older minor versions because they dread\n> > their internal testing policies.\n> \n> I think we need a more conservative language since a minor release might fix a\n> planner bug that someone's app relied on and their plans will be worse after\n> upgrading.  While rare, it can for sure happen so the official wording should\n> probably avoid such bold claims.\n\nI think we are pretty conservative with backpatching changes to the\noptimizer that could destabilize existing plans.\n\nI feel quite strongly that we should not use overly conservative language\nthere. If people feel that they have to test their applications for new\nminor releases, the only effect will be that they won't install minor releases.\nInstalling a minor release should be as routine as the operating system\npatches that many companies apply automatically during weekend maintenance\nwindows. 
They can also introduce bugs, and everybody knows and accepts that.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 13 Mar 2024 20:04:59 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 3:05 PM Laurenz Albe <[email protected]> wrote:\n> I think we are pretty conservative with backpatching changes to the\n> optimizer that could destabilize existing plans.\n\nWe have gotten better about that, which is good.\n\n> I feel quite strongly that we should not use overly conservative language\n> there. If people feel that they have to test their applications for new\n> minor releases, the only effect will be that they won't install minor releases.\n> Installing a minor release should be as routine as the operating system\n> patches that many companies apply automatically during weekend maintenance\n> windows. They can also introduce bugs, and everybody knows and accepts that.\n\nI don't agree with this. If we tell people that they don't need to\ntest their applications after an upgrade, I do not think that the\nresult will be that they skip the testing and everything works\nperfectly. I think that the result will be that we lose all\ncredibility. Nobody who has been working on computers for longer than\na week is going to believe that a software upgrade can't break\nanything, and someone whose entire business depends on things not\nbreaking is going to want to validate that they haven't. And they're\nnot wrong to think that way, either.\n\nI think that whatever we say here should focus on what we try to do or\nguarantee, not on what actions we think users ought to take, never\nmind must take. We can say that we try to avoid making any changes\nupon which an application might be relying -- but there surely is some\nweasel-wording there, because we have made such changes before in the\nname of security, and sometimes to fix bugs, and we will likely to do\nso again in the future. But it's not for us to decide how much testing\nis warranted. It's the user's system, not ours.\n\nIn the end, while I certainly don't mind improving the web page, I\nthink that a lot of what we're seeing here probably has to do with the\ngrowing popularity and success of PostgreSQL. If you have more people\nusing your software, you're also going to have more people using\nout-of-date versions of your software.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 10:15:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On 13.03.24 18:12, Bruce Momjian wrote:\n> On Wed, Mar 13, 2024 at 09:21:27AM -0700, Jeremy Schneider wrote:\n>> It's not just roadmaps and release pages where we mix up these terms\n>> either, it's even in user-facing SQL and libpq routines: both\n>> PQserverVersion and current_setting('server_version_num') return the\n>> patch release version in the numeric patch field, rather than the\n>> numeric minor field (which is always 0).\n>>\n>> In my view, the best thing would be to move toward consistently using\n>> the word \"patch\" and moving away from the word \"minor\" for the\n>> PostgreSQL quarterly maintenance updates.\n> \n> I think \"minor\" is a better term since it contrasts with \"major\". 
We\n> don't actually supply patches to upgrade minor versions.\n\nThere are potentially different adjectives that could apply to \"version\" \nand \"release\".\n\nThe version numbers can be called major and minor, because that just \ndescribes their ordering and significance.\n\nBut I do agree that \"minor release\" isn't quite as clear, because one \ncould also interpret that as \"a release, but a bit smaller this time\". \n(Also might not translate well, since \"minor\" and \"small\" could \ntranslate to the same thing.)\n\nOne could instead, for example, describe those as \"maintenance releases\":\n\n\"Are you on the latest maintenance release? Why not? Are you not \nmaintaining your database?\"\n\nThat carries much more urgency than the same with \"minor\".\n\nA maintenance release corresponds to an increase in the minor version \nnumber. It's still tied together, but has different terminology.\n\nThe last news item reads:\n\n\"The PostgreSQL Global Development Group has released an update to all \nsupported versions of PostgreSQL\"\n\nwhich has no urgency, but consider\n\n\"The PostgreSQL Global Development Group has published maintenance \nreleases to all supported versions of PostgreSQL\"\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 16:48:07 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Mar 14, 2024 at 10:15:18AM -0400, Robert Haas wrote:\n> I think that whatever we say here should focus on what we try to do or\n> guarantee, not on what actions we think users ought to take, never\n> mind must take. We can say that we try to avoid making any changes\n> upon which an application might be relying -- but there surely is some\n> weasel-wording there, because we have made such changes before in the\n> name of security, and sometimes to fix bugs, and we will likely to do\n> so again in the future. But it's not for us to decide how much testing\n> is warranted. It's the user's system, not ours.\n\nYes, good point, let's tell whem what our goals are and they can decide\nwhat testing they need.\n\n> In the end, while I certainly don't mind improving the web page, I\n> think that a lot of what we're seeing here probably has to do with the\n> growing popularity and success of PostgreSQL. If you have more people\n> using your software, you're also going to have more people using\n> out-of-date versions of your software.\n\nYeah, probably, and we recently end-of-life'ed PG 11.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 14 Mar 2024 22:46:28 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Mar 14, 2024 at 10:46:28PM -0400, Bruce Momjian wrote:\n> > In the end, while I certainly don't mind improving the web page, I\n> > think that a lot of what we're seeing here probably has to do with the\n> > growing popularity and success of PostgreSQL. 
If you have more people\n> > using your software, you're also going to have more people using\n> > out-of-date versions of your software.\n> \n> Yeah, probably, and we recently end-of-life'ed PG 11.\n\nIn a way it is that we had more users during the PG 10/11 period than\nbefore that, and those people aren't upgrading as quickly.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 14 Mar 2024 22:50:57 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "> On 14 Mar 2024, at 16:48, Peter Eisentraut <[email protected]> wrote:\n> On 13.03.24 18:12, Bruce Momjian wrote:\n\n>> I think \"minor\" is a better term since it contrasts with \"major\". We\n>> don't actually supply patches to upgrade minor versions.\n> \n> There are potentially different adjectives that could apply to \"version\" and \"release\".\n> \n> The version numbers can be called major and minor, because that just describes their ordering and significance.\n> \n> But I do agree that \"minor release\" isn't quite as clear, because one could also interpret that as \"a release, but a bit smaller this time\". (Also might not translate well, since \"minor\" and \"small\" could translate to the same thing.)\n\nSome of the user confusion likely stems from us using the same nomenclature as\nSemVer, but for different things. SemVer has become very widely adopted, to\nthe point where it's almost assumed by many, so maybe we need to explicitly\nstate that we *don't* use SemVer (we don't mention that anywhere in the docs or\non the website).\n\n> One could instead, for example, describe those as \"maintenance releases\":\n\nThat might indeed be a better name for what we provide.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 11:17:53 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Fri, Mar 15, 2024 at 11:17:53AM +0100, Daniel Gustafsson wrote:\n> > On 14 Mar 2024, at 16:48, Peter Eisentraut <[email protected]> wrote:\n> > One could instead, for example, describe those as \"maintenance releases\":\n> \n> That might indeed be a better name for what we provide.\n\n+1\n\n\n", "msg_date": "Fri, 15 Mar 2024 17:14:52 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On 3/15/24 3:17 AM, Daniel Gustafsson wrote:\n>> On 14 Mar 2024, at 16:48, Peter Eisentraut <[email protected]> wrote:\n>> On 13.03.24 18:12, Bruce Momjian wrote:\n> \n>>> I think \"minor\" is a better term since it contrasts with \"major\". We\n>>> don't actually supply patches to upgrade minor versions.\n>>\n>> There are potentially different adjectives that could apply to \"version\" and \"release\".\n>>\n>> The version numbers can be called major and minor, because that just describes their ordering and significance.\n>>\n>> But I do agree that \"minor release\" isn't quite as clear, because one could also interpret that as \"a release, but a bit smaller this time\". (Also might not translate well, since \"minor\" and \"small\" could translate to the same thing.)\n> \n> Some of the user confusion likely stems from us using the same nomenclature as\n> SemVer, but for different things. 
SemVer has become very widely adopted, to\n> the point where it's almost assumed by many, so maybe we need to explicitly\n> state that we *don't* use SemVer (we don't mention that anywhere in the docs or\n> on the website).\n\nSemantic Versioning was definitely part of what led to my confusion\nup-thread here. I was also mistaken in what I said up-thread about\nMySQL, who also calls \"5.7\" the \"major\" version.\n\n\n>> One could instead, for example, describe those as \"maintenance releases\":\n> \n> That might indeed be a better name for what we provide.\n\nThe latest PostgreSQL news item uses the word \"update\" and seems pretty\nwell written in this area already (at least to me)\n\nAlso I just confirmed, the bug reporting form also seems well written:\n\n\"Make sure you are running the latest available minor release for your\nmajor version before reporting a bug. The current list of supported\nversions is 16.2, 15.6, 14.11, 13.14, 12.18.\"\n\nThis all looks good, but I do still agree that a gradual shift toward\nsaying \"maintenance update\" instead of \"minor\" might still promote more\nclarity in the long run?\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 09:48:08 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 02:04:16PM -0400, Robert Treat wrote:\n> I tend to agree with Bruce, and major/minor seems to be the more\n> common usage within the industry; iirc, debian, ubuntu, gnome, suse,\n> and mariadb all use that nomenclature; and ISTR some distro's who\n> release packaged versions of postgres with custom patches applied (ie\n> 12.4-2 for postgres 12.4 patchlevel 2).\n> \n> BTW, as a reminder, we do have this statement, in bold, in the\n> \"upgrading\" section of the versioning page:\n> \"We always recommend that all users run the latest available minor\n> release for whatever major version is in use.\" There is actually\n> several other phrases and wording on that page that could probably be\n> propagated as replacement language in some of these other areas.\n\nI ended up writing the attached doc patch. I found that some or our\ntext was overly-wordy, causing the impact of what we were trying to say\nto be lessened. We might want to go farther than this patch, but I\nthink it is an improvement.\n\nI also moved the <strong> text to the bottom of the section ---\npreviously, our <strong> wording referenced minor releases, then we\ntalked about major releases, and then minor releases. This gives a more\nnatural flow.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 1 Apr 2024 18:56:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "> On 2 Apr 2024, at 00:56, Bruce Momjian <[email protected]> wrote:\n\n> I ended up writing the attached doc patch. I found that some or our\n> text was overly-wordy, causing the impact of what we were trying to say\n> to be lessened. 
We might want to go farther than this patch, but I\n> think it is an improvement.\n\nAgreed, this is an good incremental improvement over what we have.\n\n> I also moved the <strong> text to the bottom of the section\n\n+1\n\nA few small comments:\n\n+considers performing minor upgrades to be less risky than continuing to\n+run superseded minor versions.</em>\n\nI think \"superseded minor versions\" could be unnecessarily complicated for\nnon-native speakers, I consider myself fairly used to reading english but still\nhad to spend a few extra (brain)cycles parsing the meaning of it in this\ncontext.\n\n+ We recommend that users always run the latest minor release associated\n\nOr perhaps \"current minor release\" which is the term we use in the table below\non the same page?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 09:24:04 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Mar 13, 2024 at 7:47 PM Jeremy Schneider <[email protected]>\nwrote:\n\n>\n> > On Mar 13, 2024, at 11:39 AM, Tom Lane <[email protected]> wrote:\n> >\n> > Jeremy Schneider <[email protected]> writes:\n> >>> On 3/13/24 11:21 AM, Tom Lane wrote:\n> >>> Agreed, we would probably add confusion not reduce it if we were to\n> >>> change our longstanding nomenclature for this.\n> >\n> >> Before v10, the quarterly maintenance updates were unambiguously and\n> >> always called patch releases\n> >\n> > I think that's highly revisionist history. I've always called them\n> > minor releases, and I don't recall other people using different\n> > terminology. I believe the leadoff text on\n> >\n> > https://www.postgresql.org/support/versioning/\n> >\n> > is much older than when we switched from two-part major version\n> > numbers to one-part major version numbers.\n>\n> Huh, that wasn’t what I expected. I only started (in depth) working with\n> PG around 9.6 and I definitely thought of “6” as the minor version. This is\n> an interesting mailing list thread.\n>\n\nThat common misunderstanding was, in fact, one of the reasons to go to\ntwo-part version numbers instead of 3. Because people did not realize that\nthe full 9.6 digit was the major version, and thus what was maintained and\nsuch.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Mar 13, 2024 at 7:47 PM Jeremy Schneider <[email protected]> wrote:\n> On Mar 13, 2024, at 11:39 AM, Tom Lane <[email protected]> wrote:\n> \n> Jeremy Schneider <[email protected]> writes:\n>>> On 3/13/24 11:21 AM, Tom Lane wrote:\n>>> Agreed, we would probably add confusion not reduce it if we were to\n>>> change our longstanding nomenclature for this.\n> \n>> Before v10, the quarterly maintenance updates were unambiguously and\n>> always called patch releases\n> \n> I think that's highly revisionist history.  I've always called them\n> minor releases, and I don't recall other people using different\n> terminology.  I believe the leadoff text on\n> \n> https://www.postgresql.org/support/versioning/\n> \n> is much older than when we switched from two-part major version\n> numbers to one-part major version numbers.\n\nHuh, that wasn’t what I expected. I only started (in depth) working with PG around 9.6 and I definitely thought of “6” as the minor version. 
This is an interesting mailing list thread.That common misunderstanding was, in fact, one of the reasons to go to two-part version numbers instead of 3. Because people did not realize that the full 9.6 digit was the major version, and thus what was maintained and such.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 2 Apr 2024 11:31:17 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Tue, Apr 2, 2024 at 9:24 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 2 Apr 2024, at 00:56, Bruce Momjian <[email protected]> wrote:\n>\n> > I ended up writing the attached doc patch. I found that some or our\n> > text was overly-wordy, causing the impact of what we were trying to say\n> > to be lessened. We might want to go farther than this patch, but I\n> > think it is an improvement.\n>\n> Agreed, this is an good incremental improvement over what we have.\n>\n> > I also moved the <strong> text to the bottom of the section\n>\n> +1\n>\n> A few small comments:\n>\n> +considers performing minor upgrades to be less risky than continuing to\n> +run superseded minor versions.</em>\n>\n> I think \"superseded minor versions\" could be unnecessarily complicated for\n> non-native speakers, I consider myself fairly used to reading english but\n> still\n> had to spend a few extra (brain)cycles parsing the meaning of it in this\n> context.\n>\n> + We recommend that users always run the latest minor release associated\n>\n> Or perhaps \"current minor release\" which is the term we use in the table\n> below\n> on the same page?\n>\n\n\nI do like the term \"current\" better. It conveys (at least a bit) that we\nreally consider all the older ones to be, well, obsolete. The difference\n\"current vs obsolete\" is stronger than \"latest vs not quite latest\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Apr 2, 2024 at 9:24 AM Daniel Gustafsson <[email protected]> wrote:> On 2 Apr 2024, at 00:56, Bruce Momjian <[email protected]> wrote:\n\n> I ended up writing the attached doc patch.  I found that some or our\n> text was overly-wordy, causing the impact of what we were trying to say\n> to be lessened.  We might want to go farther than this patch, but I\n> think it is an improvement.\n\nAgreed, this is an good incremental improvement over what we have.\n\n> I also moved the <strong> text to the bottom of the section\n\n+1\n\nA few small comments:\n\n+considers performing minor upgrades to be less risky than continuing to\n+run superseded minor versions.</em>\n\nI think \"superseded minor versions\" could be unnecessarily complicated for\nnon-native speakers, I consider myself fairly used to reading english but still\nhad to spend a few extra (brain)cycles parsing the meaning of it in this\ncontext.\n\n+ We recommend that users always run the latest minor release associated\n\nOr perhaps \"current minor release\" which is the term we use in the table below\non the same page?I do like the term \"current\"  better. It conveys (at least a bit) that we really consider all the older ones to be, well, obsolete. 
The difference \"current vs obsolete\" is stronger than \"latest vs not quite latest\".--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 2 Apr 2024 11:34:46 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Tue, Apr 2, 2024 at 11:34:46AM +0200, Magnus Hagander wrote:\n> On Tue, Apr 2, 2024 at 9:24 AM Daniel Gustafsson <[email protected]> wrote:\n> A few small comments:\n> \n> +considers performing minor upgrades to be less risky than continuing to\n> +run superseded minor versions.</em>\n> \n> I think \"superseded minor versions\" could be unnecessarily complicated for\n> non-native speakers, I consider myself fairly used to reading english but\n> still\n> had to spend a few extra (brain)cycles parsing the meaning of it in this\n> context.\n> \n> + We recommend that users always run the latest minor release associated\n> \n> Or perhaps \"current minor release\" which is the term we use in the table\n> below\n> on the same page?\n> \n> I do like the term \"current\"  better. It conveys (at least a bit) that we\n> really consider all the older ones to be, well, obsolete. The difference\n> \"current vs obsolete\" is stronger than \"latest vs not quite latest\".\n\nOkay, I changed \"superseded\" to \"old\", and changed \"latest\" to\n\"current\", patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 2 Apr 2024 16:46:58 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "> On 2 Apr 2024, at 22:46, Bruce Momjian <[email protected]> wrote:\n> On Tue, Apr 2, 2024 at 11:34:46AM +0200, Magnus Hagander wrote:\n\n>> I do like the term \"current\" better. It conveys (at least a bit) that we\n>> really consider all the older ones to be, well, obsolete. The difference\n>> \"current vs obsolete\" is stronger than \"latest vs not quite latest\".\n> \n> Okay, I changed \"superseded\" to \"old\", and changed \"latest\" to\n> \"current\", patch attached.\n\n+1, LGTM\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 22:48:28 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Tue, Apr 2, 2024 at 1:47 PM Bruce Momjian <[email protected]> wrote:\n\n> On Tue, Apr 2, 2024 at 11:34:46AM +0200, Magnus Hagander wrote:\n>\n> Okay, I changed \"superseded\" to \"old\", and changed \"latest\" to\n> \"current\", patch attached.\n>\n>\nI took a pass at this and found a few items of note. Changes on top of\nBruce's patch.\n\ndiff --git a/templates/support/versioning.html\nb/templates/support/versioning.html\nindex 0ed79f4f..d4762967 100644\n--- a/templates/support/versioning.html\n+++ b/templates/support/versioning.html\n@@ -21,9 +21,9 @@ a release available outside of the minor release roadmap.\n\n <p>\n The PostgreSQL Global Development Group supports a major version for 5\nyears\n-after its initial release. After its five year anniversary, a major\nversion will\n-have one last minor release containing any fixes and will be considered\n-end-of-life (EOL) and no longer supported.\n+after its initial release. 
After its fifth anniversary, a major version\nwill\n+have one last minor release and will be considered\n+end-of-life (EOL), meaning no new bug fixes will be written for it.\n </p>\n\n# \"fifth anniversary \"seems more correct than \"five year anniversary\".\n# The fact it gets a minor release implies it receives fixes.\n# I've always had an issue with our use of the phrasing \"no longer\nsupported\". It seems vague when practically it just means we stop writing\npatches for it. Whether individual community members refrain from\nanswering questions or helping people on these older releases is not a\nproject decision but a personal one. Also, since we already say it is\nsupported for 5 years it seems a bit redundant to then also say \"after 5\nyears it is unsupported\".\n\n\n <h2>Version Numbering</h2>\n@@ -45,11 +45,12 @@ number, e.g. 9.5.3 to 9.5.4.\n <h2>Upgrading</h2>\n\n <p>\n-Major versions usually change the internal format of the system tables.\n-These changes are often complex, so we do not maintain backward\n-compatibility of all stored data. A dump/reload of the database or use of\nthe\n-<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module is required\n-for major upgrades. We also recommend reading the\n+Major versions need the data directory to be initialized so that the\nsystem tables\n+specific to that version can be created. There are two options to then\n+migrate the user data from the old directory to the new one: a dump/reload\n+of the database or using the\n+<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module.\n+We also recommend reading the\n <a href=\"/docs/current/upgrading.html\">upgrading</a> section of the major\n version you are planning to upgrade to. You can upgrade from one major\nversion\n to another without upgrading to intervening versions, but we recommend\nreading\n@@ -58,14 +59,15 @@ versions prior to doing so.\n </p>\n\n# This still talked about \"stored data\" when really that isn't the main\nconcern and if it was pg_upgrade wouldn't work as an option.\n# The choice to say \"data directory\" here seems reasonable if arguable.\nBut it implies the location of user data and we state it also contains\nversion-specific system tables. It can go unsaid that they are\nversion-specific because the collection as a whole and the layout of\nindividual tables can and almost certainly will change between versions.\n\n <p>\n-Minor release upgrades do not require a dump and restore; you simply stop\n+Minor releases upgrades do not impact the data directory, only the\nbinaries.\n+You simply stop\n the database server, install the updated binaries, and restart the server.\n-Such upgrades might require manual changes to complete so always read\n+However, some upgrades might require manual changes to complete so always\nread\n the release notes first.\n </p>\n\n# The fact minor releases don't require dump/reload doesn't directly\npreclude them from needing pg_upgrade and writing \"Such upgrades\" seems\nlike it could lead someone to think that.\n# Data Directory seems generic enough to be understood here and since I\nmention it in the Major Version as to why data migration is needed,\nmentioning here\n# as something not directly altered and thus precluding the data migration\nhas a nice symmetry. 
The potential need for manual changes becomes clearer\nas well.\n\n\n <p>\n-Minor releases only fix frequently-encountered bugs, <a\n+Minor releases only fix bugs, <a\n href=\"/support/security/\">security</a> issues, and data corruption\n problems, so such upgrades are very low risk. <em>The community\n considers performing minor upgrades to be less risky than continuing to\n\n# Reality mostly boils down to trusting our judgement when it comes to bugs\nas each one is evaluated individually. We do not promise to leave simply\nbuggy behavior alone in minor releases which is the only policy that would\nnearly eliminate upgrade risk. We back-patch 5 year old bugs quite often\nwhich by definition are not frequently encountered. I cannot think of a\ngood adjective to put there nor does one seem necessary even if I agree it\nreads a bit odd otherwise. I think that has more to do with this being\njust the wrong structure to get our point across. Can we pick out some\nstatistics from our long history of publishing minor releases to present an\nobjective reality to the reader to convince them to trust our track-record\nin this matter?\n\nDavid J.\n\nOn Tue, Apr 2, 2024 at 1:47 PM Bruce Momjian <[email protected]> wrote:On Tue, Apr  2, 2024 at 11:34:46AM +0200, Magnus Hagander wrote:\nOkay, I changed \"superseded\" to \"old\", and changed \"latest\" to\n\"current\", patch attached.I took a pass at this and found a few items of note.  Changes on top of Bruce's patch.diff --git a/templates/support/versioning.html b/templates/support/versioning.htmlindex 0ed79f4f..d4762967 100644--- a/templates/support/versioning.html+++ b/templates/support/versioning.html@@ -21,9 +21,9 @@ a release available outside of the minor release roadmap. <p> The PostgreSQL Global Development Group supports a major version for 5 years-after its initial release. After its five year anniversary, a major version will-have one last minor release containing any fixes and will be considered-end-of-life (EOL) and no longer supported.+after its initial release. After its fifth anniversary, a major version will+have one last minor release and will be considered+end-of-life (EOL), meaning no new bug fixes will be written for it. </p># \"fifth anniversary \"seems more correct than \"five year anniversary\".# The fact it gets a minor release implies it receives fixes.# I've always had an issue with our use of the phrasing \"no longer supported\".  It seems vague when practically it just means we stop writing patches for it.  Whether individual community members refrain from answering questions or helping people on these older releases is not a project decision but a personal one.  Also, since we already say it is supported for 5 years it seems a bit redundant to then also say \"after 5 years it is unsupported\". <h2>Version Numbering</h2>@@ -45,11 +45,12 @@ number, e.g. 9.5.3 to 9.5.4. <h2>Upgrading</h2> <p>-Major versions usually change the internal format of the system tables.-These changes are often complex, so we do not maintain backward-compatibility of all stored data.  A dump/reload of the database or use of the-<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module is required-for major upgrades. We also recommend reading the+Major versions need the data directory to be initialized so that the system tables+specific to that version can be created.  
There are two options to then+migrate the user data from the old directory to the new one: a dump/reload+of the database or using the+<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module.+We also recommend reading the <a href=\"/docs/current/upgrading.html\">upgrading</a> section of the major version you are planning to upgrade to. You can upgrade from one major version to another without upgrading to intervening versions, but we recommend reading@@ -58,14 +59,15 @@ versions prior to doing so. </p># This still talked about \"stored data\" when really that isn't the main concern and if it was pg_upgrade wouldn't work as an option.# The choice to say \"data directory\" here seems reasonable if arguable.  But it implies the location of user data and we state it also contains version-specific system tables.  It can go unsaid that they are version-specific because the collection as a whole and the layout of individual tables can and almost certainly will change between versions. <p>-Minor release upgrades do not require a dump and restore;  you simply stop+Minor releases upgrades do not impact the data directory, only the binaries.+You simply stop the database server, install the updated binaries, and restart the server.-Such upgrades might require manual changes to complete so always read+However, some upgrades might require manual changes to complete so always read the release notes first. </p># The fact minor releases don't require dump/reload doesn't directly preclude them from needing pg_upgrade and writing \"Such upgrades\" seems like it could lead someone to think that.# Data Directory seems generic enough to be understood here and since I mention it in the Major Version as to why data migration is needed, mentioning here# as something not directly altered and thus precluding the data migration has a nice symmetry.  The potential need for manual changes becomes clearer as well. <p>-Minor releases only fix frequently-encountered bugs, <a+Minor releases only fix bugs, <a href=\"/support/security/\">security</a> issues, and data corruption problems, so such upgrades are very low risk.  <em>The community considers performing minor upgrades to be less risky than continuing to # Reality mostly boils down to trusting our judgement when it comes to bugs as each one is evaluated individually.  We do not promise to leave simply buggy behavior alone in minor releases which is the only policy that would nearly eliminate upgrade risk.  We back-patch 5 year old bugs quite often which by definition are not frequently encountered.  I cannot think of a good adjective to put there nor does one seem necessary even if I agree it reads a bit odd otherwise.  I think that has more to do with this being just the wrong structure to get our point across.  Can we pick out some statistics from our long history of publishing minor releases to present an objective reality to the reader to convince them to trust our track-record in this matter?David J.", "msg_date": "Wed, 3 Apr 2024 18:01:41 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Wed, Apr 3, 2024 at 06:01:41PM -0700, David G. Johnston wrote:\n>  <p>\n>  The PostgreSQL Global Development Group supports a major version for 5 years\n> -after its initial release. 
After its five year anniversary, a major version\n> will\n> -have one last minor release containing any fixes and will be considered\n> -end-of-life (EOL) and no longer supported.\n> +after its initial release. After its fifth anniversary, a major version will\n> +have one last minor release and will be considered\n> +end-of-life (EOL), meaning no new bug fixes will be written for it.\n>  </p>\n> \n> # \"fifth anniversary \"seems more correct than \"five year anniversary\".\n> # The fact it gets a minor release implies it receives fixes.\n> # I've always had an issue with our use of the phrasing \"no longer supported\". \n> It seems vague when practically it just means we stop writing patches for it. \n> Whether individual community members refrain from answering questions or\n> helping people on these older releases is not a project decision but a personal\n> one.  Also, since we already say it is supported for 5 years it seems a bit\n> redundant to then also say \"after 5 years it is unsupported\".\n\nI went with the attached patch. I tightned up the \"unsupported\" part, trying to\ntie it to the fact that we don't make anymore releases for it.\n\n>  <h2>Version Numbering</h2>\n> @@ -45,11 +45,12 @@ number, e.g. 9.5.3 to 9.5.4.\n>  <h2>Upgrading</h2>\n> \n>  <p>\n> -Major versions usually change the internal format of the system tables.\n> -These changes are often complex, so we do not maintain backward\n> -compatibility of all stored data.  A dump/reload of the database or use of the\n> -<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module is required\n> -for major upgrades. We also recommend reading the\n> +Major versions need the data directory to be initialized so that the system\n> tables\n> +specific to that version can be created.  There are two options to then\n> +migrate the user data from the old directory to the new one: a dump/reload\n> +of the database or using the\n> +<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module.\n> +We also recommend reading the\n>  <a href=\"/docs/current/upgrading.html\">upgrading</a> section of the major\n>  version you are planning to upgrade to. You can upgrade from one major version\n>  to another without upgrading to intervening versions, but we recommend reading\n> @@ -58,14 +59,15 @@ versions prior to doing so.\n>  </p>\n> \n> # This still talked about \"stored data\" when really that isn't the main concern\n> and if it was pg_upgrade wouldn't work as an option.\n> # The choice to say \"data directory\" here seems reasonable if arguable.  But it\n> implies the location of user data and we state it also contains\n> version-specific system tables.  
It can go unsaid that they are\n> version-specific because the collection as a whole and the layout of individual\n> tables can and almost certainly will change between versions.\n> \n>  <p>\n> -Minor release upgrades do not require a dump and restore;  you simply stop\n> +Minor releases upgrades do not impact the data directory, only the binaries.\n> +You simply stop\n>  the database server, install the updated binaries, and restart the server.\n> -Such upgrades might require manual changes to complete so always read\n> +However, some upgrades might require manual changes to complete so always read\n>  the release notes first.\n>  </p>\n> \n> # The fact minor releases don't require dump/reload doesn't directly preclude\n> them from needing pg_upgrade and writing \"Such upgrades\" seems like it could\n\nMinor upgrades never have required pg_upgrade.\n\n> lead someone to think that.\n> # Data Directory seems generic enough to be understood here and since I mention\n> it in the Major Version as to why data migration is needed, mentioning here\n> # as something not directly altered and thus precluding the data migration has\n> a nice symmetry.  The potential need for manual changes becomes clearer as\n> well.\n> \n\nI decided your use of \"data directory\" was a better term and to combine\nthe first two sentences.\n\n>  <p>\n> -Minor releases only fix frequently-encountered bugs, <a\n> +Minor releases only fix bugs, <a\n>  href=\"/support/security/\">security</a> issues, and data corruption\n>  problems, so such upgrades are very low risk.  <em>The community\n>  considers performing minor upgrades to be less risky than continuing to\n>  \n> # Reality mostly boils down to trusting our judgement when it comes to bugs as\n> each one is evaluated individually.  We do not promise to leave simply buggy\n> behavior alone in minor releases which is the only policy that would nearly\n> eliminate upgrade risk.  We back-patch 5 year old bugs quite often which by\n> definition are not frequently encountered.  I cannot think of a good adjective\n> to put there nor does one seem necessary even if I agree it reads a bit odd\n> otherwise.  I think that has more to do with this being just the wrong\n> structure to get our point across.  Can we pick out some statistics from our\n> long history of publishing minor releases to present an objective reality to\n> the reader to convince them to trust our track-record in this matter?\n\nI went with frequently-encountered and low risk bugs\".\n\nPatch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 4 Apr 2024 14:23:14 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Apr 4, 2024 at 11:23 AM Bruce Momjian <[email protected]> wrote:\n\n> On Wed, Apr 3, 2024 at 06:01:41PM -0700, David G. Johnston wrote:\n> > <p>\n> > The PostgreSQL Global Development Group supports a major version for 5\n> years\n> > -after its initial release. After its five year anniversary, a major\n> version\n> > will\n> > -have one last minor release containing any fixes and will be considered\n> > -end-of-life (EOL) and no longer supported.\n> > +after its initial release. 
After its fifth anniversary, a major version\n> will\n> > +have one last minor release and will be considered\n> > +end-of-life (EOL), meaning no new bug fixes will be written for it.\n> > </p>\n> >\n> > # \"fifth anniversary \"seems more correct than \"five year anniversary\".\n> > # The fact it gets a minor release implies it receives fixes.\n> > # I've always had an issue with our use of the phrasing \"no longer\n> supported\".\n> > It seems vague when practically it just means we stop writing patches\n> for it.\n> > Whether individual community members refrain from answering questions or\n> > helping people on these older releases is not a project decision but a\n> personal\n> > one. Also, since we already say it is supported for 5 years it seems a\n> bit\n> > redundant to then also say \"after 5 years it is unsupported\".\n>\n> I went with the attached patch. I tightned up the \"unsupported\" part,\n> trying to\n> tie it to the fact that we don't make anymore releases for it.\n>\n> > <h2>Version Numbering</h2>\n> > @@ -45,11 +45,12 @@ number, e.g. 9.5.3 to 9.5.4.\n> > <h2>Upgrading</h2>\n> >\n> > <p>\n> > -Major versions usually change the internal format of the system tables.\n> > -These changes are often complex, so we do not maintain backward\n> > -compatibility of all stored data. A dump/reload of the database or use\n> of the\n> > -<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module is required\n> > -for major upgrades. We also recommend reading the\n> > +Major versions need the data directory to be initialized so that the\n> system\n> > tables\n> > +specific to that version can be created. There are two options to then\n> > +migrate the user data from the old directory to the new one: a\n> dump/reload\n> > +of the database or using the\n> > +<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module.\n> > +We also recommend reading the\n> > <a href=\"/docs/current/upgrading.html\">upgrading</a> section of the\n> major\n> > version you are planning to upgrade to. You can upgrade from one major\n> version\n> > to another without upgrading to intervening versions, but we recommend\n> reading\n> > @@ -58,14 +59,15 @@ versions prior to doing so.\n> > </p>\n> >\n> > # This still talked about \"stored data\" when really that isn't the main\n> concern\n> > and if it was pg_upgrade wouldn't work as an option.\n> > # The choice to say \"data directory\" here seems reasonable if arguable.\n> But it\n> > implies the location of user data and we state it also contains\n> > version-specific system tables. 
It can go unsaid that they are\n> > version-specific because the collection as a whole and the layout of\n> individual\n> > tables can and almost certainly will change between versions.\n> >\n> > <p>\n> > -Minor release upgrades do not require a dump and restore; you simply\n> stop\n> > +Minor releases upgrades do not impact the data directory, only the\n> binaries.\n> > +You simply stop\n> > the database server, install the updated binaries, and restart the\n> server.\n> > -Such upgrades might require manual changes to complete so always read\n> > +However, some upgrades might require manual changes to complete so\n> always read\n> > the release notes first.\n> > </p>\n> >\n> > # The fact minor releases don't require dump/reload doesn't directly\n> preclude\n> > them from needing pg_upgrade and writing \"Such upgrades\" seems like it\n> could\n>\n> Minor upgrades never have required pg_upgrade.\n>\n>\nHow about this:\n\"\"\"\nMajor versions make complex changes, so the contents of the data directory\ncannot be maintained in a backward compatible way. A dump and restore of\nthe databases is required, either done manually or as part of running the\n<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> server application.\n\"\"\"\n\nMy main change here is to mirror \"dump and restore\" in both paragraphs and\nmake it clear that this action is required and that the unnamed\npg_dump/pg_restore tools or pg_upgrade are used in order to perform this\ntask. Since minor version upgrades do not require \"dump and restore\" they\nneed not use these tools.\n\nAlso, calling pg_upgrade a module doesn't seem correct. It is found under\nserver applications in our docs and consists solely of that program (and a\nbunch of manual steps) from the user's perspective.\n\nDavid J.\n\nOn Thu, Apr 4, 2024 at 11:23 AM Bruce Momjian <[email protected]> wrote:On Wed, Apr  3, 2024 at 06:01:41PM -0700, David G. Johnston wrote:\n>  <p>\n>  The PostgreSQL Global Development Group supports a major version for 5 years\n> -after its initial release. After its five year anniversary, a major version\n> will\n> -have one last minor release containing any fixes and will be considered\n> -end-of-life (EOL) and no longer supported.\n> +after its initial release. After its fifth anniversary, a major version will\n> +have one last minor release and will be considered\n> +end-of-life (EOL), meaning no new bug fixes will be written for it.\n>  </p>\n> \n> # \"fifth anniversary \"seems more correct than \"five year anniversary\".\n> # The fact it gets a minor release implies it receives fixes.\n> # I've always had an issue with our use of the phrasing \"no longer supported\". \n> It seems vague when practically it just means we stop writing patches for it. \n> Whether individual community members refrain from answering questions or\n> helping people on these older releases is not a project decision but a personal\n> one.  Also, since we already say it is supported for 5 years it seems a bit\n> redundant to then also say \"after 5 years it is unsupported\".\n\nI went with the attached patch.  I tightned up the \"unsupported\" part, trying to\ntie it to the fact that we don't make anymore releases for it.\n\n>  <h2>Version Numbering</h2>\n> @@ -45,11 +45,12 @@ number, e.g. 9.5.3 to 9.5.4.\n>  <h2>Upgrading</h2>\n> \n>  <p>\n> -Major versions usually change the internal format of the system tables.\n> -These changes are often complex, so we do not maintain backward\n> -compatibility of all stored data.  
A dump/reload of the database or use of the\n> -<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module is required\n> -for major upgrades. We also recommend reading the\n> +Major versions need the data directory to be initialized so that the system\n> tables\n> +specific to that version can be created.  There are two options to then\n> +migrate the user data from the old directory to the new one: a dump/reload\n> +of the database or using the\n> +<a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> module.\n> +We also recommend reading the\n>  <a href=\"/docs/current/upgrading.html\">upgrading</a> section of the major\n>  version you are planning to upgrade to. You can upgrade from one major version\n>  to another without upgrading to intervening versions, but we recommend reading\n> @@ -58,14 +59,15 @@ versions prior to doing so.\n>  </p>\n> \n> # This still talked about \"stored data\" when really that isn't the main concern\n> and if it was pg_upgrade wouldn't work as an option.\n> # The choice to say \"data directory\" here seems reasonable if arguable.  But it\n> implies the location of user data and we state it also contains\n> version-specific system tables.  It can go unsaid that they are\n> version-specific because the collection as a whole and the layout of individual\n> tables can and almost certainly will change between versions.\n> \n>  <p>\n> -Minor release upgrades do not require a dump and restore;  you simply stop\n> +Minor releases upgrades do not impact the data directory, only the binaries.\n> +You simply stop\n>  the database server, install the updated binaries, and restart the server.\n> -Such upgrades might require manual changes to complete so always read\n> +However, some upgrades might require manual changes to complete so always read\n>  the release notes first.\n>  </p>\n> \n> # The fact minor releases don't require dump/reload doesn't directly preclude\n> them from needing pg_upgrade and writing \"Such upgrades\" seems like it could\n\nMinor upgrades never have required pg_upgrade.How about this:\"\"\"Major versions make complex changes, so the contents of the data directory cannot be maintained in a backward compatible way.  A dump and restore of the databases is required, either done manually or as part of running the <a href=\"/docs/current/pgupgrade.html\">pg_upgrade</a> server application.\"\"\"My main change here is to mirror \"dump and restore\" in both paragraphs and make it clear that this action is required and that the unnamed pg_dump/pg_restore tools or pg_upgrade are used in order to perform this task.  Since minor version upgrades do not require \"dump and restore\" they need not use these tools.Also, calling pg_upgrade a module doesn't seem correct.  It is found under server applications in our docs and consists solely of that program (and a bunch of manual steps) from the user's perspective.David J.", "msg_date": "Thu, 4 Apr 2024 12:27:32 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Apr 4, 2024 at 12:27:32PM -0700, David G. Johnston wrote:\n> How about this:\n> \"\"\"\n> Major versions make complex changes, so the contents of the data directory\n> cannot be maintained in a backward compatible way.  
A dump and restore of the\n> databases is required, either done manually or as part of running the <a href=\"\n> /docs/current/pgupgrade.html\">pg_upgrade</a> server application.\n> \"\"\"\n> \n> My main change here is to mirror \"dump and restore\" in both paragraphs and make\n> it clear that this action is required and that the unnamed pg_dump/pg_restore\n> tools or pg_upgrade are used in order to perform this task.  Since minor\n> version upgrades do not require \"dump and restore\" they need not use these\n> tools.\n\npg_upgrade only dumps/restores the database schema, which is not\nsomething most people would consider dump/restore; see:\n\n\thttps://momjian.us/main/writings/pgsql/pg_upgrade.pdf\n\n> Also, calling pg_upgrade a module doesn't seem correct.  It is found under\n> server applications in our docs and consists solely of that program (and a\n> bunch of manual steps) from the user's perspective.\n\nYes, you are correct. It used to be under \"modules\" and we didn't\nupdate this text, partly because this it not in our source git tree; \nupdated patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 4 Apr 2024 16:34:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Apr 4, 2024 at 2:23 PM Bruce Momjian <[email protected]> wrote:\n\n> -end-of-life (EOL) and no longer supported.\n> +after its initial release. After this, a final minor version will be\n> released\n> +and the software will then be unsupported (end-of-life).\n\n\nWould be a shame to lose the EOL acronym.\n\n+Such upgrades might require manual changes to complete so always read\n> +the release notes first.\n\n\nProposal:\n\"Such upgrades might require additional steps, so always read the release\nnotes first.\"\n\nI went with frequently-encountered and low risk bugs\".\n>\n\nBut neither of those classifications are really true. Especially the \"low\nrisk\" part - I could see various ways a reader could wrongly interpret that.\n\nCheers,\nGreg\n\nOn Thu, Apr 4, 2024 at 2:23 PM Bruce Momjian <[email protected]> wrote:-end-of-life (EOL) and no longer supported.+after its initial release. After this, a final minor version will be released+and the software will then be unsupported (end-of-life).Would be a shame to lose the EOL acronym.+Such upgrades might require manual changes to complete so always read+the release notes first.Proposal:\"Such upgrades might require additional steps, so always read the release notes first.\"I went with frequently-encountered and low risk bugs\".But neither of those classifications are really true. 
Especially the \"low risk\" part - I could see various ways a reader could wrongly interpret that.Cheers,Greg", "msg_date": "Thu, 4 Apr 2024 16:38:10 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Apr 4, 2024 at 04:38:10PM -0400, Greg Sabino Mullane wrote:\n> On Thu, Apr 4, 2024 at 2:23 PM Bruce Momjian <[email protected]> wrote:\n> +Such upgrades might require manual changes to complete so always read\n> +the release notes first.\n> \n> Proposal:\n> \"Such upgrades might require additional steps, so always read the release notes\n> first.\"\n\nYes, I modified that sentence.\n\n> I went with frequently-encountered and low risk bugs\".\n> \n> But neither of those classifications are really true. Especially the \"low risk\"\n> part - I could see various ways a reader could wrongly interpret that.\n\nI see your point. Updated patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 4 Apr 2024 16:51:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On Thu, Apr 4, 2024 at 04:51:50PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 4, 2024 at 04:38:10PM -0400, Greg Sabino Mullane wrote:\n> > On Thu, Apr 4, 2024 at 2:23 PM Bruce Momjian <[email protected]> wrote:\n> > +Such upgrades might require manual changes to complete so always read\n> > +the release notes first.\n> > \n> > Proposal:\n> > \"Such upgrades might require additional steps, so always read the release notes\n> > first.\"\n> \n> Yes, I modified that sentence.\n> \n> > I went with frequently-encountered and low risk bugs\".\n> > \n> > But neither of those classifications are really true. Especially the \"low risk\"\n> > part - I could see various ways a reader could wrongly interpret that.\n> \n> I see your point. Updated patch attached.\n\nI am ready to apply this patch to the website. How do I do this? Do I\njust commit this to the pgweb git tree? Does that push to the website?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 12 Apr 2024 17:12:06 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reports on obsolete Postgres versions" }, { "msg_contents": "On 4/12/24 2:12 PM, Bruce Momjian wrote:\r\n\r\n> I am ready to apply this patch to the website. How do I do this? Do I\r\n> just commit this to the pgweb git tree? Does that push to the website?\r\n\r\nI pushed this to the website[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/support/versioning/", "msg_date": "Fri, 12 Apr 2024 19:09:21 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reports on obsolete Postgres versions" } ]
[ { "msg_contents": "Hi hackers,\n\nBelow is a `case when` demo,\n\n```sql\ncreate table foo(a int, b int);\ninsert into foo values (1, 2);\nselect case 1 when 1 then a else b end from foo;\n```\n\n\nCurrently, psql output is,\n\n\n```text\n b\n---\n 1\n(1 row)\n```\n\n\nAt the first glance at the output column title, I assume the result of the sql is wrong. It should be `a`.\nAfter some investigation, I discovered that the result's value is accurate. However, PostgreSQL utilizes b as the title for the output column.\nNee we change the title of the case-when output column? If you hackers think it's worth the effort, I'm willing to invest time in working on it.\nBest Regards,\nWinter Loo\nHi hackers,\n\nBelow is a `case when` demo,```sqlcreate table foo(a int, b int);insert into foo values (1, 2);select case 1 when 1 then a else b end from foo;```Currently, psql output is,```text b\n---\n 1\n(1 row)```At the first glance at the output column title, I assume the result of the sql is wrong. It should be `a`.After some investigation, I discovered that the result's value is accurate. However, PostgreSQL utilizes b as the title for the output column.Nee we change the title of the case-when output column? If you hackers think it's worth the effort, I'm willing to invest time in working on it.\nBest Regards,Winter Loo", "msg_date": "Tue, 12 Mar 2024 20:40:15 +0800 (CST)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "confusing `case when` column name" }, { "msg_contents": "On Tuesday, March 12, 2024, [email protected] <[email protected]> wrote:\n\n>\n> Nee we change the title of the case-when output column?\n>\n>\nChoosing either a or b as the label seems wrong and probably worth changing\nto something that has no meaning and encourages the application of a column\nalias.\n\nDavid J.\n\nOn Tuesday, March 12, 2024, [email protected] <[email protected]> wrote:Nee we change the title of the case-when output column?Choosing either a or b as the label seems wrong and probably worth changing to something that has no meaning and encourages the application of a column alias.David J.", "msg_date": "Tue, 12 Mar 2024 06:27:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: confusing `case when` column name" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Tuesday, March 12, 2024, [email protected] <[email protected]> wrote:\n>> Nee we change the title of the case-when output column?\n\n> Choosing either a or b as the label seems wrong and probably worth changing\n> to something that has no meaning and encourages the application of a column\n> alias.\n\nYeah, we won't get any kudos for changing a rule that's stood for\n25 years, even if it's not very good. 
This is one of the places\nwhere it's just hard to make a great choice automatically, and\nusers are probably going to end up applying an AS clause most of\nthe time if they care about the column name at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Mar 2024 10:19:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: confusing `case when` column name" }, { "msg_contents": "> On 12 Mar 2024, at 15:19, Tom Lane <[email protected]> wrote:\n\n> users are probably going to end up applying an AS clause most of\n> the time if they care about the column name at all.\n\nIf anything, we could perhaps add a short note in the CASE documentation about\ncolumn naming, the way it's phrased now a new user could very easily assume it\nwould be \"case\".\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:37:56 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: confusing `case when` column name" } ]
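For anyone else surprised by this behavior, the workaround settled on above is simply an explicit alias; reusing the table from the original report (the alias name "result" is an arbitrary choice):

```sql
-- With an AS clause the output column is titled "result" instead of "b":
select case 1 when 1 then a else b end AS result from foo;
```

The returned value is unchanged (1, i.e. column a); only the column title differs.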
[ { "msg_contents": "Hi hackers,\n\nWhen the PostgreSQL server is configured with --with-llvm, the pgxs.mk\nframework will generate LLVM bitcode for extensions automatically.\nSometimes, I don't want to generate bitcode for some extensions. I can\nturn off this feature by specifying with_llvm=0 in the make command.\n\n```\nmake with_llvm=0\n```\n\nWould it be possible to add a new switch in the pgxs.mk framework to\nallow users to disable this feature? E.g., the Makefile looks like:\n\n```\nWITH_LLVM=no\nPG_CONFIG = pg_config\nPGXS := $(shell $(PG_CONFIG) --pgxs)\n```\n\nBest Regards,\nXing\n\n\n", "msg_date": "Tue, 12 Mar 2024 21:38:23 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Disable LLVM bitcode generation with pgxs.mk framework." }, { "msg_contents": "> On 12 Mar 2024, at 14:38, Xing Guo <[email protected]> wrote:\n\n> Would it be possible to add a new switch in the pgxs.mk framework to\n> allow users to disable this feature?\n\nSomething like that doesn't seem unreasonable I think.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:40:19 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable LLVM bitcode generation with pgxs.mk framework." }, { "msg_contents": "> On Tue, Mar 12, 2024 at 10:40 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 12 Mar 2024, at 14:38, Xing Guo <[email protected]> wrote:\n>\n> > Would it be possible to add a new switch in the pgxs.mk framework to\n> > allow users to disable this feature?\n>\n> Something like that doesn't seem unreasonable I think.\n\nThanks.\n\nI added a new option NO_LLVM_BITCODE to pgxs. I'm not sure if the name\nis appropriate.\n\n> --\n> Daniel Gustafsson\n>", "msg_date": "Wed, 13 Mar 2024 08:01:31 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disable LLVM bitcode generation with pgxs.mk framework." }, { "msg_contents": "On 12.03.24 14:38, Xing Guo wrote:\n> When the PostgreSQL server is configured with --with-llvm, the pgxs.mk\n> framework will generate LLVM bitcode for extensions automatically.\n> Sometimes, I don't want to generate bitcode for some extensions. I can\n> turn off this feature by specifying with_llvm=0 in the make command.\n> \n> ```\n> make with_llvm=0\n> ```\n> \n> Would it be possible to add a new switch in the pgxs.mk framework to\n> allow users to disable this feature? E.g., the Makefile looks like:\n> \n> ```\n> WITH_LLVM=no\n> PG_CONFIG = pg_config\n> PGXS := $(shell $(PG_CONFIG) --pgxs)\n> ```\n\nCan't you just put the very same with_llvm=0 into the makefile?\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 07:45:26 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable LLVM bitcode generation with pgxs.mk framework." }, { "msg_contents": "> On Wed, Mar 13, 2024 at 2:45 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 12.03.24 14:38, Xing Guo wrote:\n> > When the PostgreSQL server is configured with --with-llvm, the pgxs.mk\n> > framework will generate LLVM bitcode for extensions automatically.\n> > Sometimes, I don't want to generate bitcode for some extensions. I can\n> > turn off this feature by specifying with_llvm=0 in the make command.\n> >\n> > ```\n> > make with_llvm=0\n> > ```\n> >\n> > Would it be possible to add a new switch in the pgxs.mk framework to\n> > allow users to disable this feature? 
E.g., the Makefile looks like:\n> >\n> > ```\n> > WITH_LLVM=no\n> > PG_CONFIG = pg_config\n> > PGXS := $(shell $(PG_CONFIG) --pgxs)\n> > ```\n>\n> Can't you just put the very same with_llvm=0 into the makefile?\n\nAh, you're right. I can set it by overriding that variable.\n\n```\noverride with_llvm=no\n```\n\n\n", "msg_date": "Wed, 13 Mar 2024 15:18:33 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disable LLVM bitcode generation with pgxs.mk framework." } ]
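Pulling the thread above together, a complete extension Makefile that suppresses bitcode generation might look like the sketch below. The extension name and file list are placeholders; only the override line is what the thread actually settled on:

```
# Hypothetical extension: MODULES, EXTENSION and DATA are placeholders.
MODULES = my_ext
EXTENSION = my_ext
DATA = my_ext--1.0.sql

# Makefile.global (pulled in via PGXS) assigns with_llvm from configure,
# which would clobber a plain assignment made here, hence the override.
override with_llvm = no

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

For a one-off build, passing with_llvm=0 on the make command line, as shown at the start of the thread, achieves the same thing without touching the Makefile.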
[ { "msg_contents": "Moved from discussion on -committers:\n\nhttps://postgr.es/m/[email protected]\n\nSummary:\n\nDo not use perl empty patterns like // or qr// or s//.../, the behavior\nis too surprising for perl non-experts. There are a few such uses in\nour tests; patch attached. Unfortunately, there is no obvious way to\nautomatically detect them so I am just relying on grep. I'm sure there\nare others here who know more about perl than I do, so\nsuggestions/corrections are welcome.\n\nLong version:\n\nSome may know this already, but we just discovered the dangers of using\nempty patterns in perl:\n\n\"If the PATTERN evaluates to the empty string, the last successfully\nmatched regular expression is used instead... If no match has\npreviously succeeded, this will (silently) act instead as a genuine\nempty pattern (which will always match).\"\n\nhttps://perldoc.perl.org/perlop#The-empty-pattern-//\n\nIn other words, if you have code like:\n\n if ('xyz' =~ //)\n {\n print \"'xyz' matches //\\n\";\n }\n\nThe match will succeed and print, because there's no previous pattern,\nso // is a \"genuine\" empty pattern, which is treated like /.*/ (I\nthink?). Then, if you add some other code before it:\n\n if ('abc' =~ /abc/)\n {\n print \"'abc' matches /abc/\\n\";\n }\n\n if ('xyz' =~ //)\n {\n print \"'xyz' matches //\\n\";\n }\n\nThe first match will succeed, but the second match will fail, because\n// is treated like /abc/.\n\nOn reflection, that does seem very perl-like. But it can cause\nsurprising action-at-a-distance if not used carefully, especially for\nthose who aren't experts in perl. It's much safer to just not use the\nempty pattern.\n\nIf you use qr// instead:\n\nhttps://perldoc.perl.org/perlop#qr/STRING/msixpodualn\n\nlike:\n\n if ('abc' =~ qr/abc/)\n {\n print \"'abc' matches /abc/\\n\";\n }\n\n if ('xyz' =~ qr//)\n {\n print \"'xyz' matches //\\n\";\n }\n\nThen the second match may succeed or may fail, and it's not clear from\nthe documentation what precise circumstances matter. It seems to fail\non older versions of perl (like 5.16.3) and succeed on newer versions\n(5.38.2). However, it may also depend on when the qr// is [re]compiled,\nor regex flags, or locale, or may just be undefined.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 12 Mar 2024 10:22:04 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "perl: unsafe empty pattern behavior" }, { "msg_contents": "On 2024-Mar-12, Jeff Davis wrote:\n\n> Do not use perl empty patterns like // or qr// or s//.../, the behavior\n> is too surprising for perl non-experts.\n\nYeah, nasty.\n\n> There are a few such uses in\n> our tests; patch attached. Unfortunately, there is no obvious way to\n> automatically detect them so I am just relying on grep. I'm sure there\n> are others here who know more about perl than I do, so\n> suggestions/corrections are welcome.\n\nI suggest that pg_dump/t/002_pg_dump.pl could use a verification that\nthe ->{regexp} thing is not empty. I also tried grepping (for things\nlike qr{}, qr[], qr||, qr!!) and didn't find anything beyond what you\nhave ... but I only looked for the \"qr\" literal, not other ways to get\nregexes.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. 
(Don Knuth)\n\n\n", "msg_date": "Tue, 12 Mar 2024 18:53:23 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl: unsafe empty pattern behavior" }, { "msg_contents": "On Tue, 2024-03-12 at 18:53 +0100, Alvaro Herrera wrote:\n> I suggest that pg_dump/t/002_pg_dump.pl could use a verification that\n> the ->{regexp} thing is not empty.\n\nI'm not sure how exactly to test for an empty pattern. The problem is,\nwe don't really want to test for an empty pattern, because /(?^:)/ is\nfine. The problem is //, which gets turned into an actual pattern\n(perhaps empty or perhaps not), and by the time it's in the %tests\nhash, I think it's too late to distinguish.\n\nAgain, I'm not a perl expert, so suggestions welcome.\n\n>   I also tried grepping (for things\n> like qr{}, qr[], qr||, qr!!) and didn't find anything beyond what you\n> have ... but I only looked for the \"qr\" literal, not other ways to\n> get\n> regexes.\n\nI think that's fine. qr// seems the most dangerous, because it seems to\nbehave differently in different versions of perl.\n\nGrepping for regexes in perl code is an \"interesting\" exercise.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:51:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: perl: unsafe empty pattern behavior" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Tue, 2024-03-12 at 18:53 +0100, Alvaro Herrera wrote:\n>> I also tried grepping (for things\n>> like qr{}, qr[], qr||, qr!!) and didn't find anything beyond what you\n>> have ... but I only looked for the \"qr\" literal, not other ways to\n>> get regexes.\n\n> I think that's fine. qr// seems the most dangerous, because it seems to\n> behave differently in different versions of perl.\n\nI wonder whether perlcritic has sufficiently deep understanding of\nPerl code that it could find these hazards. I already checked,\nand found that there's no built-in filter for this (at least not\nin the perlcritic version I have), but maybe we could write one?\nThe rules seem to be plug-in modules, so you can make your own\nin principle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Mar 2024 18:59:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl: unsafe empty pattern behavior" }, { "msg_contents": "\nOn 2024-03-12 Tu 18:59, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n>> On Tue, 2024-03-12 at 18:53 +0100, Alvaro Herrera wrote:\n>>> I also tried grepping (for things\n>>> like qr{}, qr[], qr||, qr!!) and didn't find anything beyond what you\n>>> have ... but I only looked for the \"qr\" literal, not other ways to\n>>> get regexes.\n>> I think that's fine. qr// seems the most dangerous, because it seems to\n>> behave differently in different versions of perl.\n> I wonder whether perlcritic has sufficiently deep understanding of\n> Perl code that it could find these hazards. I already checked,\n> and found that there's no built-in filter for this (at least not\n> in the perlcritic version I have), but maybe we could write one?\n> The rules seem to be plug-in modules, so you can make your own\n> in principle.\n\n\n\nYeah, that was my thought too. 
I'd start with ProhibitComplexRegexes.pm \nas a template.\n\nIf nobody else does it I'll have a go, but it might take a while.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 00:04:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl: unsafe empty pattern behavior" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-03-12 Tu 18:59, Tom Lane wrote:\n>> I wonder whether perlcritic has sufficiently deep understanding of\n>> Perl code that it could find these hazards.\n\n> Yeah, that was my thought too. I'd start with ProhibitComplexRegexes.pm \n> as a template.\n\nOooh. Taking a quick look at the source code:\n\nhttps://metacpan.org/dist/Perl-Critic/source/lib/Perl/Critic/Policy/RegularExpressions/ProhibitComplexRegexes.pm\n\nit seems like it'd be pretty trivial to convert that from \"complain if\nregex contains more than N characters\" to \"complain if regex contains\nzero characters\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Mar 2024 00:15:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: perl: unsafe empty pattern behavior" } ]
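To make the plug-in idea concrete, a rough and untested sketch of such a policy, modeled on ProhibitComplexRegexes but flagging zero-length patterns instead, might look like the following. The package name, severity and themes are arbitrary choices here; the PPI token classes and the get_match_string() accessor are the same ones the template policy relies on:

```perl
package Perl::Critic::Policy::RegularExpressions::ProhibitEmptyPatterns;

use strict;
use warnings;

use Readonly;
use Perl::Critic::Utils qw(:severities);
use parent 'Perl::Critic::Policy';

Readonly::Scalar my $DESC => q{Empty pattern (//, qr//, or s//.../) used};
Readonly::Scalar my $EXPL =>
  q{An empty pattern silently reuses the last successful match; write the pattern out};

sub supported_parameters { return () }
sub default_severity     { return $SEVERITY_HIGHEST }
sub default_themes       { return qw(core bugs) }

# Look at the same token types as ProhibitComplexRegexes: m//, s///, qr//.
sub applies_to
{
	return qw(PPI::Token::Regexp::Match
	  PPI::Token::Regexp::Substitute
	  PPI::Token::QuoteLike::Regexp);
}

sub violates
{
	my ($self, $elem, undef) = @_;

	# Complain only when the match portion of the regex is empty.
	return if length $elem->get_match_string();

	return $self->violation($DESC, $EXPL, $elem);
}

1;
```

This would not catch patterns that only become empty at run time (for example an interpolated variable holding an empty string), but it does cover the literal //, qr// and s//.../ spellings that the grep-based search earlier in the thread was hunting for.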
[ { "msg_contents": "Hello,\n\nBoth the incremental JSON [1] and OAuth [2] patchsets would be\nimproved by json_errdetail(), which was removed from FRONTEND builds\nin b44669b2ca:\n\n> The routine providing a detailed error message based on the error code\n> is made backend-only, the existing code being unsafe to use in the\n> frontend as the error message may finish by being palloc'd or point to a\n> static string, so there is no way to know if the memory of the message\n> should be pfree'd or not.\n\nAttached is a patch to undo this, by attaching any necessary\nallocations to the JsonLexContext so the caller doesn't have to keep\ntrack of them.\n\nThis is based on the first patch of the OAuth patchset, which\nadditionally needs json_errdetail() to be safe to use from libpq\nitself. Alvaro pointed out offlist that we don't have to go that far\nto re-enable this function for the utilities, so this patch is a sort\nof middle ground between what we have now and what OAuth implements.\n(There is some additional minimization that could be done to this\npatch, but I'm hoping to keep the code structure consistent between\nthe two, if the result is acceptable.)\n\nTwo notes that I wanted to point out explicitly:\n- On the other thread, Daniel contributed a destroyStringInfo()\ncounterpart for makeStringInfo(), which is optional but seemed useful\nto include here.\n- After this patch, if a frontend client calls json_errdetail()\nwithout calling freeJsonLexContext(), it will leak memory.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/682c8fff-355c-a04f-57ac-81055c4ccda8%40dunslane.net\n[2] https://www.postgresql.org/message-id/CAOYmi%2BkKNZCL7uz-LHyBggM%2BfEcf4285pFWwm7spkUb8irY7mQ%40mail.gmail.com", "msg_date": "Tue, 12 Mar 2024 11:43:23 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Support json_errdetail in FRONTEND builds" }, { "msg_contents": "\nOn 2024-03-12 Tu 14:43, Jacob Champion wrote:\n> Hello,\n>\n> Both the incremental JSON [1] and OAuth [2] patchsets would be\n> improved by json_errdetail(), which was removed from FRONTEND builds\n> in b44669b2ca:\n>\n>> The routine providing a detailed error message based on the error code\n>> is made backend-only, the existing code being unsafe to use in the\n>> frontend as the error message may finish by being palloc'd or point to a\n>> static string, so there is no way to know if the memory of the message\n>> should be pfree'd or not.\n> Attached is a patch to undo this, by attaching any necessary\n> allocations to the JsonLexContext so the caller doesn't have to keep\n> track of them.\n>\n> This is based on the first patch of the OAuth patchset, which\n> additionally needs json_errdetail() to be safe to use from libpq\n> itself. 
Alvaro pointed out offlist that we don't have to go that far\n> to re-enable this function for the utilities, so this patch is a sort\n> of middle ground between what we have now and what OAuth implements.\n> (There is some additional minimization that could be done to this\n> patch, but I'm hoping to keep the code structure consistent between\n> the two, if the result is acceptable.)\n\n\n\nSeems reasonable.\n\n>\n> Two notes that I wanted to point out explicitly:\n> - On the other thread, Daniel contributed a destroyStringInfo()\n> counterpart for makeStringInfo(), which is optional but seemed useful\n> to include here.\n\n\nyeah, although maybe worth a different patch.\n\n\n> - After this patch, if a frontend client calls json_errdetail()\n> without calling freeJsonLexContext(), it will leak memory.\n\n\nNot too concerned about this:\n\n\n1. we tend to be a bit more relaxed about that in frontend programs, \nespecially those not expected to run for long times and especially on \nerror paths, where in many cases the program will just exit anyway.\n\n2. the fix is simple where it's needed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 20:38:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Tue, Mar 12, 2024 at 08:38:43PM -0400, Andrew Dunstan wrote:\n> On 2024-03-12 Tu 14:43, Jacob Champion wrote:\n>> Two notes that I wanted to point out explicitly:\n>> - On the other thread, Daniel contributed a destroyStringInfo()\n>> counterpart for makeStringInfo(), which is optional but seemed useful\n>> to include here.\n> \n> yeah, although maybe worth a different patch.\n\n-\t{\n-\t\tpfree(lex->strval->data);\n-\t\tpfree(lex->strval);\n-\t}\n+\t\tdestroyStringInfo(lex->strval);\n\nI've wanted that a few times, FWIW. I would do a split, mainly for\nclarity.\n\n>> - After this patch, if a frontend client calls json_errdetail()\n>> without calling freeJsonLexContext(), it will leak memory.\n>\n> Not too concerned about this:\n> \n> 1. we tend to be a bit more relaxed about that in frontend programs,\n> especially those not expected to run for long times and especially on error\n> paths, where in many cases the program will just exit anyway.\n> \n> 2. the fix is simple where it's needed.\n\nThis does not stress me much, either. I can see that OAuth introduces\nat least two calls of json_errdetail() in libpq, and that would matter\nthere.\n\n case JSON_EXPECTED_STRING:\n- return psprintf(_(\"Expected string, but found \\\"%s\\\".\"),\n- extract_token(lex));\n+ appendStringInfo(lex->errormsg,\n+ _(\"Expected string, but found \\\"%.*s\\\".\"),\n+ toklen, lex->token_start);\n\nMaybe this could be wrapped in a macro to avoid copying around\ntoken_start and toklen for all the error cases.\n--\nMichael", "msg_date": "Wed, 13 Mar 2024 15:37:47 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Tue, Mar 12, 2024 at 11:38 PM Michael Paquier <[email protected]> wrote:\n> On Tue, Mar 12, 2024 at 08:38:43PM -0400, Andrew Dunstan wrote:\n> > yeah, although maybe worth a different patch.\n>\n> I've wanted that a few times, FWIW. I would do a split, mainly for\n> clarity.\n\nSounds good, split into v2-0002. (That gives me room to switch other\ncall sites to the new API, too.) 
Thanks both!\n\n> This does not stress me much, either. I can see that OAuth introduces\n> at least two calls of json_errdetail() in libpq, and that would matter\n> there.\n\nYep.\n\n> case JSON_EXPECTED_STRING:\n> - return psprintf(_(\"Expected string, but found \\\"%s\\\".\"),\n> - extract_token(lex));\n> + appendStringInfo(lex->errormsg,\n> + _(\"Expected string, but found \\\"%.*s\\\".\"),\n> + toklen, lex->token_start);\n>\n> Maybe this could be wrapped in a macro to avoid copying around\n> token_start and toklen for all the error cases.\n\nI gave it a shot in 0001; see what you think.\n\nThanks,\n--Jacob", "msg_date": "Wed, 13 Mar 2024 11:20:16 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Wed, Mar 13, 2024 at 11:20:16AM -0700, Jacob Champion wrote:\n> Sounds good, split into v2-0002. (That gives me room to switch other\n> call sites to the new API, too.) Thanks both!\n\nThat looks OK to me. I can see 7~8 remaining sites where StringInfo\ndata is freed, like in the syslogger, but we should not touch the\nStringInfo.\n\n>> case JSON_EXPECTED_STRING:\n>> - return psprintf(_(\"Expected string, but found \\\"%s\\\".\"),\n>> - extract_token(lex));\n>> + appendStringInfo(lex->errormsg,\n>> + _(\"Expected string, but found \\\"%.*s\\\".\"),\n>> + toklen, lex->token_start);\n>>\n>> Maybe this could be wrapped in a macro to avoid copying around\n>> token_start and toklen for all the error cases.\n> \n> I gave it a shot in 0001; see what you think.\n\nThat's cleaner, thanks.\n\nHmm, I think that it is cleaner to create the new API first, and then\nrely on it, reversing the order of both patches (perhaps the extra\ndestroyStringInfo in freeJsonLexContext() should have been moved in\n0001). 
See the attached with few tweaks to 0001, previously 0002 @-@.\nI'd still need to do a more serious lookup of 0002, previously 0001.\n--\nMichael", "msg_date": "Thu, 14 Mar 2024 17:06:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 14 Mar 2024, at 09:06, Michael Paquier <[email protected]> wrote:\n\n> I think that it is cleaner to create the new API first, and then\n> rely on it, reversing the order of both patches\n\nI agree with this ordering.\n\n> (perhaps the extra destroyStringInfo in freeJsonLexContext() should\n> have been moved in 0001).\n\nI wouldn't worry about that, seems fine as is to me.\n\n> See the attached with few tweaks to 0001, previously 0002 @-@.\n> I'd still need to do a more serious lookup of 0002, previously 0001.\n\nA few small comments:\n\n- *\n+*\nWhitespace\n\n\n+\t/* don't allow destroys of read-only StringInfos */\n+\tAssert(str->maxlen != 0);\nConsidering that StringInfo.c don't own the memory here I think it's warranted\nto turn this assert into an elog() to avoid the risk of use-after-free bugs.\n\n\n+ * The returned allocation is either static or owned by the JsonLexContext and\n+ * should not be freed.\nThe most important part of that comment is at the very end, to help readers I\nwould reword this to just \"The returned pointer should not be freed.\", or at\nleast put that part first.\n\n\n+#define token_error(lex, format) \\\nI'm not sure this buys much more than reduced LoC, the expansion isn't\nunreadable to the point that the format constraint encoded in the macro is\nworth it IMO.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 10:56:46 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Thu, Mar 14, 2024 at 10:56:46AM +0100, Daniel Gustafsson wrote:\n> +\t/* don't allow destroys of read-only StringInfos */\n> +\tAssert(str->maxlen != 0);\n> Considering that StringInfo.c don't own the memory here I think it's warranted\n> to turn this assert into an elog() to avoid the risk of use-after-free bugs.\n\nHmm. I am not sure how much protection this would offer, TBH. One\nthing that I find annoying with common/stringinfo.c as it is currently\nis that we have two exit() calls in the enlarge path, and it does not\nseem wise to me to spread that even more.\n\nMy last argument sounds like a nit for HEAD knowing that this does not\nimpact libpq that has its own pqexpbuffer.c to avoid issues with\npalloc, elog and exit, but that could be a problem if OAuth relies\nmore on these code paths in libpq.\n--\nMichael", "msg_date": "Fri, 15 Mar 2024 09:10:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Hmm. I am not sure how much protection this would offer, TBH. 
One\n> thing that I find annoying with common/stringinfo.c as it is currently\n> is that we have two exit() calls in the enlarge path, and it does not\n> seem wise to me to spread that even more.\n\n> My last argument sounds like a nit for HEAD knowing that this does not\n> impact libpq that has its own pqexpbuffer.c to avoid issues with\n> palloc, elog and exit, but that could be a problem if OAuth relies\n> more on these code paths in libpq.\n\nI hope nobody is expecting that such code will get accepted. We have\na policy (and an enforcement mechanism) that libpq.so must not call\nexit(). OAuth code in libpq will need to cope with using pqexpbuffer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Mar 2024 21:04:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On 2024-Mar-14, Tom Lane wrote:\n\n> Michael Paquier <[email protected]> writes:\n> > Hmm. I am not sure how much protection this would offer, TBH. One\n> > thing that I find annoying with common/stringinfo.c as it is currently\n> > is that we have two exit() calls in the enlarge path, and it does not\n> > seem wise to me to spread that even more.\n\n> I hope nobody is expecting that such code will get accepted. We have\n> a policy (and an enforcement mechanism) that libpq.so must not call\n> exit(). OAuth code in libpq will need to cope with using pqexpbuffer.\n\nFWIW that's exactly what Jacob's OAUTH patch does -- it teaches the\nrelevant JSON parser code to use pqexpbuffer when in frontend\nenvironment, and continues to use StringInfo in backend. I find that a\nbit cumbersome, but if the idea of changing the StringInfo behavior\n(avoiding exit()) is too radical, then perhaps it's better that we go\nwith Jacob's proposal in the other thread:\n\n+/*\n+ * In backend, we will use palloc/pfree along with StringInfo. In frontend, use\n+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.\n+ */\n+#ifdef FRONTEND\n+\n+#define STRDUP(s) strdup(s)\n+#define ALLOC(size) malloc(size)\n+\n+#define appendStrVal appendPQExpBuffer\n+#define appendBinaryStrVal appendBinaryPQExpBuffer\n+#define appendStrValChar appendPQExpBufferChar\n+#define createStrVal createPQExpBuffer\n+#define resetStrVal resetPQExpBuffer\n+#define destroyStrVal destroyPQExpBuffer\n+\n+#else /* !FRONTEND */\n+\n+#define STRDUP(s) pstrdup(s)\n+#define ALLOC(size) palloc(size)\n+\n+#define appendStrVal appendStringInfo\n+#define appendBinaryStrVal appendBinaryStringInfo\n+#define appendStrValChar appendStringInfoChar\n+#define createStrVal makeStringInfo\n+#define resetStrVal resetStringInfo\n+#define destroyStrVal destroyStringInfo\n+\n+#endif\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n", "msg_date": "Fri, 15 Mar 2024 09:38:19 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 15 Mar 2024, at 09:38, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2024-Mar-14, Tom Lane wrote:\n> \n>> Michael Paquier <[email protected]> writes:\n>>> Hmm. I am not sure how much protection this would offer, TBH. 
One\n>>> thing that I find annoying with common/stringinfo.c as it is currently\n>>> is that we have two exit() calls in the enlarge path, and it does not\n>>> seem wise to me to spread that even more.\n> \n>> I hope nobody is expecting that such code will get accepted. We have\n>> a policy (and an enforcement mechanism) that libpq.so must not call\n>> exit(). OAuth code in libpq will need to cope with using pqexpbuffer.\n> \n> FWIW that's exactly what Jacob's OAUTH patch does -- it teaches the\n> relevant JSON parser code to use pqexpbuffer when in frontend\n> environment, and continues to use StringInfo in backend. I find that a\n> bit cumbersome, but if the idea of changing the StringInfo behavior\n> (avoiding exit()) is too radical, then perhaps it's better that we go\n> with Jacob's proposal in the other thread:\n\nCorrect, the OAuth work does not make any claims to use StringInfo in libpq.\nMy understanding of this thread was to make it use StringInfo for now to get\nthis available for frontend binaries now, and reduce scope here, and later\nchange it up if/when the OAuth patch lands.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:03:54 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 15 Mar 2024, at 01:10, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Mar 14, 2024 at 10:56:46AM +0100, Daniel Gustafsson wrote:\n>> +\t/* don't allow destroys of read-only StringInfos */\n>> +\tAssert(str->maxlen != 0);\n>> Considering that StringInfo.c don't own the memory here I think it's warranted\n>> to turn this assert into an elog() to avoid the risk of use-after-free bugs.\n> \n> Hmm. I am not sure how much protection this would offer, TBH.\n\nI can't see how refusing to free memory owned and controlled by someone else,\nand throwing an error if attempted, wouldn't be a sound defensive programming\nmeasure.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:32:00 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> I can't see how refusing to free memory owned and controlled by someone else,\n> and throwing an error if attempted, wouldn't be a sound defensive programming\n> measure.\n\nI think the argument is about what \"refusal\" looks like.\nAn Assert seems OK to me, but anything based on elog is\nlikely to be problematic, because it'll involve exit()\nsomewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:15:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Fri, Mar 15, 2024 at 10:15 AM Tom Lane <[email protected]> wrote:\n\n> Daniel Gustafsson <[email protected]> writes:\n> > I can't see how refusing to free memory owned and controlled by someone\n> else,\n> > and throwing an error if attempted, wouldn't be a sound defensive\n> programming\n> > measure.\n>\n> I think the argument is about what \"refusal\" looks like.\n> An Assert seems OK to me, but anything based on elog is\n> likely to be problematic, because it'll involve exit()\n> somewhere.\n>\n>\n>\n\nYeah, I agree an Assert seems safest here.\n\nI'd like to get this done ASAP so I can rebase my incremental parse\npatchset. Daniel, do you want to commit it? 
If not, I can.\n\ncheers\n\nandrew\n\nOn Fri, Mar 15, 2024 at 10:15 AM Tom Lane <[email protected]> wrote:Daniel Gustafsson <[email protected]> writes:\n> I can't see how refusing to free memory owned and controlled by someone else,\n> and throwing an error if attempted, wouldn't be a sound defensive programming\n> measure.\n\nI think the argument is about what \"refusal\" looks like.\nAn Assert seems OK to me, but anything based on elog is\nlikely to be problematic, because it'll involve exit()\nsomewhere.\n\n                    Yeah, I agree an Assert seems safest here. I'd like to get this done ASAP so I can rebase my incremental parse patchset. Daniel, do you want to commit it? If not, I can.cheersandrew", "msg_date": "Fri, 15 Mar 2024 16:56:22 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 15 Mar 2024, at 21:56, Andrew Dunstan <[email protected]> wrote:\n> On Fri, Mar 15, 2024 at 10:15 AM Tom Lane <[email protected] <mailto:[email protected]>> wrote:\n> Daniel Gustafsson <[email protected] <mailto:[email protected]>> writes:\n> > I can't see how refusing to free memory owned and controlled by someone else,\n> > and throwing an error if attempted, wouldn't be a sound defensive programming\n> > measure.\n> \n> I think the argument is about what \"refusal\" looks like.\n> An Assert seems OK to me, but anything based on elog is\n> likely to be problematic, because it'll involve exit()\n> somewhere. \n> \n> Yeah, I agree an Assert seems safest here. \n> \n> I'd like to get this done ASAP so I can rebase my incremental parse patchset. Daniel, do you want to commit it? If not, I can.\n\nSure, I can commit these patches. It won't be until tomorrow though since I\nprefer to have ample time to monitor the buildfarm, if you are in a bigger rush\nthan that then feel free to go ahead.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 23:23:07 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Fri, Mar 15, 2024 at 11:23:07PM +0100, Daniel Gustafsson wrote:\n> On 15 Mar 2024, at 21:56, Andrew Dunstan <[email protected]> wrote:\n>> On Fri, Mar 15, 2024 at 10:15 AM Tom Lane <[email protected] <mailto:[email protected]>> wrote:\n>> Yeah, I agree an Assert seems safest here.\n\nCool.\n\n>> I'd like to get this done ASAP so I can rebase my incremental parse\n>> patchset. Daniel, do you want to commit it? If not, I can.\n> \n> Sure, I can commit these patches. It won't be until tomorrow though since I\n> prefer to have ample time to monitor the buildfarm, if you are in a bigger rush\n> than that then feel free to go ahead.\n\n+1. 
I've not looked much at 0002, but feel free to do so if you think\nboth are good for shipping.\n--\nMichael", "msg_date": "Sat, 16 Mar 2024 08:45:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "\n\n> On Mar 16, 2024, at 8:53 AM, Daniel Gustafsson <[email protected]> wrote:\n> \n> \n>> \n>> On 15 Mar 2024, at 21:56, Andrew Dunstan <[email protected]> wrote:\n>> On Fri, Mar 15, 2024 at 10:15 AM Tom Lane <[email protected] <mailto:[email protected]>> wrote:\n>> Daniel Gustafsson <[email protected] <mailto:[email protected]>> writes:\n>>> I can't see how refusing to free memory owned and controlled by someone else,\n>>> and throwing an error if attempted, wouldn't be a sound defensive programming\n>>> measure.\n>> \n>> I think the argument is about what \"refusal\" looks like.\n>> An Assert seems OK to me, but anything based on elog is\n>> likely to be problematic, because it'll involve exit()\n>> somewhere. \n>> \n>> Yeah, I agree an Assert seems safest here.\n>> \n>> I'd like to get this done ASAP so I can rebase my incremental parse patchset. Daniel, do you want to commit it? If not, I can.\n> \n> Sure, I can commit these patches. It won't be until tomorrow though since I\n> prefer to have ample time to monitor the buildfarm, if you are in a bigger rush\n> than that then feel free to go ahead.\n> \n\ntomorrow will be fine, thanks \n\nCheers\n\nAndrew \n\n\n", "msg_date": "Sat, 16 Mar 2024 10:29:19 +1030", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 16 Mar 2024, at 00:59, Andrew Dunstan <[email protected]> wrote:\n>> On Mar 16, 2024, at 8:53 AM, Daniel Gustafsson <[email protected]> wrote:\n>>> On 15 Mar 2024, at 21:56, Andrew Dunstan <[email protected]> wrote:\n\n>>> I'd like to get this done ASAP so I can rebase my incremental parse patchset. Daniel, do you want to commit it? If not, I can.\n>> \n>> Sure, I can commit these patches. It won't be until tomorrow though since I\n>> prefer to have ample time to monitor the buildfarm, if you are in a bigger rush\n>> than that then feel free to go ahead.\n> \n> tomorrow will be fine, thanks \n\nSorry, I ran into some unforeseen scheduling issues and had less time available\nthan planned. I have pushed the 0001 StringInfo patch to reduce the work for\ntomorrow when I will work on 0002 unless beaten to it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 17 Mar 2024 00:00:56 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 17 Mar 2024, at 00:00, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 16 Mar 2024, at 00:59, Andrew Dunstan <[email protected]> wrote:\n>>> On Mar 16, 2024, at 8:53 AM, Daniel Gustafsson <[email protected]> wrote:\n>>>> On 15 Mar 2024, at 21:56, Andrew Dunstan <[email protected]> wrote:\n> \n>>>> I'd like to get this done ASAP so I can rebase my incremental parse patchset. Daniel, do you want to commit it? If not, I can.\n>>> \n>>> Sure, I can commit these patches. 
It won't be until tomorrow though since I\n>>> prefer to have ample time to monitor the buildfarm, if you are in a bigger rush\n>>> than that then feel free to go ahead.\n>> \n>> tomorrow will be fine, thanks \n> \n> Sorry, I ran into some unforeseen scheduling issues and had less time available\n> than planned. I have pushed the 0001 StringInfo patch to reduce the work for\n> tomorrow when I will work on 0002 unless beaten to it.\n\nI took another look at this tonight and committed it with some mostly cosmetic\nchanges. Since then, tamandua failed the SSL test on this commit but I am\nunable to reproduce any error both on older OpenSSL and matching the 3.1\nversion on tamandua, so not sure if its related. Other animals have cleared\nsslcheck after this, but looking at this highlights just how rare it is for a\nbuildfarm animal to run sslcheck =/\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 00:49:24 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> I took another look at this tonight and committed it with some mostly cosmetic\n> changes. Since then, tamandua failed the SSL test on this commit but I am\n> unable to reproduce any error both on older OpenSSL and matching the 3.1\n> version on tamandua, so not sure if its related.\n\nThat failure looks like just a random buildfarm burp:\n\n2024-03-17 23:11:30.989 UTC [106988][postmaster][:0] LOG: starting PostgreSQL 17devel on x86_64-linux, compiled by gcc-12.3.0, 64-bit\n2024-03-17 23:11:30.989 UTC [106988][postmaster][:0] LOG: could not bind IPv4 address \"127.0.0.1\": Address already in use\n2024-03-17 23:11:30.989 UTC [106988][postmaster][:0] HINT: Is another postmaster already running on port 54588? If not, wait a few seconds and retry.\n2024-03-17 23:11:30.990 UTC [106988][postmaster][:0] WARNING: could not create listen socket for \"127.0.0.1\"\n2024-03-17 23:11:30.990 UTC [106988][postmaster][:0] FATAL: could not create any TCP/IP sockets\n2024-03-17 23:11:30.993 UTC [106988][postmaster][:0] LOG: database system is shut down\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Mar 2024 20:13:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "> On 18 Mar 2024, at 01:13, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> I took another look at this tonight and committed it with some mostly cosmetic\n>> changes. Since then, tamandua failed the SSL test on this commit but I am\n>> unable to reproduce any error both on older OpenSSL and matching the 3.1\n>> version on tamandua, so not sure if its related.\n> \n> That failure looks like just a random buildfarm burp:\n\nIndeed, and it corrected itself a few hours later.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 09:34:35 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Sun, Mar 17, 2024 at 4:49 PM Daniel Gustafsson <[email protected]> wrote:\n> I took another look at this tonight and committed it with some mostly cosmetic\n> changes.\n\nGreat! 
Thanks everyone.\n\n--Jacob\n\n\n", "msg_date": "Mon, 18 Mar 2024 06:17:18 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" }, { "msg_contents": "On Mon, Mar 18, 2024 at 06:17:18AM -0700, Jacob Champion wrote:\n> Great! Thanks everyone.\n\nCool. Thanks for the commit.\n--\nMichael", "msg_date": "Tue, 19 Mar 2024 10:04:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support json_errdetail in FRONTEND builds" } ]
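For readers unfamiliar with the StringInfo lifecycle discussed in the thread above, here is a minimal illustrative sketch of the makeStringInfo()/destroyStringInfo() pairing that the committed 0001 patch provides. The helper name build_message() and its message text are hypothetical and not taken from the patch; only makeStringInfo(), appendStringInfo(), pstrdup() and destroyStringInfo() are the real APIs being discussed.

#include "postgres.h"
#include "lib/stringinfo.h"

/*
 * Hypothetical helper: build a palloc'd message using a heap-allocated
 * StringInfo, then release both the data buffer and the StringInfoData
 * itself with destroyStringInfo(), the counterpart of makeStringInfo().
 * Read-only StringInfos (maxlen == 0) must not be passed to
 * destroyStringInfo(); that is what the Assert discussed above guards
 * against.
 */
static char *
build_message(const char *detail)
{
	StringInfo	buf = makeStringInfo();
	char	   *result;

	appendStringInfo(buf, "something went wrong: %s", detail);
	result = pstrdup(buf->data);

	destroyStringInfo(buf);		/* frees buf->data and buf itself */
	return result;
}

In frontend use of the JSON lexer the same idea applies one level up: per the thread, any message returned by json_errdetail() is owned by the JsonLexContext, so a single freeJsonLexContext() call releases it rather than the caller freeing the string directly.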
[ { "msg_contents": "While playing around with EXPLAIN and SubPlans, I noticed that there's\na bug in how this is handled for MERGE. For example:\n\ndrop table if exists src, tgt, ref;\ncreate table src (a int, b text);\ncreate table tgt (a int, b text);\ncreate table ref (a int);\n\nexplain (verbose, costs off)\nmerge into tgt t\n using (select (select r.a from ref r where r.a = s.a) a, b from src s) s\n on t.a = s.a\n when not matched then insert values (s.a, s.b);\n\n QUERY PLAN\n-----------------------------------------------------------\n Merge on public.tgt t\n -> Merge Left Join\n Output: t.ctid, s.a, s.b, s.ctid\n Merge Cond: (((SubPlan 1)) = t.a)\n -> Sort\n Output: s.a, s.b, s.ctid, ((SubPlan 1))\n Sort Key: ((SubPlan 1))\n -> Seq Scan on public.src s\n Output: s.a, s.b, s.ctid, (SubPlan 1)\n SubPlan 1\n -> Seq Scan on public.ref r\n Output: r.a\n Filter: (r.a = s.a)\n -> Sort\n Output: t.ctid, t.a\n Sort Key: t.a\n -> Seq Scan on public.tgt t\n Output: t.ctid, t.a\n SubPlan 2\n -> Seq Scan on public.ref r_1\n Output: r_1.a\n Filter: (r_1.a = t.ctid)\n\nThe final filter condition \"(r_1.a = t.ctid)\" is incorrect, and should\nbe \"(r_1.a = s.a)\".\n\nWhat's happening is that the right hand side of that filter expression\nis an input Param node which get_parameter() tries to display by\ncalling find_param_referent() and then drilling down through the\nancestor node (the ModifyTable node) to try to find the real name of\nthe variable (s.a).\n\nHowever, that isn't working properly for MERGE because the inner_plan\nand inner_tlist of the corresponding deparse_namespace aren't set\ncorrectly. Actually the inner_tlist is correct, but the inner_plan is\nset to the ModifyTable node, whereas it needs to be the outer child\nnode -- in a MERGE, any references to the source relation will be\nINNER_VAR references to the targetlist of the join node immediately\nunder the ModifyTable node.\n\nSo I think we want to do something like the attached.\n\nRegards,\nDean", "msg_date": "Tue, 12 Mar 2024 18:43:27 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Broken EXPLAIN output for SubPlan in MERGE" }, { "msg_contents": "On 2024-Mar-12, Dean Rasheed wrote:\n\n> While playing around with EXPLAIN and SubPlans, I noticed that there's\n> a bug in how this is handled for MERGE. [...]\n\n> However, that isn't working properly for MERGE because the inner_plan\n> and inner_tlist of the corresponding deparse_namespace aren't set\n> correctly. Actually the inner_tlist is correct, but the inner_plan is\n> set to the ModifyTable node, whereas it needs to be the outer child\n> node -- in a MERGE, any references to the source relation will be\n> INNER_VAR references to the targetlist of the join node immediately\n> under the ModifyTable node.\n\nHmm, interesting, thanks for fixing it (commit 33e729c5148c). I remember\nwondering whether the nodes ought to be set differently, and now I have to\nadmit that this\n\n if (((ModifyTable *) plan)->operation == CMD_MERGE)\n dpns->inner_plan = outerPlan(plan);\n\nis very funny-looking. But I didn't come up with any examples where it\nmattered.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n", "msg_date": "Thu, 21 Mar 2024 10:23:13 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Broken EXPLAIN output for SubPlan in MERGE" } ]
[ { "msg_contents": "Hello\n\n\n\nI noticed that the comment for declaring create_tidscan_paths() in src/include/optimizer/paths.h has a typo. The function is implemented in tidpath.c, not tidpath.h as stated, which does not exist.\n\n\n\nMade a small patch to correct it.\n\n\n\nThank you\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:[email protected]\n\nhttp://www.highgo.ca", "msg_date": "Tue, 12 Mar 2024 11:57:44 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "typo in paths.h" }, { "msg_contents": "On Wed, 13 Mar 2024 at 07:58, Cary Huang <[email protected]> wrote:\n> I noticed that the comment for declaring create_tidscan_paths() in src/include/optimizer/paths.h has a typo. The function is implemented in tidpath.c, not tidpath.h as stated, which does not exist.\n\nThank you. Pushed.\n\nDavid\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:35:03 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: typo in paths.h" } ]
[ { "msg_contents": "Hi,\n\nSeveral animals are timing out while waiting for catchup,\nsporadically. I don't know why. The oldest example I have found so\nfar by clicking around is:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-02-23%2015%3A44%3A35\n\nSo perhaps something was committed ~3 weeks ago triggered this.\n\nThere are many examples since, showing as recoveryCheck failures.\nApparently they are all on animals wrangled by Andres. Hmm. I think\nsome/all share a physical host, they seem to have quite high run time\nvariance, and they're using meson.\n\n\n", "msg_date": "Wed, 13 Mar 2024 10:53:40 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On Wed, Mar 13, 2024 at 10:53 AM Thomas Munro <[email protected]> wrote:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-02-23%2015%3A44%3A35\n\nAssuming it is due to a commit in master, and given the failure\nfrequency, I think it is very likely to be a change from this 3 day\nwindow of commits, and more likely in the top half dozen or so:\n\nd360e3cc60e Fix compiler warning on typedef redeclaration\n8af25652489 Introduce a new smgr bulk loading facility.\ne612384fc78 Fix mistake in SQL features list\nd13ff82319c Fix BF failure in commit 93db6cbda0.\nefa70c15c74 Make GetSlotInvalidationCause() return RS_INVAL_NONE on\nunexpected input\n93db6cbda03 Add a new slot sync worker to synchronize logical slots.\n3d47b75546d pgindent fix\nb6df0798a5e Fix the intermittent buildfarm failures in 031_column_list.\nfbc93b8b5f5 Remove custom Constraint node read/write implementations\n801792e528d Improve ERROR/LOG messages added by commits ddd5f4f54a and\n7a424ece48.\n011d60c4352 Speed up uuid_out() by not relying on a StringInfo\n943f7ae1c86 Add lookup table for replication slot conflict reasons\n28f3915b73f Remove superfluous 'pgprocno' field from PGPROC\n4989ce72644 MERGE ... DO NOTHING: require SELECT privileges\ned345c2728b Fix typo\n690805ca754 doc: Fix link to pg_ident_file_mappings view\nff9e1e764fc Add option force_initdb to PostgreSQL::Test::Cluster:init()\n75bcba6cbd2 Remove extra check_stack_depth() from dropconstraint_internal()\nfcd210d496d Doc: improve explanation of type interval, especially extract().\n489072ab7a9 Replace relids in lateral subquery parse tree during SJE\n74563f6b902 Revert \"Improve compression and storage support with inheritance\"\nd2ca9a50b5b Minor corrections for partition pruning\n818fefd8fd4 Fix race leading to incorrect conflict cause in\nInvalidatePossiblyObsoleteSlot()\n01ec4d89b91 doc: Use system-username instead of system-user\n\nI just haven't got a specific theory yet, as the logs are empty. 
I\nwonder if some kind of failures could start firing signals around to\nget us a stack.\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:00:28 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On Thu, Mar 14, 2024 at 03:00:28PM +1300, Thomas Munro wrote:\n> Assuming it is due to a commit in master, and given the failure\n> frequency, I think it is very likely to be a change from this 3 day\n> window of commits, and more likely in the top half dozen or so:\n> \n> d360e3cc60e Fix compiler warning on typedef redeclaration\n> 8af25652489 Introduce a new smgr bulk loading facility.\n> e612384fc78 Fix mistake in SQL features list\n> d13ff82319c Fix BF failure in commit 93db6cbda0.\n> efa70c15c74 Make GetSlotInvalidationCause() return RS_INVAL_NONE on\n> unexpected input\n> 93db6cbda03 Add a new slot sync worker to synchronize logical slots.\n> 3d47b75546d pgindent fix\n> b6df0798a5e Fix the intermittent buildfarm failures in 031_column_list.\n> fbc93b8b5f5 Remove custom Constraint node read/write implementations\n> 801792e528d Improve ERROR/LOG messages added by commits ddd5f4f54a and\n> 7a424ece48.\n> 011d60c4352 Speed up uuid_out() by not relying on a StringInfo\n> 943f7ae1c86 Add lookup table for replication slot conflict reasons\n> 28f3915b73f Remove superfluous 'pgprocno' field from PGPROC\n> 4989ce72644 MERGE ... DO NOTHING: require SELECT privileges\n> ed345c2728b Fix typo\n> 690805ca754 doc: Fix link to pg_ident_file_mappings view\n> ff9e1e764fc Add option force_initdb to PostgreSQL::Test::Cluster:init()\n> 75bcba6cbd2 Remove extra check_stack_depth() from dropconstraint_internal()\n> fcd210d496d Doc: improve explanation of type interval, especially extract().\n> 489072ab7a9 Replace relids in lateral subquery parse tree during SJE\n> 74563f6b902 Revert \"Improve compression and storage support with inheritance\"\n> d2ca9a50b5b Minor corrections for partition pruning\n> 818fefd8fd4 Fix race leading to incorrect conflict cause in\n> InvalidatePossiblyObsoleteSlot()\n> 01ec4d89b91 doc: Use system-username instead of system-user\n> \n> I just haven't got a specific theory yet, as the logs are empty. I\n> wonder if some kind of failures could start firing signals around to\n> get us a stack.\n\nThanks for providing this list and an analysis. \n\nHmm. Perhaps 8af25652489? That looks like the closest thing in the\nlist that could have played with the way WAL is generated, hence\npotentially impacting the records that are replayed.\n\n93db6cbda03, efa70c15c74 and 818fefd8fd4 came to mind, but they touch\nunrelated territory.\n--\nMichael", "msg_date": "Thu, 14 Mar 2024 11:26:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On Thu, Mar 14, 2024 at 3:27 PM Michael Paquier <[email protected]> wrote:\n> Hmm. Perhaps 8af25652489? That looks like the closest thing in the\n> list that could have played with the way WAL is generated, hence\n> potentially impacting the records that are replayed.\n\nYeah, I was wondering if its checkpoint delaying logic might have\ngot the checkpointer jammed or something like that, but I don't\ncurrently see how. Yeah, the replay of bulk newpages could be\nrelevant, but it's not exactly new technology. 
One thing I wondered\nabout is whether the Perl \"wait for catchup\" thing, which generates\nlarge volumes of useless log, could be somehow changed to actually\nshow the progress after some time. Something like \"I'm still waiting\nfor this replica to reach LSN X, but it has so far only reported LSN\nY, and here's a dump of the WAL around there\"?\n\n> 93db6cbda03, efa70c15c74 and 818fefd8fd4 came to mind, but they touch\n> unrelated territory.\n\nHmm.\n\n\n", "msg_date": "Thu, 14 Mar 2024 16:16:24 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hello Thomas and Michael,\n\n14.03.2024 06:16, Thomas Munro wrote:\n>\n> Yeah, I was wondering if its checkpoint delaying logic might have\n> got the checkpointer jammed or something like that, but I don't\n> currently see how. Yeah, the replay of bulk newpages could be\n> relevant, but it's not exactly new technology. One thing I wondered\n> about is whether the Perl \"wait for catchup\" thing, which generates\n> large volumes of useless log, could be somehow changed to actually\n> show the progress after some time. Something like \"I'm still waiting\n> for this replica to reach LSN X, but it has so far only reported LSN\n> Y, and here's a dump of the WAL around there\"?\n\nI have perhaps reproduced the issue here (at least I'm seeing something\nsimilar), and going to investigate the issue in the coming days, but what\nI'm confused with now is the duration of poll_query_until:\nFor the failure you referenced:\n[15:55:54.740](418.725s) # poll_query_until timed out executing this query:\n\nAnd a couple of others:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-08%2000%3A34%3A06\n[00:45:57.747](376.159s) # poll_query_until timed out executing this query:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-04%2016%3A32%3A17\n[16:45:24.870](407.970s) # poll_query_until timed out executing this query:\n\nCould it be that the timeout (360 sec?) is just not enough for the test\nunder the current (changed due to switch to meson) conditions?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 14 Mar 2024 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On Fri, Mar 15, 2024 at 7:00 AM Alexander Lakhin <[email protected]> wrote:\n> Could it be that the timeout (360 sec?) is just not enough for the test\n> under the current (changed due to switch to meson) conditions?\n\nHmm, well it looks like he switched over to meson around 42 days ago\n2024-02-01, looking at \"calliphoridae\" (skink has the extra\ncomplication of valgrind, let's look at a more 'normal' animal\ninstead). The first failure that looks like that on calliphoridae is\n19 days ago 2024-02-23, and after that it's happening every 3 days,\nsometimes in clusters.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=calliphoridae&br=HEAD\n\nBut you're right that under meson the test takes a lot longer, I guess\ndue to increased concurrency:\n\n287/287 postgresql:recovery / recovery/027_stream_regress\n OK 684.50s 6 subtests passed\n\nWith make we don't have an individual time per script, but for for all\nof the recovery tests we had for example:\n\nt/027_stream_regress.pl ............... 
ok\nAll tests successful.\nFiles=39, Tests=542, 65 wallclock secs ( 0.26 usr 0.06 sys + 20.16\ncusr 31.65 csys = 52.13 CPU)\n\n\n", "msg_date": "Fri, 15 Mar 2024 09:28:19 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Fri, Mar 15, 2024 at 7:00 AM Alexander Lakhin <[email protected]> wrote:\n>> Could it be that the timeout (360 sec?) is just not enough for the test\n>> under the current (changed due to switch to meson) conditions?\n\n> But you're right that under meson the test takes a lot longer, I guess\n> due to increased concurrency:\n\nWhat it seems to be is highly variable. Looking at calliphoridae's\nlast half dozen successful runs, I see reported times for\n027_stream_regress anywhere from 183 to 704 seconds. I wonder what\nelse is happening on that machine. Also, this is probably not\nhelping anything:\n\n 'extra_config' => {\n ...\n 'fsync = on'\n\nI would suggest turning that off and raising wait_timeout a good\ndeal, and then we'll see if calliphoridae gets any more stable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Mar 2024 16:56:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "14.03.2024 23:56, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n>> On Fri, Mar 15, 2024 at 7:00 AM Alexander Lakhin <[email protected]> wrote:\n>>> Could it be that the timeout (360 sec?) is just not enough for the test\n>>> under the current (changed due to switch to meson) conditions?\n>> But you're right that under meson the test takes a lot longer, I guess\n>> due to increased concurrency:\n> What it seems to be is highly variable. Looking at calliphoridae's\n> last half dozen successful runs, I see reported times for\n> 027_stream_regress anywhere from 183 to 704 seconds. I wonder what\n> else is happening on that machine. Also, this is probably not\n> helping anything:\n>\n> 'extra_config' => {\n> ...\n> 'fsync = on'\n>\n> I would suggest turning that off and raising wait_timeout a good\n> deal, and then we'll see if calliphoridae gets any more stable.\n\nI could reproduce similar failures with\nPG_TEST_EXTRA=wal_consistency_checking\nonly, running 5 tests in parallel on a slowed-down VM, so that test\nduration increased to ~1900 seconds, but perhaps that buildfarm machine\nhas a different bottleneck (I/O?) 
or it's concurrent workload is not\nuniform as in my experiments.\n\nMeanwhile, I've analyzed failed test logs from buildfarm and calculated\nthe percentage of WAL replayed before timeout.\nFor instance, one of the failures:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-18%2022%3A36%3A40\nstandby_1.log:\n2024-03-18 22:38:22.743 UTC [2010896][walreceiver][:0] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1\n...\n2024-03-18 22:50:02.439 UTC [2004203][checkpointer][:0] LOG: recovery restart point at 0/E00E030\n2024-03-18 22:50:02.439 UTC [2004203][checkpointer][:0] DETAIL: Last completed transaction was at log time 2024-03-18 \n22:41:26.647756+00.\n2024-03-18 22:50:12.841 UTC [2010896][walreceiver][:0] FATAL:  could not receive data from WAL stream: server closed the \nconnection unexpectedly\n\nprimary.log:\n2024-03-18 22:38:23.754 UTC [2012240][client backend][3/3:0] LOG: statement: GRANT ALL ON SCHEMA public TO public;\n# ^^^ One of the first records produced by `make check`\n...\n2024-03-18 22:41:26.647 UTC [2174047][client backend][10/752:0] LOG:  statement: ALTER VIEW my_property_secure SET \n(security_barrier=false);\n# ^^^ A record near the last completed transaction on standby\n...\n2024-03-18 22:44:13.226 UTC [2305844][client backend][22/3784:0] LOG:  statement: DROP TABLESPACE regress_tblspace_renamed;\n# ^^^ One of the last records produced by `make check`\n\n\\set t0 '22:38:23.754' \\set t1 '22:44:13.226' \\set tf '22:41:26.647756'\nselect extract(epoch from (:'tf'::time - :'t0'::time)) / extract(epoch from (:'t1'::time - :'t0'::time));\n~52%\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-18%2018%3A58%3A58\n~48%\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-18%2016%3A41%3A13\n~43%\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-18%2015%3A47%3A09\n~36%\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-15%2011%3A24%3A38\n~87%\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-03-17%2021%3A55%3A41\n~36%\n\nSo it still looks like a performance-related issue to me. And yes,\nfsync = off -> on greatly increases (~3x) the overall test duration in\nthat my environment.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 19 Mar 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-14 16:56:39 -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Fri, Mar 15, 2024 at 7:00 AM Alexander Lakhin <[email protected]> wrote:\n> >> Could it be that the timeout (360 sec?) is just not enough for the test\n> >> under the current (changed due to switch to meson) conditions?\n> \n> > But you're right that under meson the test takes a lot longer, I guess\n> > due to increased concurrency:\n> \n> What it seems to be is highly variable. Looking at calliphoridae's\n> last half dozen successful runs, I see reported times for\n> 027_stream_regress anywhere from 183 to 704 seconds. I wonder what\n> else is happening on that machine.\n\nThere's a lot of other animals on the same machine, however it's rarely fuly\nloaded (with either CPU or IO).\n\nI don't think the test just being slow is the issue here, e.g. 
in the last\nfailing iteration\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-20%2022%3A03%3A15\n\nthe tests completed\n\n2024-03-20 22:07:50.239 UTC [3937667][client backend][22/3255:0] LOG: statement: DROP ROLE regress_tablespace_user2;\n2024-03-20 22:07:50.251 UTC [3937667][client backend][:0] LOG: disconnection: session time: 0:00:12.431 user=bf database=regression host=[local]\n\nand we waited to replicate for quite a while:\n\n2024-03-20 22:14:01.904 UTC [56343][client backend][6/1925:0] LOG: connection authorized: user=bf database=postgres application_name=027_stream_regress.pl\n2024-03-20 22:14:01.930 UTC [56343][client backend][6/1926:0] LOG: statement: SELECT '0/15BA21B0' <= replay_lsn AND state = 'streaming'\n\t FROM pg_catalog.pg_stat_replication\n\t WHERE application_name IN ('standby_1', 'walreceiver')\n2024-03-20 22:14:01.958 UTC [56343][client backend][:0] LOG: disconnection: session time: 0:00:00.063 user=bf database=postgres host=[local]\n2024-03-20 22:14:02.083 UTC [3729516][postmaster][:0] LOG: received immediate shutdown request\n2024-03-20 22:14:04.970 UTC [3729516][postmaster][:0] LOG: database system is shut down\n\nThere was no activity for 7 minutes.\n\nSystem statistics show relatively low load CPU and IO load for the period from\n22:00 - 22:10.\n\n\nI suspect we have some more fundamental instability at our hands, there have\nbeen failures like this going back a while, and on various machines.\n\n\n\nI think at the very least we should make Cluster.pm's wait_for_catchup() print\nsome information when it times out - right now it's neigh on undebuggable,\nbecause we don't even log what we were waiting for and what the actual\nreplication position was.\n\n\n\n> Also, this is probably not\n> helping anything:\n> \n> 'extra_config' => {\n> ...\n> 'fsync = on'\n\nAt some point we had practically no test coverage of fsync, so I made my\nanimals use fsync. I think we still have little coverage. I probably could\nreduce the number of animals using it though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 17:41:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-20 17:41:47 -0700, Andres Freund wrote:\n> There's a lot of other animals on the same machine, however it's rarely fuly\n> loaded (with either CPU or IO).\n>\n> I don't think the test just being slow is the issue here, e.g. in the last\n> failing iteration\n>\n> [...]\n>\n> I suspect we have some more fundamental instability at our hands, there have\n> been failures like this going back a while, and on various machines.\n\nI'm somewhat confused by the timestamps in the log:\n\n[22:07:50.263](223.929s) ok 2 - regression tests pass\n...\n[22:14:02.051](371.788s) # poll_query_until timed out executing this query:\n\nI read this as 371.788s having passed between the messages. Which of course is\nmuch higher than PostgreSQL::Test::Utils::timeout_default=180\n\nAh.\n\nThe way that poll_query_until() implements timeouts seems decidedly\nsuboptimal. 
If a psql invocation, including query processing, takes any\nappreciateble amount of time, poll_query_until() waits much longer than it\nshoulds, because it very naively determines a number of waits ahead of time:\n\n\tmy $max_attempts = 10 * $PostgreSQL::Test::Utils::timeout_default;\n\tmy $attempts = 0;\n\n\twhile ($attempts < $max_attempts)\n\t{\n...\n\n\t\t# Wait 0.1 second before retrying.\n\t\tusleep(100_000);\n\n\t\t$attempts++;\n\t}\n\nIck.\n\nWhat's worse is that if the query takes too long, the timeout afaict never\ntakes effect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 19:50:24 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-20 17:41:45 -0700, Andres Freund wrote:\n> 2024-03-20 22:14:01.904 UTC [56343][client backend][6/1925:0] LOG: connection authorized: user=bf database=postgres application_name=027_stream_regress.pl\n> 2024-03-20 22:14:01.930 UTC [56343][client backend][6/1926:0] LOG: statement: SELECT '0/15BA21B0' <= replay_lsn AND state = 'streaming'\n> \t FROM pg_catalog.pg_stat_replication\n> \t WHERE application_name IN ('standby_1', 'walreceiver')\n> 2024-03-20 22:14:01.958 UTC [56343][client backend][:0] LOG: disconnection: session time: 0:00:00.063 user=bf database=postgres host=[local]\n> 2024-03-20 22:14:02.083 UTC [3729516][postmaster][:0] LOG: received immediate shutdown request\n> 2024-03-20 22:14:04.970 UTC [3729516][postmaster][:0] LOG: database system is shut down\n>\n> There was no activity for 7 minutes.\n>\n> System statistics show relatively low load CPU and IO load for the period from\n> 22:00 - 22:10.\n>\n>\n> I suspect we have some more fundamental instability at our hands, there have\n> been failures like this going back a while, and on various machines.\n\nI've reproduced something like this scenario locally, although I am not sure\nit is precisely what is happening on the buildfarm. At least here it looks\nlike the problem is that apply is lagging substantially:\n\n2024-03-20 22:43:11.024 PDT [1023505][walreceiver][:0][] DEBUG: sendtime 2024-03-20 22:43:11.024348-07 receipttime 2024-03-20 22:43:11.02437-07 replication apply delay 285322 ms transfer latency 1 ms\n\nWhich then means that we'll wait for a long time for apply to finish:\n\nWaiting for replication conn standby_1's replay_lsn to pass 0/14385E20 on primary\n[22:41:34.521](0.221s) # state before polling:\n# pid | 1023508\n# application_name | standby_1\n# sent_lsn | 0/14385E20\n# primary_wal_lsn | 0/14385E20\n# standby_write_lsn | 0/14385E20\n# primary_flush_lsn | 0/14385E20\n# standby_flush_lsn | 0/14385E20\n# standby_replay_lsn | 0/126D5C58\n...\n[22:43:16.376](0.161s) # running query, attempt 679/1800\n[22:43:16.627](0.251s) # state now:\n# pid | 1023508\n# application_name | standby_1\n# sent_lsn | 0/14778468\n# primary_wal_lsn | 0/14778468\n# standby_write_lsn | 0/14778468\n# primary_flush_lsn | 0/14778468\n# standby_flush_lsn | 0/14778468\n# standby_replay_lsn | 0/14778468\n\n\n\nI am not sure I have debugged why exactly, but it sure looks like one part is\nthe startup process being busy unlinking files synchronously. 
This appears to\nbe exacerbated by mdunlinkfork() first truncating and then separately\nunlinking the file - that looks to trigger a lot of filesystem journal\nflushes (on xfs).\n\nWe also spend a fair bit of time updating the control file, because we flush\nthe WAL when replaying a transaction commit with a relation unlink. That also\nbadly interacts with doing metadata operations...\n\nThirdly, we flush received WAL extremely granularly at times, which requires\nanother fsync:\n2024-03-20 23:30:21.469 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB0000\n2024-03-20 23:30:21.473 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB0170\n2024-03-20 23:30:21.479 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB2528\n2024-03-20 23:30:21.480 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB58C8\n2024-03-20 23:30:21.487 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB7DA0\n2024-03-20 23:30:21.490 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BB92B0\n2024-03-20 23:30:21.494 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BBBAC0\n2024-03-20 23:30:21.496 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BBCCC0\n2024-03-20 23:30:21.499 PDT [1525084][walreceiver][:0][] LOG: flushed received WAL up to 0/13BBCE18\n\nThis all when we're quite far behind with apply...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 23:39:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-20 17:41:45 -0700, Andres Freund wrote:\n> On 2024-03-14 16:56:39 -0400, Tom Lane wrote:\n> > Also, this is probably not\n> > helping anything:\n> >\n> > 'extra_config' => {\n> > ...\n> > 'fsync = on'\n>\n> At some point we had practically no test coverage of fsync, so I made my\n> animals use fsync. I think we still have little coverage. I probably could\n> reduce the number of animals using it though.\n\nI think there must be some actual regression involved. The frequency of\nfailures on HEAD vs failures on 16 - both of which run the tests concurrently\nvia meson - is just vastly different. I'd expect the absolute number of\nfailures in 027_stream_regress.pl to differ between branches due to fewer runs\non 16, but there's no explanation for the difference in percentage of\nfailures. 
My menagerie had only a single recoveryCheck failure on !HEAD in the\nlast 30 days, but in the vicinity of 100 on HEAD\nhttps://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=30&stage=recoveryCheck&filter=Submit\n\n\nIf anything the load when testing back branch changes is higher, because\ncommonly back-branch builds are happening on all branches, so I don't think\nthat can be the explanation either.\n\n From what I can tell the pattern changed on 2024-02-16 19:39:02 - there was a\nrash of recoveryCheck failures in the days before that too, but not\n027_stream_regress.pl in that way.\n\n\nIt certainly seems suspicious that one commit before the first observed failure\nis\n2024-02-16 11:09:11 -0800 [73f0a132660] Pass correct count to WALRead().\n\nOf course the failure rate is low enough that it could have been a day or two\nbefore that, too.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Mar 2024 20:56:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I think there must be some actual regression involved. The frequency of\n> failures on HEAD vs failures on 16 - both of which run the tests concurrently\n> via meson - is just vastly different.\n\nAre you sure it's not just that the total time to run the core\nregression tests has grown to a bit more than what the test timeout\nallows for?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 00:00:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-26 00:00:38 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think there must be some actual regression involved. 
The frequency of\n> > failures on HEAD vs failures on 16 - both of which run the tests concurrently\n> > via meson - is just vastly different.\n>\n> Are you sure it's not just that the total time to run the core\n> regression tests has grown to a bit more than what the test timeout\n> allows for?\n\nYou're right, that could be it - in a way at least, the issue is replay not\ncatching up within 180s, so it'd have to be the data volume growing, I think.\n\nBut it doesn't look like the regression volume meaningfully grew around that\ntime?\n\nI guess I'll try to write a buildfarm database query to extract how long that\nphase of the test took from all runs on my menagerie, not just the failing\none, and see if there's a visible trend.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Mar 2024 21:28:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-03-26 00:00:38 -0400, Tom Lane wrote:\n>> Are you sure it's not just that the total time to run the core\n>> regression tests has grown to a bit more than what the test timeout\n>> allows for?\n\n> You're right, that could be it - in a way at least, the issue is replay not\n> catching up within 180s, so it'd have to be the data volume growing, I think.\n> But it doesn't look like the regression volume meaningfully grew around that\n> time?\n\nNo, but my impression is that the failure rate has been getting slowly\nworse for awhile now.\n\n> I guess I'll try to write a buildfarm database query to extract how long that\n> phase of the test took from all runs on my menagerie, not just the failing\n> one, and see if there's a visible trend.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 00:54:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-03-26 00:54:54 -0400, Tom Lane wrote:\n> > I guess I'll try to write a buildfarm database query to extract how long that\n> > phase of the test took from all runs on my menagerie, not just the failing\n> > one, and see if there's a visible trend.\n>\n> +1\n\nOnly the query for successful runs has finished, and it looks like the error\ncse is still going to take a while longer, so here are excerpts from a query\nshowing how long 027_stream_regress.pl took to succeed:\n\n sysname | date | count | avg_duration\n---------------+------------+-------+--------------\n\n calliphoridae | 2024-01-29 | 10 | 435\n calliphoridae | 2024-02-05 | 25 | 496\n calliphoridae | 2024-02-12 | 36 | 522\n calliphoridae | 2024-02-19 | 25 | 445\n calliphoridae | 2024-02-26 | 35 | 516\n calliphoridae | 2024-03-04 | 53 | 507\n calliphoridae | 2024-03-11 | 51 | 532\n calliphoridae | 2024-03-18 | 53 | 548\n calliphoridae | 2024-03-25 | 13 | 518\n\n culicidae | 2024-01-29 | 11 | 420\n culicidae | 2024-02-05 | 31 | 485\n culicidae | 2024-02-12 | 35 | 513\n culicidae | 2024-02-19 | 29 | 489\n culicidae | 2024-02-26 | 36 | 512\n culicidae | 2024-03-04 | 63 | 541\n culicidae | 2024-03-11 | 62 | 597\n culicidae | 2024-03-18 | 56 | 603\n culicidae | 2024-03-25 | 16 | 550\n\n tamandua | 2024-01-29 | 13 | 420\n tamandua | 2024-02-05 | 29 | 433\n tamandua | 2024-02-12 | 34 | 431\n tamandua | 2024-02-19 | 27 | 382\n tamandua | 2024-02-26 | 36 | 492\n tamandua | 2024-03-04 | 60 | 475\n tamandua | 2024-03-11 | 56 | 533\n tamandua | 2024-03-18 
| 54 | 527\n tamandua | 2024-03-25 | 21 | 507\n\nParticularly on tamandua it does look like there has been an upwards trend.\n\nLate, will try to look more in the next few days.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Mar 2024 00:59:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hello Andres,\n\n26.03.2024 10:59, Andres Freund wrote:\n> Late, will try to look more in the next few days.\n>\n\nAFAICS, last 027_streaming_regress.pl failures on calliphoridae,\nculicidae, tamandua occurred before 2024-03-27:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-26%2004%3A07%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-03-22%2013%3A26%3A21\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-24%2007%3A44%3A27\n\nSo it looks like the issue resolved, but there is another apparently\nperformance-related issue: deadlock-parallel test failures.\n\nA recent one:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-04-02%2022%3A20%3A22\ntest deadlock-parallel            ... FAILED   345099 ms\n\n+isolationtester: canceling step d2a1 after 300 seconds\n  step d2a1: <... completed>\n-  sum\n------\n-10000\n-(1 row)\n-\n...\n\nThe server log shows:\n2024-04-02 23:56:45.353 UTC [3583878][client backend][5/530:0] LOG: statement: SET force_parallel_mode = on;\n...\n                   SELECT lock_share(3,x) FROM bigt LIMIT 1;\n2024-04-02 23:56:45.364 UTC [3583876][client backend][3/2732:0] LOG:  execute isolationtester_waiting: SELECT \npg_catalog.pg_isolation_test_session_is_blocked($1, '{3583877,3583878,3583879,3583880}')\n2024-04-02 23:56:45.364 UTC [3583876][client backend][3/2732:0] DETAIL:  parameters: $1 = '3583878'\n...\n2024-04-02 23:57:28.967 UTC [3583876][client backend][3/5097:0] LOG:  execute isolationtester_waiting: SELECT \npg_catalog.pg_isolation_test_session_is_blocked($1, '{3583877,3583878,3583879,3583880}')\n2024-04-02 23:57:28.967 UTC [3583876][client backend][3/5097:0] DETAIL:  parameters: $1 = '3583877'\n2024-04-02 23:57:29.016 UTC [3583877][client backend][4/530:0] LOG: statement: COMMIT;\n2024-04-02 23:57:29.039 UTC [3583876][client backend][3/5098:0] LOG:  execute isolationtester_waiting: SELECT \npg_catalog.pg_isolation_test_session_is_blocked($1, '{3583877,3583878,3583879,3583880}')\n2024-04-02 23:57:29.039 UTC [3583876][client backend][3/5098:0] DETAIL:  parameters: $1 = '3583879'\n...\n2024-04-03 00:02:29.096 UTC [3583876][client backend][3/9472:0] LOG:  execute isolationtester_waiting: SELECT \npg_catalog.pg_isolation_test_session_is_blocked($1, '{3583877,3583878,3583879,3583880}')\n2024-04-03 00:02:29.096 UTC [3583876][client backend][3/9472:0] DETAIL:  parameters: $1 = '3583878'\n2024-04-03 00:02:29.172 UTC [3905345][not initialized][:0] LOG: connection received: host=[local]\n2024-04-03 00:02:29.240 UTC [3583878][client backend][5/530:0] ERROR:  canceling statement due to user request\n\nThe last step duration is 00:02:29.096 - 23:57:29.039 ~ 300 seconds\n(default max_step_wait for REL_15_STABLE- (for REL_16_STABLE+ the default\nvalue was increased to 360 by c99c67fc4)).\n\nThe average deadlock-parallel duration for REL_15_STABLE on canebrake is\naround 128 seconds (for 140 runs I analyzed), but we can find also:\ntest deadlock-parallel            ... 
ok       377895 ms\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=canebrake&dt=2024-03-27%2001%3A06%3A24&stg=isolation-check\ntest deadlock-parallel            ... ok       302549 ms\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=canebrake&dt=2023-11-06%2012%3A47%3A01&stg=isolation-check\ntest deadlock-parallel            ... ok       255045 ms\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=canebrake&dt=2023-11-09%2010%3A02%3A59&stg=isolation-check\n\nThe similar situation on phycodurus:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2024-02-11%2021:05:41\ntest deadlock-parallel            ... FAILED   389381 ms\n\nThe average deadlock-parallel duration for REL_13_STABLE on phycodurus is\naround 78 seconds (for 138 recent runs), but there were also:\ntest deadlock-parallel            ... ok       441736 ms\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=phycodurus&dt=2024-03-04%2015%3A19%3A04&stg=isolation-check\ntest deadlock-parallel            ... ok       187844 ms\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=phycodurus&dt=2023-11-03%2016%3A13%3A46&stg=isolation-check\n\nAnd also pogona, REL_14_STABLE:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2024-02-20%2003%3A50%3A49\ntest deadlock-parallel            ... FAILED   425482 ms\n\n(I could reach similar duration on a slowed-down VM, with JIT enabled as\non these animals.)\n\nSo, maybe these machines require larger PGISOLATIONTIMEOUT or there is\nstill some OS/environment issue there?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 4 Apr 2024 19:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hi,\n\nOn 2024-04-04 19:00:00 +0300, Alexander Lakhin wrote:\n> 26.03.2024 10:59, Andres Freund wrote:\n> > Late, will try to look more in the next few days.\n> > \n> \n> AFAICS, last 027_streaming_regress.pl failures on calliphoridae,\n> culicidae, tamandua occurred before 2024-03-27:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-03-26%2004%3A07%3A30\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-03-22%2013%3A26%3A21\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-24%2007%3A44%3A27\n> \n> So it looks like the issue resolved, but there is another apparently\n> performance-related issue: deadlock-parallel test failures.\n\nI reduced test concurrency a bit. I hadn't quite realized how the buildfarm\nconfig and meson test concurrency interact. But there's still something off\nwith the frequency of fsyncs during replay, but perhaps that doesn't qualify\nas a bug.\n\n\n> A recent one:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=canebrake&dt=2024-04-02%2022%3A20%3A22\n> test deadlock-parallel            ... FAILED   345099 ms\n\n> (I could reach similar duration on a slowed-down VM, with JIT enabled as\n> on these animals.)\n> \n> So, maybe these machines require larger PGISOLATIONTIMEOUT or there is\n> still some OS/environment issue there?\n\nHm, possible. 
Forcing every query to be JITed, in a debug build of LLVM is\nabsurdly expensive.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Apr 2024 10:00:55 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hello Andres,\n\n>> So it looks like the issue resolved, but there is another apparently\n>> performance-related issue: deadlock-parallel test failures.\n> I reduced test concurrency a bit. I hadn't quite realized how the buildfarm\n> config and meson test concurrency interact. But there's still something off\n> with the frequency of fsyncs during replay, but perhaps that doesn't qualify\n> as a bug.\n\nIt looks like that set of animals is still suffering from extreme load.\nPlease take a look at the today's failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-06-04%2002%3A44%3A19\n\n1/1 postgresql:regress-running / regress-running/regress TIMEOUT        3000.06s   killed by signal 15 SIGTERM\n\ninst/logfile ends with:\n2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] ERROR:  column \"c2\" of relation \"test_add_column\" \nalready exists\n2024-06-04 03:39:24.664 UTC [3905755][client backend][5/1787:16793] STATEMENT:  ALTER TABLE test_add_column\n         ADD COLUMN c2 integer, -- fail because c2 already exists\n         ADD COLUMN c3 integer primary key;\n2024-06-04 03:39:30.815 UTC [3905755][client backend][5/0:0] LOG: could not send data to client: Broken pipe\n2024-06-04 03:39:30.816 UTC [3905755][client backend][5/0:0] FATAL: connection to client lost\n\n\"ALTER TABLE test_add_column\" is from the alter_table test, which executed\nin the group 21 out of 25.\n\nAnother similar failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=skink&dt=2024-05-24%2002%3A22%3A26&stg=install-check-C\n\n1/1 postgresql:regress-running / regress-running/regress TIMEOUT        3000.06s   killed by signal 15 SIGTERM\n\ninst/logfile ends with:\n2024-05-24 03:18:51.469 UTC [998579][client backend][7/1792:16786] ERROR:  could not change table \"logged1\" to unlogged \nbecause it references logged table \"logged2\"\n2024-05-24 03:18:51.469 UTC [998579][client backend][7/1792:16786] STATEMENT:  ALTER TABLE logged1 SET UNLOGGED;\n(This is the alter_table test again.)\n\nI've analyzed duration of the regress-running/regress test for the recent\n167 runs on skink and found that the average duration is 1595 seconds, but\nthere were much longer test runs:\n2979.39: \nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=skink&dt=2024-05-01%2004%3A15%3A29&stg=install-check-C\n2932.86: \nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=skink&dt=2024-04-28%2018%3A57%3A37&stg=install-check-C\n2881.78: \nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=skink&dt=2024-05-15%2020%3A53%3A30&stg=install-check-C\n\nSo it seems that the default timeout is not large enough for these\nconditions. 
(I've counted 10 such timeout failures of 167 test runs.)\n\nAlso, 027_stream_regress still fails due to the same reason:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-22%2021%3A55%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-05-22%2021%3A54%3A50\n(It's remarkable that these two animals failed at the same time.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 4 Jun 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Hello Andrew,\n\n04.06.2024 13:00, Alexander Lakhin wrote:\n> Also, 027_stream_regress still fails due to the same reason:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-22%2021%3A55%3A03\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-05-22%2021%3A54%3A50\n> (It's remarkable that these two animals failed at the same time.)\n>\n\nIt looks like crake is failing now due to other reasons (not just\nconcurrency) — it produced 10+ failures of the\n027_stream_regress test starting from July, 9.\n\nThe first such failure on REL_16_STABLE was [1], and that was the first run\nwith 'PG_TEST_EXTRA' => '... wal_consistency_checking'.\n\nThere is one known issue related to wal_consistency_checking [2], but I\nsee no \"incorrect resource manager data checksum\" in the failure log...\n\nMoreover, the first such failure on REL_17_STABLE was [3], but that run\nwas performed without wal_consistency_checking, as far as I can see.\n\nCan that failure be also related to the OS upgrade (I see that back in\nJune crake was running on Fedora 39, but now it's running on Fedora 40)?\n\nSo maybe we have two factors combined or there is another one?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-17%2014%3A56%3A09\n[2] https://www.postgresql.org/message-id/[email protected]\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-09%2021%3A37%3A04\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 25 Jul 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "\nOn 2024-07-25 Th 12:00 AM, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 04.06.2024 13:00, Alexander Lakhin wrote:\n>> Also, 027_stream_regress still fails due to the same reason:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-05-22%2021%3A55%3A03 \n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-05-22%2021%3A54%3A50 \n>>\n>> (It's remarkable that these two animals failed at the same time.)\n>>\n>\n> It looks like crake is failing now due to other reasons (not just\n> concurrency) — it produced 10+ failures of the\n> 027_stream_regress test starting from July, 9.\n>\n> The first such failure on REL_16_STABLE was [1], and that was the \n> first run\n> with 'PG_TEST_EXTRA' => '... 
wal_consistency_checking'.\n>\n> There is one known issue related to wal_consistency_checking [2], but I\n> see no \"incorrect resource manager data checksum\" in the failure log...\n>\n> Moreover, the first such failure on REL_17_STABLE was [3], but that run\n> was performed without wal_consistency_checking, as far as I can see.\n>\n> Can that failure be also related to the OS upgrade (I see that back in\n> June crake was running on Fedora 39, but now it's running on Fedora 40)?\n>\n> So maybe we have two factors combined or there is another one?\n>\n> [1] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-17%2014%3A56%3A09\n> [2] \n> https://www.postgresql.org/message-id/[email protected]\n> [3] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-09%2021%3A37%3A04\n\n\n\nUnlikely. The change in OS version was on June 17, more than a month ago.\n\nBut yes we do seem to have seen a lot of recovery_check failures on \ncrake in the last 8 days, which is roughly when I changed PG_TEST_EXTRA \nto get more coverage.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 14:08:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> But yes we do seem to have seen a lot of recovery_check failures on \n> crake in the last 8 days, which is roughly when I changed PG_TEST_EXTRA \n> to get more coverage.\n\nI'm confused by crake's buildfarm logs. AFAICS it is not running\nrecovery-check at all in most of the runs; at least there is no\nmention of that step, for example here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n\nIt seems implausible that it would only run the test occasionally,\nso what I suspect is a bug in the buildfarm client causing it to\nomit that step's log if successful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jul 2024 15:06:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "I wrote:\n> I'm confused by crake's buildfarm logs. AFAICS it is not running\n> recovery-check at all in most of the runs; at least there is no\n> mention of that step, for example here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n\nOh, I see it: the log file that is called recovery-check in a\nfailing run is called misc-check if successful. 
That seems\nmighty bizarre, and it's not how my own animals behave.\nSomething weird about the meson code path, perhaps?\n\nAnyway, in this successful run:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2018%3A57%3A02&stg=misc-check\n\nhere are some salient test timings:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.18s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 15.95s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 16.29s 14 subtests passed\n 17/297 postgresql:isolation / isolation/isolation OK 71.60s 119 subtests passed\n 41/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 169.13s 18 subtests passed\n140/297 postgresql:initdb / initdb/001_initdb OK 41.34s 52 subtests passed\n170/297 postgresql:recovery / recovery/027_stream_regress OK 469.49s 9 subtests passed\n\nwhile in the next, failing run\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05&stg=recovery-check\n\nthe same tests took:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.22s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 56.62s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 71.92s 14 subtests passed\n 21/297 postgresql:isolation / isolation/isolation OK 299.12s 119 subtests passed\n 31/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 344.42s 18 subtests passed\n159/297 postgresql:initdb / initdb/001_initdb OK 344.46s 52 subtests passed\n162/297 postgresql:recovery / recovery/027_stream_regress ERROR 840.84s exit status 29\n\nBased on this, it seems fairly likely that crake is simply timing out\nas a consequence of intermittent heavy background activity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jul 2024 17:14:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On Fri, Jul 26, 2024 at 9:14 AM Tom Lane <[email protected]> wrote:\n> Based on this, it seems fairly likely that crake is simply timing out\n> as a consequence of intermittent heavy background activity.\n\nWould it be better to keep going as long as progress is being made?\nI.e. time out only when the relevant LSN stops advancing for N\nseconds? Or something like that...\n\n\n", "msg_date": "Fri, 26 Jul 2024 10:10:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On 2024-07-25 Th 5:14 PM, Tom Lane wrote:\n> I wrote:\n>> I'm confused by crake's buildfarm logs. AFAICS it is not running\n>> recovery-check at all in most of the runs; at least there is no\n>> mention of that step, for example here:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n> Oh, I see it: the log file that is called recovery-check in a\n> failing run is called misc-check if successful. That seems\n> mighty bizarre, and it's not how my own animals behave.\n> Something weird about the meson code path, perhaps?\n\n\nYes, it was discussed some time ago. As suggested by Andres, we run the \nmeson test suite more or less all together (in effect like \"make \ncheckworld\" but without the main regression suite, which is run on its \nown first), rather than in the separate (and serialized) way we do with \nthe configure/make tests. That results in significant speedup. 
If the \ntests fail we report the failure as happening at the first failure we \nencounter, which is possibly less than ideal, but I haven't got a better \nidea.\n\n\n>\n> Anyway, in this successful run:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2018%3A57%3A02&stg=misc-check\n>\n> here are some salient test timings:\n>\n> 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.18s 9 subtests passed\n> 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 15.95s 12 subtests passed\n> 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 16.29s 14 subtests passed\n> 17/297 postgresql:isolation / isolation/isolation OK 71.60s 119 subtests passed\n> 41/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 169.13s 18 subtests passed\n> 140/297 postgresql:initdb / initdb/001_initdb OK 41.34s 52 subtests passed\n> 170/297 postgresql:recovery / recovery/027_stream_regress OK 469.49s 9 subtests passed\n>\n> while in the next, failing run\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05&stg=recovery-check\n>\n> the same tests took:\n>\n> 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.22s 9 subtests passed\n> 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 56.62s 12 subtests passed\n> 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 71.92s 14 subtests passed\n> 21/297 postgresql:isolation / isolation/isolation OK 299.12s 119 subtests passed\n> 31/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 344.42s 18 subtests passed\n> 159/297 postgresql:initdb / initdb/001_initdb OK 344.46s 52 subtests passed\n> 162/297 postgresql:recovery / recovery/027_stream_regress ERROR 840.84s exit status 29\n>\n> Based on this, it seems fairly likely that crake is simply timing out\n> as a consequence of intermittent heavy background activity.\n>\n\n\nThe latest failure is this:\n\n\nWaiting for replication conn standby_1's replay_lsn to pass 2/88E4E260 on primary\n[16:40:29.481](208.545s) # poll_query_until timed out executing this query:\n# SELECT '2/88E4E260' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for catchup at /home/andrew/bf/root/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.\n\n\nMaybe it's a case where the system is overloaded, I dunno. I wouldn't bet my house on it. Pretty much nothing else runs on this machine.\n\nI have added a mild throttle to the buildfarm config so it doesn't try \nto run every branch at once. Maybe I also need to bring down the number \nor meson jobs too? But I suspect there's something deeper. Prior to the \nfailure of this test 10 days ago it hadn't failed in a very long time. \nThe OS was upgraded a month ago. Eight or so days ago I changed \nPG_TEST_EXTRA. I can't think of anything else.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-25 Th 5:14 PM, Tom Lane\n wrote:\n\n\nI wrote:\n\n\nI'm confused by crake's buildfarm logs. AFAICS it is not running\nrecovery-check at all in most of the runs; at least there is no\nmention of that step, for example here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n\n\n\nOh, I see it: the log file that is called recovery-check in a\nfailing run is called misc-check if successful. 
That seems\nmighty bizarre, and it's not how my own animals behave.\nSomething weird about the meson code path, perhaps?\n\n\n\nYes, it was discussed some time ago. As suggested by Andres, we\n run the meson test suite more or less all together (in effect like\n \"make checkworld\" but without the main regression suite, which is\n run on its own first), rather than in the separate (and\n serialized) way we do with the configure/make tests. That results\n in significant speedup. If the tests fail we report the failure as\n happening at the first failure we encounter, which is possibly\n less than ideal, but I haven't got a better idea.\n\n\n\n\n\n\nAnyway, in this successful run:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2018%3A57%3A02&stg=misc-check\n\nhere are some salient test timings:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.18s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 15.95s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 16.29s 14 subtests passed\n 17/297 postgresql:isolation / isolation/isolation OK 71.60s 119 subtests passed\n 41/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 169.13s 18 subtests passed\n140/297 postgresql:initdb / initdb/001_initdb OK 41.34s 52 subtests passed\n170/297 postgresql:recovery / recovery/027_stream_regress OK 469.49s 9 subtests passed\n\nwhile in the next, failing run\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05&stg=recovery-check\n\nthe same tests took:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.22s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 56.62s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 71.92s 14 subtests passed\n 21/297 postgresql:isolation / isolation/isolation OK 299.12s 119 subtests passed\n 31/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 344.42s 18 subtests passed\n159/297 postgresql:initdb / initdb/001_initdb OK 344.46s 52 subtests passed\n162/297 postgresql:recovery / recovery/027_stream_regress ERROR 840.84s exit status 29\n\nBased on this, it seems fairly likely that crake is simply timing out\nas a consequence of intermittent heavy background activity.\n\n\n\n\n\n\n\nThe latest failure is this:\n\n\nWaiting for replication conn standby_1's replay_lsn to pass 2/88E4E260 on primary\n[16:40:29.481](208.545s) # poll_query_until timed out executing this query:\n# SELECT '2/88E4E260' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for catchup at /home/andrew/bf/root/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.\n\n\nMaybe it's a case where the system is overloaded, I dunno. I wouldn't bet my house on it. Pretty much nothing else runs on this machine. \n\nI have added a mild throttle to the buildfarm config so it\n doesn't try to run every branch at once. Maybe I also need to\n bring down the number or meson jobs too? But I suspect there's\n something deeper. Prior to the failure of this test 10 days ago it\n hadn't failed in a very long time. The OS was upgraded a month\n ago. Eight or so days ago I changed PG_TEST_EXTRA. 
I can't think\n of anything else.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 25 Jul 2024 18:33:19 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On 2024-07-25 Th 6:33 PM, Andrew Dunstan wrote:\n>\n>\n> On 2024-07-25 Th 5:14 PM, Tom Lane wrote:\n>> I wrote:\n>>> I'm confused by crake's buildfarm logs. AFAICS it is not running\n>>> recovery-check at all in most of the runs; at least there is no\n>>> mention of that step, for example here:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n>> Oh, I see it: the log file that is called recovery-check in a\n>> failing run is called misc-check if successful. That seems\n>> mighty bizarre, and it's not how my own animals behave.\n>> Something weird about the meson code path, perhaps?\n>\n>\n> Yes, it was discussed some time ago. As suggested by Andres, we run \n> the meson test suite more or less all together (in effect like \"make \n> checkworld\" but without the main regression suite, which is run on its \n> own first), rather than in the separate (and serialized) way we do \n> with the configure/make tests. That results in significant speedup. If \n> the tests fail we report the failure as happening at the first failure \n> we encounter, which is possibly less than ideal, but I haven't got a \n> better idea.\n>\n>\n>> Anyway, in this successful run:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2018%3A57%3A02&stg=misc-check\n>>\n>> here are some salient test timings:\n>>\n>> 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.18s 9 subtests passed\n>> 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 15.95s 12 subtests passed\n>> 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 16.29s 14 subtests passed\n>> 17/297 postgresql:isolation / isolation/isolation OK 71.60s 119 subtests passed\n>> 41/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 169.13s 18 subtests passed\n>> 140/297 postgresql:initdb / initdb/001_initdb OK 41.34s 52 subtests passed\n>> 170/297 postgresql:recovery / recovery/027_stream_regress OK 469.49s 9 subtests passed\n>>\n>> while in the next, failing run\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05&stg=recovery-check\n>>\n>> the same tests took:\n>>\n>> 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.22s 9 subtests passed\n>> 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 56.62s 12 subtests passed\n>> 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 71.92s 14 subtests passed\n>> 21/297 postgresql:isolation / isolation/isolation OK 299.12s 119 subtests passed\n>> 31/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 344.42s 18 subtests passed\n>> 159/297 postgresql:initdb / initdb/001_initdb OK 344.46s 52 subtests passed\n>> 162/297 postgresql:recovery / recovery/027_stream_regress ERROR 840.84s exit status 29\n>>\n>> Based on this, it seems fairly likely that crake is simply timing out\n>> as a consequence of intermittent heavy background activity.\n>>\n>\n>\n> The latest failure is this:\n>\n>\n> Waiting for replication conn standby_1's replay_lsn to pass 2/88E4E260 on primary\n> [16:40:29.481](208.545s) # poll_query_until timed out executing this query:\n> # SELECT '2/88E4E260' <= replay_lsn AND state = 'streaming'\n> # FROM 
pg_catalog.pg_stat_replication\n> # WHERE application_name IN ('standby_1', 'walreceiver')\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> timed out waiting for catchup at /home/andrew/bf/root/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.\n>\n>\n> Maybe it's a case where the system is overloaded, I dunno. I wouldn't bet my house on it. Pretty much nothing else runs on this machine.\n>\n> I have added a mild throttle to the buildfarm config so it doesn't try \n> to run every branch at once. Maybe I also need to bring down the \n> number or meson jobs too? But I suspect there's something deeper. \n> Prior to the failure of this test 10 days ago it hadn't failed in a \n> very long time. The OS was upgraded a month ago. Eight or so days ago \n> I changed PG_TEST_EXTRA. I can't think of anything else.\n>\n>\n>\n\n\nThere seem to be a bunch of recent failures, and not just on crake, and \nnot just on HEAD: \n<https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=14&member=&stage=recoveryCheck&filter=Submit>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-25 Th 6:33 PM, Andrew\n Dunstan wrote:\n\n\n\n\n\nOn 2024-07-25 Th 5:14 PM, Tom Lane\n wrote:\n\n\nI wrote:\n\n\nI'm confused by crake's buildfarm logs. AFAICS it is not running\nrecovery-check at all in most of the runs; at least there is no\nmention of that step, for example here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2024-07-25%2013%3A27%3A02\n\n\nOh, I see it: the log file that is called recovery-check in a\nfailing run is called misc-check if successful. That seems\nmighty bizarre, and it's not how my own animals behave.\nSomething weird about the meson code path, perhaps?\n\n\n\nYes, it was discussed some time ago. As suggested by Andres, we\n run the meson test suite more or less all together (in effect\n like \"make checkworld\" but without the main regression suite,\n which is run on its own first), rather than in the separate (and\n serialized) way we do with the configure/make tests. That\n results in significant speedup. 
If the tests fail we report the\n failure as happening at the first failure we encounter, which is\n possibly less than ideal, but I haven't got a better idea.\n\n\n\n\nAnyway, in this successful run:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2018%3A57%3A02&stg=misc-check\n\nhere are some salient test timings:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.18s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 15.95s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 16.29s 14 subtests passed\n 17/297 postgresql:isolation / isolation/isolation OK 71.60s 119 subtests passed\n 41/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 169.13s 18 subtests passed\n140/297 postgresql:initdb / initdb/001_initdb OK 41.34s 52 subtests passed\n170/297 postgresql:recovery / recovery/027_stream_regress OK 469.49s 9 subtests passed\n\nwhile in the next, failing run\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-07-25%2020%3A18%3A05&stg=recovery-check\n\nthe same tests took:\n\n 1/297 postgresql:pg_upgrade / pg_upgrade/001_basic OK 0.22s 9 subtests passed\n 2/297 postgresql:pg_upgrade / pg_upgrade/003_logical_slots OK 56.62s 12 subtests passed\n 3/297 postgresql:pg_upgrade / pg_upgrade/004_subscription OK 71.92s 14 subtests passed\n 21/297 postgresql:isolation / isolation/isolation OK 299.12s 119 subtests passed\n 31/297 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 344.42s 18 subtests passed\n159/297 postgresql:initdb / initdb/001_initdb OK 344.46s 52 subtests passed\n162/297 postgresql:recovery / recovery/027_stream_regress ERROR 840.84s exit status 29\n\nBased on this, it seems fairly likely that crake is simply timing out\nas a consequence of intermittent heavy background activity.\n\n\n\n\n\n\n\nThe latest failure is this:\n\n\nWaiting for replication conn standby_1's replay_lsn to pass 2/88E4E260 on primary\n[16:40:29.481](208.545s) # poll_query_until timed out executing this query:\n# SELECT '2/88E4E260' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for catchup at /home/andrew/bf/root/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.\n\n\nMaybe it's a case where the system is overloaded, I dunno. I wouldn't bet my house on it. Pretty much nothing else runs on this machine. \n\nI have added a mild throttle to the buildfarm config so it\n doesn't try to run every branch at once. Maybe I also need to\n bring down the number or meson jobs too? But I suspect there's\n something deeper. Prior to the failure of this test 10 days ago\n it hadn't failed in a very long time. The OS was upgraded a\n month ago. Eight or so days ago I changed PG_TEST_EXTRA. 
I can't\n think of anything else.\n\n\n\n\n\n\n\n\nThere seem to be a bunch of recent failures, and not just on\n crake, and not just on HEAD:\n<https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=14&member=&stage=recoveryCheck&filter=Submit>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 31 Jul 2024 11:54:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> There seem to be a bunch of recent failures, and not just on crake, and \n> not just on HEAD: \n> <https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=14&member=&stage=recoveryCheck&filter=Submit>\n\nThere were a batch of recovery-stage failures ending about 9 days ago\ncaused by instability of the new 043_vacuum_horizon_floor.pl test.\nOnce you take those out it doesn't look quite so bad.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Jul 2024 12:05:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "\nOn 2024-07-31 We 12:05 PM, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> There seem to be a bunch of recent failures, and not just on crake, and\n>> not just on HEAD:\n>> <https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=14&member=&stage=recoveryCheck&filter=Submit>\n> There were a batch of recovery-stage failures ending about 9 days ago\n> caused by instability of the new 043_vacuum_horizon_floor.pl test.\n> Once you take those out it doesn't look quite so bad.\n\n\n\nWe'll see. I have switched crake from --run-parallel mode to --run-all \nmode i.e. the runs are serialized. Maybe that will be enough to stop the \nerrors. I'm still annoyed that this test is susceptible to load, if that \nis indeed what is the issue.\n\n\ncheers\n\n\nandrew\n\n\n--\n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 12:58:37 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> We'll see. I have switched crake from --run-parallel mode to --run-all \n> mode i.e. the runs are serialized. Maybe that will be enough to stop the \n> errors. I'm still annoyed that this test is susceptible to load, if that \n> is indeed what is the issue.\n\ncrake is still timing out intermittently on 027_streaming_regress.pl,\nso that wasn't it. I think we need more data. We know that the\nwait_for_catchup query is never getting to true:\n\n\tSELECT '$target_lsn' <= ${mode}_lsn AND state = 'streaming'\n\nbut we don't know if the LSN condition or the state condition is\nwhat is failing. And if it is the LSN condition, it'd be good\nto see the actual last LSN, so we can look for patterns like\nwhether there is a page boundary crossing involved. 
So I suggest\nadding something like the attached.\n\nIf we do this, I'd be inclined to instrument wait_for_slot_catchup\nand wait_for_subscription_sync similarly, but I thought I'd check\nfor contrary opinions first.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 11 Aug 2024 20:32:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "On 2024-08-11 Su 8:32 PM, Tom Lane wrote:\n> Andrew Dunstan<[email protected]> writes:\n>> We'll see. I have switched crake from --run-parallel mode to --run-all\n>> mode i.e. the runs are serialized. Maybe that will be enough to stop the\n>> errors. I'm still annoyed that this test is susceptible to load, if that\n>> is indeed what is the issue.\n> crake is still timing out intermittently on 027_streaming_regress.pl,\n> so that wasn't it. I think we need more data. We know that the\n> wait_for_catchup query is never getting to true:\n>\n> \tSELECT '$target_lsn' <= ${mode}_lsn AND state = 'streaming'\n>\n> but we don't know if the LSN condition or the state condition is\n> what is failing. And if it is the LSN condition, it'd be good\n> to see the actual last LSN, so we can look for patterns like\n> whether there is a page boundary crossing involved. So I suggest\n> adding something like the attached.\n>\n> If we do this, I'd be inclined to instrument wait_for_slot_catchup\n> and wait_for_subscription_sync similarly, but I thought I'd check\n> for contrary opinions first.\n>\n> \t\t\t\n\n\nSeems reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Mon, 12 Aug 2024 09:43:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-08-11 Su 8:32 PM, Tom Lane wrote:\n>> I think we need more data. We know that the\n>> wait_for_catchup query is never getting to true:\n>> \n>> SELECT '$target_lsn' <= ${mode}_lsn AND state = 'streaming'\n>> \n>> but we don't know if the LSN condition or the state condition is\n>> what is failing. 
And if it is the LSN condition, it'd be good\n>> to see the actual last LSN, so we can look for patterns like\n>> whether there is a page boundary crossing involved. So I suggest\n>> adding something like the attached.\n\n> Seems reasonable.\n\nPushed. In the event I made it just \"SELECT * FROM\" the relevant\nview: there will be few if any rows that aren't potentially\ninteresting, and filtering the columns doesn't seem like a\nforward-looking idea either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Aug 2024 13:21:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 027_streaming_regress.pl hangs" } ]
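A minimal SQL sketch of the catchup check discussed in the thread above, for readers who want to reproduce the diagnostics by hand; the LSN literal and application names are only illustrative values taken from the quoted 027_stream_regress log, and the second query is the kind of extra output the pushed instrumentation dumps on failure:

    -- The condition poll_query_until keeps retrying until the standby catches up.
    SELECT '2/88E4E260'::pg_lsn <= replay_lsn AND state = 'streaming'
      FROM pg_catalog.pg_stat_replication
     WHERE application_name IN ('standby_1', 'walreceiver');

    -- On timeout, dumping the whole view shows how far replay actually got
    -- and whether the connection was still in the 'streaming' state.
    SELECT * FROM pg_catalog.pg_stat_replication;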
[ { "msg_contents": "\nI and several colleagues have just been trying to build from a tarball \nwith meson.\n\n\n`meson setup build .` results in this:\n\n\n[...]\n\nMessage: checking for file conflicts between source and build directory\n\nmeson.build:2963:2: ERROR: Problem encountered:\n****\nNon-clean source code directory detected.\n\nTo build with meson the source tree may not have an in-place, ./configure\nstyle, build configured. You can have both meson and ./configure style \nbuilds\nfor the same source tree by building out-of-source / VPATH with\nconfigure. Alternatively use a separate check out for meson based builds.\n\n\nConflicting files in source directory:\n\n[huge list of files]\n\nThe conflicting files need to be removed, either by removing the files \nlisted\nabove, or by running configure and then make maintainer-clean.\n\n****\n\n\nThat seems pretty awful and unfriendly and I didn't see anything about \nit in the docs.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 02:11:01 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "meson vs tarballs" }, { "msg_contents": "On 13.03.24 07:11, Andrew Dunstan wrote:\n> I and several colleagues have just been trying to build from a tarball \n> with meson.\n\n> That seems pretty awful and unfriendly and I didn't see anything about \n> it in the docs.\n\nAt https://www.postgresql.org/docs/16/install-requirements.html is says:\n\n\"\"\"\nAlternatively, PostgreSQL can be built using Meson. This is currently \nexperimental and only works when building from a Git checkout (not from \na distribution tarball).\n\"\"\"\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 07:22:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson vs tarballs" }, { "msg_contents": "\nOn 2024-03-13 We 02:22, Peter Eisentraut wrote:\n> On 13.03.24 07:11, Andrew Dunstan wrote:\n>> I and several colleagues have just been trying to build from a \n>> tarball with meson.\n>\n>> That seems pretty awful and unfriendly and I didn't see anything \n>> about it in the docs.\n>\n> At https://www.postgresql.org/docs/16/install-requirements.html is says:\n>\n> \"\"\"\n> Alternatively, PostgreSQL can be built using Meson. This is currently \n> experimental and only works when building from a Git checkout (not \n> from a distribution tarball).\n> \"\"\"\n>\n\nAh!. Darn, I missed that. Thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 02:31:46 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs tarballs" }, { "msg_contents": "\nOn 2024-03-13 We 02:31, Andrew Dunstan wrote:\n>\n> On 2024-03-13 We 02:22, Peter Eisentraut wrote:\n>> On 13.03.24 07:11, Andrew Dunstan wrote:\n>>> I and several colleagues have just been trying to build from a \n>>> tarball with meson.\n>>\n>>> That seems pretty awful and unfriendly and I didn't see anything \n>>> about it in the docs.\n>>\n>> At https://www.postgresql.org/docs/16/install-requirements.html is says:\n>>\n>> \"\"\"\n>> Alternatively, PostgreSQL can be built using Meson. This is currently \n>> experimental and only works when building from a Git checkout (not \n>> from a distribution tarball).\n>> \"\"\"\n>>\n>\n> Ah!. Darn, I missed that. 
Thanks.\n>\n>\n>\n\nOf course, when release 17 comes out this had better not be the case, \nsince we have removed the custom Windows build system.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 02:42:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs tarballs" }, { "msg_contents": "On 13.03.24 07:42, Andrew Dunstan wrote:\n> On 2024-03-13 We 02:31, Andrew Dunstan wrote:\n>> On 2024-03-13 We 02:22, Peter Eisentraut wrote:\n>>> On 13.03.24 07:11, Andrew Dunstan wrote:\n>>>> I and several colleagues have just been trying to build from a \n>>>> tarball with meson.\n>>>\n>>>> That seems pretty awful and unfriendly and I didn't see anything \n>>>> about it in the docs.\n>>>\n>>> At https://www.postgresql.org/docs/16/install-requirements.html is says:\n>>>\n>>> \"\"\"\n>>> Alternatively, PostgreSQL can be built using Meson. This is currently \n>>> experimental and only works when building from a Git checkout (not \n>>> from a distribution tarball).\n>>> \"\"\"\n>>\n>> Ah!. Darn, I missed that. Thanks.\n\n> Of course, when release 17 comes out this had better not be the case, \n> since we have removed the custom Windows build system.\n\nYes, this has been changed in 17.\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 08:40:42 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson vs tarballs" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 13.03.24 07:42, Andrew Dunstan wrote:\n>> On 2024-03-13 We 02:31, Andrew Dunstan wrote:\n>>>> Alternatively, PostgreSQL can be built using Meson. This is currently \n>>>> experimental and only works when building from a Git checkout (not \n>>>> from a distribution tarball).\n\n>> Ah!. Darn, I missed that. Thanks.\n>> Of course, when release 17 comes out this had better not be the case, \n>> since we have removed the custom Windows build system.\n\n> Yes, this has been changed in 17.\n\nMy understanding is that pretty soon there will be no difference,\nie distribution tarballs will have the same contents as a git pull\n(less the .git infrastructure). If we're planning on making that\nhappen for 17, perhaps we'd better get on with it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Mar 2024 10:42:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson vs tarballs" } ]
[ { "msg_contents": "Hi,\n\nI noticed 3 regression test failures like $SUBJECT in cfbot runs for\nunrelated patches that probably shouldn't affect GIN, so I guess this\nis probably a problem in master. All three happened on FreeBSD, but I\ndoubt that's relevant, it's just that the FreeBSD CI task was randomly\nselected to also run the \"regress-running\" test suite, which is a sort\nof Meson equivalent of make installcheck (eg running tests against a\ncluster that was already running). As for why it might require\nregress-running or have started only recently, it could be due to\nyesterday's boost in the number of CPUs for FreeBSD CI, changing the\ntiming. (?)\n\nhttp://cfbot.cputube.org/highlights/regress-7.html\n\n\n", "msg_date": "Thu, 14 Mar 2024 14:37:07 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: error triggered for injection point\n gin-leave-leaf-split-incomplete" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I noticed 3 regression test failures like $SUBJECT in cfbot runs for\n> unrelated patches that probably shouldn't affect GIN, so I guess this\n> is probably a problem in master.\n\nHmm ... I have no insight on what's causing this, but \"error triggered\nfor\" is about as content-free an error message as I've ever seen.\nEven granting that it's developer-directed, it's still content-free:\nwe already know it's an error, and something must have triggered that,\nbut you're offering nothing about what. Can't we do better than that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Mar 2024 22:28:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ERROR: error triggered for injection point\n gin-leave-leaf-split-incomplete" } ]
[ { "msg_contents": "I was taking a look at the login event triggers work (nice work btw)\nand saw a couple of minor items that I thought would be worth cleaning\nup. This is mostly just clarifying the exiting docs and code comments.\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Wed, 13 Mar 2024 21:47:53 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "small_cleanups around login event triggers" }, { "msg_contents": "> On 14 Mar 2024, at 02:47, Robert Treat <[email protected]> wrote:\n\n> I was taking a look at the login event triggers work (nice work btw)\n\nThanks for reviewing committed code, that's something which doesn't happen\noften enough and is much appreciated.\n\n> and saw a couple of minor items that I thought would be worth cleaning\n> up. This is mostly just clarifying the exiting docs and code comments.\n\n+ either in a connection string or configuration file. Alternativly, you can\nThis should be \"Alternatively\" I think.\n\n- canceling connection in <application>psql</application> wouldn't cancel\n+ canceling a connection in <application>psql</application> will not cancel\nNitpickery (perhaps motivated by english not being my first language), but\nsince psql only deals with one connection I would expect this to read \"the\nconnection\".\n\n- * Returns true iff the lock was acquired.\n+ * Returns true if the lock was acquired.\nUsing \"iff\" here is being consistent with the rest of the file (and technically\ncorrect):\n\n$ grep -c \"if the lock was\" src/backend/storage/lmgr/lmgr.c\n1\n$ grep -c \"iff the lock was\" src/backend/storage/lmgr/lmgr.c\n5\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 13:21:29 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small_cleanups around login event triggers" }, { "msg_contents": "On Thu, Mar 14, 2024 at 8:21 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 14 Mar 2024, at 02:47, Robert Treat <[email protected]> wrote:\n>\n> > I was taking a look at the login event triggers work (nice work btw)\n>\n> Thanks for reviewing committed code, that's something which doesn't happen\n> often enough and is much appreciated.\n>\n> > and saw a couple of minor items that I thought would be worth cleaning\n> > up. This is mostly just clarifying the exiting docs and code comments.\n>\n> + either in a connection string or configuration file. Alternativly, you can\n> This should be \"Alternatively\" I think.\n>\n\nYes.\n\n> - canceling connection in <application>psql</application> wouldn't cancel\n> + canceling a connection in <application>psql</application> will not cancel\n> Nitpickery (perhaps motivated by english not being my first language), but\n> since psql only deals with one connection I would expect this to read \"the\n> connection\".\n>\n\nMy interpretation of this is that \"a connection\" is more correct\nbecause it could be your connection or someone else's connection (ie,\nyou are canceling one of many possible connections). 
Definitely\nnitpickery either way.\n\n> - * Returns true iff the lock was acquired.\n> + * Returns true if the lock was acquired.\n> Using \"iff\" here is being consistent with the rest of the file (and technically\n> correct):\n>\n> $ grep -c \"if the lock was\" src/backend/storage/lmgr/lmgr.c\n> 1\n> $ grep -c \"iff the lock was\" src/backend/storage/lmgr/lmgr.c\n> 5\n>\n\nAh, yeah, I was pretty focused on the event trigger stuff and didn't\nnotice it being used elsewhere; thought it was a typo, but I guess\nit's meant as shorthand for \"if and only if\", I wonder how many people\nare familiar with that.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Thu, 14 Mar 2024 09:21:18 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small_cleanups around login event triggers" }, { "msg_contents": "> On 14 Mar 2024, at 14:21, Robert Treat <[email protected]> wrote:\n> On Thu, Mar 14, 2024 at 8:21 AM Daniel Gustafsson <[email protected]> wrote:\n\n>> - canceling connection in <application>psql</application> wouldn't cancel\n>> + canceling a connection in <application>psql</application> will not cancel\n>> Nitpickery (perhaps motivated by english not being my first language), but\n>> since psql only deals with one connection I would expect this to read \"the\n>> connection\".\n>> \n> \n> My interpretation of this is that \"a connection\" is more correct\n> because it could be your connection or someone else's connection (ie,\n> you are canceling one of many possible connections). Definitely\n> nitpickery either way.\n\nFair point.\n\n>> - * Returns true iff the lock was acquired.\n>> + * Returns true if the lock was acquired.\n>> Using \"iff\" here is being consistent with the rest of the file (and technically\n>> correct):\n\n> Ah, yeah, I was pretty focused on the event trigger stuff and didn't\n> notice it being used elsewhere; thought it was a typo, but I guess\n> it's meant as shorthand for \"if and only if\", I wonder how many people\n> are familiar with that.\n\nI would like to think it's fairly widely understood among programmers, but I\nmight be dating myself in saying so.\n\nI went ahead and applied this with the fixes mentioned here with one more tiny\nchange to the last hunk of the patch to make it say \"login event trigger\"\nrather than just \"login trigger\". Thanks for the submission!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 00:22:18 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small_cleanups around login event triggers" }, { "msg_contents": "On Thu, Mar 14, 2024 at 7:23 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 14 Mar 2024, at 14:21, Robert Treat <[email protected]> wrote:\n> > On Thu, Mar 14, 2024 at 8:21 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> >> - canceling connection in <application>psql</application> wouldn't cancel\n> >> + canceling a connection in <application>psql</application> will not cancel\n> >> Nitpickery (perhaps motivated by english not being my first language), but\n> >> since psql only deals with one connection I would expect this to read \"the\n> >> connection\".\n> >>\n> >\n> > My interpretation of this is that \"a connection\" is more correct\n> > because it could be your connection or someone else's connection (ie,\n> > you are canceling one of many possible connections). 
Definitely\n> > nitpickery either way.\n>\n> Fair point.\n>\n> >> - * Returns true iff the lock was acquired.\n> >> + * Returns true if the lock was acquired.\n> >> Using \"iff\" here is being consistent with the rest of the file (and technically\n> >> correct):\n>\n> > Ah, yeah, I was pretty focused on the event trigger stuff and didn't\n> > notice it being used elsewhere; thought it was a typo, but I guess\n> > it's meant as shorthand for \"if and only if\", I wonder how many people\n> > are familiar with that.\n>\n> I would like to think it's fairly widely understood among programmers, but I\n> might be dating myself in saying so.\n>\n> I went ahead and applied this with the fixes mentioned here with one more tiny\n> change to the last hunk of the patch to make it say \"login event trigger\"\n> rather than just \"login trigger\". Thanks for the submission!\n>\n\nLGTM, thanks!\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Mon, 18 Mar 2024 11:22:15 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small_cleanups around login event triggers" } ]
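For readers less familiar with the feature whose documentation is being polished above, a minimal sketch of a login event trigger; the function and trigger names are just placeholders:

    -- A trivial login trigger. The doc hunks under discussion describe how to
    -- recover if a buggy login trigger starts blocking new connections.
    CREATE FUNCTION login_notice() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'login event trigger fired';
    END;
    $$;

    CREATE EVENT TRIGGER login_notice_trigger ON login
        EXECUTE FUNCTION login_notice();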
[ { "msg_contents": "Hello.\n\nA recent commit 6612185883 introduced two error messages that are\nidentical in text but differ in their placeholders.\n\n-\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %d bytes\",\n-\t\t\t\t\t filename, (int) rb, (int) st.st_size);\n+\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %zd of %lld bytes\",\n+\t\t\t\t\t filename, rb, (long long int) st.st_size);\n...\n-\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %d bytes\",\n+\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %u bytes\",\n \t\t\t\t\t rf->filename, rb, length);\n\nI'd be happy if the two messages kept consistency. I suggest aligning\ntypes instead of making the messages different, as attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 14 Mar 2024 13:20:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Inconsistent printf placeholders" }, { "msg_contents": "> On 14 Mar 2024, at 05:20, Kyotaro Horiguchi <[email protected]> wrote:\n\n> I'd be happy if the two messages kept consistency. I suggest aligning\n> types instead of making the messages different, as attached.\n\nI've only skimmed this so far but +1 on keeping the messages the same where\npossible to reduce translation work. Adding a comment on the message where the\ncasting is done to indicate that it is for translation might reduce the risk of\nit \"getting fixed\" down the line.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 13:45:41 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "On 14.03.24 05:20, Kyotaro Horiguchi wrote:\n> A recent commit 6612185883 introduced two error messages that are\n> identical in text but differ in their placeholders.\n> \n> -\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %d bytes\",\n> -\t\t\t\t\t filename, (int) rb, (int) st.st_size);\n> +\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %zd of %lld bytes\",\n> +\t\t\t\t\t filename, rb, (long long int) st.st_size);\n> ...\n> -\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %d bytes\",\n> +\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %d of %u bytes\",\n> \t\t\t\t\t rf->filename, rb, length);\n> \n> I'd be happy if the two messages kept consistency. I suggest aligning\n> types instead of making the messages different, as attached.\n\nIf you want to make them uniform, then I suggest the error messages \nshould both be \"%zd of %zu bytes\", which are the actual types read() \ndeals with.\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 14:02:46 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "Thank you for the suggestions.\n\nAt Thu, 14 Mar 2024 13:45:41 +0100, Daniel Gustafsson <[email protected]> wrote in \n> I've only skimmed this so far but +1 on keeping the messages the same where\n> possible to reduce translation work. 
Adding a comment on the message where the\n> casting is done to indicate that it is for translation might reduce the risk of\n> it \"getting fixed\" down the line.\n\nAdded a comment \"/* cast xxx to avoid extra translatable messages */.\n\nAt Thu, 14 Mar 2024 14:02:46 +0100, Peter Eisentraut <[email protected]> wrote in \n> If you want to make them uniform, then I suggest the error messages\n\nYeah. Having the same messages with only the placeholders changed is\nnot very pleasing during translation. If possible, I would like to\nalign them.\n\n> should both be \"%zd of %zu bytes\", which are the actual types read()\n> deals with.\n\nI have considered only the two messages. Actually, buffile.c and md.c\nare already like that. The attached aligns the messages in\npg_combinebackup.c and reconstruct.c with the precedents.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 15 Mar 2024 11:27:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "On Fri, 15 Mar 2024 at 15:27, Kyotaro Horiguchi <[email protected]> wrote:\n> I have considered only the two messages. Actually, buffile.c and md.c\n> are already like that. The attached aligns the messages in\n> pg_combinebackup.c and reconstruct.c with the precedents.\n\nThis looks like a worthy cause to make translator work easier.\n\nI don't want to widen the goalposts or anything, but just wondering if\nyou'd searched for any others that could get similar treatment?\n\nI only just had a quick look at the following.\n\n$ cat src/backend/po/fr.po | grep -E \"^msgid\\s\" | sed -E\n's/%[a-zA-Z]+/\\%/g' | sort | uniq -d -c\n 31 msgid \"\"\n 2 msgid \"could not accept SSL connection: %\"\n 2 msgid \"could not initialize LDAP: %\"\n 2 msgid \"could not look up local user ID %: %\"\n 2 msgid \"could not open file \\\"%\\\": %\"\n 2 msgid \"could not read file \\\"%\\\": read % of %\"\n 2 msgid \"could not read from log segment %, offset %: %\"\n 2 msgid \"could not read from log segment %, offset %: read % of %\"\n 2 msgid \"index % out of valid range, 0..%\"\n 2 msgid \"invalid value for parameter \\\"%\\\": %\"\n 2 msgid \"%%% is outside the valid range for parameter \\\"%\\\" (% .. %)\"\n 2 msgid \"must be owner of large object %\"\n 2 msgid \"oversize GSSAPI packet sent by the client (% > %)\"\n 2 msgid \"permission denied for large object %\"\n 2 msgid \"string is too long for tsvector (% bytes, max % bytes)\"\n 2 msgid \"timestamp out of range: \\\"%\\\"\"\n 2 msgid \"Valid values are between \\\"%\\\" and \\\"%\\\".\"\n\nI've not looked at how hard it would be with any of the above to\ndetermine how hard it would be to make the formats consistent. The\n3rd last one seems similar enough that it might be worth doing\ntogether with this?\n\nDavid\n\n\n", "msg_date": "Fri, 15 Mar 2024 16:01:28 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "At Fri, 15 Mar 2024 16:01:28 +1300, David Rowley <[email protected]> wrote in \n> On Fri, 15 Mar 2024 at 15:27, Kyotaro Horiguchi <[email protected]> wrote:\n> > I have considered only the two messages. Actually, buffile.c and md.c\n> > are already like that. 
The attached aligns the messages in\n> > pg_combinebackup.c and reconstruct.c with the precedents.\n> \n> This looks like a worthy cause to make translator work easier.\n> \n> I don't want to widen the goalposts or anything, but just wondering if\n> you'd searched for any others that could get similar treatment?\n> \n> I only just had a quick look at the following.\n> \n> $ cat src/backend/po/fr.po | grep -E \"^msgid\\s\" | sed -E\n> 's/%[a-zA-Z]+/\\%/g' | sort | uniq -d -c\n> 31 msgid \"\"\n> 2 msgid \"could not accept SSL connection: %\"\n> 2 msgid \"could not initialize LDAP: %\"\n> 2 msgid \"could not look up local user ID %: %\"\n...\n> I've not looked at how hard it would be with any of the above to\n> determine how hard it would be to make the formats consistent. The\n> 3rd last one seems similar enough that it might be worth doing\n> together with this?\n\n\nI checked for that kind of msgids in a bit more intensive way. The\nnumbers are the line numbers in ja.po of backend. I didn't check the\nsame for other modules.\n\n> ###: invalid timeline %lld\n> @ backup/walsummaryfuncs.c:95\n> invalid timeline %u\n> @ repl_gram.y:318 repl_gram.y:359\n\nIn the first case, the %lld can be more than 2^31.\n\nIn the second case, %u is uint32. However, the bigger issue is\nchecking if the uint32 value is negative:\n\nrepl_gram.c: 147\n>\tif ((yyvsp[0].uintval) <= 0)\n>\t\tereport(ERROR,\n>\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n>\t\t\t\t errmsg(\"invalid timeline %u\", (yyvsp[0].uintval))));\n\nwhich cannot happen. I think we can simply remove the useless error\ncheck.\n\n> ###: could not read file \\\"%s\\\": read %d of %zu\n> @ ../common/controldata_utils.c:116 ../common/controldata_utils.c:119 access/transam/xlog.c:3417 access/transam/xlog.c:4278 replication/logical/origin.c:750 replication/logical/origin.c:789 replication/logical/snapbuild.c:2040 replication/slot.c:2218 replication/slot.c:2259 replication/walsender.c:660 utils/cache/relmapper.c:833\n> could not read file \\\"%s\\\": read %d of %lld\n> @ access/transam/twophase.c:1372\n> could not read file \\\"%s\\\": read %zd of %zu\n> @ backup/basebackup.c:2102\n> ###: oversize GSSAPI packet sent by the client (%zu > %zu)\n> @ libpq/be-secure-gssapi.c:351\n> oversize GSSAPI packet sent by the client (%zu > %d)\n> @ libpq/be-secure-gssapi.c:575\n> ###: compressed segment file \\\"%s\\\" has incorrect uncompressed size %d, skipping\n> @ pg_receivewal.c:362\n> compressed segment file \\\"%s\\\" has incorrect uncompressed size %zu, skipping\n> @ pg_receivewal.c:448\n> ###: could not read file \\\"%s\\\": read only %zd of %lld bytes\n> @ pg_combinebackup.c:1304\n> could not read file \\\"%s\\\": read only %d of %u bytes\n> @ reconstruct.c:514\n\n\nWe can \"fix\" them the same way. I debated whether to use ssize_t for\nread() and replace all instances of size_t with Size. 
However, in the\nend, I decided to only keep it consistent with the surrounding code.\n\n> ###: index %d out of valid range, 0..%d\n> @ utils/adt/varlena.c:3220 utils/adt/varlena.c:3287\n> index %lld out of valid range, 0..%lld\n> @ utils/adt/varlena.c:3251 utils/adt/varlena.c:3323\n> ###: string is too long for tsvector (%d bytes, max %d bytes)\n> @ tsearch/to_tsany.c:194 utils/adt/tsvector.c:277 utils/adt/tsvector_op.c:1126\n> string is too long for tsvector (%ld bytes, max %ld bytes)\n> @ utils/adt/tsvector.c:223\n\nWe can unify them and did in the attached, but I'm not sure that's\nsensible..\n\n> ###: could not look up local user ID %d: %s\n> @ ../port/user.c:43 ../port/user.c:79\n> could not look up local user ID %ld: %s\n> @ libpq/auth.c:1886\n\nBoth of the above use uid_t (defined as int) as user ID and it is\nexplicitly cast to \"int\" in the first case and to long in the second,\nwhich seems mysterious. Although I'm not sure if there's a possibility\nof uid_t being widened in the future, I unified them to the latter way\nfor robustness.\n\n> ###: error while cloning relation \\\"%s.%s\\\" (\\\"%s\\\" to \\\"%s\\\"): %m\n> @ file.c:44\n> error while cloning relation \\\"%s.%s\\\" (\\\"%s\\\" to \\\"%s\\\"): %s\n> @ file.c:65\n> \n> The latter seems to be changed to %m by reassiging save_errno to errno, as done in other places.\n> \n> ###: could not get data directory using %s: %m\n> @ option.c:448\n> could not get data directory using %s: %s\n> @ option.c:452\n> ###: could not get control data using %s: %m\n> @ controldata.c:129 controldata.c:199\n> could not get control data using %s: %s\n> @ controldata.c:175 controldata.c:507\n> ###: %s: %m\n> @ copy.c:401 psqlscanslash.l:805 psqlscanslash.l:817 psqlscanslash.l:835\n> %s: %s\n> @ command.c:2728 copy.c:388\n\nIn these cases, %m can be replaced by %s by using\nwait_result_to_str(-1), but I'm not sure it is sensible. 
(I didn't do\nthat in the attached)\n\nFinally, it doesn't seem possible to sensibly unify everything that\nfollows.\n\n> ###: could not initialize LDAP: %s\n> @ libpq/auth.c:2327\n> could not initialize LDAP: %m\n> @ libpq/auth.c:2345\n> ###: Valid values are between \\\"%d\\\" and \\\"%d\\\".\n> @ access/common/reloptions.c:1616\n> Valid values are between \\\"%f\\\" and \\\"%f\\\".\n> @ access/common/reloptions.c:1636\n> ###: permission denied for large object %s\n> @ catalog/aclchk.c:2745\n> permission denied for large object %u\n> @ libpq/be-fsstubs.c:871 storage/large_object/inv_api.c:297 storage/large_object/inv_api.c:309 storage/large_object/inv_api.c:506 storage/large_object/inv_api.c:617 storage/large_object/inv_api.c:807\n> ###: commutator operator %s is already the commutator of operator %s\n> @ catalog/pg_operator.c:739\n> commutator operator %s is already the commutator of operator %u\n> @ catalog/pg_operator.c:744\n> ###: must be owner of large object %s\n> @ catalog/aclchk.c:2877\n> must be owner of large object %u\n> @ catalog/objectaddress.c:2511 libpq/be-fsstubs.c:329\n> ###: could not open file \\\"%s\\\": %m\n> @ replication/slot.c:2186 replication/walsender.c:628 replication/walsender.c:3051 storage/file/copydir.c:151 storage/file/fd.c:803 storage/file/fd.c:3510 storage/file/fd.c:3740 storage/file/fd.c:3830 storage/smgr/md.c:661 utils/cache/relmapper.c:818 utils/cache/relmapper.c:935 utils/error/elog.c:2085 utils/init/miscinit.c:1526 utils/init/miscinit.c:1660 utils/init/miscinit.c:1737 utils/misc/guc.c:4712 utils/misc/guc.c:4762\n> could not open file \\\"%s\\\": %s\n> @ ../port/open.c:115\n> ###: %d%s%s is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\n> @ utils/misc/guc.c:3179\n> %g%s%s is outside the valid range for parameter \\\"%s\\\" (%g .. 
%g)\n> @ utils/misc/guc.c:3215\n> ###: could not accept SSL connection: %m\n> @ libpq/be-secure-openssl.c:503\n> could not accept SSL connection: %s\n> @ libpq/be-secure-openssl.c:546\n> ###: invalid value for parameter \\\"%s\\\": %d\n> @ utils/adt/regexp.c:716 utils/adt/regexp.c:725 utils/adt/regexp.c:1082 utils/adt/regexp.c:1146 utils/adt/regexp.c:1155 utils/adt/regexp.c:1164 utils/adt/regexp.c:1173 utils/adt/regexp.c:1853 utils/adt/regexp.c:1862 utils/adt/regexp.c:1871 utils/misc/guc.c:6761 utils/misc/guc.c:6795\n> invalid value for parameter \\\"%s\\\": %g\n> @ utils/misc/guc.c:6829\n> ###: could not close file \\\"%s\\\": %m\n> @ bbstreamer_file.c:138 pg_recvlogical.c:650\n> could not close file \\\"%s\\\": %s\n> @ receivelog.c:227 receivelog.c:317 receivelog.c:688\n> ###: could not fsync file \\\"%s\\\": %m\n> @ pg_recvlogical.c:204\n> could not fsync file \\\"%s\\\": %s\n> @ receivelog.c:775 receivelog.c:1022 walmethods.c:1206\n> ###: could not read from input file: %s\n> @ compress_lz4.c:628 compress_lz4.c:647 compress_none.c:97 compress_none.c:140\n> could not read from input file: %m\n> @ compress_zstd.c:373 pg_backup_custom.c:655\n> ###: could not get pg_ctl version data using %s: %m\n> @ exec.c:47\n> could not get pg_ctl version data using %s: %s\n> @ exec.c:51\n> ###: negator operator %s is already the negator of operator %s\n> @ catalog/pg_operator.c:807\n> negator operator %s is already the negator of operator %u\n> @ catalog/pg_operator.c:812\n> ###: timestamp out of range: \\\"%s\\\"\n> @ access/transam/xlogrecovery.c:4937 utils/adt/timestamp.c:202 utils/adt/timestamp.c:455\n> timestamp out of range: \\\"%g\\\"\n> @ utils/adt/timestamp.c:762 utils/adt/timestamp.c:774\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 15 Mar 2024 16:20:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "At Fri, 15 Mar 2024 16:20:27 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> I checked for that kind of msgids in a bit more intensive way. The\n> numbers are the line numbers in ja.po of backend. I didn't check the\n> same for other modules.\n> \n> > ###: invalid timeline %lld\n> > @ backup/walsummaryfuncs.c:95\n> > invalid timeline %u\n> > @ repl_gram.y:318 repl_gram.y:359\n\n\"The numbers are the line numbers in ja.po\" is wrong. 
Correctly, it\nshould be written as:\n\nThe information consists of two lines: the first of them is the msgid,\nand the second indicates the locations where they appear.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Mar 2024 16:28:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "On 15.03.24 08:20, Kyotaro Horiguchi wrote:\n> diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\n> @@ -1369,8 +1369,8 @@ ReadTwoPhaseFile(TransactionId xid, bool missing_ok)\n> \t\t\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n> \t\telse\n> \t\t\tereport(ERROR,\n> -\t\t\t\t\t(errmsg(\"could not read file \\\"%s\\\": read %d of %lld\",\n> -\t\t\t\t\t\t\tpath, r, (long long int) stat.st_size)));\n> +\t\t\t\t\t(errmsg(\"could not read file \\\"%s\\\": read %zd of %zu\",\n> +\t\t\t\t\t\t\tpath, r, (Size) stat.st_size)));\n> \t}\n> \n> \tpgstat_report_wait_end();\n\nThis might be worse, because stat.st_size is of type off_t, which could \nbe smaller than Size/size_t.\n\n> diff --git a/src/backend/libpq/be-secure-gssapi.c b/src/backend/libpq/be-secure-gssapi.c\n> index bc04e78abb..68645b4519 100644\n> --- a/src/backend/libpq/be-secure-gssapi.c\n> +++ b/src/backend/libpq/be-secure-gssapi.c\n> @@ -572,9 +572,9 @@ secure_open_gssapi(Port *port)\n> \t\tif (input.length > PQ_GSS_RECV_BUFFER_SIZE)\n> \t\t{\n> \t\t\tereport(COMMERROR,\n> -\t\t\t\t\t(errmsg(\"oversize GSSAPI packet sent by the client (%zu > %d)\",\n> +\t\t\t\t\t(errmsg(\"oversize GSSAPI packet sent by the client (%zu > %zu)\",\n> \t\t\t\t\t\t\t(size_t) input.length,\n> -\t\t\t\t\t\t\tPQ_GSS_RECV_BUFFER_SIZE)));\n> +\t\t\t\t\t\t\t(size_t) PQ_GSS_RECV_BUFFER_SIZE)));\n> \t\t\treturn -1;\n> \t\t}\n> \n\nMight be better to add that cast to the definition of \nPQ_GSS_RECV_BUFFER_SIZE instead, so that all code can benefit.\n\n> diff --git a/src/backend/replication/repl_gram.y b/src/backend/replication/repl_gram.y\n> index 7474f5bd67..baa76280b9 100644\n> --- a/src/backend/replication/repl_gram.y\n> +++ b/src/backend/replication/repl_gram.y\n> @@ -312,11 +312,6 @@ timeline_history:\n> \t\t\t\t{\n> \t\t\t\t\tTimeLineHistoryCmd *cmd;\n> \n> -\t\t\t\t\tif ($2 <= 0)\n> -\t\t\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> -\t\t\t\t\t\t\t\t errmsg(\"invalid timeline %u\", $2)));\n> -\n> \t\t\t\t\tcmd = makeNode(TimeLineHistoryCmd);\n> \t\t\t\t\tcmd->timeline = $2;\n> \n> @@ -352,13 +347,7 @@ opt_slot:\n> \n> opt_timeline:\n> \t\t\tK_TIMELINE UCONST\n> -\t\t\t\t{\n> -\t\t\t\t\tif ($2 <= 0)\n> -\t\t\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> -\t\t\t\t\t\t\t\t errmsg(\"invalid timeline %u\", $2)));\n> -\t\t\t\t\t$$ = $2;\n> -\t\t\t\t}\n> +\t\t\t\t{ $$ = $2; }\n> \t\t\t\t| /* EMPTY */\t\t\t{ $$ = 0; }\n> \t\t\t;\n> \n\nI don't think this is correct. 
It loses the check for == 0.\n\n> diff --git a/src/backend/tsearch/to_tsany.c b/src/backend/tsearch/to_tsany.c\n> index 88cba58cba..9d21178107 100644\n> --- a/src/backend/tsearch/to_tsany.c\n> +++ b/src/backend/tsearch/to_tsany.c\n> @@ -191,7 +191,8 @@ make_tsvector(ParsedText *prs)\n> \tif (lenstr > MAXSTRPOS)\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> -\t\t\t\t errmsg(\"string is too long for tsvector (%d bytes, max %d bytes)\", lenstr, MAXSTRPOS)));\n> +\t\t\t\t /* cast values to avoid extra translatable messages */\n> +\t\t\t\t errmsg(\"string is too long for tsvector (%ld bytes, max %ld bytes)\", (long) lenstr, (long) MAXSTRPOS)));\n> \n> \ttotallen = CALCDATASIZE(prs->curwords, lenstr);\n> \tin = (TSVector) palloc0(totallen);\n\nI think it would be better instead to change the message in tsvectorin() \nto *not* use long. The size of long is unportable, so I would rather \navoid using it at all.\n\n> diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c\n> index 8d28dd42ce..5de490b569 100644\n> --- a/src/backend/utils/adt/varlena.c\n> +++ b/src/backend/utils/adt/varlena.c\n> @@ -3217,8 +3217,9 @@ byteaGetByte(PG_FUNCTION_ARGS)\n> \tif (n < 0 || n >= len)\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),\n> -\t\t\t\t errmsg(\"index %d out of valid range, 0..%d\",\n> -\t\t\t\t\t\tn, len - 1)));\n> +\t\t\t\t /* cast values to avoid extra translable messages */\n> +\t\t\t\t errmsg(\"index %lld out of valid range, 0..%lld\",\n> +\t\t\t\t\t\t(long long)n, (long long) len - 1)));\n> \n> \tbyte = ((unsigned char *) VARDATA_ANY(v))[n];\n\nI think this is taking it too far. We shouldn't try to make all similar \nmessages use the same placeholders. If the underlying types are \ndifferent, we should use them. Adding more casts makes the code less \nrobust overall. 
The size_t/ssize_t cleanup is different, because there \nthe types were arguably wrong to begin with, and by using the right \ntypes we move toward more consistency.\n\n> diff --git a/src/bin/pg_combinebackup/pg_combinebackup.c b/src/bin/pg_combinebackup/pg_combinebackup.c\n> index 6f0814d9ac..feb4d5dcf4 100644\n> --- a/src/bin/pg_combinebackup/pg_combinebackup.c\n> +++ b/src/bin/pg_combinebackup/pg_combinebackup.c\n> @@ -1301,8 +1301,9 @@ slurp_file(int fd, char *filename, StringInfo buf, int maxlen)\n> \t\tif (rb < 0)\n> \t\t\tpg_fatal(\"could not read file \\\"%s\\\": %m\", filename);\n> \t\telse\n> -\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %zd of %lld bytes\",\n> -\t\t\t\t\t filename, rb, (long long int) st.st_size);\n> +\t\t\t/* cast st_size to avoid extra translatable messages */\n> +\t\t\tpg_fatal(\"could not read file \\\"%s\\\": read only %zd of %zu bytes\",\n> +\t\t\t\t\t filename, rb, (size_t) st.st_size);\n> \t}\n> \n> \t/* Adjust buffer length for new data and restore trailing-\\0 invariant */\n\nSimilar to above, casting off_t to size_t is dubious.\n\n> diff --git a/src/port/user.c b/src/port/user.c\n> index 7444aeb64b..9364bdb69e 100644\n> --- a/src/port/user.c\n> +++ b/src/port/user.c\n> @@ -40,8 +40,8 @@ pg_get_user_name(uid_t user_id, char *buffer, size_t buflen)\n> \t}\n> \tif (pwerr != 0)\n> \t\tsnprintf(buffer, buflen,\n> -\t\t\t\t _(\"could not look up local user ID %d: %s\"),\n> -\t\t\t\t (int) user_id,\n> +\t\t\t\t _(\"could not look up local user ID %ld: %s\"),\n> +\t\t\t\t (long) user_id,\n> \t\t\t\t strerror_r(pwerr, pwdbuf, sizeof(pwdbuf)));\n> \telse\n> \t\tsnprintf(buffer, buflen,\n\nAlso dubious use of \"long\" here.\n\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:50:23 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistent printf placeholders" }, { "msg_contents": "Thank you for looking this.\n\nAt Tue, 19 Mar 2024 10:50:23 +0100, Peter Eisentraut <[email protected]> wrote in \n> On 15.03.24 08:20, Kyotaro Horiguchi wrote:\n> > diff --git a/src/backend/access/transam/twophase.c\n> > b/src/backend/access/transam/twophase.c\n> > @@ -1369,8 +1369,8 @@ ReadTwoPhaseFile(TransactionId xid, bool\n> > missing_ok)\n> > \t\t\t\t\t errmsg(\"could not read file \\\"%s\\\":\n> > \t\t\t\t\t %m\", path)));\n> > \t\telse\n> > \t\t\tereport(ERROR,\n> > -\t\t\t\t\t(errmsg(\"could not read file \\\"%s\\\":\n> > -\t\t\t\t\tread %d of %lld\",\n> > -\t\t\t\t\t\t\tpath, r, (long long\n> > -\t\t\t\t\t\t\tint) stat.st_size)));\n> > + (errmsg(\"could not read file \\\"%s\\\": read %zd of %zu\",\n> > + path, r, (Size) stat.st_size)));\n> > \t}\n> > \tpgstat_report_wait_end();\n> \n> This might be worse, because stat.st_size is of type off_t, which\n> could be smaller than Size/size_t.\n\nI think you were trying to mention that off_t could be wider than\nsize_t and you're right in that point. I thought that it is safe since\nwe are trying to read the whole content of the file at once here into\npalloc'ed memory.\n\nHowever, on second thought, if st_size is out of the range of ssize_t,\nand palloc accepts that size, at least on Linux, read(2) reads only\n0x7ffff000 bytes and raches the error reporting. Addition to that,\nthis size was closer to the memory allocation size limit than I had\nthought.\n\nAs the result, I removed the change. 
However, I kept the change of the\ntype of variable \"r\" and corresponding placeholder %zd.\n\n> > diff --git a/src/backend/libpq/be-secure-gssapi.c\n> > b/src/backend/libpq/be-secure-gssapi.c\n> > index bc04e78abb..68645b4519 100644\n> > --- a/src/backend/libpq/be-secure-gssapi.c\n> > +++ b/src/backend/libpq/be-secure-gssapi.c\n> > @@ -572,9 +572,9 @@ secure_open_gssapi(Port *port)\n> > \t\tif (input.length > PQ_GSS_RECV_BUFFER_SIZE)\n> > \t\t{\n> > \t\t\tereport(COMMERROR,\n> > -\t\t\t\t\t(errmsg(\"oversize GSSAPI packet sent\n> > -\t\t\t\t\tby the client (%zu > %d)\",\n> > + (errmsg(\"oversize GSSAPI packet sent by the client (%zu > %zu)\",\n> > \t\t\t\t\t\t\t(size_t) input.length,\n> > -\t\t\t\t\t\t\tPQ_GSS_RECV_BUFFER_SIZE)));\n> > + (size_t) PQ_GSS_RECV_BUFFER_SIZE)));\n> > \t\t\treturn -1;\n> > \t\t}\n> > \n> \n> Might be better to add that cast to the definition of\n> PQ_GSS_RECV_BUFFER_SIZE instead, so that all code can benefit.\n\nAs far as I see, the only exceptional uses I found were a comparison\nwith int values, and being passed as an OM_uint32 (to one of the\nparameters of gss_wrap_size_limit()). Therefore, I agree that it is\nbeneficial. By the way, we currently define Size as the same as size_t\n(since 1998). Is it correct to consider Size as merely for backward\ncompatibility and we should use size_t for new code? I used size_t in\nthe modified part in the attached patch.\n\n> > diff --git a/src/backend/replication/repl_gram.y\n> > b/src/backend/replication/repl_gram.y\n> > index 7474f5bd67..baa76280b9 100644\n> > --- a/src/backend/replication/repl_gram.y\n> > +++ b/src/backend/replication/repl_gram.y\n> > @@ -312,11 +312,6 @@ timeline_history:\n> > \t\t\t\t{\n> > \t\t\t\t\tTimeLineHistoryCmd *cmd;\n> > -\t\t\t\t\tif ($2 <= 0)\n> > -\t\t\t\t\t\tereport(ERROR,\n> > -\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> > -\t\t\t\t\t\t\t\t errmsg(\"invalid\n> > -\t\t\t\t\t\t\t\t timeline %u\",\n> > -\t\t\t\t\t\t\t\t $2)));\n> > -\n...\n> I don't think this is correct. It loses the check for == 0.\n\nUgh. It's my mistake. So we need to consider unifying the messages\nagain. In walsummaryfuncs.c, %lld is required, but it's silly for the\nuses in repl_gram.y. Finally, I chose not to change anything here.\n\n> > diff --git a/src/backend/tsearch/to_tsany.c\n> > b/src/backend/tsearch/to_tsany.c\n> > index 88cba58cba..9d21178107 100644\n> > --- a/src/backend/tsearch/to_tsany.c\n> > +++ b/src/backend/tsearch/to_tsany.c\n> > @@ -191,7 +191,8 @@ make_tsvector(ParsedText *prs)\n> > \tif (lenstr > MAXSTRPOS)\n> > \t\tereport(ERROR,\n> > \t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > -\t\t\t\t errmsg(\"string is too long for tsvector (%d\n> > -\t\t\t\t bytes, max %d bytes)\", lenstr, MAXSTRPOS)));\n> > + /* cast values to avoid extra translatable messages */\n> > + errmsg(\"string is too long for tsvector (%ld bytes, max %ld bytes)\",\n> > (long) lenstr, (long) MAXSTRPOS)));\n> > \ttotallen = CALCDATASIZE(prs->curwords, lenstr);\n> > \tin = (TSVector) palloc0(totallen);\n> \n> I think it would be better instead to change the message in\n> tsvectorin() to *not* use long. The size of long is unportable, so I\n> would rather avoid using it at all.\n\nThe casts to long are tentative only to adjust to the corresponding\nplaceholder, and in this context, portability concerns are not\napplicable. However, those casts are apparently useless. 
As you\nsuggested, I tried to change tsvectorin() instead, but there's a\nproblem here.\n\ntsvector.c:224\n>\t errmsg(\"string is too long for tsvector (%ld bytes, max %ld bytes)\",\n>\t\t\t(long) (cur - tmpbuf), (long) MAXSTRPOS)));\n\ncur and tmpbuf are pointers. The byte width of the subtraction results\nvaries by architecture. However, the surrounding code apparently\nassumes that the difference fits within an int. I added a cast to int\nfor the pointer arithmetic here. (Although I'm not sure this is the\nright direction.)\n\n> > diff --git a/src/backend/utils/adt/varlena.c\n> > b/src/backend/utils/adt/varlena.c\n> > index 8d28dd42ce..5de490b569 100644\n> > --- a/src/backend/utils/adt/varlena.c\n> > +++ b/src/backend/utils/adt/varlena.c\n> > @@ -3217,8 +3217,9 @@ byteaGetByte(PG_FUNCTION_ARGS)\n> > \tif (n < 0 || n >= len)\n> > \t\tereport(ERROR,\n> > \t\t\t\t(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),\n> > -\t\t\t\t errmsg(\"index %d out of valid range, 0..%d\",\n> > -\t\t\t\t\t\tn, len - 1)));\n> > + /* cast values to avoid extra translable messages */\n> > + errmsg(\"index %lld out of valid range, 0..%lld\",\n> > + (long long)n, (long long) len - 1)));\n> > \tbyte = ((unsigned char *) VARDATA_ANY(v))[n];\n> \n> I think this is taking it too far. We shouldn't try to make all\n> similar messages use the same placeholders. If the underlying types\n> are different, we should use them. Adding more casts makes the code\n> less robust overall. The size_t/ssize_t cleanup is different, because\n> there the types were arguably wrong to begin with, and by using the\n> right types we move toward more consistency.\n\nOuch! Understood. They treat byte and bit locations accordingly. I\nagree that it's too far. Removed.\n\n> > diff --git a/src/bin/pg_combinebackup/pg_combinebackup.c b/src/bin/pg_combinebackup/pg_combinebackup.c\n> > index 6f0814d9ac..feb4d5dcf4 100644\n> > --- a/src/bin/pg_combinebackup/pg_combinebackup.c\n> > +++ b/src/bin/pg_combinebackup/pg_combinebackup.c\n> > - pg_fatal(\"could not read file \\\"%s\\\": read only %zd of %lld bytes\",\n> > - filename, rb, (long long int) st.st_size);\n> > + /* cast st_size to avoid extra translatable messages */\n> > + pg_fatal(\"could not read file \\\"%s\\\": read only %zd of %zu bytes\",\n> > + filename, rb, (size_t) st.st_size);\n> > }\n> > /* Adjust buffer length for new data and restore trailing-\\0 invariant */\n> \n> Similar to above, casting off_t to size_t is dubious.\n\nThe same discussion regarding the change in twophase.c is also\napplicable to this change. I applied the same amendment.\n\n> > diff --git a/src/port/user.c b/src/port/user.c\n> > index 7444aeb64b..9364bdb69e 100644\n> > --- a/src/port/user.c\n> > +++ b/src/port/user.c\n> > @@ -40,8 +40,8 @@ pg_get_user_name(uid_t user_id, char *buffer, size_t\n> > buflen)\n> > \t}\n> > \tif (pwerr != 0)\n> > \t\tsnprintf(buffer, buflen,\n> > -\t\t\t\t _(\"could not look up local user ID %d: %s\"),\n> > -\t\t\t\t (int) user_id,\n> > + _(\"could not look up local user ID %ld: %s\"),\n> > +\t\t\t\t (long) user_id,\n> > \t\t\t\t strerror_r(pwerr, pwdbuf, sizeof(pwdbuf)));\n> > \telse\n> > \t\tsnprintf(buffer, buflen,\n> \n> Also dubious use of \"long\" here.\n\nOkay, used %d instead. 
In addition to that, I removed the casts from\nuid_t expecting that compilers will detect the change of the\ndefinition of uid_t.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 21 Mar 2024 17:16:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent printf placeholders" } ]
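The "%zd of %zu" pairing Peter recommends lines up with what read() actually deals in, which is why no casts are needed at the call sites. As a minimal standalone sketch of the convention (the file name and buffer size are made up, and fprintf stands in for pg_fatal/ereport), every caller can then share one translatable format string:

```c
/*
 * Minimal standalone sketch of the "%zd of %zu" convention: read()
 * returns an ssize_t and the requested length is a size_t, so printing
 * them with %zd/%zu needs no casts and every call site can share the
 * same translatable string.  The file name and buffer size here are
 * made up for illustration.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	const char *filename = "example.dat";	/* hypothetical input file */
	char		buf[8192];
	size_t		length = sizeof(buf);	/* bytes we expect to read */
	ssize_t		rb;
	int			fd;

	fd = open(filename, O_RDONLY);
	if (fd < 0)
	{
		fprintf(stderr, "could not open file \"%s\"\n", filename);
		return EXIT_FAILURE;
	}

	rb = read(fd, buf, length);
	if (rb < 0)
		fprintf(stderr, "could not read file \"%s\"\n", filename);
	else if ((size_t) rb != length)
		/* same placeholders everywhere: %zd for ssize_t, %zu for size_t */
		fprintf(stderr, "could not read file \"%s\": read only %zd of %zu bytes\n",
				filename, rb, length);

	close(fd);
	return EXIT_SUCCESS;
}
```

Keeping the native integer types, rather than casting to int or long long, is what lets the message text stay identical across call sites and spares translators a near-duplicate string.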
[ { "msg_contents": "Hello.\n\nWhile examining reorderbuffer.c, I found several typos. I'm not sure\nif fixing them is worthwhile, but I've attached a fix just in case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 14 Mar 2024 13:28:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Typos in reorderbuffer.c." }, { "msg_contents": "On Thu, Mar 14, 2024 at 9:58 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> While examining reorderbuffer.c, I found several typos. I'm not sure\n> if fixing them is worthwhile, but I've attached a fix just in case.\n>\n\nLGTM. I'll push this in some time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 14 Mar 2024 11:23:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in reorderbuffer.c." }, { "msg_contents": "At Thu, 14 Mar 2024 11:23:38 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Thu, Mar 14, 2024 at 9:58 AM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> >\r\n> > While examining reorderbuffer.c, I found several typos. I'm not sure\r\n> > if fixing them is worthwhile, but I've attached a fix just in case.\r\n> >\r\n> \r\n> LGTM. I'll push this in some time.\r\n\r\nThanks!\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 15 Mar 2024 11:28:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in reorderbuffer.c." } ]
[ { "msg_contents": "Greetings!\n\nThe question (a short version): is it possible for a client to send two\nselects in the same transaction using the extended query protocol (without\ndeclaring cursors) and pull rows simultaneously by means of interleaving\nportal names and restricting fetch size in Execute commands.\n\n\nThe question (a long version with a motivation):\n\nA Postgresql backend is capable of operating multiple portals within a\ntransaction and switching between them on and off. For instance the\nfollowing sequence (issued from a kotlin application via r2dbc-driver not\nfrom psql)\n\n\n```\n\n*// The table *users_Fetch *contains users with ids between 1 and 20*\n\nBEGIN\n\n\nDECLARE fetch_test1 SCROLL CURSOR FOR SELECT userId FROM users_Fetch;\n\nDECLARE fetch_test2 SCROLL CURSOR FOR SELECT userId FROM users_Fetch;\n\n\nMOVE FORWARD 3 FROM fetch_test1;\n\nFETCH FORWARD 5 FROM fetch_test1;\n\nFETCH FORWARD 5 FROM fetch_test2;\n\n\nselect userId from users_Fetch;\n\n\nFETCH BACKWARD 5 FROM fetch_test1;\n\nFETCH FORWARD 5 FROM fetch_test2;\n\n\nCOMMIT;\n\n```\n\n\nresults in an expected outcome:\n\n```\n\n 4, 5, 6, 7, 8, *// MOVE FORWARD 3 FROM fetch_test1; FETCH FORWARD 5\nFROM fetch_test1;*\n\n 1, 2, 3, 4, 5, *// FETCH FORWARD 5 FROM fetch_test2;*\n\n 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,\n20, *// select userId from users_Fetch;*\n\n 7, 6, 5, 4, 3, *// FETCH BACKWARD 5 FROM fetch_test1;*\n\n 6, 7, 8, 9, 10, *// FETCH FORWARD 5 FROM fetch_test2;*\n\n\n```\n\n\nIs the same possible for conventional selects issued with extended query\nprotocol? From the protocol perspective it would result in the following\ntraffic:\n\n\n```\n\n231 53111 5432 PGSQL 109 >Q ———> BEGIN\n\n232 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=1 Ack=54 Win=6371 Len=0\nTSval=2819351776 TSecr=589492423\n\n237 5432 53111 PGSQL 73 <C/Z\n\n238 53111 5432 TCP 56 53111 → 5432 [ACK] Seq=54 Ack=18 Win=6366 Len=0\nTSval=589492435 TSecr=2819351788\n\n// A client issues a select\n\n239 53111 5432 PGSQL 276 >P/B/D/E/H ———> select * from …; bind B_1; execute\nB_1, fetch 2 rows; flush\n\n240 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=18 Ack=274 Win=6368 Len=0\nTSval=2819351793 TSecr=589492440\n\n245 5432 53111 PGSQL 552 <1/2/T/D/D/s ———> Data, Data, Portal suspended\n\n…\n\n// Then the same sequence for another prepared statement and portal (lets\nsay B_2) but without a limit in the Execute command and sync at the end.\n\n…\n\n// Then the client proceeds with B_1 till the completion\n\n270 53111 5432 PGSQL 69 > E ———> execute B_1, fetch 2 rows,\n\n271 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=925 Ack=323 Win=6367 Len=0\nTSval=2819351846 TSecr=589492493\n\n272 53111 5432 PGSQL 61 >H ———> Flush\n\n274 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=925 Ack=328 Win=6367 Len=0\nTSval=2819351846 TSecr=589492493\n\n282 5432 53111 PGSQL 144 <D/C ———> Command completion\n\n283 53111 5432 TCP 56 53111 → 5432 [ACK] Seq=328 Ack=1013 Win=6351 Len=0\nTSval=589492496 TSecr=2819351849\n\n284 53111 5432 PGSQL 66 >C ———> Close B_1\n\n285 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=1013 Ack=338 Win=6367 Len=0\nTSval=2819351849 TSecr=589492496\n\n286 53111 5432 PGSQL 61 >S ———> Sync\n\n287 5432 53111 TCP 56 5432 → 53111 [ACK] Seq=1013 Ack=343 Win=6366 Len=0\nTSval=2819351849 TSecr=589492496\n\n293 5432 53111 PGSQL 67 <3/Z\n\n294 53111 5432 TCP 56 53111 → 5432 [ACK] Seq=343 Ack=1024 Win=6351 Len=0\nTSval=589492498 TSecr=2819351851\n\n295 53111 5432 PGSQL 68 >Q ———> COMMIT\n\n```\n\n\nI’m interested because such a communication 
is intrinsic to r2dbc scenarios\nlike this\n\n```\n\nval usersWithAccouns = Flux.defer *{*\n\n *// Select all users*\n\n databaseClient.sql(\"select * from users where userId >= $1 and userId\n<= $2\")\n\n .bind(\"$1\", 1)\n\n .bind(\"$2\", 255)\n\n .flatMap *{ *r *-> *r.map *{ *row, meta *-> *… *} }*\n\n .flatMap *{ *user *->*\n\n *// For each user select all its accounts*\n\n databaseClient.sql(\"select login from accounts where userId=$1\nlimit 1\")\n\n .bind(\"$1\", user.id)\n\n .flatMap *{ *r *-> *r.map *{ *row, meta *-> *… *} }*\n\n .reduce …\n\n* }*\n\n*}*.`as`(transactionalOperator::transactional)\n\n```\n\nwhich results in a failure owing to inner requests building up a queue\ninside the driver (due to inability to suspend a limitless Execute for\n\"select * from users…\" ).\n\n\nThanks!", "msg_date": "Thu, 14 Mar 2024 12:05:25 +0700", "msg_from": "Evgeny Smirnov <[email protected]>", "msg_from_op": true, "msg_subject": "Can Execute commands for different portals interleave?" }, { "msg_contents": "On 14/03/2024 07:05, Evgeny Smirnov wrote:\n> The question (a short version): is it possible for a client to send two \n> selects in the same transaction using the extended query protocol \n> (without declaring cursors) and pull rows simultaneously by means of \n> interleaving portal names and restricting fetch size in Execute commands.\n\nYes, that's possible.\n\nNamed portals created with the extended query protocol are the same as \ncursors, really. You can even use the Execute protocol message to fetch \nfrom cursor created with DECLARE CURSOR, or use FETCH command to fetch \nfrom a portal created with the Bind message.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 12:53:48 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Execute commands for different portals interleave?" } ]
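Heikki's point that named portals and cursors are interchangeable means the interleaving in the question can be exercised from plain libpq without speaking the wire protocol directly. The sketch below is only an illustration: connection settings come from the usual PG* environment variables, and the users_fetch table with its userid column is borrowed from the example above.

```c
/*
 * Sketch of the interleaving discussed above using plain libpq and the
 * cursor form that is equivalent to named portals.  Connection settings
 * come from the usual PG* environment variables; the users_fetch table
 * and its userid column are assumptions borrowed from the example.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
	PGresult   *res = PQexec(conn, sql);
	ExecStatusType st = PQresultStatus(res);

	if (st != PGRES_COMMAND_OK && st != PGRES_TUPLES_OK)
	{
		fprintf(stderr, "\"%s\" failed: %s", sql, PQerrorMessage(conn));
		PQclear(res);
		PQfinish(conn);
		exit(EXIT_FAILURE);
	}

	/* print fetched rows so the interleaving is visible */
	for (int i = 0; i < PQntuples(res); i++)
		printf("%-48s -> %s\n", sql, PQgetvalue(res, i, 0));
	PQclear(res);
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return EXIT_FAILURE;
	}

	run(conn, "BEGIN");
	run(conn, "DECLARE c1 SCROLL CURSOR FOR SELECT userid FROM users_fetch");
	run(conn, "DECLARE c2 SCROLL CURSOR FOR SELECT userid FROM users_fetch");

	/* two open result sets, consumed alternately in one transaction */
	run(conn, "FETCH FORWARD 5 FROM c1");
	run(conn, "FETCH FORWARD 5 FROM c2");
	run(conn, "FETCH BACKWARD 5 FROM c1");
	run(conn, "FETCH FORWARD 5 FROM c2");

	run(conn, "COMMIT");

	PQfinish(conn);
	return EXIT_SUCCESS;
}
```

A driver that issues Parse/Bind/Execute itself gets the same effect by giving each Bind a distinct portal name and limiting rows in each Execute, as in the trace above; the one hard constraint is that all of it happens inside a single transaction, since ordinary portals are closed at transaction end.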
[ { "msg_contents": "Two meson patches.\n\nOne of them adds version gates to two LLVM flags (-frwapv, \n-fno-strict-aliasing). I believe we moved the minimum LLVM version \nrecently, so these might not be necessary, but maybe it helps for \nhistorictal reasons. If not, I'll just remove the comment in a different \npatch.\n\nSecond patch removes some unused variables. Were they analogous to \nthings in autotools and the Meson portions haven't been added yet?\n\nI was looking into adding LLVM JIT support to Meson since there is \na TODO about it, but it wasn't clear what was missing except adding some \nvariables into the PGXS Makefile.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 14 Mar 2024 00:13:18 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Remove a FIXME and unused variables in Meson" }, { "msg_contents": "On Thu, Mar 14, 2024 at 12:13:18AM -0500, Tristan Partin wrote:\n> One of them adds version gates to two LLVM flags (-frwapv,\n> -fno-strict-aliasing). I believe we moved the minimum LLVM version recently,\n> so these might not be necessary, but maybe it helps for historictal reasons.\n> If not, I'll just remove the comment in a different patch.\n> \n> Second patch removes some unused variables. Were they analogous to things in\n> autotools and the Meson portions haven't been added yet?\n> \n> I was looking into adding LLVM JIT support to Meson since there is a TODO\n> about it, but it wasn't clear what was missing except adding some variables\n> into the PGXS Makefile.\n\nIt looks like you have forgotten to attach the patches. :)\n--\nMichael", "msg_date": "Thu, 14 Mar 2024 14:15:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove a FIXME and unused variables in Meson" }, { "msg_contents": "On Thu Mar 14, 2024 at 12:15 AM CDT, Michael Paquier wrote:\n> On Thu, Mar 14, 2024 at 12:13:18AM -0500, Tristan Partin wrote:\n> > One of them adds version gates to two LLVM flags (-frwapv,\n> > -fno-strict-aliasing). I believe we moved the minimum LLVM version recently,\n> > so these might not be necessary, but maybe it helps for historictal reasons.\n> > If not, I'll just remove the comment in a different patch.\n> > \n> > Second patch removes some unused variables. Were they analogous to things in\n> > autotools and the Meson portions haven't been added yet?\n> > \n> > I was looking into adding LLVM JIT support to Meson since there is a TODO\n> > about it, but it wasn't clear what was missing except adding some variables\n> > into the PGXS Makefile.\n>\n> It looks like you have forgotten to attach the patches. :)\n\nCLASSIC!\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Thu, 14 Mar 2024 00:17:04 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove a FIXME and unused variables in Meson" }, { "msg_contents": "On 14.03.24 06:13, Tristan Partin wrote:\n> One of them adds version gates to two LLVM flags (-frwapv, \n> -fno-strict-aliasing). I believe we moved the minimum LLVM version \n> recently, so these might not be necessary, but maybe it helps for \n> historictal reasons. If not, I'll just remove the comment in a different \n> patch.\n\nWe usually remove version gates once the overall minimum required \nversion is new enough. So this doesn't seem like a step in the right \ndirection.\n\n> Second patch removes some unused variables. 
Were they analogous to \n> things in autotools and the Meson portions haven't been added yet?\n\nHmm, yeah, no idea. These were not used even in the first commit for \nMeson support. Might have had a purpose in earlier draft patches.\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 09:16:08 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove a FIXME and unused variables in Meson" } ]
[ { "msg_contents": "Hi,\n\nThe synopsis of pg_md5_hash() seems wrong such as:\n - s/int/bool/\n - \"errstr\" is missing\nSo, I created a patch to fix them.\n\nsrc/common/md5_common.c\n==================================================\n* SYNOPSIS #include \"md5.h\"\n* int pg_md5_hash(const void *buff, size_t len, char *hexsum)\n...\nbool\npg_md5_hash(const void *buff, size_t len, char *hexsum, const char **errstr)\n==================================================\n\nPlease find attached file.\n\nRegards,\nTatsuro Yamada\nNTT Open Source Software Center", "msg_date": "Thu, 14 Mar 2024 06:02:04 +0000", "msg_from": "Tatsuro Yamada <[email protected]>", "msg_from_op": true, "msg_subject": "Fix the synopsis of pg_md5_hash" }, { "msg_contents": "> On 14 Mar 2024, at 07:02, Tatsuro Yamada <[email protected]> wrote:\n\n> So, I created a patch to fix them.\n\nThanks, applied.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 09:32:55 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix the synopsis of pg_md5_hash" }, { "msg_contents": "On Thu, Mar 14, 2024 at 09:32:55AM +0100, Daniel Gustafsson wrote:\n> On 14 Mar 2024, at 07:02, Tatsuro Yamada <[email protected]> wrote:\n>> So, I created a patch to fix them.\n> \n> Thanks, applied.\n\nOops. Thanks.\n--\nMichael", "msg_date": "Fri, 15 Mar 2024 07:59:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix the synopsis of pg_md5_hash" }, { "msg_contents": "Hi, Daniel and Michael,\n\nOn Thu, Mar 14, 2024 at 09:32:55AM +0100, Daniel Gustafsson wrote:\n> > On 14 Mar 2024, at 07:02, Tatsuro Yamada <[email protected]> wrote:\n> >> So, I created a patch to fix them.\n> >\n> > Thanks, applied.\n>\n> Oops. Thanks.\n> --\n> Michael\n>\n\nThank you guys!\n\nRegards,\nTatsuro Yamada\nNTT Open Source Software Center\n\nHi, Daniel and Michael,On Thu, Mar 14, 2024 at 09:32:55AM +0100, Daniel Gustafsson wrote:\n> On 14 Mar 2024, at 07:02, Tatsuro Yamada <[email protected]> wrote:\n>> So, I created a patch to fix them.\n> \n> Thanks, applied.\n\nOops.  Thanks.\n--\nMichaelThank you guys! Regards,Tatsuro YamadaNTT Open Source Software Center", "msg_date": "Fri, 15 Mar 2024 08:38:52 +0900", "msg_from": "Tatsuro Yamada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix the synopsis of pg_md5_hash" } ]
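For reference, a caller written against the corrected synopsis checks the bool result and reports *errstr on failure, roughly as in the fragment below. This is a sketch meant to compile inside the PostgreSQL tree; the include of common/md5.h, the 33-byte output buffer (32 hex digits plus a terminating NUL) and the frontend-style error reporting are assumptions of the example, not taken from md5_common.c itself.

```c
/*
 * Hypothetical caller matching the corrected synopsis: check the bool
 * result and report *errstr on failure.  Buffer size and error handling
 * are illustrative assumptions only.
 */
#include "postgres_fe.h"

#include "common/md5.h"

static bool
md5_example(const char *passwd, char hexsum[33])
{
	const char *errstr = NULL;

	if (!pg_md5_hash(passwd, strlen(passwd), hexsum, &errstr))
	{
		fprintf(stderr, "could not compute MD5 hash: %s\n", errstr);
		return false;
	}
	return true;
}
```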
[ { "msg_contents": "Introduce \"builtin\" collation provider.\n\nNew provider for collations, like \"libc\" or \"icu\", but without any\nexternal dependency.\n\nInitially, the only locale supported by the builtin provider is \"C\",\nwhich is identical to the libc provider's \"C\" locale. The libc\nprovider's \"C\" locale has always been treated as a special case that\nuses an internal implementation, without using libc at all -- so the\nnew builtin provider uses the same implementation.\n\nThe builtin provider's locale is independent of the server environment\nvariables LC_COLLATE and LC_CTYPE. Using the builtin provider, the\ndatabase collation locale can be \"C\" while LC_COLLATE and LC_CTYPE are\nset to \"en_US\", which is impossible with the libc provider.\n\nBy offering a new builtin provider, it clarifies that the semantics of\na collation using this provider will never depend on libc, and makes\nit easier to document the behavior.\n\nDiscussion: https://postgr.es/m/[email protected]\nDiscussion: https://postgr.es/m/[email protected]\nDiscussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com\nReviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/2d819a08a1cbc11364e36f816b02e33e8dcc030b\n\nModified Files\n--------------\ndoc/src/sgml/charset.sgml | 90 ++++++++++++++++++-----\ndoc/src/sgml/ref/create_collation.sgml | 11 ++-\ndoc/src/sgml/ref/create_database.sgml | 7 +-\ndoc/src/sgml/ref/createdb.sgml | 2 +-\ndoc/src/sgml/ref/initdb.sgml | 17 ++++-\nsrc/backend/catalog/pg_collation.c | 5 +-\nsrc/backend/commands/collationcmds.c | 74 +++++++++++++++----\nsrc/backend/commands/dbcommands.c | 129 +++++++++++++++++++++++++--------\nsrc/backend/utils/adt/formatting.c | 6 ++\nsrc/backend/utils/adt/pg_locale.c | 123 ++++++++++++++++++++++++++-----\nsrc/backend/utils/init/postinit.c | 20 ++++-\nsrc/bin/initdb/initdb.c | 53 ++++++++++----\nsrc/bin/initdb/t/001_initdb.pl | 40 +++++++++-\nsrc/bin/pg_dump/pg_dump.c | 23 +++++-\nsrc/bin/pg_upgrade/t/002_pg_upgrade.pl | 81 ++++++++++++++++-----\nsrc/bin/psql/describe.c | 4 +-\nsrc/bin/scripts/createdb.c | 19 ++++-\nsrc/bin/scripts/t/020_createdb.pl | 60 +++++++++++++++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_collation.dat | 6 +-\nsrc/include/catalog/pg_collation.h | 3 +\nsrc/include/utils/pg_locale.h | 5 ++\nsrc/test/icu/t/010_database.pl | 22 +++---\nsrc/test/regress/expected/collate.out | 19 ++++-\nsrc/test/regress/sql/collate.sql | 8 ++\n25 files changed, 671 insertions(+), 158 deletions(-)", "msg_date": "Thu, 14 Mar 2024 06:39:05 +0000", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Introduce \"builtin\" collation provider." 
}, { "msg_contents": "On 14.03.24 07:39, Jeff Davis wrote:\n> Introduce \"builtin\" collation provider.\n\nJeff,\n\nI think I found a small bug in this commit.\n\nThe new code in dbcommands.c createdb() reads like this:\n\n+ /* validate provider-specific parameters */\n+ if (dblocprovider != COLLPROVIDER_BUILTIN)\n+ {\n+ if (dbuiltinlocale)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"BUILTIN_LOCALE cannot be specified unless \nlocale provider is builtin\")));\n+ }\n+ else if (dblocprovider != COLLPROVIDER_ICU)\n+ {\n+ if (diculocale)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"ICU locale cannot be specified unless \nlocale provider is ICU\")));\n+\n+ if (dbicurules)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"ICU rules cannot be specified unless locale \nprovider is ICU\")));\n+ }\n\nBut if dblocprovider is COLLPROVIDER_LIBC, then the first \"if\" is true \nand the second one won't be checked. I think the correct code structure \nwould be to make both of these checks separate if statements.\n\n\n\n", "msg_date": "Tue, 23 Apr 2024 11:23:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce \"builtin\" collation provider." }, { "msg_contents": "\nOn 2024-03-14 Th 02:39, Jeff Davis wrote:\n> Introduce \"builtin\" collation provider.\n\n\nThe new value \"b\" for pg_collation.collprovider doesn't seem to be \ndocumented. Is that deliberate?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 23 Apr 2024 11:33:51 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce \"builtin\" collation provider." }, { "msg_contents": "On Tue, 2024-04-23 at 11:23 +0200, Peter Eisentraut wrote:\n> I think I found a small bug in this commit.\n\nGood catch, thank you.\n\nCommitted a fix.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 30 Apr 2024 19:55:33 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce \"builtin\" collation provider." } ]
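The restructuring Peter suggests (two independent if statements rather than if / else if, so that a libc-provider database still rejects the ICU-only options) would look roughly like the fragment below. It is assembled from the snippet quoted above, reusing its variable and constant names, and is not the text of the follow-up fix; the surrounding createdb() code is omitted.

```c
/* validate provider-specific parameters */
if (dblocprovider != COLLPROVIDER_BUILTIN)
{
	if (dbuiltinlocale)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("BUILTIN_LOCALE cannot be specified unless locale provider is builtin")));
}

/* no longer "else if": a libc-provider database must reject these too */
if (dblocprovider != COLLPROVIDER_ICU)
{
	if (diculocale)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("ICU locale cannot be specified unless locale provider is ICU")));

	if (dbicurules)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("ICU rules cannot be specified unless locale provider is ICU")));
}
```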
[ { "msg_contents": "Hi,\n\nSince ldap2pg 6, I'm working on running by default as non-super role\nwith CREATEDB. Robert Haas made this a viable solution as of Postgres\n16.\n\nI got a case where ldap2pg tries to remove a role from a group. But\nldap2pg user is not the grantor of this membership. This triggers a\nwarning:\n\n$ REVOKE owners FROM alice;\nWARNING: role \"alice\" has not been granted membership in role \"owners\"\nby role \"ldap2pg\"\n\nI'll add a condition on grantor when listing manageable membership to\nsimply avoid this.\n\nHowever, I'd prefer if Postgres fails properly. Because the GRANT is\nactually not revoked. This prevent ldap2pg to report an issue in\nhandling privileges on such roles.\n\nWhat do you think of make this warning an error ?\n\n\n", "msg_date": "Thu, 14 Mar 2024 13:09:37 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "REVOKE FROM warning on grantor" }, { "msg_contents": "On Thursday, March 14, 2024, Étienne BERSAC <[email protected]>\nwrote:\n\n>\n> However, I'd prefer if Postgres fails properly. Because the GRANT is\n> actually not revoked. This prevent ldap2pg to report an issue in\n> handling privileges on such roles.\n>\n> What do you think of make this warning an error ?\n>\n>\nThe choice of warning is made because after the command ends the grantmin\nquestion does not exist. The revoke was a no-op and the final state is as\nthe user intended. Historically doing this didn’t give any message at all\nwhich was confusing so we added a warning so the semantics of not failing\nwere preserved but there was some indication that something was amiss. I\ndon’t have a compelling argument to,change the long-standing behavior.\nClient code can and probably should look for a show errors reported by the\nbackend. It is indeed possibly to treat this warning more serverly than\nthe server chooses to.\n\nDavid J.\n\nOn Thursday, March 14, 2024, Étienne BERSAC <[email protected]> wrote:\nHowever, I'd prefer if Postgres fails properly. Because the GRANT is\nactually not revoked. This prevent ldap2pg to report an issue in\nhandling privileges on such roles.\n\nWhat do you think of make this warning an error ?\nThe choice of warning is made because after the command ends the grantmin question does not exist.  The revoke was a no-op and the final state is as the user intended.  Historically doing this didn’t give any message at all which was confusing so we added a warning so the semantics of not failing were preserved but there was some indication that something was amiss.  I don’t have a compelling argument to,change the long-standing behavior.  Client code can and probably should look for a show errors reported by the backend.  It is indeed possibly to treat this warning more serverly than the server chooses to.David J.", "msg_date": "Thu, 14 Mar 2024 07:03:09 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "Hi David,\nThanks for your answer.\n\n\n\n> The choice of warning is made because after the command ends the\n> grantmin question does not exist.  The revoke was a no-op and the\n> final state is as the user intended. 
\n\n\nSorry, can you explain me what's the grantmin question is ?\n\nRegards,\nÉtienne\n\n\n", "msg_date": "Sat, 16 Mar 2024 21:00:29 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "On Sat, Mar 16, 2024 at 1:00 PM Étienne BERSAC <[email protected]>\nwrote:\n\n>\n> > The choice of warning is made because after the command ends the\n> > grantmin question does not exist. The revoke was a no-op and the\n> > final state is as the user intended.\n>\n>\n> Sorry, can you explain me what's the grantmin question is ?\n>\n>\nThat should have read: the granted permission does not exist\n\nDavid J.\n\nOn Sat, Mar 16, 2024 at 1:00 PM Étienne BERSAC <[email protected]> wrote:\n> The choice of warning is made because after the command ends the\n> grantmin question does not exist.  The revoke was a no-op and the\n> final state is as the user intended. \n\n\nSorry, can you explain me what's the grantmin question is ?That should have read:  the granted permission does not existDavid J.", "msg_date": "Sat, 16 Mar 2024 13:17:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "Hi David,\n\n> That should have read:  the granted permission does not exist\n\nThanks, its clear.\n\nHowever, I'm hitting the warning when removing a role from a group. But\nthe membership remains after the warning. In this case, I expect an\nerror.\n\nI'll try to patch the behaviour to ensure an error if the REVOKE is\nineffective.\n\nRegards,\nÉtienne\n\n\n", "msg_date": "Sat, 16 Mar 2024 21:30:23 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]> writes:\n> I'll try to patch the behaviour to ensure an error if the REVOKE is\n> ineffective.\n\nI think we're unlikely to accept such a patch. By my reading, the way\nwe do it now is required by the SQL standard. The standard doesn't\nseem to say that in so many words; what it does say (from SQL99) is\n\n b) If the <revoke statement> is a <revoke role statement>, then\n for every <grantee> specified, a set of role authorization\n descriptors is identified. A role authorization descriptor is\n said to be identified if it defines the grant of any of the\n specified <role revoked>s to <grantee> with grantor A.\n\nIt does not say that that set must be nonempty. Admittedly it's not\nvery clear from this one point. However, if you look around in the\nstandard it seems clear that they expect no-op revokes to be no-ops\nnot errors. As an example, every type of DROP command includes an\nexplicit step to drop privileges attached to the object, with wording\nlike (this is for ALTER TABLE DROP COLUMN):\n\n 3) Let A be the <authorization identifier> that owns T. The\n following <revoke statement> is effectively executed with\n a current authorization identifier of \"_SYSTEM\" and without\n further Access Rule checking:\n\n REVOKE INSERT(CN), UPDATE(CN), SELECT(CN), REFERENCES(CN) ON\n TABLE TN FROM A CASCADE\n\nThere is no special rule for the case that all (or any...) 
of those\nprivileges were previously revoked; but if that case is supposed to be\nan error, there would have to be an exception here, or you couldn't\ndrop such columns.\n\nEven taking the position that this is an unspecified point that we\ncould implement how we like, I don't think there's a sufficient\nargument for changing behavior that's stood for a couple of decades.\n(The spelling of the message has changed over the years, but giving a\nwarning not an error appears to go all the way back to 99b8f8451\nwhere we implemented user groups.) It is certain that there are\napplications out there that rely on this behavior and would break.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Mar 2024 20:17:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "Hi Tom,\n\nThanks for your anwser.\n\n\n> It does not say that that set must be nonempty.  Admittedly it's not\n> very clear from this one point.  However, if you look around in the\n> standard it seems clear that they expect no-op revokes to be no-ops\n> not errors.\n\nPostgres actually identifies memberhips to revoke. The list is not\nempty. Event if revoker has USAGE privilege on parent role, the\nmembership is protected by a new check on grantor of membership. This\nis a new semantic for me. I guess this may obfuscate other people too.\n\nI would compare denied revoking of role with revoking privilege on\ndenied table:\n\n\t> REVOKE SELECT ON TABLE toto FROM PUBLIC ;\n\tERROR: permission denied for table toto\n\n> Even taking the position that this is an unspecified point that we\n> could implement how we like, I don't think there's a sufficient\n> argument for changing behavior that's stood for a couple of decades.\n\nIn Postgres 15, revoking a membership granted by another role is\naccepted. I suspect this is related to the new CREATEROLE behaviour\nimplemented by Robert Haas (which is great job anyway). Attached is a\nscript to reproduce.\n\nHere is the output on Postgres 15:\n\n SET\n DROP ROLE\n DROP ROLE\n DROP ROLE\n CREATE ROLE\n CREATE ROLE\n CREATE ROLE\n GRANT ROLE\n SET\n REVOKE ROLE\n DO\n \nHere is the output of the same script on Postgres 16:\n \n \n SET\n DROP ROLE\n DROP ROLE\n DROP ROLE\n CREATE ROLE\n CREATE ROLE\n CREATE ROLE\n GRANT ROLE\n SET\n psql:ldap2pg/my-revoke.sql:12: WARNING: role \"r\" has not been granted membership in role \"g\" by role \"m\"\n REVOKE ROLE\n psql:ldap2pg/my-revoke.sql:18: ERROR: REVOKE failed\n CONTEXTE : PL/pgSQL function inline_code_block line 4 at RAISE\n\nCan you confirm this ?\n\n\nRegards,\nÉtienne", "msg_date": "Mon, 18 Mar 2024 14:37:13 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "On Sat, Mar 16, 2024 at 8:17 PM Tom Lane <[email protected]> wrote:\n> Even taking the position that this is an unspecified point that we\n> could implement how we like, I don't think there's a sufficient\n> argument for changing behavior that's stood for a couple of decades.\n> (The spelling of the message has changed over the years, but giving a\n> warning not an error appears to go all the way back to 99b8f8451\n> where we implemented user groups.) 
It is certain that there are\n> applications out there that rely on this behavior and would break.\n\nI got curious about the behavior of other database systems.\n\nhttps://dev.mysql.com/doc/refman/8.0/en/revoke.html documents an \"IF\nEXISTS\" option whose documentation reads, in relevant part,\n\"Otherwise, REVOKE executes normally; if the user does not exist, the\nstatement raises an error.\"\n\nhttps://community.snowflake.com/s/article/Access-Control-Error-Message-When-Revoking-a-Non-existent-Role-Grant-From-a-Role-or-User\nis kind of interesting. It says that such commands used to fail with\nan error but that's been changed; now they don't.\n\nI couldn't find a clear reference for Oracle or DB-2 or SQL server,\nbut it doesn't look like any of them have an IF EXISTS option, and the\nexamples they show don't mention this being an issue AFAICS, so I'm\nguessing that all of them accept commands of this type without error.\n\nOn the whole, it seems like we might be taking the majority position\nhere, but I can't help but feel some sympathy with people who don't\nlike it. Maybe we need a REVOKE OR ELSE command. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 16:17:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Mar 16, 2024 at 8:17 PM Tom Lane <[email protected]> wrote:\n>> Even taking the position that this is an unspecified point that we\n>> could implement how we like, I don't think there's a sufficient\n>> argument for changing behavior that's stood for a couple of decades.\n\n> I got curious about the behavior of other database systems.\n\nYeah, I was mildly curious about that too; it'd be unlikely to sway\nmy bottom-line opinion, but it would be interesting to check.\n\n> https://dev.mysql.com/doc/refman/8.0/en/revoke.html documents an \"IF\n> EXISTS\" option whose documentation reads, in relevant part,\n> \"Otherwise, REVOKE executes normally; if the user does not exist, the\n> statement raises an error.\"\n\nHmm, I don't think that's quite what's at stake here. We do throw\nerror if either named role doesn't exist:\n\nregression=# revoke foo from joe;\nERROR: role \"joe\" does not exist\nregression=# create user joe;\nCREATE ROLE\nregression=# revoke foo from joe;\nERROR: role \"foo\" does not exist\nregression=# create role foo;\nCREATE ROLE\nregression=# revoke foo from joe;\nWARNING: role \"joe\" has not been granted membership in role \"foo\" by role \"postgres\"\nREVOKE ROLE\n\nWhat the OP is on about is that that last case issues WARNING not\nERROR.\n\nReading further down in the mysql page you cite, it looks like their\nIF EXISTS conflates \"role doesn't exist\" with \"role wasn't granted\",\nand suppresses errors for both those cases. 
I'm not in favor of\nchanging things here, but if we did, I sure wouldn't want to adopt\nthose exact semantics.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 17:30:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "On 18.03.24 22:30, Tom Lane wrote:\n> regression=# revoke foo from joe;\n> WARNING: role \"joe\" has not been granted membership in role \"foo\" by role \"postgres\"\n> REVOKE ROLE\n> \n> What the OP is on about is that that last case issues WARNING not\n> ERROR.\n\nAnother point is that granting a role that has already been granted is \nnot an error. So it makes some sense that revoking a role that has not \nbeen granted is also not an error. Both of these operations are \nidempotent in the same way.\n\n\n\n", "msg_date": "Wed, 20 Mar 2024 10:25:11 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "\nHi,\n\n> https://dev.mysql.com/doc/refman/8.0/en/revoke.html documents an \"IF\n> EXISTS\" option whose documentation reads, in relevant part,\n> \"Otherwise, REVOKE executes normally; if the user does not exist, the\n> statement raises an error.\"\n> \n> https://community.snowflake.com/s/article/Access-Control-Error-Message-When-Revoking-a-Non-existent-Role-Grant-From-a-Role-or-User\n> is kind of interesting. It says that such commands used to fail with\n> an error but that's been changed; now they don't.\n\nIt's not about inexistant user. It's not about inexistant membership.\nIt's about membership you are not allowed to revoke.\n\nldap2pg goals is to revoke spurious privileges. If ldap2pg find a\nspurious membership, it revokes it. Postgres 16 does not revoke some\nmembership revoked before, and does not fail.\n\nThe usual case is: a superuser grants writers role to alice. In\ndirectory, alice is degraded to readers. ldap2pg is not superuser but\nhas CREATEROLE. ldap2pg applies the changes. In Postgres 15, revocation\nis completed. In Postgres 16, alice still has writers privileges and\nldap2pg is not aware of this without clunky checks.\n\nDo you see a security concern here ?\n\nRegards,\nÉtienne\n\n\n\n", "msg_date": "Wed, 20 Mar 2024 18:25:59 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "On Wed, Mar 20, 2024 at 1:26 PM Étienne BERSAC\n<[email protected]> wrote:\n> The usual case is: a superuser grants writers role to alice. In\n> directory, alice is degraded to readers. ldap2pg is not superuser but\n> has CREATEROLE. ldap2pg applies the changes. In Postgres 15, revocation\n> is completed. In Postgres 16, alice still has writers privileges and\n> ldap2pg is not aware of this without clunky checks.\n\nIn previous versions of PostgreSQL, the grantor column of\npg_auth_members was not meaningful. Commit\nce6b672e4455820a0348214be0da1a024c3f619f changed that, for reasons\nexplained in the commit message. So now, to revoke a particular grant,\nldap2pg really ought to issue REVOKE x FROM y GRANTED BY z. Notice\nthat pg_auth_members now has a unique constraint on (roleid, member,\ngrantor) instead of (roleid, member) as it did previously, so if you\nspecify all of those things, you're identifying a unique grant; if you\nspecify only two of them, you aren't. 
I think in a case like the one\nyou described here, the REVOKE would fail with an error, because\nldap2pg, as a non-superuser, would not have permission to revoke a\nsuperuser's grant. That's intentional.\n\nI don't really understand why you describe the checks as \"clunky\". The\nstructure of pg_auth_members is straightforward. If you issue a\ncommand to try to make a row go away from that table, it shouldn't be\nthat difficult to figure out whether or not it actually vanished. I\n*think* that if you use GRANTED BY and don't have any other mistakes\nin your SQL construction, you'll either succeed in getting rid of the\nrow or you'll get an error; the non-error case is when REVOKE would\nhave targeted a row that's not actually present. But even if I'm wrong\nabout that, running the REVOKE and then checking whether the row you\nwanted to eliminate is gone seems straightforward. The main thing, to\nme, seems like you want to make sure that you actually are targeting a\nspecific row.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:51:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "Hi,\n\n> ldap2pg really ought to issue REVOKE x FROM y GRANTED BY z. \n\nThanks for this. I missed this notation and it is exactly what I need.\n\nYou could consider this subject as closed. Thanks for your time and\nexplanation.\n\nRegards,\nÉtienne\n\n\n", "msg_date": "Tue, 26 Mar 2024 10:22:34 +0100", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REVOKE FROM warning on grantor" }, { "msg_contents": "On Tue, Mar 26, 2024 at 5:22 AM Étienne BERSAC\n<[email protected]> wrote:\n> > ldap2pg really ought to issue REVOKE x FROM y GRANTED BY z.\n>\n> Thanks for this. I missed this notation and it is exactly what I need.\n>\n> You could consider this subject as closed. Thanks for your time and\n> explanation.\n\nNo problem. Glad it helped!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:17:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REVOKE FROM warning on grantor" } ]
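A minimal sketch of the REVOKE ... GRANTED BY pattern discussed in the thread above. The role names (writers, alice, admin) are hypothetical, chosen only for illustration; whether the REVOKE succeeds still depends on whether the revoking role is allowed to remove that particular grantor's grant, as explained above. The point is that naming all of (roleid, member, grantor) targets exactly one row in pg_auth_members, which can then be checked directly.

-- Hypothetical prior grant: role "admin" ran GRANT writers TO alice.
-- Target that specific grant rather than whichever (roleid, member) pair matches:
REVOKE writers FROM alice GRANTED BY admin;

-- Confirm the targeted (roleid, member, grantor) row is gone:
SELECT roleid::regrole, member::regrole, grantor::regrole
  FROM pg_auth_members
 WHERE roleid = 'writers'::regrole
   AND member = 'alice'::regrole;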
[ { "msg_contents": "I got a weird test failure while testing my forking refactor patches on \nCirrus CI \n(https://cirrus-ci.com/task/5880724448870400?logs=test_running#L121):\n\n> [16:52:39.753] Summary of Failures:\n> [16:52:39.753] \n> [16:52:39.753] 66/73 postgresql:intarray-running / intarray-running/regress ERROR 6.27s exit status 1\n> [16:52:39.753] \n> [16:52:39.753] Ok: 72 \n> [16:52:39.753] Expected Fail: 0 \n> [16:52:39.753] Fail: 1 \n> [16:52:39.753] Unexpected Pass: 0 \n> [16:52:39.753] Skipped: 0 \n> [16:52:39.753] Timeout: 0 \n> [16:52:39.753] \n> [16:52:39.753] Full log written to /tmp/cirrus-ci-build/build/meson-logs/testlog-running.txt\n\nAnd:\n\n> diff -U3 /tmp/cirrus-ci-build/contrib/intarray/expected/_int.out /tmp/cirrus-ci-build/build/testrun/intarray-running/regress/results/_int.out\n> --- /tmp/cirrus-ci-build/contrib/intarray/expected/_int.out\t2024-03-14 16:48:48.690367000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/intarray-running/regress/results/_int.out\t2024-03-14 16:52:05.759444000 +0000\n> @@ -804,6 +804,7 @@\n> \n> DROP INDEX text_idx;\n> CREATE INDEX text_idx on test__int using gin ( a gin__int_ops );\n> +ERROR: error triggered for injection point gin-leave-leaf-split-incomplete\n> SELECT count(*) from test__int WHERE a && '{23,50}';\n> count \n> -------\n> @@ -877,6 +878,7 @@\n> (1 row)\n> \n> DROP INDEX text_idx;\n> +ERROR: index \"text_idx\" does not exist\n> -- Repeat the same queries with an extended data set. The data set is the\n> -- same that we used before, except that each element in the array is\n> -- repeated three times, offset by 1000 and 2000. For example, {1, 5}\n\nSomehow the 'gin-leave-leaf-split-incomplete' injection point was active \nin the 'intarray' test. That makes no sense. That injection point is \nonly used by the test in src/test/modules/gin/. Perhaps that ran at the \nsame time as the intarray test? But they run in separate instances, with \ndifferent data directories. And the 'gin' test passed.\n\nI'm completely stumped. Anyone have a theory?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 14 Mar 2024 23:23:46 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Weird test mixup" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Somehow the 'gin-leave-leaf-split-incomplete' injection point was active \n> in the 'intarray' test. That makes no sense. That injection point is \n> only used by the test in src/test/modules/gin/. Perhaps that ran at the \n> same time as the intarray test? But they run in separate instances, with \n> different data directories.\n\nDo they? It'd be fairly easy to explain this if these things were\nbeing run in \"installcheck\" style. I'm not sure about CI, but from\nmemory, the buildfarm does use installcheck for some things.\n\nI wonder if it'd be wise to adjust the injection point stuff so that\nit's active in only the specific database the injection point was\nactivated in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Mar 2024 18:19:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Fri, Mar 15, 2024 at 11:19 AM Tom Lane <[email protected]> wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> > Somehow the 'gin-leave-leaf-split-incomplete' injection point was active\n> > in the 'intarray' test. That makes no sense. That injection point is\n> > only used by the test in src/test/modules/gin/. 
Perhaps that ran at the\n> > same time as the intarray test? But they run in separate instances, with\n> > different data directories.\n>\n> Do they? It'd be fairly easy to explain this if these things were\n> being run in \"installcheck\" style. I'm not sure about CI, but from\n> memory, the buildfarm does use installcheck for some things.\n\nRight, as mentioned here:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJYhcG_o2nwSK6r01eOZJwNWUJUbX%3D%3DAVnW84f-%2B8yamQ%40mail.gmail.com\n\nThat's the \"running\" test, which is like the old installcheck.\n\n\n", "msg_date": "Fri, 15 Mar 2024 11:27:20 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "I wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> Somehow the 'gin-leave-leaf-split-incomplete' injection point was active \n>> in the 'intarray' test. That makes no sense. That injection point is \n>> only used by the test in src/test/modules/gin/. Perhaps that ran at the \n>> same time as the intarray test? But they run in separate instances, with \n>> different data directories.\n\n> Do they? It'd be fairly easy to explain this if these things were\n> being run in \"installcheck\" style. I'm not sure about CI, but from\n> memory, the buildfarm does use installcheck for some things.\n\nHmm, Munro's comment yesterday[1] says that current CI does use\ninstallcheck mode in some cases.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA+hUKGJYhcG_o2nwSK6r01eOZJwNWUJUbX==AVnW84f-+8yamQ@mail.gmail.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 18:30:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Fri, Mar 15, 2024 at 11:19 AM Tom Lane <[email protected]> wrote:\n>> Do they? It'd be fairly easy to explain this if these things were\n>> being run in \"installcheck\" style. I'm not sure about CI, but from\n>> memory, the buildfarm does use installcheck for some things.\n\n> Right, as mentioned here:\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGJYhcG_o2nwSK6r01eOZJwNWUJUbX%3D%3DAVnW84f-%2B8yamQ%40mail.gmail.com\n> That's the \"running\" test, which is like the old installcheck.\n\nHmm. Seems like maybe we need to institute a rule that anything\nusing injection points has to be marked NO_INSTALLCHECK. That's\nkind of a big hammer though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Mar 2024 18:44:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, Mar 14, 2024 at 06:19:38PM -0400, Tom Lane wrote:\n> Do they? It'd be fairly easy to explain this if these things were\n> being run in \"installcheck\" style. I'm not sure about CI, but from\n> memory, the buildfarm does use installcheck for some things.\n> \n> I wonder if it'd be wise to adjust the injection point stuff so that\n> it's active in only the specific database the injection point was\n> activated in.\n\nIt can be made optional by extending InjectionPointAttach() to\nspecify a database OID or a database name. Note that\n041_checkpoint_at_promote.pl wants an injection point to run in the\ncheckpointer, where we don't have a database requirement.\n\nOr we could just disable runningcheck because of the concurrency\nrequirement in this test. 
The test would still be able to run, just\nless times.\n--\nMichael", "msg_date": "Fri, 15 Mar 2024 07:53:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Fri, Mar 15, 2024 at 07:53:57AM +0900, Michael Paquier wrote:\n> It can be made optional by extending InjectionPointAttach() to\n> specify a database OID or a database name. Note that\n> 041_checkpoint_at_promote.pl wants an injection point to run in the\n> checkpointer, where we don't have a database requirement.\n\nSlight correction here. It is also possible to not touch\nInjectionPointAttach() at all: just tweak the callbacks to do that as\nlong as the database that should be used is tracked in shmem with its\npoint name, say with new fields in InjectionPointSharedState. That\nkeeps the backend APIs in a cleaner state.\n--\nMichael", "msg_date": "Fri, 15 Mar 2024 08:08:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Mar 14, 2024 at 06:19:38PM -0400, Tom Lane wrote:\n>> I wonder if it'd be wise to adjust the injection point stuff so that\n>> it's active in only the specific database the injection point was\n>> activated in.\n\n> It can be made optional by extending InjectionPointAttach() to\n> specify a database OID or a database name. Note that\n> 041_checkpoint_at_promote.pl wants an injection point to run in the\n> checkpointer, where we don't have a database requirement.\n\n> Or we could just disable runningcheck because of the concurrency\n> requirement in this test. The test would still be able to run, just\n> less times.\n\nNo, actually we *must* mark all these tests NO_INSTALLCHECK if we\nstick with the current definition of injection points. The point\nof installcheck mode is that the tests are supposed to be safe to\nrun in a live installation. Side-effects occurring in other\ndatabases are completely not OK.\n\nI can see that some tests would want to be able to inject code\ncluster-wide, but I bet that's going to be a small minority.\nI suggest that we invent a notion of \"global\" vs \"local\"\ninjection points, where a \"local\" one only fires in the DB it\nwas defined in. Then only tests that require a global injection\npoint need to be NO_INSTALLCHECK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Mar 2024 19:13:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, Mar 14, 2024 at 07:13:53PM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Or we could just disable runningcheck because of the concurrency\n>> requirement in this test. The test would still be able to run, just\n>> less times.\n> \n> No, actually we *must* mark all these tests NO_INSTALLCHECK if we\n> stick with the current definition of injection points. The point\n> of installcheck mode is that the tests are supposed to be safe to\n> run in a live installation. Side-effects occurring in other\n> databases are completely not OK.\n\nI really don't want to plug any runtime conditions into the backend\nAPIs, because there can be so much more that can be done there than\nonly restricting a callback to a database. I can imagine process type\nrestrictions, process PID restrictions, etc. So this knowledge should\nstick into the test module itself, and be expanded there. 
That's\neasier ABI-wise, as well.\n\n> I can see that some tests would want to be able to inject code\n> cluster-wide, but I bet that's going to be a small minority.\n> I suggest that we invent a notion of \"global\" vs \"local\"\n> injection points, where a \"local\" one only fires in the DB it\n> was defined in. Then only tests that require a global injection\n> point need to be NO_INSTALLCHECK.\n\nAttached is a POC of what could be done. I have extended the module\ninjection_points so as it is possible to register what I'm calling a\n\"condition\" in the module that can be defined with a new SQL function.\n\nThe condition is stored in shared memory with the point name, then at\nruntime the conditions are cross-checked in the callbacks. With the\ninterface of this patch, the condition should be registered *before* a\npoint is attached, but this stuff could also be written so as\ninjection_points_attach() takes an optional argument with a database\nname. Or this could use a different, new SQL function, say a\ninjection_points_attach_local() that registers a condition with\nMyDatabaseId on top of attaching the point, making the whole happening\nwhile holding once the spinlock of the shmem state for the module.\n\nBy the way, modules/gin/ was missing missing a detach, so the test was\nnot repeatable with successive installchecks. Adding a pg_sleep of a\nfew seconds after 'gin-leave-leaf-split-incomplete' is registered\nenlarges the window, and the patch avoids failures when running\ninstallcheck in parallel for modules/gin/ and something else using\ngin, like contrib/btree_gin/:\nwhile make USE_MODULE_DB=1 installcheck; do :; done\n\n0001 is the condition facility for the module, 0002 is a fix for the\nGIN test. Thoughts are welcome.\n--\nMichael", "msg_date": "Fri, 15 Mar 2024 16:39:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On 15/03/2024 09:39, Michael Paquier wrote:\n> On Thu, Mar 14, 2024 at 07:13:53PM -0400, Tom Lane wrote:\n>> I can see that some tests would want to be able to inject code\n>> cluster-wide, but I bet that's going to be a small minority.\n>> I suggest that we invent a notion of \"global\" vs \"local\"\n>> injection points, where a \"local\" one only fires in the DB it\n>> was defined in. Then only tests that require a global injection\n>> point need to be NO_INSTALLCHECK.\n> \n> Attached is a POC of what could be done. I have extended the module\n> injection_points so as it is possible to register what I'm calling a\n> \"condition\" in the module that can be defined with a new SQL function.\n>\n> The condition is stored in shared memory with the point name, then at\n> runtime the conditions are cross-checked in the callbacks. With the\n> interface of this patch, the condition should be registered *before* a\n> point is attached, but this stuff could also be written so as\n> injection_points_attach() takes an optional argument with a database\n> name. 
Or this could use a different, new SQL function, say a\n> injection_points_attach_local() that registers a condition with\n> MyDatabaseId on top of attaching the point, making the whole happening\n> while holding once the spinlock of the shmem state for the module.\n\nFor the gin test, a single \"SELECT injection_points_attach_local()\" at \nthe top of the test file would be most convenient.\n\nIf I have to do \"SELECT \ninjection_points_condition('gin-finish-incomplete-split', :'datname');\" \nfor every injection point in the test, I will surely forget it sometimes.\n\nIn the 'gin' test, they could actually be scoped to the same backend.\n\n\nWrt. the spinlock and shared memory handling, I think this would be \nsimpler if you could pass some payload in the InjectionPointAttach() \ncall, which would be passed back to the callback function:\n\n void\n InjectionPointAttach(const char *name,\n \t\t\t\t\t const char *library,\n-\t\t\t\t\t const char *function)\n+\t\t\t\t\t const char *function,\n+\t\t\t\t\t uint64 payload)\n\nIn this case, the payload would be the \"slot index\" in shared memory.\n\nOr perhaps always allocate, say, 1024 bytes of working area for every \nattached injection point that the test module can use any way it wants. \nLike for storing extra conditions, or for the wakeup counter stuff in \ninjection_wait(). A fixed size working area is a little crude, but would \nbe very handy in practice.\n\n> By the way, modules/gin/ was missing missing a detach, so the test was\n> not repeatable with successive installchecks.\n\nOops.\n\nIt would be nice to automatically detach all the injection points on \nprocess exit. You wouldn't always want that, but I think most tests hold \na session open throughout the test, and for those it would be handy.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 15 Mar 2024 11:23:31 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On 15/03/2024 01:13, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Or we could just disable runningcheck because of the concurrency\n>> requirement in this test. The test would still be able to run, just\n>> less times.\n> \n> No, actually we *must* mark all these tests NO_INSTALLCHECK if we\n> stick with the current definition of injection points. The point\n> of installcheck mode is that the tests are supposed to be safe to\n> run in a live installation. Side-effects occurring in other\n> databases are completely not OK.\n\nI committed a patch to do that, to put out the fire.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 13:09:30 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On 15/03/2024 13:09, Heikki Linnakangas wrote:\n> On 15/03/2024 01:13, Tom Lane wrote:\n>> Michael Paquier <[email protected]> writes:\n>>> Or we could just disable runningcheck because of the concurrency\n>>> requirement in this test. The test would still be able to run, just\n>>> less times.\n>>\n>> No, actually we *must* mark all these tests NO_INSTALLCHECK if we\n>> stick with the current definition of injection points. The point\n>> of installcheck mode is that the tests are supposed to be safe to\n>> run in a live installation. 
Side-effects occurring in other\n>> databases are completely not OK.\n> \n> I committed a patch to do that, to put out the fire.\n\nThat's turning the buildfarm quite red. Many, but not all animals are \nfailing like this:\n\n> --- /home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/src/test/modules/injection_points/expected/injection_points.out\t2024-03-15 12:41:16.363286975 +0100\n> +++ /home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/src/test/modules/injection_points/results/injection_points.out\t2024-03-15 12:53:11.528159615 +0100\n> @@ -1,118 +1,111 @@\n> CREATE EXTENSION injection_points;\n> +ERROR: extension \"injection_points\" is not available\n> +DETAIL: Could not open extension control file \"/home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/tmp_install/home/buildfarm/hippopotamus/buildroot/HEAD/inst/share/postgresql/extension/injection_points.control\": No such file or directory.\n> +HINT: The extension must first be installed on the system where PostgreSQL is running.\n> ... \n\nLooks like adding NO_INSTALLCHECK somehow affected how the modules are \ninstalled in tmp_install. I'll investigate..\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 15 Mar 2024 14:10:14 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On 15/03/2024 14:10, Heikki Linnakangas wrote:\n> On 15/03/2024 13:09, Heikki Linnakangas wrote:\n>> I committed a patch to do that, to put out the fire.\n> \n> That's turning the buildfarm quite red. Many, but not all animals are\n> failing like this:\n> \n>> --- /home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/src/test/modules/injection_points/expected/injection_points.out\t2024-03-15 12:41:16.363286975 +0100\n>> +++ /home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/src/test/modules/injection_points/results/injection_points.out\t2024-03-15 12:53:11.528159615 +0100\n>> @@ -1,118 +1,111 @@\n>> CREATE EXTENSION injection_points;\n>> +ERROR: extension \"injection_points\" is not available\n>> +DETAIL: Could not open extension control file \"/home/buildfarm/hippopotamus/buildroot/HEAD/pgsql.build/tmp_install/home/buildfarm/hippopotamus/buildroot/HEAD/inst/share/postgresql/extension/injection_points.control\": No such file or directory.\n>> +HINT: The extension must first be installed on the system where PostgreSQL is running.\n>> ...\n> \n> Looks like adding NO_INSTALLCHECK somehow affected how the modules are\n> installed in tmp_install. I'll investigate..\n\nI think this is a bug in the buildfarm client. In the make_misc_check \nstep, it does this (reduced to just the interesting parts):\n\n> # run the modules that can't be run with installcheck\n> sub make_misc_check\n> {\n> \t...\n> \tmy @dirs = glob(\"$pgsql/src/test/modules/* $pgsql/contrib/*\");\n> \tforeach my $dir (@dirs)\n> \t{\n> \t\tnext unless -e \"$dir/Makefile\";\n> \t\tmy $makefile = file_contents(\"$dir/Makefile\");\n> \t\tnext unless $makefile =~ /^NO_INSTALLCHECK/m;\n> \t\tmy $test = basename($dir);\n> \n> \t\t# skip redundant TAP tests which are called elsewhere\n> \t\tmy @out = run_log(\"cd $dir && $make $instflags TAP_TESTS= check\");\n> \t\t...\n> \t}\n\nSo it scans src/test/modules, and runs \"make check\" for all \nsubdirectories that have NO_INSTALLCHECK in the makefile. 
But the \ninjection fault tests are also conditional on the \nenable_injection_points in the parent Makefile:\n\n> ifeq ($(enable_injection_points),yes)\n> SUBDIRS += injection_points gin\n> else\n> ALWAYS_SUBDIRS += injection_points gin\n> endif\n\nThe buildfarm client doesn't pay any attention to that, and runs the \ntest anyway.\n\nI committed an ugly hack to the subdirectory Makefiles, to turn \"make \ncheck\" into a no-op if injection points are disabled. Normally when you \nrun \"make check\" at the parent level, it doesn't even recurse to the \ndirectories, but this works around the buildfarm script. I hope...\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 15:23:18 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 15/03/2024 13:09, Heikki Linnakangas wrote:\n>> I committed a patch to do that, to put out the fire.\n\n> That's turning the buildfarm quite red. Many, but not all animals are \n> failing like this:\n\nIt may be even worse than it appears from the buildfarm status page.\nMy animals were stuck in infinite loops that required a manual \"kill\"\nto get out of, and it's reasonable to assume there are others that\nwill require owner intervention. Why would this test have done that,\nif the module failed to load?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:00:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On 15/03/2024 16:00, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> On 15/03/2024 13:09, Heikki Linnakangas wrote:\n>>> I committed a patch to do that, to put out the fire.\n> \n>> That's turning the buildfarm quite red. Many, but not all animals are\n>> failing like this:\n> \n> It may be even worse than it appears from the buildfarm status page.\n> My animals were stuck in infinite loops that required a manual \"kill\"\n> to get out of, and it's reasonable to assume there are others that\n> will require owner intervention. Why would this test have done that,\n> if the module failed to load?\n\nThe gin_incomplete_split test inserts rows until it hits the injection \npoint, at page split. There is a backstop, it should give up after 10000 \niterations, but that was broken. Fixed that, thanks for the report!\n\nHmm, don't we have any timeout that would kill tests if they get stuck?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 15 Mar 2024 17:56:14 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 15/03/2024 16:00, Tom Lane wrote:\n>> It may be even worse than it appears from the buildfarm status page.\n>> My animals were stuck in infinite loops that required a manual \"kill\"\n>> to get out of, and it's reasonable to assume there are others that\n>> will require owner intervention. Why would this test have done that,\n>> if the module failed to load?\n\n> The gin_incomplete_split test inserts rows until it hits the injection \n> point, at page split. There is a backstop, it should give up after 10000 \n> iterations, but that was broken. 
Fixed that, thanks for the report!\n\nDuh ...\n\n> Hmm, don't we have any timeout that would kill tests if they get stuck?\n\nAFAIK, the only constraint on a buildfarm animal's runtime is the\nwait_timeout setting, which is infinite by default, and was on my\nmachines. (Not anymore ;-).) We do have timeouts in (most?) TAP\ntests, but this wasn't a TAP test.\n\nIf this is a continuous-insertion loop, presumably it will run the\nmachine out of disk space eventually, which could be unpleasant\nif there are other services running besides the buildfarm.\nI think I'll go notify the buildfarm owners list to check for\ntrouble.\n\nAre there limits on the runtime of CI or cfbot jobs? Maybe\nsomebody should go check those systems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Mar 2024 14:27:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Sat, Mar 16, 2024 at 7:27 AM Tom Lane <[email protected]> wrote:\n> Are there limits on the runtime of CI or cfbot jobs? Maybe\n> somebody should go check those systems.\n\nThose get killed at a higher level after 60 minutes (configurable but\nwe didn't change it AFAIK):\n\nhttps://cirrus-ci.org/faq/#instance-timed-out\n\nIt's a fresh virtual machine for each run, and after that it's gone\n(well the ccache directory survives but only by being\nuploaded/downloaded in explicit steps to transmit it between runs).\n\n\n", "msg_date": "Sat, 16 Mar 2024 08:11:08 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Fri, Mar 15, 2024 at 11:23:31AM +0200, Heikki Linnakangas wrote:\n> For the gin test, a single \"SELECT injection_points_attach_local()\" at the\n> top of the test file would be most convenient.\n> \n> If I have to do \"SELECT\n> injection_points_condition('gin-finish-incomplete-split', :'datname');\" for\n> every injection point in the test, I will surely forget it sometimes.\n\nSo will I, most likely.. The odds never play in favor of hackers. I\nhave a few more tests in mind that can be linked to a specific\nbackend with SQL queries, but I've not been able to get back to it\nyet.\n\n> Wrt. the spinlock and shared memory handling, I think this would be simpler\n> if you could pass some payload in the InjectionPointAttach() call, which\n> would be passed back to the callback function:\n> \n> In this case, the payload would be the \"slot index\" in shared memory.\n>\n> Or perhaps always allocate, say, 1024 bytes of working area for every\n> attached injection point that the test module can use any way it wants. Like\n> for storing extra conditions, or for the wakeup counter stuff in\n> injection_wait(). A fixed size working area is a little crude, but would be\n> very handy in practice.\n\nPerhaps. I am not sure that we need more than the current signature,\nall that can just be handled in some module-specific shmem area. The\nkey is to be able to link a point name to some state related to it.\nUsing a hash table would be more efficient, but performance wise a\narray is not going to matter as there will most likely never be more\nthan 8 points. 4 is already a lot, just doubling that on safety\nground.\n\n> It would be nice to automatically detach all the injection points on process\n> exit. 
You wouldn't always want that, but I think most tests hold a session\n> open throughout the test, and for those it would be handy.\n\nLinking all the points to a PID with a injection_points_attach_local()\nthat switches a static flag while registering a before_shmem_exit() to\ndo an automated cleanup sounds like the simplest approach to me based\non what I'm reading on this thread.\n\n(Just saw the buildfarm storm, wow.)\n--\nMichael", "msg_date": "Sat, 16 Mar 2024 08:40:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Sat, Mar 16, 2024 at 08:40:21AM +0900, Michael Paquier wrote:\n> Linking all the points to a PID with a injection_points_attach_local()\n> that switches a static flag while registering a before_shmem_exit() to\n> do an automated cleanup sounds like the simplest approach to me based\n> on what I'm reading on this thread.\n\nPlease find a patch to do exactly that, without touching the backend\nAPIs. 0001 adds a new function call injection_points_local() that can\nbe added on top of a SQL test to make it concurrent-safe. 0002 is the\nfix for the GIN tests.\n\nI am going to add an open item to not forget about all that.\n\nComments are welcome.\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 10:04:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 18 Mar 2024, at 06:04, Michael Paquier <[email protected]> wrote:\n> \n> new function call injection_points_local() that can\n> be added on top of a SQL test to make it concurrent-safe.\n\nMaybe consider function injection_points_attach_local(‘point name’) instead of static switch?\nOr even injection_points_attach_global(‘point name’), while function injection_points_attach(‘point name’) will be global? This would favour writing concurrent test by default…\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 18 Mar 2024 10:50:25 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:50:25AM +0500, Andrey M. Borodin wrote:\n> Maybe consider function injection_points_attach_local(‘point name’)\n> instead of static switch?\n> Or even injection_points_attach_global(‘point name’), while function\n> injection_points_attach(‘point name’) will be global? This would\n> favour writing concurrent test by default… \n\nThe point is to limit accidents like the one of this thread. So, for \ncases already in the tree, not giving the point name in the SQL\nfunction would be simple enough.\n\nWhat you are suggesting can be simply done, as well, though I'd rather\nwait for a reason to justify doing so.\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 15:13:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:04:45AM +0900, Michael Paquier wrote:\n> Please find a patch to do exactly that, without touching the backend\n> APIs. 0001 adds a new function call injection_points_local() that can\n> be added on top of a SQL test to make it concurrent-safe. 0002 is the\n> fix for the GIN tests.\n> \n> I am going to add an open item to not forget about all that.\n\nIt's been a couple of weeks since this has been sent, and this did not\nget any reviews. 
I'd still be happy with the simplicity of a single\ninjection_points_local() that can be used to link all the injection\npoints created in a single process to it, discarding them once the\nprocess exists with a shmem exit callback. And I don't really see an\nargument to tweak the backend-side routines, as well. Comments and/or\nobjections?\n--\nMichael", "msg_date": "Fri, 5 Apr 2024 11:19:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 5 Apr 2024, at 07:19, Michael Paquier <[email protected]> wrote:\n> \n> It's been a couple of weeks since this has been sent, and this did not\n> get any reviews. I'd still be happy with the simplicity of a single\n> injection_points_local() that can be used to link all the injection\n> points created in a single process to it, discarding them once the\n> process exists with a shmem exit callback.\n\nOK, makes sense.\nI find name of the function \"injection_points_local()\" strange, because there is no verb in the name. How about \"injection_points_set_local\"?\n\n> And I don't really see an\n> argument to tweak the backend-side routines, as well.\n> Comments and/or\n> objections?\n\nI'm not sure if we should refactor anything here, but InjectionPointSharedState has singular name, plural wait_counts and singular condition.\nInjectionPointSharedState is already an array of injection points, maybe let's add there optional pid instead of inventing separate array of pids?\n\nCan we set global point to 'notice', but same local to 'wait'? Looks like now we can't, but allowing to do so would make code simpler.\n\nBesides this opportunity to simplify stuff, both patches looks good to me.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 6 Apr 2024 10:34:46 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Sat, Apr 06, 2024 at 10:34:46AM +0500, Andrey M. Borodin wrote:\n> I find name of the function \"injection_points_local()\" strange,\n> because there is no verb in the name. How about\n> \"injection_points_set_local\"? \n\nThat makes sense.\n\n> I'm not sure if we should refactor anything here, but\n> InjectionPointSharedState has singular name, plural wait_counts and\n> singular condition.\n> InjectionPointSharedState is already an array of injection points,\n> maybe let's add there optional pid instead of inventing separate\n> array of pids?\n\nPerhaps we could unify these two concepts, indeed, with a \"kind\" added\nto InjectionPointCondition. Now waits/wakeups are a different beast\nthan the conditions that could be assigned to a point to filter if it\nshould be executed. More runtime conditions coming immediately into\nmy mind, that could be added to this structure relate mostly to global\nobjects, like:\n- Specific database name and/or OID.\n- Specific role(s).\nSo that's mostly cross-checking states coming from miscadmin.h for\nnow.\n\n> Can we set global point to 'notice', but same local to 'wait'? Looks\n> like now we can't, but allowing to do so would make code simpler.\n\nYou mean using the name point name with more than more callback? Not\nsure we'd want to be able to do that. Perhaps you're right, though,\nif there is a use case that justifies it.\n\n> Besides this opportunity to simplify stuff, both patches looks good\n> to me.\n\nYeah, this module can be always tweaked more if necessary. 
Saying\nthat, naming the new thing \"condition\" in InjectionPointSharedState\nfelt strange, as you said, because it is an array of multiple\nconditions.\n\nFor now I have applied 997db123c054 to make the GIN tests with\ninjection points repeatable as it was an independent issue, and\nf587338dec87 to add the local function pieces.\n\nAttached is the last piece to switch the GIN test to use local\ninjection points. 85f65d7a26fc should maintain the buildfarm at bay,\nbut I'd rather not take a bet and accidently freeze the buildfarm as\nit would impact folks who aim at getting patches committed just before\nthe finish line. So I am holding on this one for a few more days\nuntil we're past the freeze and the buildfarm is more stable.\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 10:22:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, Apr 08, 2024 at 10:22:40AM +0900, Michael Paquier wrote:\n> For now I have applied 997db123c054 to make the GIN tests with\n> injection points repeatable as it was an independent issue, and\n> f587338dec87 to add the local function pieces.\n\nBharath has reported me offlist that one of the new tests has a race\ncondition when doing the reconnection. When the backend creating the\nlocal points is very slow to exit, the backend created after the\nreconnection may detect that a local point previously created still\nexists, causing a failure. The failure can be reproduced with a sleep\nin the shmem exit callback, like:\n--- a/src/test/modules/injection_points/injection_points.c\n+++ b/src/test/modules/injection_points/injection_points.c\n@@ -163,6 +163,8 @@ injection_points_cleanup(int code, Datum arg)\n \tif (!injection_point_local)\n \t\treturn;\n \n+\tpg_usleep(1000000 * 1L);\n+\n \tSpinLockAcquire(&inj_state->lock);\n \tfor (int i = 0; i < INJ_MAX_CONDITION; i++)\n \t{\n\nAt first I was looking at a loop with a scan of pg_stat_activity, but\nI've noticed that regress.so includes a wait_pid() that we can use to\nmake sure that a given process exits before moving on to the next\nparts of a test, so I propose to just reuse that here. This requires\ntweaks with --dlpath for meson and ./configure, nothing new. The CI\nis clean. Patch attached.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 16:33:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 8 Apr 2024, at 10:33, Michael Paquier <[email protected]> wrote:\n> \n> Thoughts?\n\nAs an alternative we can make local injection points mutually exclusive.\n\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Mon, 8 Apr 2024 10:42:08 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, Apr 08, 2024 at 10:42:08AM +0300, Andrey M. Borodin wrote:\n> As an alternative we can make local injection points mutually exclusive.\n\nSure. 
Now, the point of the test is to make sure that the local\ncleanup happens, so I'd rather keep it as-is and use the same names\nacross reloads.\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 17:55:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 8 Apr 2024, at 11:55, Michael Paquier <[email protected]> wrote:\n> \n> the point of the test is to make sure that the local\n> cleanup happens\nUh, I did not understand this. Because commit message was about stabiilzizing tests, not extending coverage.\nAlso, should we drop function wait_pid() at the end of a test?\nGiven that tweaks with are nothing new, I think patch looks good.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 8 Apr 2024 12:29:43 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, Apr 08, 2024 at 12:29:43PM +0300, Andrey M. Borodin wrote:\n> On 8 Apr 2024, at 11:55, Michael Paquier <[email protected]> wrote:\n>> Uh, I did not understand this. Because commit message was about\n>> stabiilzizing tests, not extending coverage.\n\nOkay, it is about stabilizing an existing test.\n\n> Also, should we drop function wait_pid() at the end of a test?\n\nSure.\n\n> Given that tweaks with are nothing new, I think patch looks good.\n\nApplied that after a second check. And thanks to Bharath for the\npoke.\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 12:41:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Tue, Apr 09, 2024 at 12:41:57PM +0900, Michael Paquier wrote:\n> Applied that after a second check. And thanks to Bharath for the\n> poke.\n\nAnd now that the buildfarm is cooler, I've also applied the final\npatch in the series as of 5105c9079681 to make the GIN module\nconcurrent-safe using injection_points_set_local().\n--\nMichael", "msg_date": "Wed, 10 Apr 2024 13:50:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "While writing an injection point test, I encountered a variant of the race\ncondition that f4083c4 fixed. It had three sessions and this sequence of\nevents:\n\ns1: local-attach to POINT\ns2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\ns3: detach POINT, deleting the InjectionPointCondition record\ns2: wake up and run POINT as though it had been non-local\n\nOn Sat, Mar 16, 2024 at 08:40:21AM +0900, Michael Paquier wrote:\n> On Fri, Mar 15, 2024 at 11:23:31AM +0200, Heikki Linnakangas wrote:\n> > Wrt. the spinlock and shared memory handling, I think this would be simpler\n> > if you could pass some payload in the InjectionPointAttach() call, which\n> > would be passed back to the callback function:\n> > \n> > In this case, the payload would be the \"slot index\" in shared memory.\n> > \n> > Or perhaps always allocate, say, 1024 bytes of working area for every\n> > attached injection point that the test module can use any way it wants. Like\n> > for storing extra conditions, or for the wakeup counter stuff in\n> > injection_wait(). A fixed size working area is a little crude, but would be\n> > very handy in practice.\n\nThat would be one good way to solve it. 
(Storing a slot index has the same\nrace condition, but it fixes the race to store a struct containing the PID.)\n\nThe best alternative I see is to keep an InjectionPointCondition forever after\ncreating it. Give it a \"bool valid\" field that we set on detach. I don't see\na major reason to prefer one of these over the other. One puts a negligible\namount of memory pressure on the main segment, but it simplifies the module\ncode. I lean toward the \"1024 bytes of working area\" idea. Other ideas or\nopinions?\n\n\nSeparately, injection_points_cleanup() breaks the rules by calling\nInjectionPointDetach() while holding a spinlock. The latter has an\nelog(ERROR), and reaching that elog(ERROR) leaves a stuck spinlock. I haven't\ngiven as much thought to solutions for this one.\n\nThanks,\nnm\n\n\n", "msg_date": "Wed, 1 May 2024 16:12:14 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Wed, May 01, 2024 at 04:12:14PM -0700, Noah Misch wrote:\n> While writing an injection point test, I encountered a variant of the race\n> condition that f4083c4 fixed. It had three sessions and this sequence of\n> events:\n> \n> s1: local-attach to POINT\n> s2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\n> s3: detach POINT, deleting the InjectionPointCondition record\n> s2: wake up and run POINT as though it had been non-local\n\nFun. One thing I would ask is why it makes sense to be able to detach\na local point from a different session than the one who defined it as\nlocal. Shouldn't the operation of s3 be restricted rather than\nauthorized as a safety measure, instead?\n\n> On Sat, Mar 16, 2024 at 08:40:21AM +0900, Michael Paquier wrote:\n>> On Fri, Mar 15, 2024 at 11:23:31AM +0200, Heikki Linnakangas wrote:\n>>> Wrt. the spinlock and shared memory handling, I think this would be simpler\n>>> if you could pass some payload in the InjectionPointAttach() call, which\n>>> would be passed back to the callback function:\n>>> \n>>> In this case, the payload would be the \"slot index\" in shared memory.\n>>> \n>>> Or perhaps always allocate, say, 1024 bytes of working area for every\n>>> attached injection point that the test module can use any way it wants. Like\n>>> for storing extra conditions, or for the wakeup counter stuff in\n>>> injection_wait(). A fixed size working area is a little crude, but would be\n>>> very handy in practice.\n> \n> That would be one good way to solve it. (Storing a slot index has the same\n> race condition, but it fixes the race to store a struct containing the PID.)\n> \n> The best alternative I see is to keep an InjectionPointCondition forever after\n> creating it. Give it a \"bool valid\" field that we set on detach. I don't see\n> a major reason to prefer one of these over the other. One puts a negligible\n> amount of memory pressure on the main segment, but it simplifies the module\n> code. I lean toward the \"1024 bytes of working area\" idea. Other ideas or\n> opinions?\n\nIf more state data is needed, the fixed area injection_point.c would\nbe better. Still, I am not sure that this is required here, either.\n\n> Separately, injection_points_cleanup() breaks the rules by calling\n> InjectionPointDetach() while holding a spinlock. The latter has an\n> elog(ERROR), and reaching that elog(ERROR) leaves a stuck spinlock. I haven't\n> given as much thought to solutions for this one.\n\nIndeed. That's a brain fade. 
This one could be fixed by collecting\nthe point names when cleaning up the conditions and detach after\nreleasing the spinlock. This opens a race condition between the\nmoment when the spinlock is released and the detach, where another\nbackend could come in and detach a point before the shmem_exit\ncallback has the time to do its cleanup, even if detach() is\nrestricted for local points. So we could do the callback cleanup in\nthree steps in the shmem exit callback: \n- Collect the names of the points to detach, while holding the\nspinlock.\n- Do the Detach.\n- Take again the spinlock, clean up the conditions.\n\nPlease see the attached.\n--\nMichael", "msg_date": "Thu, 2 May 2024 16:27:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 2 May 2024, at 12:27, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, May 01, 2024 at 04:12:14PM -0700, Noah Misch wrote:\n>> While writing an injection point test, I encountered a variant of the race\n>> condition that f4083c4 fixed. It had three sessions and this sequence of\n>> events:\n>> \n>> s1: local-attach to POINT\n>> s2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\n>> s3: detach POINT, deleting the InjectionPointCondition record\n>> s2: wake up and run POINT as though it had been non-local\n> \n> Fun. One thing I would ask is why it makes sense to be able to detach\n> a local point from a different session than the one who defined it as\n> local. Shouldn't the operation of s3 be restricted rather than\n> authorized as a safety measure, instead?\n\nThat seems to prevent meaningful use case. If we want exactly one session to be waiting just before some specific point, the only way to achieve this is to create local injection point. But the session must be resumable from another session.\nWithout this local waiting injection points are meaningless.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 2 May 2024 13:33:45 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 02, 2024 at 01:33:45PM +0500, Andrey M. Borodin wrote:\n> That seems to prevent meaningful use case. If we want exactly one\n> session to be waiting just before some specific point, the only way\n> to achieve this is to create local injection point. But the session\n> must be resumable from another session.\n> Without this local waiting injection points are meaningless.\n\nI am not quite sure to follow your argument here. It is still\npossible to attach a local injection point with a wait callback that\ncan be awaken by a different backend:\ns1: select injection_points_set_local();\ns1: select injection_points_attach('popo', 'wait');\ns1: select injection_points_run('popo'); -- waits\ns2: select injection_points_wakeup('popo');\ns1: -- ready for action.\n\nA detach is not a wakeup.\n--\nMichael", "msg_date": "Thu, 2 May 2024 17:43:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 2 May 2024, at 13:43, Michael Paquier <[email protected]> wrote:\n> \n> A detach is not a wakeup.\n\nOh, now I see. 
Sorry for the noise.\n\nDetaching local injection point of other backend seems to be useless and can be forbidden.\nAs far as I understand, your patch is already doing this in\n+\tif (!injection_point_allowed(name))\n+\t\telog(ERROR, \"cannot detach injection point \\\"%s\\\" not allowed to run\",\n+\t\t\t name);\n+\n\nAs far as I understand this will effectively forbid calling injection_points_detach() for local injection point of other backend. Do I get it right?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 2 May 2024 13:52:20 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 02, 2024 at 04:27:12PM +0900, Michael Paquier wrote:\n> On Wed, May 01, 2024 at 04:12:14PM -0700, Noah Misch wrote:\n> > While writing an injection point test, I encountered a variant of the race\n> > condition that f4083c4 fixed. It had three sessions and this sequence of\n> > events:\n> > \n> > s1: local-attach to POINT\n> > s2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\n> > s3: detach POINT, deleting the InjectionPointCondition record\n> > s2: wake up and run POINT as though it had been non-local\n\nI should have given a simpler example:\n\ns1: local-attach to POINT\ns2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\ns1: exit\ns2: wake up and run POINT as though it had been non-local\n\n> Fun. One thing I would ask is why it makes sense to be able to detach\n> a local point from a different session than the one who defined it as\n> local. Shouldn't the operation of s3 be restricted rather than\n> authorized as a safety measure, instead?\n\n(That's orthogonal to the race condition.) When s1 would wait at the\ninjection point multiple times in one SQL statement, I like issuing the detach\nfrom s3 so s1 waits at just the first encounter with the injection point.\nThis mimics setting a gdb breakpoint and deleting that breakpoint before\n\"continue\". The alternative, waking s1 repeatedly until it finishes the SQL\nstatement, is less convenient. (I also patched _detach() to wake the waiter,\nand I plan to propose that.)\n\n> > Separately, injection_points_cleanup() breaks the rules by calling\n> > InjectionPointDetach() while holding a spinlock. The latter has an\n> > elog(ERROR), and reaching that elog(ERROR) leaves a stuck spinlock. I haven't\n> > given as much thought to solutions for this one.\n> \n> Indeed. That's a brain fade. This one could be fixed by collecting\n> the point names when cleaning up the conditions and detach after\n> releasing the spinlock. This opens a race condition between the\n> moment when the spinlock is released and the detach, where another\n> backend could come in and detach a point before the shmem_exit\n> callback has the time to do its cleanup, even if detach() is\n> restricted for local points. So we could do the callback cleanup in\n\nThat race condition seems fine. The test can be expected to control the\ntiming of backend exits vs. detach calls. Unlike the InjectionPointRun()\nrace, it wouldn't affect backends unrelated to the test.\n\n> three steps in the shmem exit callback: \n> - Collect the names of the points to detach, while holding the\n> spinlock.\n> - Do the Detach.\n> - Take again the spinlock, clean up the conditions.\n> \n> Please see the attached.\n\nThe injection_points_cleanup() parts look good. 
Thanks.\n\n> @@ -403,6 +430,10 @@ injection_points_detach(PG_FUNCTION_ARGS)\n> {\n> \tchar\t *name = text_to_cstring(PG_GETARG_TEXT_PP(0));\n> \n> +\tif (!injection_point_allowed(name))\n> +\t\telog(ERROR, \"cannot detach injection point \\\"%s\\\" not allowed to run\",\n> +\t\t\t name);\n> +\n\nAs above, I disagree with the injection_points_detach() part.\n\n\n", "msg_date": "Thu, 2 May 2024 12:35:55 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 02, 2024 at 01:52:20PM +0500, Andrey M. Borodin wrote:\n> As far as I understand this will effectively forbid calling\n> injection_points_detach() for local injection point of other\n> backend. Do I get it right?\n\nYes, that would be the intention. Noah has other use cases in mind\nwith this interface that I did not think about supporting, hence he's\nobjecting to this restriction.\n--\nMichael", "msg_date": "Sat, 4 May 2024 18:49:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 02, 2024 at 12:35:55PM -0700, Noah Misch wrote:\n> I should have given a simpler example:\n> \n> s1: local-attach to POINT\n> s2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\n> s1: exit\n> s2: wake up and run POINT as though it had been non-local\n\nHmm. Even if you were to emulate that in a controlled manner, you\nwould need a second injection point that does a wait in s2, which is\nsomething that would happen before injection_callback() and before\nscanning the local entry. This relies on the fact that we're holding\nCPU in s2 between the backend shmem hash table lookup and the callback\nbeing called.\n\n>> Fun. One thing I would ask is why it makes sense to be able to detach\n>> a local point from a different session than the one who defined it as\n>> local. Shouldn't the operation of s3 be restricted rather than\n>> authorized as a safety measure, instead?\n> \n> (That's orthogonal to the race condition.) When s1 would wait at the\n> injection point multiple times in one SQL statement, I like issuing the detach\n> from s3 so s1 waits at just the first encounter with the injection point.\n> This mimics setting a gdb breakpoint and deleting that breakpoint before\n> \"continue\". The alternative, waking s1 repeatedly until it finishes the SQL\n> statement, is less convenient. (I also patched _detach() to wake the waiter,\n> and I plan to propose that.)\n\nOkay.\n\n>> Indeed. That's a brain fade. This one could be fixed by collecting\n>> the point names when cleaning up the conditions and detach after\n>> releasing the spinlock. This opens a race condition between the\n>> moment when the spinlock is released and the detach, where another\n>> backend could come in and detach a point before the shmem_exit\n>> callback has the time to do its cleanup, even if detach() is\n>> restricted for local points. So we could do the callback cleanup in\n> \n> That race condition seems fine. The test can be expected to control the\n> timing of backend exits vs. detach calls. Unlike the InjectionPointRun()\n> race, it wouldn't affect backends unrelated to the test.\n\nSure. The fact that there are two spinlocks in the backend code and\nthe module opens concurrency issues. 
Making that more robust if there\nis a case for it is OK by me, but I'd rather avoid making the\nbackend-side more complicated than need be.\n\n>> three steps in the shmem exit callback: \n>> - Collect the names of the points to detach, while holding the\n>> spinlock.\n>> - Do the Detach.\n>> - Take again the spinlock, clean up the conditions.\n>> \n>> Please see the attached.\n> \n> The injection_points_cleanup() parts look good. Thanks.\n\nThanks for the check.\n\n>> @@ -403,6 +430,10 @@ injection_points_detach(PG_FUNCTION_ARGS)\n>> {\n>> \tchar\t *name = text_to_cstring(PG_GETARG_TEXT_PP(0));\n>> \n>> +\tif (!injection_point_allowed(name))\n>> +\t\telog(ERROR, \"cannot detach injection point \\\"%s\\\" not allowed to run\",\n>> +\t\t\t name);\n>> +\n> \n> As above, I disagree with the injection_points_detach() part.\n\nOkay, noted. Fine by me to expand this stuff as you feel, the code\nhas been written to be extended depending on what people want to\nsupport. There should be tests in the tree that rely on any\nnew behavior, though.\n\nI've applied the patch to fix the spinlock logic in the exit callback\nfor now.\n--\nMichael", "msg_date": "Mon, 6 May 2024 10:03:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, May 06, 2024 at 10:03:37AM +0900, Michael Paquier wrote:\n> On Thu, May 02, 2024 at 12:35:55PM -0700, Noah Misch wrote:\n> > I should have given a simpler example:\n> > \n> > s1: local-attach to POINT\n> > s2: enter InjectionPointRun(POINT), yield CPU just before injection_callback()\n> > s1: exit\n> > s2: wake up and run POINT as though it had been non-local\n\nHere's how I've patched it locally. It does avoid changing the backend-side,\nwhich has some attraction. Shall I just push this?\n\n> Hmm. Even if you were to emulate that in a controlled manner, you\n> would need a second injection point that does a wait in s2, which is\n> something that would happen before injection_callback() and before\n> scanning the local entry. This relies on the fact that we're holding\n> CPU in s2 between the backend shmem hash table lookup and the callback\n> being called.\n\nRight. We would need \"second-level injection points\" to write a test for that\nrace in the injection point system.", "msg_date": "Mon, 6 May 2024 14:23:24 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Mon, May 06, 2024 at 02:23:24PM -0700, Noah Misch wrote:\n> Here's how I've patched it locally. It does avoid changing the backend-side,\n> which has some attraction. Shall I just push this?\n\nIt looks like you did not rebase on top of HEAD to avoid the spinlock\ntaken with InjectionPointDetach() in the shmem callback. I think that\nyou'd mean the attached, once rebased (apologies I've reset the author\nfield).\n\n+ * s1: local-attach to POINT\n+ * s2: yield CPU before InjectionPointRun(POINT) calls injection_callback()\n+ * s1: exit()\n+ * s2: run POINT as though it had been non-local\n\nI see. 
So you are taking a shortcut in the shape of never resetting\nthe name of a condition, so as it is possible to let the point of step\n4 check the runtime condition with a condition still stored while the\npoint has been detached, removed from the hash table.\n\n if (strcmp(condition->name, name) == 0)\n {\n+ condition->valid = false;\n condition->pid = 0;\n- condition->name[0] = '\\0';\n }\n }\n\nAs of HEAD, we rely on InjectionPointCondition->name to be set to\ncheck if a condition is valid. Your patch adds a different variable\nto do mostly the same thing, and never clears the name in any existing\nconditions. A side effect is that this causes the conditions to pile\nup on a running server when running installcheck, and assuming that\nmany test suites are run on a server left running this could cause\nspurious failures when failing to find a new slot. Always resetting\ncondition->name when detaching a point is a simpler flow and saner\nIMO.\n\nOverall, this switches from one detach behavior to a different one,\nwhich may or may not be intuitive depending on what one is looking\nfor. FWIW, I see InjectionPointCondition as something that should be\naround as long as its injection point exists, with the condition\nentirely gone once the point is detached because it should not exist\nanymore on the server running, with no information left in shmem.\n\nThrough your patch, you make conditions have a different meaning, with\na mix of \"local\" definition, but it is kind of permanent as it keeps a\ntrace of the point's name in shmem. I find the behavior of the patch\nless intuitive. Perhaps it would be interesting to see first the bug\nand/or problem you are trying to tackle with this different behavior\nas I feel like we could do something even with the module as-is. As\nfar as I understand, the implementation of the module on HEAD allows\none to emulate a breakpoint with a wait/wake, which can avoid the\nwindow mentioned in step 2. Even if a wait point is detached\nconcurrently, it can be awaken with its traces in shmem removed.\n--\nMichael", "msg_date": "Tue, 7 May 2024 10:17:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Tue, May 07, 2024 at 10:17:49AM +0900, Michael Paquier wrote:\n> On Mon, May 06, 2024 at 02:23:24PM -0700, Noah Misch wrote:\n> > Here's how I've patched it locally. It does avoid changing the backend-side,\n> > which has some attraction. Shall I just push this?\n> \n> It looks like you did not rebase on top of HEAD\n\nYes, the base was 713cfaf (Sunday).\n\n> A side effect is that this causes the conditions to pile\n> up on a running server when running installcheck, and assuming that\n> many test suites are run on a server left running this could cause\n> spurious failures when failing to find a new slot.\n\nYes, we'd be raising INJ_MAX_CONDITION more often under this approach.\n\n> Always resetting\n> condition->name when detaching a point is a simpler flow and saner\n> IMO.\n> \n> Overall, this switches from one detach behavior to a different one,\n\nCan you say more about that? The only behavior change known to me is that a\ngiven injection point workload uses more of INJ_MAX_CONDITION. If there's\nanother behavior change, it was likely unintended.\n\n> which may or may not be intuitive depending on what one is looking\n> for. 
FWIW, I see InjectionPointCondition as something that should be\n> around as long as its injection point exists, with the condition\n> entirely gone once the point is detached because it should not exist\n> anymore on the server running, with no information left in shmem.\n> \n> Through your patch, you make conditions have a different meaning, with\n> a mix of \"local\" definition, but it is kind of permanent as it keeps a\n> trace of the point's name in shmem. I find the behavior of the patch\n> less intuitive. Perhaps it would be interesting to see first the bug\n> and/or problem you are trying to tackle with this different behavior\n> as I feel like we could do something even with the module as-is. As\n> far as I understand, the implementation of the module on HEAD allows\n> one to emulate a breakpoint with a wait/wake, which can avoid the\n> window mentioned in step 2. Even if a wait point is detached\n> concurrently, it can be awaken with its traces in shmem removed.\n\nThe problem I'm trying to tackle in this thread is to make\nsrc/test/modules/gin installcheck-safe. $SUBJECT's commit 5105c90 started\nthat work, having seen the intarray test suite break when run concurrently\nwith the injection_points test suite. That combination still does break at\nthe exit-time race condition. To reproduce, apply this attachment to add\nsleeps, and run:\n\nmake -C src/test/modules/gin installcheck USE_MODULE_DB=1 & sleep 2; make -C contrib/intarray installcheck USE_MODULE_DB=1\n\nSeparately, I see injection_points_attach() populates InjectionPointCondition\nafter InjectionPointAttach(). Shouldn't InjectionPointAttach() come last, to\navoid the same sort of race? I've not tried to reproduce that one.", "msg_date": "Tue, 7 May 2024 11:53:10 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Tue, May 07, 2024 at 11:53:10AM -0700, Noah Misch wrote:\n> On Tue, May 07, 2024 at 10:17:49AM +0900, Michael Paquier wrote:\n> > Overall, this switches from one detach behavior to a different one,\n> \n> Can you say more about that? The only behavior change known to me is that a\n> given injection point workload uses more of INJ_MAX_CONDITION. If there's\n> another behavior change, it was likely unintended.\n\nI see patch inplace030-inj-exit-race-v1.patch does not fix the race seen with\nrepro-inj-exit-race-v1.patch. I withdraw inplace030-inj-exit-race-v1.patch,\nand I withdraw the above question.\n\n> To reproduce, apply [repro-inj-exit-race-v1.patch] to add\n> sleeps, and run:\n> \n> make -C src/test/modules/gin installcheck USE_MODULE_DB=1 & sleep 2; make -C contrib/intarray installcheck USE_MODULE_DB=1\n> \n> Separately, I see injection_points_attach() populates InjectionPointCondition\n> after InjectionPointAttach(). Shouldn't InjectionPointAttach() come last, to\n> avoid the same sort of race? I've not tried to reproduce that one.\n\n\n", "msg_date": "Tue, 7 May 2024 15:00:23 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Tue, May 07, 2024 at 11:53:10AM -0700, Noah Misch wrote:\n> On Tue, May 07, 2024 at 10:17:49AM +0900, Michael Paquier wrote:\n>> Always resetting\n>> condition->name when detaching a point is a simpler flow and saner\n>> IMO.\n>> \n>> Overall, this switches from one detach behavior to a different one,\n> \n> Can you say more about that? 
The only behavior change known to me is that a\n> given injection point workload uses more of INJ_MAX_CONDITION. If there's\n> another behavior change, it was likely unintended.\n\nAs far as I read the previous patch, the conditions stored in\nInjectionPointSharedState would never be really gone, even if the\npoints are removed from InjectionPointHash. \n\n> The problem I'm trying to tackle in this thread is to make\n> src/test/modules/gin installcheck-safe. $SUBJECT's commit 5105c90 started\n> that work, having seen the intarray test suite break when run concurrently\n> with the injection_points test suite. That combination still does break at\n> the exit-time race condition. To reproduce, apply this attachment to add\n> sleeps, and run:\n> \n> make -C src/test/modules/gin installcheck USE_MODULE_DB=1 & sleep 2; make -C contrib/intarray installcheck USE_MODULE_DB=1\n\nThanks for that. I am not really sure how to protect that without a\nsmall cost in flexibility for the cases of detach vs run paths. This\ncomes down to the fact that a custom routine could be run while it\ncould be detached concurrently, removing any stuff a callback could\ndepend on in the module.\n\nIt was mentioned upthread to add to InjectionPointCacheEntry a fixed\narea of memory that modules could use to store some \"status\" data, but\nit would not close the run/detach race because a backend could still\nhold a pointer to a callback, with concurrent backends playing with\nthe contents of InjectionPointCacheEntry (concurrent detaches and\nattaches that would cause the same entries to be reused).\n\nOne way to close entirely the window would be to hold\nInjectionPointLock longer in InjectionPointRun() until the callback\nfinishes or until it triggers an ERROR. This would mean that the case\nyou've mentioned in [1] would change, by blocking the detach() of s3\nuntil the callback of s2 finishes. We don't have tests in the tree\nthat do any of that, so holding InjectionPointLock longer would not\nbreak anything on HEAD. A detach being possible while the callback is\nrun is something I've considered as valid in d86d20f0ba79, but perhaps\nthat was too cute of me, even more with the use case of local points.\n\n> Separately, I see injection_points_attach() populates InjectionPointCondition\n> after InjectionPointAttach(). Shouldn't InjectionPointAttach() come last, to\n> avoid the same sort of race? I've not tried to reproduce that one.\n\nGood point. You could run into the case of a concurrent backend\nrunning an injection point that should be local if waiting between\nInjectionPointAttach() and the condition getting registered in\ninjection_points_attach(). That should be reversed.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Thu, 9 May 2024 09:37:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 09, 2024 at 09:37:54AM +0900, Michael Paquier wrote:\n> On Tue, May 07, 2024 at 11:53:10AM -0700, Noah Misch wrote:\n> > The problem I'm trying to tackle in this thread is to make\n> > src/test/modules/gin installcheck-safe. $SUBJECT's commit 5105c90 started\n> > that work, having seen the intarray test suite break when run concurrently\n> > with the injection_points test suite. That combination still does break at\n> > the exit-time race condition. 
To reproduce, apply this attachment to add\n> > sleeps, and run:\n> > \n> > make -C src/test/modules/gin installcheck USE_MODULE_DB=1 & sleep 2; make -C contrib/intarray installcheck USE_MODULE_DB=1\n> \n> Thanks for that. I am not really sure how to protect that without a\n> small cost in flexibility for the cases of detach vs run paths. This\n> comes down to the fact that a custom routine could be run while it\n> could be detached concurrently, removing any stuff a callback could\n> depend on in the module.\n> \n> It was mentioned upthread to add to InjectionPointCacheEntry a fixed\n> area of memory that modules could use to store some \"status\" data, but\n> it would not close the run/detach race because a backend could still\n> hold a pointer to a callback, with concurrent backends playing with\n> the contents of InjectionPointCacheEntry (concurrent detaches and\n> attaches that would cause the same entries to be reused).\n\n> One way to close entirely the window would be to hold\n> InjectionPointLock longer in InjectionPointRun() until the callback\n> finishes or until it triggers an ERROR. This would mean that the case\n> you've mentioned in [1] would change, by blocking the detach() of s3\n> until the callback of s2 finishes.\n\n> [1] https://www.postgresql.org/message-id/[email protected]\n\nYes, that would be a loss for test readability. Also, I wouldn't be surprised\nif some test will want to attach POINT-B while POINT-A is in injection_wait().\nVarious options avoiding those limitations:\n\n1. The data area formerly called a \"status\" area is immutable after attach.\n The core code copies it into a stack buffer to pass in a const callback\n argument.\n\n2. InjectionPointAttach() returns an attachment serial number, and the\n callback receives that as an argument. injection_points.c would maintain\n an InjectionPointCondition for every still-attached serial number,\n including global attachments. Finding no condition matching the serial\n number argument means detach already happened and callback should do\n nothing.\n\n3. Move the PID check into core code.\n\n4. Separate the concept of \"make ineligible to fire\" from \"detach\", with\n stronger locking for detach. v1 pointed in this direction, though not\n using that terminology.\n\n5. Injection point has multiple callbacks. At least one runs with the lock\n held and can mutate the \"status\" data. At least one runs without the lock.\n\n(1) is, I think, simple and sufficient. How about that?\n\n\n", "msg_date": "Wed, 8 May 2024 20:15:53 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Wed, May 08, 2024 at 08:15:53PM -0700, Noah Misch wrote:\n> Yes, that would be a loss for test readability. Also, I wouldn't be surprised\n> if some test will want to attach POINT-B while POINT-A is in injection_wait().\n\nNot impossible, still annoying with more complex scenarios.\n\n> Various options avoiding those limitations:\n> \n> 1. The data area formerly called a \"status\" area is immutable after attach.\n> The core code copies it into a stack buffer to pass in a const callback\n> argument.\n> \n> 3. Move the PID check into core code.\n\nThe PID checks are specific to the module, and there could be much\nmore conditions like running only in specific backend types (first\nexample coming into mind), so I want to avoid that kind of knowledge\nin the backend.\n\n> (1) is, I think, simple and sufficient. 
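To be concrete, a rough sketch of (1); all names and sizes below (INJ_PRIVATE_MAXLEN, private_data and friends) are made up for illustration, not taken from an actual patch:\n\n    /* fixed-size area, filled at attach time and immutable afterwards */\n    typedef struct InjectionPointEntry\n    {\n        char        name[INJ_NAME_MAXLEN];\n        char        library[INJ_LIB_MAXLEN];\n        char        function[INJ_FUNC_MAXLEN];\n        char        private_data[INJ_PRIVATE_MAXLEN];\n    } InjectionPointEntry;\n\n    typedef void (*InjectionPointCallback) (const char *name,\n                                            const void *private_data);\n\n    /* in InjectionPointRun(), before releasing InjectionPointLock */\n    char        local_copy[INJ_PRIVATE_MAXLEN];\n\n    memcpy(local_copy, entry->private_data, sizeof(local_copy));\n    LWLockRelease(InjectionPointLock);\n    callback(name, local_copy);    /* sees only the attach-time snapshot */\n\n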
How about that?\n\nThat sounds fine to do that at the end.. I'm not sure how large this\nchunk area added to each InjectionPointEntry should be, though. 128B\nstored in each InjectionPointEntry would be more than enough I guess?\nOr more like 1024? The in-core module does not need much currently,\nbut larger is likely better for pluggability.\n--\nMichael", "msg_date": "Thu, 9 May 2024 13:47:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 09, 2024 at 01:47:45PM +0900, Michael Paquier wrote:\n> That sounds fine to do that at the end.. I'm not sure how large this\n> chunk area added to each InjectionPointEntry should be, though. 128B\n> stored in each InjectionPointEntry would be more than enough I guess?\n> Or more like 1024? The in-core module does not need much currently,\n> but larger is likely better for pluggability.\n\nActually, this is leading to simplifications in the module, giving me\nthe attached:\n4 files changed, 117 insertions(+), 134 deletions(-) \n\nSo I like your suggestion. This version closes the race window\nbetween the shmem exit detach in backend A and a concurrent backend B\nrunning a point local to A so as B will never run the local point of\nA. However, it can cause a failure in the shmem exit callback of\nbackend A if backend B does a detach of a point local to A because A\ntracks its local points with a static list in its TopMemoryContext, at\nleast in the attached. The second case could be solved by tracking\nthe list of local points in the module's InjectionPointSharedState,\nbut is this case really worth the complications it adds in the module\nknowing that the first case would be solid enough? Perhaps not.\n\nAnother idea I have for the second case is to make\nInjectionPointDetach() a bit \"softer\", by returning a boolean status \nrather than fail if the detach cannot be done, so as the shmem exit\ncallback can still loop through the entries it has in store. It could\nalways be possible that a concurrent backend does a detach followed by\nan attach with the same name, causing the shmem exit callback to drop\na point it should not, but that's not really a plausible case IMO :)\n\nThis stuff can be adjusted in subtle ways depending on the cases you\nare most interested in. What do you think?\n--\nMichael", "msg_date": "Thu, 9 May 2024 16:40:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 09, 2024 at 04:40:43PM +0900, Michael Paquier wrote:\n> So I like your suggestion. This version closes the race window\n> between the shmem exit detach in backend A and a concurrent backend B\n> running a point local to A so as B will never run the local point of\n> A. However, it can cause a failure in the shmem exit callback of\n> backend A if backend B does a detach of a point local to A because A\n> tracks its local points with a static list in its TopMemoryContext, at\n> least in the attached. The second case could be solved by tracking\n> the list of local points in the module's InjectionPointSharedState,\n> but is this case really worth the complications it adds in the module\n> knowing that the first case would be solid enough? 
Perhaps not.\n> \n> Another idea I have for the second case is to make\n> InjectionPointDetach() a bit \"softer\", by returning a boolean status \n> rather than fail if the detach cannot be done, so as the shmem exit\n> callback can still loop through the entries it has in store.\n\nThe return-bool approach sounds fine. Up to you whether to do in this patch,\nelse I'll do it when I add the test.\n\n> It could\n> always be possible that a concurrent backend does a detach followed by\n> an attach with the same name, causing the shmem exit callback to drop\n> a point it should not, but that's not really a plausible case IMO :)\n\nAgreed. It's reasonable to expect test cases to serialize backend exits,\nattach calls, and detach calls. If we need to fix that later, we can use\nattachment serial numbers.\n\n> --- a/src/test/modules/injection_points/injection_points.c\n> +++ b/src/test/modules/injection_points/injection_points.c\n\n> +typedef enum InjectionPointConditionType\n> +{\n> +\tINJ_CONDITION_INVALID = 0,\n\nI'd name this INJ_CONDITION_UNCONDITIONAL or INJ_CONDITION_ALWAYS. INVALID\nsounds like a can't-happen event or an injection point that never runs.\nOtherwise, the patch looks good and makes src/test/modules/gin safe for\ninstallcheck. Thanks.\n\n\n", "msg_date": "Thu, 9 May 2024 16:39:00 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Thu, May 09, 2024 at 04:39:00PM -0700, Noah Misch wrote:\n\nThanks for the feedback.\n\n> The return-bool approach sounds fine. Up to you whether to do in this patch,\n> else I'll do it when I add the test.\n\nI see no reason to not change the signature of the routine now if we\nknow that we're going to do it anyway in the future. I was shortly\nwondering if doing the same for InjectionpointAttach() would make\nsense, but it has more error states, so I'm not really tempted without\nan actual reason (cannot think of a case where I'd want to put more\ncontrol into a module after a failed attach).\n\n>> It could\n>> always be possible that a concurrent backend does a detach followed by\n>> an attach with the same name, causing the shmem exit callback to drop\n>> a point it should not, but that's not really a plausible case IMO :)\n> \n> Agreed. It's reasonable to expect test cases to serialize backend exits,\n> attach calls, and detach calls. If we need to fix that later, we can use\n> attachment serial numbers.\n\nOkay by me.\n\n> I'd name this INJ_CONDITION_UNCONDITIONAL or INJ_CONDITION_ALWAYS. INVALID\n> sounds like a can't-happen event or an injection point that never runs.\n> Otherwise, the patch looks good and makes src/test/modules/gin safe for\n> installcheck. Thanks.\n\nINJ_CONDITION_ALWAYS sounds like a good compromise here.\n\nAttached is an updated patch for now, indented with a happy CI. I am\nstill planning to look at that a second time on Monday with a fresher\nmind, in case I'm missing something now.\n--\nMichael", "msg_date": "Fri, 10 May 2024 10:04:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "\n\n> On 10 May 2024, at 06:04, Michael Paquier <[email protected]> wrote:\n> \n> Attached is an updated patch for now\n\nCan you, please, add some more comments regarding purpose of private data?\nI somewhat lost understanding of the discussion for a week or so. And I hoped to grasp the idea of private_data from resulting code. 
But I cannot do so from current patch version...\n\nI see that you store condition in private_data. So \"private\" means that this is a data specific to extension, do I understand it right?\n\nAs long as I started anyway, I also want to ask some more stupid questions:\n1. Where is the border between responsibility of an extension and the core part? I mean can we define in simple words what functionality must be in extension?\n2. If we have some concurrency issues, why can't we just protect everything with one giant LWLock\\SpinLock. We have some locking model instead of serializing access from enter until exit.\n\nMost probably, this was discussed somewhere, but I could not find it.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 11 May 2024 11:45:33 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Fri, May 10, 2024 at 10:04:17AM +0900, Michael Paquier wrote:\n> Attached is an updated patch for now, indented with a happy CI. I am\n> still planning to look at that a second time on Monday with a fresher\n> mind, in case I'm missing something now.\n\nThis looks correct, and it works well in my tests. Thanks.\n\n\n", "msg_date": "Sun, 12 May 2024 10:48:51 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Sun, May 12, 2024 at 10:48:51AM -0700, Noah Misch wrote:\n> This looks correct, and it works well in my tests. Thanks.\n\nThanks for looking. While looking at it yesterday I've decided to\nsplit the change into two commits, one for the infra and one for the\nmodule. While doing so, I've noticed that the case of a private area\npassed as NULL was not completely correct as memcpy would be\nundefined.\n\nThe open item has been marked as fixed.\n--\nMichael", "msg_date": "Mon, 13 May 2024 07:35:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" }, { "msg_contents": "On Sat, May 11, 2024 at 11:45:33AM +0500, Andrey M. Borodin wrote:\n> I see that you store condition in private_data. So \"private\" means\n> that this is a data specific to extension, do I understand it right?\n\nYes, it is possible to pass down some custom data to the callbacks\nregistered, generated in a module. One example would be more complex\ncondition grammar, like a JSON-based thing. I don't really see the\nneed for this amount of complexity in the tree yet, but one could do\nthat outside the tree easily.\n\n> As long as I started anyway, I also want to ask some more stupid\n> questions:\n> 1. Where is the border between responsibility of an extension and\n> the core part? I mean can we define in simple words what\n> functionality must be in extension?\n\nRule 0 I've been using here: keep the footprint on the backend as\nsimple as possible. These have as absolute minimum requirement:\n- A function name.\n- A library name.\n- A point name.\n\nThe private area contents and size are added to address the\nconcurrency cases with runtime checks. I didn't see a strong use for\nthat first, but Noah has been convincing enough with his use cases and\nthe fact that the race between detach and run was not completely\nclosed because we lacked consistency with the shmem hash table lookup.\n\n> 2. If we have some concurrency issues, why can't we just protect\n> everything with one giant LWLock\\SpinLock. 
We have some locking\n> model instead of serializing access from enter until exit.\n\nThis reduces the test infrastructure flexibility, because one may want\nto attach or detach injection points while a point is running. So it\nis by design that the LWLock protecting the shmem hash table is not held\nwhen a point is running. This has been covered a bit upthread, and I\nwant to be able to do that as well.\n--\nMichael", "msg_date": "Mon, 13 May 2024 08:02:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird test mixup" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], it was identified that\nWaitXLogInsertionsToFinish emits a LOG message, and adjusts the upto\nptr to proceed further when caller requests to flush past the end of\ngenerated WAL. There's a comment explaining no caller should ever do\nthat intentionally except in cases with bogus LSNs. For a similar\nsituation, XLogWrite emits a PANIC \"xlog write request %X/%X is past\nend of log %X/%X\". Although there's no problem if\nWaitXLogInsertionsToFinish emits LOG, but why can't it be a bit more\nharsh and emit PANIC something like the attached to detect the corner\ncase?\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/b43615437ac7d7fdef86a36e5d5bf3fc049bc11b.camel%40j-davis.com\n\nOn Thu, Feb 22, 2024 at 1:54 AM Jeff Davis <[email protected]> wrote:\n>\n> WaitXLogInsertionsToFinish() uses a LOG level message\n> for the same situation. They should probably be the same log level, and\n> I would think it would be either PANIC or WARNING. I have no idea why\n> LOG was chosen.\n\n[2]\n /*\n * No-one should request to flush a piece of WAL that hasn't even been\n * reserved yet. However, it can happen if there is a block with a bogus\n * LSN on disk, for example. XLogFlush checks for that situation and\n * complains, but only after the flush. Here we just assume that to mean\n * that all WAL that has been reserved needs to be finished. In this\n * corner-case, the return value can be smaller than 'upto' argument.\n */\n if (upto > reservedUpto)\n {\n ereport(LOG,\n (errmsg(\"request to flush past end of generated WAL;\nrequest %X/%X, current position %X/%X\",\n LSN_FORMAT_ARGS(upto), LSN_FORMAT_ARGS(reservedUpto))));\n upto = reservedUpto;\n }\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 15 Mar 2024 13:12:35 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Be strict when request to flush past end of WAL in\n WaitXLogInsertionsToFinish" }, { "msg_contents": "On Fri, 2024-03-15 at 13:12 +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> While working on [1], it was identified that\n> WaitXLogInsertionsToFinish emits a LOG message, and adjusts the upto\n> ptr to proceed further when caller requests to flush past the end of\n> generated WAL. There's a comment explaining no caller should ever do\n> that intentionally except in cases with bogus LSNs. For a similar\n> situation, XLogWrite emits a PANIC \"xlog write request %X/%X is past\n> end of log %X/%X\". Although there's no problem if\n> WaitXLogInsertionsToFinish emits LOG, but why can't it be a bit more\n> harsh and emit PANIC something like the attached to detect the corner\n> case?\n> \n> Thoughts?\n\nI'm not clear on why the callers of WaitXLogInsertionsToFinish() are\nhandling errors the way they are. XLogWrite PANICs, XLogFlush ERRORs\n(which is likely to be escalated to a PANIC anyway), and the other\ncallers ignore the return value and leave it up to XLogWrite() to\nPANIC.\n\nAs far as I can tell, once WaitXLogInsertionsToFinish() detects this\nbogus LSN, a PANIC is a likely outcome, so your proposed change makes\nsense. 
But then why are the callers also checking?\n\nI haven't looked in a lot of detail.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 21:58:57 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Be strict when request to flush past end of WAL in\n WaitXLogInsertionsToFinish" } ]
[ { "msg_contents": "Hello Hackers,\n\nI have implemented TODO “Allow LISTEN on patterns” [1] and attached\nthe patch to the email. The patch basically consists of the following\ntwo parts.\n\n1. Support wildcards in LISTEN command\n\nNotification channels can be composed of multiple levels in the form\n‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n\nListen channels can be composed of multiple levels in the form ‘a.b.c’\nwhere ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\nwildcards:\n * ‘%’ matches everything until the end of a level. Can only appear\nat the end of a level. For example, the notification channels ‘a.b.c’,\n‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n * ‘>’ matches everything to the right. Can only appear at the end of\nthe last level. For example, the notification channels ‘a.b’, ‘a.bc.d’\nmatch against the listen channel ‘a.b>’.\n\nIn [1] Sev Zaslavsky proposed to add a GUC so that we don't break\nexisting code. The patch adds three additional characters ‘.’, ‘%’ and\n‘>’ which are forbidden in existing code. Without these characters the\npatch works in the same way as existing code. So there is no need to\nadd a GUC.\n\n2. Performance improvement of matching\n\nTo match a notification channel against listen channels Postgres uses\nthe function IsListeningOn which iterates over all listen channels and\ncompares them with the notification channel. The time complexity can\nbe estimated as O(nm) where n is the number of the listen channels and\nm is the size of the notification channel.\n\nTo match a notification channel against listen channels the patch\nsearches in binary trie. The time complexity can be estimated as O(m)\nwhere m is the size of the notification channel. So there is no\ndependence on the number of the listen channels.\n\nThe patch builds binary trie in O(nm) where n is the number of the\nlisten channels and m is the maximum length among the listen channels.\nThe space complexity required to build a binary trie is dominated by\nthe leaf nodes and can be estimated as O(n) where n is the number of\nthe listen channels.\n\nI gathered data to compare Postgres with the patch. In the file\nbenchmark.jpg you can find three graphics for three different amounts\nof notifications. Horizontal line represents number of listen channels\nand vertical line represents time in nanoseconds. The time measures\npure calls to IsListeningOn and IsMatchingOn. From the graphics you\ncan deduce the following observations:\n * The time of the trie match doesn’t depend on the number of listen\nchannels and remains the same. The time of the list search depends\nlinerary on the number of listen channels. So the practical results\ncoincide with the theoretical observations.\n * When the number of listen channels is higher than 250 the trie\nmatch outperforms list search around 6 times.\n * When the number of listen channels is lower than 16 the list search\noutperforms the trie match around 2 times. I tried to use list match\non a small number of listen channels but it didn’t perform any better\nthan the trie match. 
The reason for that is that the list search uses\nstrcmp under the hood which I couldn’t purely use because I also had\nto deal with wildcards.\n\n[1] https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.com\n\nRegards,\nAlexander Cheshev", "msg_date": "Fri, 15 Mar 2024 09:01:03 +0100", "msg_from": "Alexander Cheshev <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?=5BPATCH=5D_TODO_=E2=80=9CAllow_LISTEN_on_patterns=E2=80=9D?=" }, { "msg_contents": "Hello there,\n\n\nEl vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<[email protected]>)\nescribió:\n\n> Hello Hackers,\n>\n> I have implemented TODO “Allow LISTEN on patterns” [1] and attached\n> the patch to the email. The patch basically consists of the following\n> two parts.\n>\n> 1. Support wildcards in LISTEN command\n>\n> Notification channels can be composed of multiple levels in the form\n> ‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n>\n> Listen channels can be composed of multiple levels in the form ‘a.b.c’\n> where ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\n> wildcards:\n> * ‘%’ matches everything until the end of a level. Can only appear\n> at the end of a level. For example, the notification channels ‘a.b.c’,\n> ‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n> * ‘>’ matches everything to the right. Can only appear at the end of\n> the last level. For example, the notification channels ‘a.b’, ‘a.bc.d’\n> match against the listen channel ‘a.b>’.\n>\n>\nI did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\nexpected.\nThis command I assume should free all the listening channels, however, it\ndoesn't\nseem to do so:\n\npostgres=# LISTEN device1.alerts.%;\nLISTEN\npostgres=# ;\nAsynchronous notification \"device1.alerts.temp\" with payload \"80\" received\nfrom server process with PID 237.\npostgres=# UNLISTEN >;\nUNLISTEN\npostgres=# ; -- Here I send a notification over the same channel\nAsynchronous notification \"device1.alerts.temp\" with payload \"80\" received\nfrom server process with PID 237.\n\nThe same happens with \"UNLISTEN %;\", although I'm not sure if this should\nhave\nthe same behavior.\n\nIt stops listening correctly if I do explicit UNLISTEN (exact channel\nmatching).\n\nI'll be glad to conduct more tests or checks on this.\n\nCheers,\n\n\n-- \n--\nEmanuel Calvo\nDatabase Engineering\nhttps://tr3s.ma/aobut\n\nHello there,El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<[email protected]>) escribió:Hello Hackers,\n\nI have implemented TODO “Allow LISTEN on patterns” [1] and attached\nthe patch to the email. The patch basically consists of the following\ntwo parts.\n\n1. Support wildcards in LISTEN command\n\nNotification channels can be composed of multiple levels in the form\n‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n\nListen channels can be composed of multiple levels in the form ‘a.b.c’\nwhere ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\nwildcards:\n *  ‘%’ matches everything until the end of a level. Can only appear\nat the end of a level. For example, the notification channels ‘a.b.c’,\n‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n * ‘>’ matches everything to the right. Can only appear at the end of\nthe last level. 
For example, the notification channels ‘a.b’, ‘a.bc.d’\nmatch against the listen channel ‘a.b>’.\nI did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is expected.This command I assume should free all the listening channels, however, it doesn'tseem to do so:postgres=# LISTEN device1.alerts.%;LISTENpostgres=# ;Asynchronous notification \"device1.alerts.temp\" with payload \"80\" received from server process with PID 237.postgres=# UNLISTEN >;UNLISTENpostgres=# ; -- Here I send a notification over the same channelAsynchronous notification \"device1.alerts.temp\" with payload \"80\" received from server process with PID 237.The same happens with \"UNLISTEN %;\", although I'm not sure if this should havethe same behavior.It stops listening correctly if I do explicit UNLISTEN (exact channel matching).I'll be glad to conduct more tests or checks on this.Cheers,-- --Emanuel CalvoDatabase Engineeringhttps://tr3s.ma/aobut", "msg_date": "Tue, 9 Jul 2024 10:01:03 +0200", "msg_from": "Emanuel Calvo <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IFtQQVRDSF0gVE9ETyDigJxBbGxvdyBMSVNURU4gb24gcGF0dGVybnPigJ0=?=" }, { "msg_contents": "Hi Emanuel,\n\nI did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n> expected.\n> This command I assume should free all the listening channels, however, it\n> doesn't\n> seem to do so:\n\n\nTODO “Allow LISTEN on patterns” [1] is a bit vague about that feature. So I\ndidn't implement it in the first version of the patch. Also I see that I\nmade a mistake in the documentation and mentioned that it is actually\nsupported. Sorry for the confusion.\n\nBesides obvious reasons I think that your finding is especially attractive\nfor the following reason. We have an UNLISTEN * command. If we replace >\nwith * in the patch (which I actually did in the new version of the patch)\nthen we have a generalisation of the above command. For example, UNLISTEN\na* cancels registration on all channels which start with a.\n\nI attached to the email the new version of the patch which supports the\nrequested feature. Instead of > I use * for the reason which I mentioned\nabove. Also I added test cases, changed documentation, etc.\n\nI appreciate your work, Emanuel! If you have any further findings I will be\nglad to adjust the patch accordingly.\n\n[1] https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.com\n\nRegards,\nAlexander Cheshev\n\nRegards,\nAlexander Cheshev\n\n\nOn Tue, 9 Jul 2024 at 11:01, Emanuel Calvo <[email protected]> wrote:\n\n>\n> Hello there,\n>\n>\n> El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<[email protected]>)\n> escribió:\n>\n>> Hello Hackers,\n>>\n>> I have implemented TODO “Allow LISTEN on patterns” [1] and attached\n>> the patch to the email. The patch basically consists of the following\n>> two parts.\n>>\n>> 1. Support wildcards in LISTEN command\n>>\n>> Notification channels can be composed of multiple levels in the form\n>> ‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n>>\n>> Listen channels can be composed of multiple levels in the form ‘a.b.c’\n>> where ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\n>> wildcards:\n>> * ‘%’ matches everything until the end of a level. Can only appear\n>> at the end of a level. For example, the notification channels ‘a.b.c’,\n>> ‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n>> * ‘>’ matches everything to the right. Can only appear at the end of\n>> the last level. 
For example, the notification channels ‘a.b’, ‘a.bc.d’\n>> match against the listen channel ‘a.b>’.\n>>\n>>\n> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n> expected.\n> This command I assume should free all the listening channels, however, it\n> doesn't\n> seem to do so:\n>\n> postgres=# LISTEN device1.alerts.%;\n> LISTEN\n> postgres=# ;\n> Asynchronous notification \"device1.alerts.temp\" with payload \"80\" received\n> from server process with PID 237.\n> postgres=# UNLISTEN >;\n> UNLISTEN\n> postgres=# ; -- Here I send a notification over the same channel\n> Asynchronous notification \"device1.alerts.temp\" with payload \"80\" received\n> from server process with PID 237.\n>\n> The same happens with \"UNLISTEN %;\", although I'm not sure if this should\n> have\n> the same behavior.\n>\n> It stops listening correctly if I do explicit UNLISTEN (exact channel\n> matching).\n>\n> I'll be glad to conduct more tests or checks on this.\n>\n> Cheers,\n>\n>\n> --\n> --\n> Emanuel Calvo\n> Database Engineering\n> https://tr3s.ma/aobut\n>\n>", "msg_date": "Sat, 13 Jul 2024 13:26:02 +0300", "msg_from": "Alexander Cheshev <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtQQVRDSF0gVE9ETyDigJxBbGxvdyBMSVNURU4gb24gcGF0dGVybnPigJ0=?=" }, { "msg_contents": "Hi Emanuel,\n\nChanged implementation of the function Exec_UnlistenCommit . v2 of the path\ncontained a bug in the function Exec_UnlistenCommit (added a test case for\nthat) and also it was not implemented in natural to C form using pointers.\nNow it looks fine and works as expected.\n\nIn the previous email I forgot to mention that the new implementation of\nthe function Exec_UnlistenCommit has the same space and time complexities\nas the original implementation (which doesn't support wildcards).\n\nRegards,\nAlexander Cheshev\n\n\nOn Sat, 13 Jul 2024 at 13:26, Alexander Cheshev <[email protected]>\nwrote:\n\n> Hi Emanuel,\n>\n> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n>> expected.\n>> This command I assume should free all the listening channels, however, it\n>> doesn't\n>> seem to do so:\n>\n>\n> TODO “Allow LISTEN on patterns” [1] is a bit vague about that feature. So\n> I didn't implement it in the first version of the patch. Also I see that I\n> made a mistake in the documentation and mentioned that it is actually\n> supported. Sorry for the confusion.\n>\n> Besides obvious reasons I think that your finding is especially attractive\n> for the following reason. We have an UNLISTEN * command. If we replace >\n> with * in the patch (which I actually did in the new version of the patch)\n> then we have a generalisation of the above command. For example, UNLISTEN\n> a* cancels registration on all channels which start with a.\n>\n> I attached to the email the new version of the patch which supports the\n> requested feature. Instead of > I use * for the reason which I mentioned\n> above. Also I added test cases, changed documentation, etc.\n>\n> I appreciate your work, Emanuel! 
If you have any further findings I will\n> be glad to adjust the patch accordingly.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.com\n>\n> Regards,\n> Alexander Cheshev\n>\n> Regards,\n> Alexander Cheshev\n>\n>\n> On Tue, 9 Jul 2024 at 11:01, Emanuel Calvo <[email protected]> wrote:\n>\n>>\n>> Hello there,\n>>\n>>\n>> El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<\n>> [email protected]>) escribió:\n>>\n>>> Hello Hackers,\n>>>\n>>> I have implemented TODO “Allow LISTEN on patterns” [1] and attached\n>>> the patch to the email. The patch basically consists of the following\n>>> two parts.\n>>>\n>>> 1. Support wildcards in LISTEN command\n>>>\n>>> Notification channels can be composed of multiple levels in the form\n>>> ‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n>>>\n>>> Listen channels can be composed of multiple levels in the form ‘a.b.c’\n>>> where ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\n>>> wildcards:\n>>> * ‘%’ matches everything until the end of a level. Can only appear\n>>> at the end of a level. For example, the notification channels ‘a.b.c’,\n>>> ‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n>>> * ‘>’ matches everything to the right. Can only appear at the end of\n>>> the last level. For example, the notification channels ‘a.b’, ‘a.bc.d’\n>>> match against the listen channel ‘a.b>’.\n>>>\n>>>\n>> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n>> expected.\n>> This command I assume should free all the listening channels, however, it\n>> doesn't\n>> seem to do so:\n>>\n>> postgres=# LISTEN device1.alerts.%;\n>> LISTEN\n>> postgres=# ;\n>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>> received from server process with PID 237.\n>> postgres=# UNLISTEN >;\n>> UNLISTEN\n>> postgres=# ; -- Here I send a notification over the same channel\n>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>> received from server process with PID 237.\n>>\n>> The same happens with \"UNLISTEN %;\", although I'm not sure if this should\n>> have\n>> the same behavior.\n>>\n>> It stops listening correctly if I do explicit UNLISTEN (exact channel\n>> matching).\n>>\n>> I'll be glad to conduct more tests or checks on this.\n>>\n>> Cheers,\n>>\n>>\n>> --\n>> --\n>> Emanuel Calvo\n>> Database Engineering\n>> https://tr3s.ma/aobut\n>>\n>>", "msg_date": "Mon, 15 Jul 2024 13:58:53 +0300", "msg_from": "Alexander Cheshev <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtQQVRDSF0gVE9ETyDigJxBbGxvdyBMSVNURU4gb24gcGF0dGVybnPigJ0=?=" }, { "msg_contents": "Hi Alexander,\n\nI did a review on the new patch version and I observed that the identifier\npassed to the LISTEN command is handled differently between outer and inner\nlevels.\n\nWhen the outer level exceeds the 64 characters limitation, the outer level\nof the\nchannel name is truncated, but leaves the inner levels in the channel name\ndue\nthat isn't parsed in the same way.\n\nAlso, even if the outer level isn't truncated, it is allowed to add\nchannels names\nthat exceeds the allowed identifier size.\n\nIt can be reproduced just by:\n\n # LISTEN a.a.a.a.a.lot.of.levels..; -- this doesn't fail at LISTEN,\nbut fails in NOTIFY due to channel name too long\n\nIn the following, the outer level is truncated, but it doesn't cut out the\ninner levels. 
This leaves\nlistening channels that cannot receive any notifications in the queue:\n\n # LISTEN\nnotify_async_channel_name_too_long____________________________________.a.a.\n...\n NOTICE: identifier .... will be truncated\n\n # select substring(c.channel,0,66), length(c.channel) from\npg_listening_channels() c(channel) where length(c.channel) > 64;\n substring |\nnotify_async_channel_name_too_long_____________________________.a\n length | 1393\n\n\nI guess that the expected behavior would be that if the outer level is\ntruncated, the rest of the\nchannel name should be ignored, as there won't be possible to notify it\nanyway.\n\nIn the case of the inner levels creating a channel name too long, it may\nprobably sane to just\ncheck the length of the entire identifier, and truncate -- ensuring that\nchannel name doesn't\nend with the level separator.\n\nAnother observation, probably not strictly related to this patch itself but\nthe async-notify tests, is that there is no test for\n\"payload too long\". Probably there is a reason on why isn't in the specs?\n\n\nRegards,\n\n\nEl lun, 15 jul 2024 a las 12:59, Alexander Cheshev (<[email protected]>)\nescribió:\n\n> Hi Emanuel,\n>\n> Changed implementation of the function Exec_UnlistenCommit . v2 of the\n> path contained a bug in the function Exec_UnlistenCommit (added a test case\n> for that) and also it was not implemented in natural to C form using\n> pointers. Now it looks fine and works as expected.\n>\n> In the previous email I forgot to mention that the new implementation of\n> the function Exec_UnlistenCommit has the same space and time complexities\n> as the original implementation (which doesn't support wildcards).\n>\n> Regards,\n> Alexander Cheshev\n>\n>\n> On Sat, 13 Jul 2024 at 13:26, Alexander Cheshev <[email protected]>\n> wrote:\n>\n>> Hi Emanuel,\n>>\n>> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n>>> expected.\n>>> This command I assume should free all the listening channels, however,\n>>> it doesn't\n>>> seem to do so:\n>>\n>>\n>> TODO “Allow LISTEN on patterns” [1] is a bit vague about that feature. So\n>> I didn't implement it in the first version of the patch. Also I see that I\n>> made a mistake in the documentation and mentioned that it is actually\n>> supported. Sorry for the confusion.\n>>\n>> Besides obvious reasons I think that your finding is especially\n>> attractive for the following reason. We have an UNLISTEN * command. If we\n>> replace > with * in the patch (which I actually did in the new version of\n>> the patch) then we have a generalisation of the above command. For example,\n>> UNLISTEN a* cancels registration on all channels which start with a.\n>>\n>> I attached to the email the new version of the patch which supports the\n>> requested feature. Instead of > I use * for the reason which I mentioned\n>> above. Also I added test cases, changed documentation, etc.\n>>\n>> I appreciate your work, Emanuel! 
If you have any further findings I will\n>> be glad to adjust the patch accordingly.\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.com\n>>\n>> Regards,\n>> Alexander Cheshev\n>>\n>> Regards,\n>> Alexander Cheshev\n>>\n>>\n>> On Tue, 9 Jul 2024 at 11:01, Emanuel Calvo <[email protected]> wrote:\n>>\n>>>\n>>> Hello there,\n>>>\n>>>\n>>> El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<\n>>> [email protected]>) escribió:\n>>>\n>>>> Hello Hackers,\n>>>>\n>>>> I have implemented TODO “Allow LISTEN on patterns” [1] and attached\n>>>> the patch to the email. The patch basically consists of the following\n>>>> two parts.\n>>>>\n>>>> 1. Support wildcards in LISTEN command\n>>>>\n>>>> Notification channels can be composed of multiple levels in the form\n>>>> ‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n>>>>\n>>>> Listen channels can be composed of multiple levels in the form ‘a.b.c’\n>>>> where ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\n>>>> wildcards:\n>>>> * ‘%’ matches everything until the end of a level. Can only appear\n>>>> at the end of a level. For example, the notification channels ‘a.b.c’,\n>>>> ‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n>>>> * ‘>’ matches everything to the right. Can only appear at the end of\n>>>> the last level. For example, the notification channels ‘a.b’, ‘a.bc.d’\n>>>> match against the listen channel ‘a.b>’.\n>>>>\n>>>>\n>>> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n>>> expected.\n>>> This command I assume should free all the listening channels, however,\n>>> it doesn't\n>>> seem to do so:\n>>>\n>>> postgres=# LISTEN device1.alerts.%;\n>>> LISTEN\n>>> postgres=# ;\n>>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>>> received from server process with PID 237.\n>>> postgres=# UNLISTEN >;\n>>> UNLISTEN\n>>> postgres=# ; -- Here I send a notification over the same channel\n>>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>>> received from server process with PID 237.\n>>>\n>>> The same happens with \"UNLISTEN %;\", although I'm not sure if this\n>>> should have\n>>> the same behavior.\n>>>\n>>> It stops listening correctly if I do explicit UNLISTEN (exact channel\n>>> matching).\n>>>\n>>> I'll be glad to conduct more tests or checks on this.\n>>>\n>>> Cheers,\n>>>\n>>>\n>>> --\n>>> --\n>>> Emanuel Calvo\n>>> Database Engineering\n>>> https://tr3s.ma/aobut\n>>>\n>>>\n\n-- \n--\nEmanuel Calvo\nhttps://tr3s.ma/ <https://tr3s.ma/about>\n\nHi Alexander,I did a review on the new patch version and I observed that the identifierpassed to the LISTEN command is handled differently between outer and inner levels.When the outer level exceeds the 64 characters limitation, the outer level of the channel name is truncated, but leaves the inner levels in the channel name duethat isn't parsed in the same way. Also, even if the outer level isn't truncated, it is allowed to add channels names that exceeds the allowed identifier size.It can be reproduced just by:      # LISTEN a.a.a.a.a.lot.of.levels..; -- this doesn't fail at LISTEN, but fails in NOTIFY due to channel name too longIn the following, the outer level is truncated, but it doesn't cut out the inner levels. This leaveslistening channels that cannot receive any notifications in the queue:      # LISTEN notify_async_channel_name_too_long____________________________________.a.a. ...      NOTICE: identifier .... 
will be truncated      # select substring(c.channel,0,66), length(c.channel) from pg_listening_channels() c(channel) where length(c.channel) > 64;      substring | notify_async_channel_name_too_long_____________________________.a      length    | 1393I guess that the expected behavior would be that if the outer level is truncated, the rest of thechannel name should be ignored, as there won't be possible to notify it anyway.In the case of the inner levels creating a channel name too long, it may probably sane to just check the length of the entire identifier, and truncate -- ensuring that channel name doesn't end with the level separator.Another observation, probably not strictly related to this patch itself but the async-notify tests, is that there is no test for \"payload too long\". Probably there is a reason on why isn't in the specs?Regards,El lun, 15 jul 2024 a las 12:59, Alexander Cheshev (<[email protected]>) escribió:Hi Emanuel,Changed implementation of the function Exec_UnlistenCommit . v2 of the path contained a bug in the function Exec_UnlistenCommit (added a test case for that) and also it was not implemented in natural to C form using pointers. Now it looks fine and works as expected.In the previous email I forgot to mention that the new implementation of the function Exec_UnlistenCommit has the same space and time complexities as the original implementation (which doesn't support wildcards).Regards,Alexander CheshevOn Sat, 13 Jul 2024 at 13:26, Alexander Cheshev <[email protected]> wrote:Hi Emanuel,I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is expected.This command I assume should free all the listening channels, however, it doesn'tseem to do so:TODO “Allow LISTEN on patterns” [1] is a bit vague about that feature. So I didn't implement it in the first version of the patch. Also I see that I made a mistake in the documentation and mentioned that it is actually supported. Sorry for the confusion.Besides obvious reasons I think that your finding is especially attractive for the following reason. We have an UNLISTEN * command. If we replace > with * in the patch (which I actually did in the new version of the patch) then we have a generalisation of the above command. For example, UNLISTEN a* cancels registration on all channels which start with a.I attached to the email the new version of the patch which supports the requested feature. Instead of > I use * for the reason which I mentioned above. Also I added test cases, changed documentation, etc. I appreciate your work, Emanuel! If you have any further findings I will be glad to adjust the patch accordingly.[1] https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.comRegards,Alexander CheshevRegards,Alexander CheshevOn Tue, 9 Jul 2024 at 11:01, Emanuel Calvo <[email protected]> wrote:Hello there,El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<[email protected]>) escribió:Hello Hackers,\n\nI have implemented TODO “Allow LISTEN on patterns” [1] and attached\nthe patch to the email. The patch basically consists of the following\ntwo parts.\n\n1. Support wildcards in LISTEN command\n\nNotification channels can be composed of multiple levels in the form\n‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n\nListen channels can be composed of multiple levels in the form ‘a.b.c’\nwhere ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\nwildcards:\n *  ‘%’ matches everything until the end of a level. Can only appear\nat the end of a level. 
For example, the notification channels ‘a.b.c’,\n‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n * ‘>’ matches everything to the right. Can only appear at the end of\nthe last level. For example, the notification channels ‘a.b’, ‘a.bc.d’\nmatch against the listen channel ‘a.b>’.\nI did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is expected.This command I assume should free all the listening channels, however, it doesn'tseem to do so:postgres=# LISTEN device1.alerts.%;LISTENpostgres=# ;Asynchronous notification \"device1.alerts.temp\" with payload \"80\" received from server process with PID 237.postgres=# UNLISTEN >;UNLISTENpostgres=# ; -- Here I send a notification over the same channelAsynchronous notification \"device1.alerts.temp\" with payload \"80\" received from server process with PID 237.The same happens with \"UNLISTEN %;\", although I'm not sure if this should havethe same behavior.It stops listening correctly if I do explicit UNLISTEN (exact channel matching).I'll be glad to conduct more tests or checks on this.Cheers,-- --Emanuel CalvoDatabase Engineeringhttps://tr3s.ma/aobut\n\n\n-- --Emanuel Calvohttps://tr3s.ma/", "msg_date": "Sun, 21 Jul 2024 20:36:09 +0200", "msg_from": "Emanuel Calvo <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IFtQQVRDSF0gVE9ETyDigJxBbGxvdyBMSVNURU4gb24gcGF0dGVybnPigJ0=?=" }, { "msg_contents": "Hi Emanuel,\n\nI did a review on the new patch version and I observed that the identifier\n> passed to the LISTEN command is handled differently between outer and\n> inner\n> levels.\n>\n\nWe have the following grammar:\n\nnotify_channel:\n ColId\n { $$ = $1; }\n | notify_channel '.' ColId\n { $$ = psprintf(\"%s.%s\", $1, $3); }\n\nAnd ColId is truncated in core scanner:\n\n ident = downcase_truncate_identifier(yytext, yyleng, true);\n\nSo each level is truncated independently. For this reason we observe the\nbehaviour which you described above.\n\nAnother observation, probably not strictly related to this patch itself but\n> the async-notify tests, is that there is no test for\n> \"payload too long\". Probably there is a reason on why isn't in the specs?\n>\n\nI believe that simply because not all functionality is covered with tests.\nBut I have noticed a very interesting test \"channel name too long\":\n\nSELECT\npg_notify('notify_async_channel_name_too_long______________________________','sample_message1');\nERROR: channel name too long\n\nBut the behaviour is inconsistent with NOTIFY command:\n\nNOTIFY notify_async_channel_name_too_long______________________________\nNOTICE: identifier\n\"notify_async_channel_name_too_long______________________________\" will be\ntruncated to ...\n\nI guess that the expected behavior would be that if the outer level is\n> truncated, the rest of the\n> channel name should be ignored, as there won't be possible to notify it\n> anyway.\n>\n> In the case of the inner levels creating a channel name too long, it may\n> probably sane to just\n> check the length of the entire identifier, and truncate -- ensuring that\n> channel name doesn't\n> end with the level separator.\n>\n>\nWell, I believe that we can forbid too long channel names at all. So it\nprovides consistent behaviour among different ways we can send\nnotifications, and I agree with you that \"there won't be possible to notify\nit anyway\". I created a patch for that and attached it to the email. In the\npatch I relocated truncation from core scanner to parser. 
And as the same\ncore scanner is also used in plsql I added three lines of code to its\nscanner to basically truncate too long identifiers in there. Here is an\nexample of the new behaviour:\n\n-- Should fail. Too long channel names\nNOTIFY notify_async_channel_name_too_long_________._____________________;\nERROR: channel name too long\nLISTEN notify_async_channel_name_too_long_________%._____________________;\nERROR: channel name too long\nUNLISTEN notify_async_channel_name_too_long_________%._____________________;\nERROR: channel name too long\n\nRegards,\nAlexander Cheshev\n\n\nOn Sun, 21 Jul 2024 at 21:36, Emanuel Calvo <[email protected]> wrote:\n\n>\n> Hi Alexander,\n>\n> I did a review on the new patch version and I observed that the identifier\n> passed to the LISTEN command is handled differently between outer and\n> inner\n> levels.\n>\n> When the outer level exceeds the 64 characters limitation, the outer level\n> of the\n> channel name is truncated, but leaves the inner levels in the channel name\n> due\n> that isn't parsed in the same way.\n>\n> Also, even if the outer level isn't truncated, it is allowed to add\n> channels names\n> that exceeds the allowed identifier size.\n>\n> It can be reproduced just by:\n>\n> # LISTEN a.a.a.a.a.lot.of.levels..; -- this doesn't fail at LISTEN,\n> but fails in NOTIFY due to channel name too long\n>\n> In the following, the outer level is truncated, but it doesn't cut out the\n> inner levels. This leaves\n> listening channels that cannot receive any notifications in the queue:\n>\n> # LISTEN\n> notify_async_channel_name_too_long____________________________________.a.a.\n> ...\n> NOTICE: identifier .... will be truncated\n>\n> # select substring(c.channel,0,66), length(c.channel) from\n> pg_listening_channels() c(channel) where length(c.channel) > 64;\n> substring |\n> notify_async_channel_name_too_long_____________________________.a\n> length | 1393\n>\n>\n> I guess that the expected behavior would be that if the outer level is\n> truncated, the rest of the\n> channel name should be ignored, as there won't be possible to notify it\n> anyway.\n>\n> In the case of the inner levels creating a channel name too long, it may\n> probably sane to just\n> check the length of the entire identifier, and truncate -- ensuring that\n> channel name doesn't\n> end with the level separator.\n>\n> Another observation, probably not strictly related to this patch itself\n> but the async-notify tests, is that there is no test for\n> \"payload too long\". Probably there is a reason on why isn't in the specs?\n>\n>\n> Regards,\n>\n>\n> El lun, 15 jul 2024 a las 12:59, Alexander Cheshev (<\n> [email protected]>) escribió:\n>\n>> Hi Emanuel,\n>>\n>> Changed implementation of the function Exec_UnlistenCommit . v2 of the\n>> path contained a bug in the function Exec_UnlistenCommit (added a test case\n>> for that) and also it was not implemented in natural to C form using\n>> pointers. 
Now it looks fine and works as expected.\n>>\n>> In the previous email I forgot to mention that the new implementation of\n>> the function Exec_UnlistenCommit has the same space and time complexities\n>> as the original implementation (which doesn't support wildcards).\n>>\n>> Regards,\n>> Alexander Cheshev\n>>\n>>\n>> On Sat, 13 Jul 2024 at 13:26, Alexander Cheshev <[email protected]>\n>> wrote:\n>>\n>>> Hi Emanuel,\n>>>\n>>> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this is\n>>>> expected.\n>>>> This command I assume should free all the listening channels, however,\n>>>> it doesn't\n>>>> seem to do so:\n>>>\n>>>\n>>> TODO “Allow LISTEN on patterns” [1] is a bit vague about that feature.\n>>> So I didn't implement it in the first version of the patch. Also I see that\n>>> I made a mistake in the documentation and mentioned that it is actually\n>>> supported. Sorry for the confusion.\n>>>\n>>> Besides obvious reasons I think that your finding is especially\n>>> attractive for the following reason. We have an UNLISTEN * command. If we\n>>> replace > with * in the patch (which I actually did in the new version of\n>>> the patch) then we have a generalisation of the above command. For example,\n>>> UNLISTEN a* cancels registration on all channels which start with a.\n>>>\n>>> I attached to the email the new version of the patch which supports the\n>>> requested feature. Instead of > I use * for the reason which I mentioned\n>>> above. Also I added test cases, changed documentation, etc.\n>>>\n>>> I appreciate your work, Emanuel! If you have any further findings I will\n>>> be glad to adjust the patch accordingly.\n>>>\n>>> [1]\n>>> https://www.postgresql.org/message-id/flat/52693FC5.7070507%40gmail.com\n>>>\n>>> Regards,\n>>> Alexander Cheshev\n>>>\n>>> Regards,\n>>> Alexander Cheshev\n>>>\n>>>\n>>> On Tue, 9 Jul 2024 at 11:01, Emanuel Calvo <[email protected]> wrote:\n>>>\n>>>>\n>>>> Hello there,\n>>>>\n>>>>\n>>>> El vie, 15 mar 2024 a las 9:01, Alexander Cheshev (<\n>>>> [email protected]>) escribió:\n>>>>\n>>>>> Hello Hackers,\n>>>>>\n>>>>> I have implemented TODO “Allow LISTEN on patterns” [1] and attached\n>>>>> the patch to the email. The patch basically consists of the following\n>>>>> two parts.\n>>>>>\n>>>>> 1. Support wildcards in LISTEN command\n>>>>>\n>>>>> Notification channels can be composed of multiple levels in the form\n>>>>> ‘a.b.c’ where ‘a’, ‘b’ and ‘c’ are identifiers.\n>>>>>\n>>>>> Listen channels can be composed of multiple levels in the form ‘a.b.c’\n>>>>> where ‘a’, ‘b’ and ‘c’ are identifiers which can contain the following\n>>>>> wildcards:\n>>>>> * ‘%’ matches everything until the end of a level. Can only appear\n>>>>> at the end of a level. For example, the notification channels ‘a.b.c’,\n>>>>> ‘a.bc.c’ match against the listen channel ‘a.b%.c’.\n>>>>> * ‘>’ matches everything to the right. Can only appear at the end of\n>>>>> the last level. 
For example, the notification channels ‘a.b’, ‘a.bc.d’\n>>>>> match against the listen channel ‘a.b>’.\n>>>>>\n>>>>>\n>>>> I did a test over the \"UNLISTEN >\" behavior, and I'm not sure if this\n>>>> is expected.\n>>>> This command I assume should free all the listening channels, however,\n>>>> it doesn't\n>>>> seem to do so:\n>>>>\n>>>> postgres=# LISTEN device1.alerts.%;\n>>>> LISTEN\n>>>> postgres=# ;\n>>>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>>>> received from server process with PID 237.\n>>>> postgres=# UNLISTEN >;\n>>>> UNLISTEN\n>>>> postgres=# ; -- Here I send a notification over the same channel\n>>>> Asynchronous notification \"device1.alerts.temp\" with payload \"80\"\n>>>> received from server process with PID 237.\n>>>>\n>>>> The same happens with \"UNLISTEN %;\", although I'm not sure if this\n>>>> should have\n>>>> the same behavior.\n>>>>\n>>>> It stops listening correctly if I do explicit UNLISTEN (exact channel\n>>>> matching).\n>>>>\n>>>> I'll be glad to conduct more tests or checks on this.\n>>>>\n>>>> Cheers,\n>>>>\n>>>>\n>>>> --\n>>>> --\n>>>> Emanuel Calvo\n>>>> Database Engineering\n>>>> https://tr3s.ma/aobut\n>>>>\n>>>>\n>\n> --\n> --\n> Emanuel Calvo\n> https://tr3s.ma/ <https://tr3s.ma/about>\n>\n>", "msg_date": "Sun, 28 Jul 2024 18:17:23 +0300", "msg_from": "Alexander Cheshev <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtQQVRDSF0gVE9ETyDigJxBbGxvdyBMSVNURU4gb24gcGF0dGVybnPigJ0=?=" }, { "msg_contents": "Alexander Cheshev <[email protected]> writes:\n> [ v4-0001-Support-wildcards-in-LISTEN-command.patch ]\n\nI had not been paying much if any attention to this thread.\nI assumed from the title that it had in mind to allow something like\n\tLISTEN \"foo%bar\";\nwhere the parameter would be interpreted similarly to a LIKE pattern.\nI happened to look at the current state of affairs and was rather\nastonished to find how far off those rails the proposal has gotten.\nWhere is the field demand for N-part channel names? If we do accept\nthat, how well do you think it's going to work to continue to\nconstrain the total name length to NAMEDATALEN? Why, if you're\nbuilding a thousand-line patch, would you have arbitrary pattern\nrestrictions like \"% can only appear at the end of a name part\"?\nWhat makes you think it's okay to randomly change around unrelated\nparts of the grammar, scansup.c, etc? (The potential side-effects\nof that scare me quite a bit: even if you didn't break anything,\nthe blast radius that a reviewer has to examine is very large.)\n\nI've also got serious doubts about the value of the trie structure\nyou're building to try to speed up name matching. I haven't seen\nany evidence that channel name matching is a bottleneck in NOTIFY\nprocessing (in a quick test with \"perf\", it's not even visible).\nI do think the net effect of a patch like this would be to slow things\ndown, but mostly because it would encourage use of overly-verbose\nchannel names and thereby increase the amount of data passing through\nthe notify SLRU.\n\nI think this is dramatically over-engineered and you ought to\nstart over with a much simpler concept. The fact that one person\nten years ago asked for something that used exactly ASP.NET's\nnotation doesn't mean that that's exactly how we need to do it.\n\n(There's a separate discussion to be had about whether the\nwhole issue is really worth bothering with, given the rather\nlow field demand. 
But it'd be a lot easier to justify a\nhundred-line patch than this thing.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Sep 2024 14:42:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re:_[PATCH]_TODO_=E2=80=9CAllow_LISTEN_on_patterns?=\n =?UTF-8?Q?=E2=80=9D?=" } ]
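A stand-alone sketch of the two wildcard rules discussed in the thread above -- '%' matching the remainder of a single dot-separated level, and a trailing '*' matching everything to the right. The function and the test values are invented for illustration and are not taken from the posted patch:

#include <stdbool.h>
#include <stdio.h>

/*
 * Channels and patterns are dot-separated levels.  A '%' at the end of a
 * pattern level matches the rest of that channel level; a '*' at the very
 * end of the pattern matches everything to the right.
 */
static bool
channel_matches(const char *pattern, const char *channel)
{
    const char *p = pattern;
    const char *c = channel;

    for (;;)
    {
        if (p[0] == '*' && p[1] == '\0')
            return true;        /* trailing '*' swallows the rest */
        if (p[0] == '%' && (p[1] == '.' || p[1] == '\0'))
        {
            while (*c != '\0' && *c != '.')
                c++;            /* skip to the end of this channel level */
            p++;
            continue;
        }
        if (*p != *c)
            return false;
        if (*p == '\0')
            return true;        /* both strings ended together */
        p++;
        c++;
    }
}

int
main(void)
{
    printf("%d\n", channel_matches("a.b%.c", "a.bc.c"));   /* 1 */
    printf("%d\n", channel_matches("a.b%.c", "a.b.c"));    /* 1 */
    printf("%d\n", channel_matches("a.b*", "a.bc.d"));     /* 1 */
    printf("%d\n", channel_matches("a.b%.c", "a.x.c"));    /* 0 */
    return 0;
}

This only pins down the matching semantics; how the patch actually implements the lookup (a trie over registered listen patterns, per the review above) is a separate question.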
[ { "msg_contents": "To build on NixOS/nixpkgs I came up with a few small patches to \nmeson.build. All of this works fine with Autoconf/Make already.", "msg_date": "Sat, 16 Mar 2024 12:48:31 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Hi,\n\nThank you for the patches!\n\nOn Sat, 16 Mar 2024 at 14:48, Wolfgang Walther <[email protected]> wrote:\n>\n> To build on NixOS/nixpkgs I came up with a few small patches to\n> meson.build. All of this works fine with Autoconf/Make already.\n\nI do not have NixOS but I confirm that patches cleanly apply to master\nand do pass CI. I have a small feedback:\n\n0001 & 0002: Adding code comments to explain why they have fallback\ncould be nice.\n0003: Looks good to me.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 18 Mar 2024 15:25:58 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "On 2024-Mar-16, Wolfgang Walther wrote:\n\n> The upstream name for the ossp-uuid package / pkg-config file is \"uuid\". Many\n> distributions change this to be \"ossp-uuid\" to not conflict with e2fsprogs.\n\nI can confirm that this is true for Debian, at least; the packaging\nrules have this in override_dh_install:\n\n install -D -m 644 debian/tmp/usr/lib/$(DEB_HOST_MULTIARCH)/pkgconfig/uuid.pc \\\n debian/libossp-uuid-dev/usr/lib/pkgconfig/ossp-uuid.pc\n\nwhich matches the fact that Engelschall's official repository has the\nfile named simply uuid.pc:\nhttps://github.com/rse/uuid/blob/master/uuid.pc.in\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 18 Mar 2024 20:03:53 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Nazir Bilal Yavuz:\n> 0001 & 0002: Adding code comments to explain why they have fallback\n> could be nice.\n> 0003: Looks good to me.\n\nAdded some comments in the attached.\n\nBest,\n\nWolfgang", "msg_date": "Thu, 21 Mar 2024 21:44:16 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Wolfgang Walther:\n> To build on NixOS/nixpkgs I came up with a few small patches to \n> meson.build. All of this works fine with Autoconf/Make already.\n\nIn v3, I added another small patch for meson, this one about proper \nhandling of -Dlibedit_preferred when used together with -Dreadline=enabled.\n\nBest,\n\nWolfgang", "msg_date": "Fri, 29 Mar 2024 19:47:54 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Hi,\n\n From your prior reply:\n\nOn Thu, 21 Mar 2024 at 23:44, Wolfgang Walther <[email protected]> wrote:\n>\n> Nazir Bilal Yavuz:\n> > 0001 & 0002: Adding code comments to explain why they have fallback\n> > could be nice.\n> > 0003: Looks good to me.\n>\n> Added some comments in the attached.\n\nComments look good, thanks.\n\nOn Fri, 29 Mar 2024 at 21:48, <[email protected]> wrote:\n>\n> In v3, I added another small patch for meson, this one about proper\n> handling of -Dlibedit_preferred when used together with -Dreadline=enabled.\n\nYou are right. 
I confirm the bug and your proposed patch fixes this.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 1 Apr 2024 13:55:45 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "On 29.03.24 19:47, [email protected] wrote:\n > - uuid = dependency('ossp-uuid', required: true)\n > + # upstream is called \"uuid\", but many distros change this to \n\"ossp-uuid\"\n > + uuid = dependency('ossp-uuid', 'uuid', required: true)\n\nHow would this behave if you have only uuid.pc from e2fsprogs installed \nbut choose -Duuid=ossp? Then it would pick up uuid.pc here, but fail to \ncompile later?\n\n\n\n", "msg_date": "Wed, 17 Apr 2024 13:49:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Peter Eisentraut:\n> On 29.03.24 19:47, [email protected] wrote:\n> > -    uuid = dependency('ossp-uuid', required: true)\n> > +    # upstream is called \"uuid\", but many distros change this to \n> \"ossp-uuid\"\n> > +    uuid = dependency('ossp-uuid', 'uuid', required: true)\n> \n> How would this behave if you have only uuid.pc from e2fsprogs installed \n> but choose -Duuid=ossp?  Then it would pick up uuid.pc here, but fail to \n> compile later?\n\nIt would still fail the meson setup step, because for e2fs we have:\n\nuuidfunc = 'uuid_generate'\nuuidheader = 'uuid/uuid.h'\n\nwhile for ossp we have:\n\nuuidfunc = 'uuid_export'\nuuidheader = 'uuid.h'\n\nand later we do:\n\nif not cc.has_header_symbol(uuidheader, uuidfunc, args: test_c_args, \ndependencies: uuid)\n error('uuid library @0@ missing required function \n@1@'.format(uuidopt, uuidfunc))\nendif\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 17 Apr 2024 14:02:41 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Heikki asked me to take a look at this patchset for the commitfest. \nLooks good to me.\n\nHeikki, just be careful rebasing the first patch. You need to make sure \nthe newly set `required: false` gets carried forward.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:01:14 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "On 26/07/2024 23:01, Tristan Partin wrote:\n> Heikki asked me to take a look at this patchset for the commitfest.\n> Looks good to me.\n> \n> Heikki, just be careful rebasing the first patch. You need to make sure\n> the newly set `required: false` gets carried forward.\n\nCommitted and backpatched to v16 and v17. 
Thanks for the good \nexplanations in the commit messages, Walther!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 27 Jul 2024 14:17:51 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Hi,\n\n\ncommit 4d8de281b5834c8f5e0be6ae21e884e69dffd4ce\nAuthor: Heikki Linnakangas <[email protected]>\nDate: 2024-07-27 13:53:11 +0300\n\n Fallback to clang in PATH with meson\n\n Some distributions put clang into a different path than the llvm\n binary path.\n\n For example, this is the case on NixOS / nixpkgs, which failed to find\n clang with meson before this patch.\n\n\nI think this is a bad change unfortunately - this way clang and llvm version\ncan mismatch. Yes, we've done it that way for autoconf, but back then LLVM\nbroke compatibility far less often.\n\n\ncommit a00fae9d43e5adabc56e64a4df6d332062666501\nAuthor: Heikki Linnakangas <[email protected]>\nDate: 2024-07-27 13:53:08 +0300\n\n Fallback to uuid for ossp-uuid with meson\n\n The upstream name for the ossp-uuid package / pkg-config file is\n \"uuid\". Many distributions change this to be \"ossp-uuid\" to not\n conflict with e2fsprogs.\n\n This lookup fails on distributions which don't change this name, for\n example NixOS / nixpkgs. Both \"ossp-uuid\" and \"uuid\" are also checked\n in configure.ac.\n\n Author: Wolfgang Walther\n Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut\n Reviewed-by: Tristan Partin\n Discussion: https://www.postgresql.org/message-id/[email protected]\n Backpatch: 16-, where meson support was added\n\nI think this is a redundant change with\n\ncommit 2416fdb3ee30bdd2810408f93f14d47bff840fea\nAuthor: Andres Freund <[email protected]>\nDate: 2024-07-20 13:51:08 -0700\n\n meson: Add support for detecting ossp-uuid without pkg-config\n\n This is necessary as ossp-uuid on windows installs neither a pkg-config nor a\n cmake dependency information. Nor is there another supported uuid\n implementation available on windows.\n\n Reported-by: Dave Page <[email protected]>\n Reviewed-by: Tristan Partin <[email protected]>\n Discussion: https://postgr.es/m/[email protected]\n Backpatch: 16-, where meson support was added\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Aug 2024 09:13:58 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "On Fri Aug 9, 2024 at 11:14 AM CDT, Andres Freund wrote:\n> Hi,\n>\n>\n> commit 4d8de281b5834c8f5e0be6ae21e884e69dffd4ce\n> Author: Heikki Linnakangas <[email protected]>\n> Date: 2024-07-27 13:53:11 +0300\n>\n> Fallback to clang in PATH with meson\n>\n> Some distributions put clang into a different path than the llvm\n> binary path.\n>\n> For example, this is the case on NixOS / nixpkgs, which failed to find\n> clang with meson before this patch.\n>\n>\n> I think this is a bad change unfortunately - this way clang and llvm version\n> can mismatch. Yes, we've done it that way for autoconf, but back then LLVM\n> broke compatibility far less often.\n\nSee the attached patch on how we could make this situation better.\n\n> commit a00fae9d43e5adabc56e64a4df6d332062666501\n> Author: Heikki Linnakangas <[email protected]>\n> Date: 2024-07-27 13:53:08 +0300\n>\n> Fallback to uuid for ossp-uuid with meson\n>\n> The upstream name for the ossp-uuid package / pkg-config file is\n> \"uuid\". 
Many distributions change this to be \"ossp-uuid\" to not\n> conflict with e2fsprogs.\n>\n> This lookup fails on distributions which don't change this name, for\n> example NixOS / nixpkgs. Both \"ossp-uuid\" and \"uuid\" are also checked\n> in configure.ac.\n>\n> Author: Wolfgang Walther\n> Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut\n> Reviewed-by: Tristan Partin\n> Discussion: https://www.postgresql.org/message-id/[email protected]\n> Backpatch: 16-, where meson support was added\n>\n> I think this is a redundant change with\n>\n> commit 2416fdb3ee30bdd2810408f93f14d47bff840fea\n> Author: Andres Freund <[email protected]>\n> Date: 2024-07-20 13:51:08 -0700\n>\n> meson: Add support for detecting ossp-uuid without pkg-config\n>\n> This is necessary as ossp-uuid on windows installs neither a pkg-config nor a\n> cmake dependency information. Nor is there another supported uuid\n> implementation available on windows.\n>\n> Reported-by: Dave Page <[email protected]>\n> Reviewed-by: Tristan Partin <[email protected]>\n> Discussion: https://postgr.es/m/[email protected]\n> Backpatch: 16-, where meson support was added\n\nI'm not sure I would call them redundant. It's cheaper (and better) to \ndo a pkg-config lookup than it is to do the various checks in your \npatch. I think the two patches are complementary. Yours services Windows \nplus anywhere else that doesn't have a pkg-config file, while Wolfgang's \nservices distros that install the pkg-config with a different name.\n\n-- \nTristan Partin\nhttps://tristan.partin.io", "msg_date": "Fri, 09 Aug 2024 11:49:08 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Tristan Partin:\n> On Fri Aug 9, 2024 at 11:14 AM CDT, Andres Freund wrote:\n> [..]\n>> commit a00fae9d43e5adabc56e64a4df6d332062666501\n>> Author: Heikki Linnakangas <[email protected]>\n>> Date:   2024-07-27 13:53:08 +0300\n>>\n>>     Fallback to uuid for ossp-uuid with meson\n>> [..]\n>>\n>> I think this is a redundant change with\n>>\n>> commit 2416fdb3ee30bdd2810408f93f14d47bff840fea\n>> Author: Andres Freund <[email protected]>\n>> Date:   2024-07-20 13:51:08 -0700\n>>\n>>     meson: Add support for detecting ossp-uuid without pkg-config\n>> [..]\n> \n> I'm not sure I would call them redundant. It's cheaper (and better) to \n> do a pkg-config lookup than it is to do the various checks in your \n> patch. I think the two patches are complementary. 
Yours services Windows \n> plus anywhere else that doesn't have a pkg-config file, while Wolfgang's \n> services distros that install the pkg-config with a different name.\n\nAgreed.\n\nThere is also a small difference in output for meson: When uuid is \nqueried via pkg-config, meson also detects the version, so I get this \noutput:\n\n External libraries\n[..]\n uuid : YES 1.6.2\n\n\nWithout pkg-config:\n\n External libraries\n[..]\n uuid : YES\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 17 Aug 2024 23:24:43 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" }, { "msg_contents": "Tristan Partin:\n> On Fri Aug 9, 2024 at 11:14 AM CDT, Andres Freund wrote:\n>> commit 4d8de281b5834c8f5e0be6ae21e884e69dffd4ce\n>> Author: Heikki Linnakangas <[email protected]>\n>> Date:   2024-07-27 13:53:11 +0300\n>>\n>>     Fallback to clang in PATH with meson\n>> [..]\n>>\n>> I think this is a bad change unfortunately - this way clang and llvm \n>> version\n>> can mismatch. Yes, we've done it that way for autoconf, but back then \n>> LLVM\n>> broke compatibility far less often.\n> \n> See the attached patch on how we could make this situation better.\n\nWorks great.\n\nWith the correct clang on path:\n\nProgram clang found: YES 18.1.8 18.1.8 \n(/nix/store/mr1y1rxkx59dr2bci2akmw2zkbbpmc15-clang-wrapper-18.1.8/bin/clang)\n\nWith a mismatching version on path:\n\nProgram \n/nix/store/x4gwwwlw2ylv0d9vjmkx3dmlcb7gingd-llvm-18.1.8/bin/clang clang \nfound: NO found 16.0.6 but need: '18.1.8' \n(/nix/store/r85xsa9z0s04n0y21xhrii47bh74g2a8-clang-wrapper-16.0.6/bin/clang)\n\nYes, the match is exact, also fails with a newer version:\n\nProgram \n/nix/store/x4gwwwlw2ylv0d9vjmkx3dmlcb7gingd-llvm-18.1.8/bin/clang clang \nfound: NO found 19.1.0 but need: '18.1.8' \n(/nix/store/rjsfx6sxjpkgd4f9hl9apm0n8dk7jd9w-clang-wrapper-19.1.0-rc2/bin/clang)\n\n+1 for this patch.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 17 Aug 2024 23:43:37 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with meson on NixOS/nixpkgs" } ]
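For reference, the header/function pairs quoted earlier in the thread amount to a compile-time probe along these lines. This is only an illustration of what the build check has to establish (the real check is meson's cc.has_header_symbol); build it with -DPROBE_OSSP_UUID or -DPROBE_E2FS_UUID and the matching -dev package installed:

#if defined(PROBE_OSSP_UUID)
#include <uuid.h>               /* OSSP uuid */
#define PROBE_SYMBOL uuid_export
#elif defined(PROBE_E2FS_UUID)
#include <uuid/uuid.h>          /* e2fsprogs / util-linux uuid */
#define PROBE_SYMBOL uuid_generate
#else
#error "define PROBE_OSSP_UUID or PROBE_E2FS_UUID"
#endif

int
main(void)
{
    /* Taking the symbol's address fails to compile or link if the header
     * does not match the uuid flavor that was actually picked up. */
    void *probe = (void *) PROBE_SYMBOL;

    return probe == 0;
}

If the "wrong" uuid.pc is found (for example e2fsprogs' uuid.pc when -Duuid=ossp was requested), the probe for uuid_export fails and meson setup errors out instead of producing a build that breaks later, as explained in the thread above.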
[ { "msg_contents": "Running the regression tests when building with musl libc fails, with \nerrors like the following:\n\nERROR: could not load library \n\"<builddir>/tmp_install/usr/lib/postgresql/libpqwalreceiver.so\": Error \nloading shared library libpq.so.5: No such file or directory (needed by \n<builddir>/tmp_install/usr/lib/postgresql/libpqwalreceiver.so)\n\nThis was observed in Alpine Linux [1] and nixpkgs [2] a few years ago. I \nnow looked at this a bit and this is what happens:\n\n- The temporary install location is set via LD_LIBRARY_PATH in the \nregression tests, so that postgres can find those libs.\n\n- All tests which load an extension / shared module via dlopen() fail, \nwhen the loaded library in turn depends on another library in \ntmp_install - I think in practice it's libpq.so all the time.\n\n- LD_LIBRARY_PATH is used correctly to look for the direct dependency \nloaded in dlopen(...), but is not taken into account anymore when trying \nto locate libpq.so. This step only fails with musl, but works fine with \nglibc.\n\n\nI can reproduce this with a simple Dockerfile (attached), which uses the \nlibrary/postgres-alpine image, moves libpq.so to a different folder and \npoints LD_LIBRARY_PATH at it. Build and run the dockerfile like this:\n\n docker build . -t pg-musl && docker run --rm pg-musl\n\nThis Dockerfile can easily be adjusted to work with the debian image - \nwhich shows that doing the same with glibc works just fine.\n\n\nEven though this originated in \"just\" the regression tests, I'm filing \nthis as a bug, because:\n- The docs explicitly mention LD_LIBRARY_PATH support to point at a \ndifferent /lib folder in [3].\n- This can clearly break outside the test-suite as shown with the \nDockerfile.\n\n\nI tried a few more things:\n- When I add an /etc/ld-musl-$(ARCH).path file and add the path to \nlibpq.so's libdir to it, libpq.so can be found.\n- When I add the path to libpq.so as an rpath to the postgres binary, \nlibpq.so can be found.\n\nBoth is not surprising, but just confirms musl-ld actually works as \nexpected. It's just LD_LIBRARY_PATH that seems to not be passed on.\n\nTo rule out a musl bug, I also put together a very simple test-case of \nan executable loading liba with dlopen(), which depends on libb and then \nconstructing the same scenario with LD_LIBRARY_PATH. This works fine \nwhen compiled with glibc and musl, too. Thus, I believe the problem to \nbe somewhere in how postgres loads those libraries.\n\nBest,\n\nWolfgang\n\n[1]: \nhttps://github.com/alpinelinux/aports/commit/d67ceb66a1ca9e1899071c9ef09fffba29fa0417#diff-2bd25b5172fc52319de1b09086ac0db6314d2e9fa73497979f5198f8caaec1b9\n\n[2]: \nhttps://github.com/NixOS/nixpkgs/commit/09ffd722072291f00f2a54d7404eb568a15e562a\n\n[3]: https://www.postgresql.org/docs/current/install-post.html", "msg_date": "Sat, 16 Mar 2024 13:38:48 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Regression tests fail with musl libc because libpq.so can't be loaded" }, { "msg_contents": "Wolfgang Walther <[email protected]> writes:\n> - LD_LIBRARY_PATH is used correctly to look for the direct dependency \n> loaded in dlopen(...), but is not taken into account anymore when trying \n> to locate libpq.so. This step only fails with musl, but works fine with \n> glibc.\n\nWhy do you think this is our bug and not musl's? 
We do not even have\nany code that knows anything about indirect library dependencies.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Mar 2024 10:24:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Tom Lane:\n> Why do you think this is our bug and not musl's?\n\nBecause I tried to reproduce with musl directly with a very simple \nexample as mentioned in:\n\n> To rule out a musl bug, I also put together a very simple test-case of an executable loading liba with dlopen(), which depends on libb and then constructing the same scenario with LD_LIBRARY_PATH. This works fine when compiled with glibc and musl, too. Thus, I believe the problem to be somewhere in how postgres loads those libraries.\n\nMy test case looked like the attached. To compile it with musl via \nDockerfile:\n\n docker build . -t musl-dlopen && docker run --rm musl-dlopen\n\na.c/a.h is equivalent to libpqwalreceiver and b.c/b.h to libpq.\n\nThis works fine with both musl and glibc.\n\n(Note: I also tried putting liba.so and libb.so in different folders, \nadding both to LD_LIBRARY_PATH - but that worked fine as well.)\n\nNow my very simple example probably does something different than \npostgres, so that the problem doesn't appear there. But since it seems \npossible to do this with musl in principle, it should be possible to do \nit differently in postgres to make it work, too.\n\nAny ideas?\n\nBest,\nWolfgang", "msg_date": "Sat, 16 Mar 2024 16:55:49 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Sat, Mar 16, 2024 at 11:55 AM Wolfgang Walther <[email protected]>\nwrote:\n\n> Tom Lane:\n> > Why do you think this is our bug and not musl's?\n>\n> Because I tried to reproduce with musl directly with a very simple\n> example as mentioned in:\n>\n> > To rule out a musl bug, I also put together a very simple test-case of\n> an executable loading liba with dlopen(), which depends on libb and then\n> constructing the same scenario with LD_LIBRARY_PATH. This works fine when\n> compiled with glibc and musl, too. Thus, I believe the problem to be\n> somewhere in how postgres loads those libraries.\n>\n> My test case looked like the attached. To compile it with musl via\n> Dockerfile:\n>\n> docker build . -t musl-dlopen && docker run --rm musl-dlopen\n>\n> a.c/a.h is equivalent to libpqwalreceiver and b.c/b.h to libpq.\n>\n> This works fine with both musl and glibc.\n>\n> (Note: I also tried putting liba.so and libb.so in different folders,\n> adding both to LD_LIBRARY_PATH - but that worked fine as well.)\n>\n> Now my very simple example probably does something different than\n> postgres, so that the problem doesn't appear there. But since it seems\n> possible to do this with musl in principle, it should be possible to do\n> it differently in postgres to make it work, too.\n>\n> Any ideas?\n>\n>\n>\n\nOn Alpine Linux, which uses musl libc, you have to run `make install`\nbefore you can run `make check`. 
Have you tried\nthat?\n\n(Note to self: need a new Alpine buildfarm member)\n\ncheers\n\nandrew\n\nOn Sat, Mar 16, 2024 at 11:55 AM Wolfgang Walther <[email protected]> wrote:Tom Lane:\n> Why do you think this is our bug and not musl's?\n\nBecause I tried to reproduce with musl directly with a very simple \nexample as mentioned in:\n\n> To rule out a musl bug, I also put together a very simple test-case of an executable loading liba with dlopen(), which depends on libb and then constructing the same scenario with LD_LIBRARY_PATH. This works fine when compiled with glibc and musl, too. Thus, I believe the problem to be somewhere in how postgres loads those libraries.\n\nMy test case looked like the attached. To compile it with musl via \nDockerfile:\n\n   docker build . -t musl-dlopen && docker run --rm musl-dlopen\n\na.c/a.h is equivalent to libpqwalreceiver and b.c/b.h to libpq.\n\nThis works fine with both musl and glibc.\n\n(Note: I also tried putting liba.so and libb.so in different folders, \nadding both to LD_LIBRARY_PATH - but that worked fine as well.)\n\nNow my very simple example probably does something different than \npostgres, so that the problem doesn't appear there. But since it seems \npossible to do this with musl in principle, it should be possible to do \nit differently in postgres to make it work, too.\n\nAny ideas?\nOn Alpine Linux, which uses musl libc, you have to run `make install` before you can run `make check`. Have you tried that?                                        (Note to self: need a new Alpine buildfarm member)cheersandrew", "msg_date": "Sat, 16 Mar 2024 15:49:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Andrew Dunstan:\n> On Alpine Linux, which uses musl libc, you have to run `make install` \n> before you can run `make check`. Have you tried that?\n\nI can see how that could work around the problem, because the library \nwould already be available in the default library path / rpath and \nLD_LIBRARY_PATH would not be needed.\n\nHowever, this would only be a workaround for the specific case of \nrunning the regression tests, not a solution. Using LD_LIBRARY_PATH, as \ndocumented, would still not be possible.\n\nIn my case, I am just using docker images with Alpine to easily \nreproduce the problem. I am building with NixOS / nixpkgs' pkgsMusl. The \norder of check and install phases can't be changed there, AFAICT. The \nworkaround I use right now is to temporarily patch rpath of the postgres \nbinary - this will be reset during the install phase anyway. This works, \nbut again is not a real solution.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 16 Mar 2024 21:00:28 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On Alpine Linux, which uses musl libc, you have to run `make install`\n> before you can run `make check`. Have you tried that?\n\nWe have the same situation on macOS. There, it seems to be the result\nof a \"security feature\" that strips DYLD_LIBRARY_PATH from the process\nenvironment when make executes a shell. There's not much we can do\nabout that, and I suspect there is not much we can do about musl's\nbehavior either. 
(I am not a fan of proposals to modify the binaries\nunder test, because then you are not testing what you intend to\ninstall.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Mar 2024 16:10:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Sun, Mar 17, 2024 at 4:56 AM Wolfgang Walther\n<[email protected]> wrote:\n> Any ideas?\n\nI'd look into whether there is a difference in the rules it uses for\ndeciding not to trust LD_LIBRARY_PATH, which seems to be around here\nsomewhere:\n\nhttps://github.com/bminor/musl/blob/7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0/ldso/dynlink.c#L1812\n\nI wonder if you can break into an affected program and check out the\nmagic there. FWIW on MacOS something equivalent happens at the moment\nwe execute a shell, because the system shell is 'code signed' and that\nOS treats signed stuff similar to setuid binaries for this purpose\n(IIRC setting SHELL to point to a suitable unsigned shell could work\naround the problem there?)\n\nAnother interesting thing that came up when I googled musl/glibc\ndifferences -- old but looks plausibly still true (not that I expect\nour code to be modifying that stuff in place, just something to\ncheck):\n\nhttps://www.openwall.com/lists/musl/2014/08/31/14\n\n\n", "msg_date": "Sun, 17 Mar 2024 09:19:34 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Tom Lane:\n> We have the same situation on macOS. There, it seems to be the result\n> of a \"security feature\" that strips DYLD_LIBRARY_PATH from the process\n> environment when make executes a shell.\n\nI'm not sure whether this explanation is sufficient for the musl case, \nbecause LD_LIBRARY_PATH does make a difference: The direct dependency \n(libpqwalreceiver.so) can still be found if it's moved elsewhere and \nLD_LIBRARY_PATH points at it. So clearly the LD_LIBRARY_PATH variable is \nstill set after make executed the shell - it's just not in effect on the \n*indirect* dependency (libpq.so) anymore.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 16 Mar 2024 21:21:55 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro:\n> I'd look into whether there is a difference in the rules it uses for\n> deciding not to trust LD_LIBRARY_PATH, which seems to be around here\n> somewhere:\n> \n> https://github.com/bminor/musl/blob/7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0/ldso/dynlink.c#L1812\n\nYeah, I have been looking at that, too. I had also experimented a bit \nwith setuid/setgid for that matter, but that didn't lead anywhere, yet. \nI'm not 100% sure, but I think this would also not match my other \nobservation, that LD_LIBRARY_PATH does work for libpqwalreceiver (direct \ndep), but not libpq (indirect dep).\n\n> Another interesting thing that came up when I googled musl/glibc\n> differences -- old but looks plausibly still true (not that I expect\n> our code to be modifying that stuff in place, just something to\n> check):\n> \n> https://www.openwall.com/lists/musl/2014/08/31/14\n\nTo me, this seems very much like what could happen - it matches all my \nobservations, so far. 
But I can't tell how likely that is, not knowing \nmuch of the postgres code.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 16 Mar 2024 21:31:10 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Sun, Mar 17, 2024 at 9:19 AM Thomas Munro <[email protected]> wrote:\n> Another interesting thing that came up when I googled musl/glibc\n> differences -- old but looks plausibly still true (not that I expect\n> our code to be modifying that stuff in place, just something to\n> check):\n>\n> https://www.openwall.com/lists/musl/2014/08/31/14\n\nHmm, that does mention setproctitle, and our ps_status.c does indeed\nclobber some stuff in that region (in fact our ps_status.c is likely\nderived from the setproctitle() function from sendmail AFAICT). But\nthat's in our \"backend\" server processes, unlike the problems we have\non Macs... oh but you're failing to load libpqwalreceiver.so which\nmakes some sense for the backend hypothesis. What happens if you hack\nps_status.c to use PS_USE_NONE?\n\n\n", "msg_date": "Sun, 17 Mar 2024 09:54:52 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro:\n> Hmm, that does mention setproctitle, and our ps_status.c does indeed\n> clobber some stuff in that region (in fact our ps_status.c is likely\n> derived from the setproctitle() function from sendmail AFAICT). But\n> that's in our \"backend\" server processes, unlike the problems we have\n> on Macs... oh but you're failing to load libpqwalreceiver.so which\n> makes some sense for the backend hypothesis. What happens if you hack\n> ps_status.c to use PS_USE_NONE?\n\nNailed it. PS_USE_NONE fixes it.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sun, 17 Mar 2024 10:44:03 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\n\n> On Mar 17, 2024, at 02:44, Wolfgang Walther <[email protected]> wrote:\n> \n> Nailed it. PS_USE_NONE fixes it.\n\nGiven the musl (still?) does not define a preprocessor macro specific to it, is there a way of improving the test in pg_status.c to catch this case? It seems wrong that the current test passes a case that doesn't actually work.\n\n", "msg_date": "Sun, 17 Mar 2024 03:33:52 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Christophe Pettus:\n> Given the musl (still?) does not define a preprocessor macro specific to it, is there a way of improving the test in pg_status.c to catch this case? 
It seems wrong that the current test passes a case that doesn't actually work.\n\nThe missing macro is on purpose and unlikely to change: \nhttps://openwall.com/lists/musl/2013/03/29/13\n\nI also found this thread, which discusses exactly our case: \nhttps://www.openwall.com/lists/musl/2022/08/17/1\n\nSome quotes from that thread:\n\n> I understand that what Postgres et al are doing is a nasty hack.\n\nAnd:\n\n> Applications that *really* want setproctitle type functionality can\n> presumably do something like re-exec themselves with a suitably large\n> argv[0] to give them safe space to overwrite with their preferred\n> message, rather than UB trying to relocate the environment (and auxv?\n> how? they can't tell libc they moved it) to some other location.\n\nCould that be a more portable way of doing this?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sun, 17 Mar 2024 14:11:19 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\n\n> On Mar 17, 2024, at 06:11, Wolfgang Walther <[email protected]> wrote:\n> The missing macro is on purpose and unlikely to change: https://openwall.com/lists/musl/2013/03/29/13\n\nIndeed.\n\n> I also found this thread, which discusses exactly our case: https://www.openwall.com/lists/musl/2022/08/17/1\n\nWhile getting proper setproctitle functionality on musl would be great, my goal was more modest: Have it correctly set PS_USE_NONE when compiling against musl.\n\n", "msg_date": "Sun, 17 Mar 2024 08:44:45 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Sun, Mar 17, 2024 at 11:45 AM Christophe Pettus <[email protected]> wrote:\n\n>\n>\n> > On Mar 17, 2024, at 06:11, Wolfgang Walther <[email protected]>\n> wrote:\n> > The missing macro is on purpose and unlikely to change:\n> https://openwall.com/lists/musl/2013/03/29/13\n>\n> Indeed.\n>\n\nThat seems a little shortsighted. If other libc implementations find it\nappropriate to have similar macros why should they be different?\n\n\n> > I also found this thread, which discusses exactly our case:\n> https://www.openwall.com/lists/musl/2022/08/17/1\n>\n> While getting proper setproctitle functionality on musl would be great, my\n> goal was more modest: Have it correctly set PS_USE_NONE when compiling\n> against musl.\n>\n\nOne simple thing might be for us to enclose the block in ps_status.c at\nlines 49-59 in #ifndef PS_USE_NONE/#endif. Then you could compile with\n-DPS_USE_NONE in your CPPFLAGS.\n\ncheers\n\nandrew\n\nOn Sun, Mar 17, 2024 at 11:45 AM Christophe Pettus <[email protected]> wrote:\n\n> On Mar 17, 2024, at 06:11, Wolfgang Walther <[email protected]> wrote:\n> The missing macro is on purpose and unlikely to change: https://openwall.com/lists/musl/2013/03/29/13\n\nIndeed.That seems a little shortsighted. If other libc implementations find it appropriate to have similar macros why should they be different?  \n\n> I also found this thread, which discusses exactly our case: https://www.openwall.com/lists/musl/2022/08/17/1\n\nWhile getting proper setproctitle functionality on musl would be great, my goal was more modest: Have it correctly set PS_USE_NONE when compiling against musl.One simple thing might be for us to enclose the block in ps_status.c at lines 49-59 in #ifndef PS_USE_NONE/#endif. 
Then you could compile with -DPS_USE_NONE in your CPPFLAGS. cheersandrew", "msg_date": "Sun, 17 Mar 2024 16:33:40 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\n\n> On Mar 17, 2024, at 13:33, Andrew Dunstan <[email protected]> wrote:\n> \n> That seems a little shortsighted. If other libc implementations find it appropriate to have similar macros why should they be different?\n\nIt's a philosophical argument against checking for particular libc implementations instead of particular features. I'm not unsympathetic to that argument, but AFAICT there's no clean way of checking for this by examining feature #defines.\n\n", "msg_date": "Sun, 17 Mar 2024 14:05:54 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:06 AM Christophe Pettus <[email protected]> wrote:\n> > On Mar 17, 2024, at 13:33, Andrew Dunstan <[email protected]> wrote:\n> > That seems a little shortsighted. If other libc implementations find it appropriate to have similar macros why should they be different?\n>\n> It's a philosophical argument against checking for particular libc implementations instead of particular features. I'm not unsympathetic to that argument, but AFAICT there's no clean way of checking for this by examining feature #defines.\n\nI like their philosophy, and I like their FAQ. Down with software\nmonocultures, up with standards and cooperation. But anyway...\n\nI wondered for a moment if there could be a runtime way to test if\nwe'd broken stuff, but it seems hard without a way to ask the runtime\nlinker for its search path to see if it has any pointers into the\nenvironment. We can't, that \"env_path\" variable in dynlink.c is not\naccessible to us by any reasonable means. And yeah, this whole thing\nis a nasty invasive hack that harks back to the 80s I assume, before\nmany systems provided a clean way to do this (and some never did)...\n\nHmm, I can think of one dirty hack on top of our existing dirty hack\nthat might work. I feel bad typing this out, but here goes nothing:\nIn save_ps_display_args(), we compute end_of_area, stepping past\ncontiguous arguments and environment variables. But what if we\nterminated that if we saw an environment entry beginning \"LD_\"? 
We'd\nstill feel free to clobber the memory up to that point (rather than\nlimiting ourselves to the argv space, another more conservative choice\nthat might truncate a few PS display messages, or maybe not given the\ntypical postmaster arguments, maye that'd work out OK), and we'd still\ncopy the environment to somewhere new, but anything like \"LD_XXX\" that\nthe runtime linker might have stashed a pointer to would remain valid.\n/me runs away and hides\n\n\n", "msg_date": "Mon, 18 Mar 2024 12:20:26 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\n\n> On Mar 17, 2024, at 16:20, Thomas Munro <[email protected]> wrote:\n> \n> We'd\n> still feel free to clobber the memory up to that point (rather than\n> limiting ourselves to the argv space, another more conservative choice\n> that might truncate a few PS display messages, or maybe not given the\n> typical postmaster arguments, maye that'd work out OK), and we'd still\n> copy the environment to somewhere new, but anything like \"LD_XXX\" that\n> the runtime linker might have stashed a pointer to would remain valid.\n> /me runs away and hides\n\nIt doesn't lack for bravery! (And I have to just comment that the linker storing pointers into that space as a way of finding libraries... well, that doesn't get them the moral high ground for nasty hacks.)\n\nI'm comfortable with \"if you are using musl, you don't get the ps messages\" as a first solution, if we can find a way of detecting a libc that passes the other tests but doesn't support any of the existing hacks.\n\n", "msg_date": "Mon, 18 Mar 2024 02:33:45 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:34 PM Christophe Pettus <[email protected]> wrote:\n> > On Mar 17, 2024, at 16:20, Thomas Munro <[email protected]> wrote:\n> > We'd\n> > still feel free to clobber the memory up to that point (rather than\n> > limiting ourselves to the argv space, another more conservative choice\n> > that might truncate a few PS display messages, or maybe not given the\n> > typical postmaster arguments, maye that'd work out OK), and we'd still\n> > copy the environment to somewhere new, but anything like \"LD_XXX\" that\n> > the runtime linker might have stashed a pointer to would remain valid.\n> > /me runs away and hides\n>\n> It doesn't lack for bravery! (And I have to just comment that the linker storing pointers into that space as a way of finding libraries... well, that doesn't get them the moral high ground for nasty hacks.)\n\nFWIW here is a blind patch if someone wants to try it out... no musl here.\n\n(Hmm, I think it's not that unreasonable on their part to assume the\ninitial environment is immutable if their implementation doesn't\nmutate it, and our doing so is undeniably UB; surprising, maybe, given\nthat the technique works on that other popular brand of C library on\nthat kind of kernel, not to mention dozens of old Unixen of yore...\nThe real solution may turn out to be the prctl() described in that\nthread, where you can tell the kernel where you're planning to move\nyour argv and it can find it to show ps/top, but I checked and you\nstill can't call that without special privileges, so maybe someone\nshould get onto complaining to kernel hackers about that? 
That thread\nis wrong about us clobbering auxv BTW, we're not animals!)", "msg_date": "Tue, 19 Mar 2024 02:25:33 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> (Hmm, I think it's not that unreasonable on their part to assume the\n> initial environment is immutable if their implementation doesn't\n> mutate it, and our doing so is undeniably UB; surprising, maybe, given\n> that the technique works on that other popular brand of C library on\n> that kind of kernel, not to mention dozens of old Unixen of yore...\n\nDoes their implementation also ignore the effects of putenv() or\nsetenv() on LD_LIBRARY_PATH? They have no moral high ground\nwhatsoever if that's the case. But if it doesn't, an alternative\nroute to a solution could be to scan the original environment, strdup\nand putenv each entry to move it to freshly malloc'd space, and\nthen reclaim the old environment area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 10:23:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 19, 2024 at 3:23 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > (Hmm, I think it's not that unreasonable on their part to assume the\n> > initial environment is immutable if their implementation doesn't\n> > mutate it, and our doing so is undeniably UB; surprising, maybe, given\n> > that the technique works on that other popular brand of C library on\n> > that kind of kernel, not to mention dozens of old Unixen of yore...\n>\n> Does their implementation also ignore the effects of putenv() or\n> setenv() on LD_LIBRARY_PATH? They have no moral high ground\n> whatsoever if that's the case. But if it doesn't, an alternative\n> route to a solution could be to scan the original environment, strdup\n> and putenv each entry to move it to freshly malloc'd space, and\n> then reclaim the old environment area.\n\nYes, the musl linker/loader ignores putenv()/setenv() changes to\nLD_LIBRARY_PATH after process start (that is, changes only effect the\nsearch path when injected into a new program with exec*()). As does\nglibc, it's just that it captures by copy instead of reference\n(according to one of the links above, I didn't check the source). So\nsetenv() has no effect on dlopen() in *this* program, and using putenv\nin that way won't help. We simply can't move the value of\nLD_LIBRARY_PATH (though my patch could be a little sneakier and steal\nall the bytes right up to the = sign to get more space for our\nmessage!).\n\nOne way to tell if a copy has been made is to trace a program that does:\n\n getenv(\"LD_LIBRARY_PATH\")[2] = 'X';\n dlopen(\"foo.so\", RTLD_NOW | RTLD_GLOBAL);\n\n... when run with LD_LIBRARY_PATH set to /asdf. On FreeBSD I see it\ntries to open \"/aXdf...\", so now I know that FreeBSD also captures it\nby reference like musl. But we don't use the clobber trick on\nFreeBSD, it has a proper setproctitle() function that knows how to\nnegotiate with the kernel, so it doesn't matter. 
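For anyone who wants to repeat that experiment, a self-contained
version of the probe might look like this (the library name "foo.so"
and the path /asdf are only placeholders, and on older glibc you may
need to link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        char       *p = getenv("LD_LIBRARY_PATH");

        if (p == NULL || strlen(p) < 3)
        {
            fprintf(stderr, "run me with LD_LIBRARY_PATH=/asdf\n");
            return 1;
        }
        p[2] = 'X';             /* scribble on the inherited string */
        (void) dlopen("foo.so", RTLD_NOW | RTLD_GLOBAL);
        return 0;
    }

Run it under strace (or truss) with LD_LIBRARY_PATH=/asdf: a loader
that kept a reference to the inherited string will go looking for
/aXdf/foo.so, while one that took its own copy at startup will still
search /asdf.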
It also ignores\nchanges made with setent()/putenv(), because those create fresh\nentries but leave the initial environment strings untouched.\n\nSolaris also ignores changes made after startup (it's in the dlopen\nman page), and from a very quick look at its ld_lib_setup() I think it\nachieved that with a copy. I believe its ancestor SunOS 4 invented\nall of these conventions (and the mmap/virtual memory concepts they\nrode in on), later nailed down to some degree in the System V ABI and\nvery widely adopted, but I don't see anything in the latter that\nspecifically addresses this point, eg LD_LIBRARY copy vs reference and\ninteraction with dlopen() (perhaps I didn't look hard enough). I'm\nnot sure what else you can point to to make strong claims about this\nstuff, but I bet every system ignores changes after startup, it's just\nthat they found two ways to achieve that. POSIX says of dlopen that\nthe \"file [argument] is used in an implementation-defined manner\", and\nof environ that we're welcome to swap a whole new environ, but doesn't\nseem to tell us anything about the one that is replaced (who owns it?\nis the initial one set up at execution time special? etc). The line\nbanning manipulation of the pointers environ refers to doesn't exactly\ndescribe what we're doing (we're manipulating the strings pointed to\nby the *previous* environ). UB.\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:17:36 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 19, 2024 at 10:17 AM Thomas Munro <[email protected]> wrote:\n> ... (though my patch could be a little sneakier and steal\n> all the bytes right up to the = sign to get more space for our\n> message!).\n\nHere's one like that. No musl here -- does this work Wolfgang? Do we\nthink it's generous enough with space in practice that we could just\nalways do this for __linux__ systems without anyone noticing (ie\nincluding glibc users)? Should we be more specific about which LD_*\nvariables? Do people not doing hacking/testing ever really set those,\neg on production servers? This code path was once used by up to a\ndozen or so OSes but they're all dead, only Linux, Solaris and macOS\nleft, and I don't have any reason to think they suffer from this\nproblem and Macs don't even follow the SysV LD_ naming convention,\nhence gating on Linux.", "msg_date": "Tue, 19 Mar 2024 11:48:50 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 19, 2024 at 11:48:50AM +1300, Thomas Munro wrote:\n> On Tue, Mar 19, 2024 at 10:17 AM Thomas Munro <[email protected]> wrote:\n> > ... (though my patch could be a little sneakier and steal\n> > all the bytes right up to the = sign to get more space for our\n> > message!).\n> \n> Here's one like that. No musl here -- does this work Wolfgang? Do we\n> think it's generous enough with space in practice that we could just\n> always do this for __linux__ systems without anyone noticing (ie\n> including glibc users)? Should we be more specific about which LD_*\n> variables? Do people not doing hacking/testing ever really set those,\n> eg on production servers? 
This code path was once used by up to a\n> dozen or so OSes but they're all dead, only Linux, Solaris and macOS\n> left, and I don't have any reason to think they suffer from this\n> problem and Macs don't even follow the SysV LD_ naming convention,\n> hence gating on Linux.\n\nSo this would truncate the process title on all Linux that have an LD_\nenvironment entry, even those without musl?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 19 Mar 2024 19:54:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 12:54 PM Bruce Momjian <[email protected]> wrote:\n> So this would truncate the process title on all Linux that have an LD_\n> environment entry, even those without musl?\n\nYep. How long is /proc/XXX/cmdline (check with wc -c /proc/...) in a\npostmaster near you? You'd always get that much, plus as much of\n/proc/XXX/environ as we can find before you reach LD_XXX=, which on a\ntypical system would, I guess, usually be never. If it's a problem\nyou could try to arrange for LD_ XXX to come later in environ[]. What\nI observe is that they seem to get copied in backwards, wrt the\nenvironment exported by the parent, so if you set DUMMY=XXXXXXXX just\nbefore starting the process it'll make sacrificial space in the right\nplace (but I'm not sure where that effect is coming from so don't\nquote me).\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:12:54 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 02:12:54PM +1300, Thomas Munro wrote:\n> On Wed, Mar 20, 2024 at 12:54 PM Bruce Momjian <[email protected]> wrote:\n> > So this would truncate the process title on all Linux that have an LD_\n> > environment entry, even those without musl?\n> \n> Yep. How long is /proc/XXX/cmdline (check with wc -c /proc/...) in a\n> postmaster near you? You'd always get that much, plus as much of\n\n\t$ cat /proc/2000/cmdline |wc -c\n\t30\n\n> /proc/XXX/environ as we can find before you reach LD_XXX=, which on a\n> typical system would, I guess, usually be never. If it's a problem\n> you could try to arrange for LD_ XXX to come later in environ[]. What\n> I observe is that they seem to get copied in backwards, wrt the\n> environment exported by the parent, so if you set DUMMY=XXXXXXXX just\n> before starting the process it'll make sacrificial space in the right\n> place (but I'm not sure where that effect is coming from so don't\n> quote me).\n\nI am just cautious about changing behavior on our most common platform\nfor a libc library I have never heard of.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 19 Mar 2024 21:27:23 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Yep. How long is /proc/XXX/cmdline (check with wc -c /proc/...) in a\n> postmaster near you? 
You'd always get that much, plus as much of\n> /proc/XXX/environ as we can find before you reach LD_XXX=, which on a\n> typical system would, I guess, usually be never.\n\nI'd be happier about this if the target pattern were more restrictive.\nIs there reason to think that musl keeps a pointer to anything besides\nLD_LIBRARY_PATH?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 21:53:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:53 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > Yep. How long is /proc/XXX/cmdline (check with wc -c /proc/...) in a\n> > postmaster near you? You'd always get that much, plus as much of\n> > /proc/XXX/environ as we can find before you reach LD_XXX=, which on a\n> > typical system would, I guess, usually be never.\n>\n> I'd be happier about this if the target pattern were more restrictive.\n> Is there reason to think that musl keeps a pointer to anything besides\n> LD_LIBRARY_PATH?\n\nAlso LD_PRELOAD:\n\nhttps://github.com/bminor/musl/blob/7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0/ldso/dynlink.c#L1824\n\nYeah we could do just those two.\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:01:09 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Wed, Mar 20, 2024 at 2:53 PM Tom Lane <[email protected]> wrote:\n>> I'd be happier about this if the target pattern were more restrictive.\n>> Is there reason to think that musl keeps a pointer to anything besides\n>> LD_LIBRARY_PATH?\n\n> Also LD_PRELOAD:\n> https://github.com/bminor/musl/blob/7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0/ldso/dynlink.c#L1824\n> Yeah we could do just those two.\n\n+1 for stopping only at one of those two names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 22:03:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:27 PM Bruce Momjian <[email protected]> wrote:\n> I am just cautious about changing behavior on our most common platform\n> for a libc library I have never heard of.\n\nYeah, I hear you. I don't have a dog in this race, I just like\nretro-computing mysteries and arguments about the meaning of\nstandards... 
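Concretely, "stopping at one of those two names" would mean the loop
in save_ps_display_args() that extends end_of_area over contiguous
environment strings bails out early -- roughly like this (a sketch of
the idea only, not the attached patch; variable names as in
ps_status.c):

    for (i = 0; environ[i] != NULL; i++)
    {
        /*
         * Leave LD_LIBRARY_PATH/LD_PRELOAD, and everything after them,
         * alone: the dynamic linker may still hold pointers into these
         * strings.
         */
        if (strncmp(environ[i], "LD_LIBRARY_PATH=", 16) == 0 ||
            strncmp(environ[i], "LD_PRELOAD=", 11) == 0)
            break;
        if (end_of_area + 1 == environ[i])
            end_of_area = environ[i] + strlen(environ[i]);
    }

Everything before the first such entry is still reclaimed for the ps
display; everything from that entry onwards keeps its original address,
so on a libc that resolves dlopen() searches through the inherited
string the worst case is a somewhat shorter title rather than a broken
library search path.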
That said I'm pretty sure no one should be running\nproduction PostgreSQL systems held together by LD_LIBRARY_PATH, and if\nthey do, it looks like systemd/rc.d scripts set a bunch of PG_OOM_BLAH\nstuff just before the start the cluster that would provide extra chaff\nin front of LD_XXX stuff defined earlier, and then pg_ctl inserts even\nmore, and even if they don't use any of that stuff and just run the\npostmaster directly with some other homegrown tooling, now we're down\nto very niche/expert scenarios where it should be acceptable to point\nto this thread that says \"try setting an extra dummy variable after\nyou set LD_XXX!\".\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:12:49 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 3:03 PM Tom Lane <[email protected]> wrote:\n> +1 for stopping only at one of those two names.\n\nHere's one like that for Wolfgang to test on musl.", "msg_date": "Wed, 20 Mar 2024 17:39:34 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On 17.03.24 11:33, Christophe Pettus wrote:\n>> On Mar 17, 2024, at 02:44, Wolfgang Walther <[email protected]> wrote:\n>>\n>> Nailed it. PS_USE_NONE fixes it.\n> \n> Given the musl (still?) does not define a preprocessor macro specific to it, is there a way of improving the test in pg_status.c to catch this case? It seems wrong that the current test passes a case that doesn't actually work.\n> \n\nWe could turn it around and do\n\n#if defined(__linux__)\n#if defined(__GLIBC__) || defined(__UCLIBC__ )\n#define PS_USE_CLOBBER_ARGV\n#else\n#define PS_USE_NONE\n#endif\n#endif\n\n\n\n", "msg_date": "Wed, 20 Mar 2024 07:02:48 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:03 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 17.03.24 11:33, Christophe Pettus wrote:\n> >> On Mar 17, 2024, at 02:44, Wolfgang Walther <[email protected]>\n> wrote:\n> >>\n> >> Nailed it. PS_USE_NONE fixes it.\n> >\n> > Given the musl (still?) does not define a preprocessor macro specific to\n> it, is there a way of improving the test in pg_status.c to catch this\n> case? It seems wrong that the current test passes a case that doesn't\n> actually work.\n> >\n>\n> We could turn it around and do\n>\n> #if defined(__linux__)\n> #if defined(__GLIBC__) || defined(__UCLIBC__ )\n> #define PS_USE_CLOBBER_ARGV\n> #else\n> #define PS_USE_NONE\n> #endif\n> #endif\n>\n>\n>\n>\nI like it. Neat and minimal.\n\ncheers\n\nandrew\n\nOn Wed, Mar 20, 2024 at 2:03 AM Peter Eisentraut <[email protected]> wrote:On 17.03.24 11:33, Christophe Pettus wrote:\n>> On Mar 17, 2024, at 02:44, Wolfgang Walther <[email protected]> wrote:\n>>\n>> Nailed it. PS_USE_NONE fixes it.\n> \n> Given the musl (still?) does not define a preprocessor macro specific to it, is there a way of improving the test in pg_status.c to catch this case?  It seems wrong that the current test passes a case that doesn't actually work.\n> \n\nWe could turn it around and do\n\n#if defined(__linux__)\n#if defined(__GLIBC__) || defined(__UCLIBC__ )\n#define PS_USE_CLOBBER_ARGV\n#else\n#define PS_USE_NONE\n#endif\n#endif\n\n\nI like it. 
Neat and minimal.cheersandrew", "msg_date": "Wed, 20 Mar 2024 03:16:14 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro:\n> On Wed, Mar 20, 2024 at 3:03 PM Tom Lane <[email protected]> wrote:\n>> +1 for stopping only at one of those two names.\n> \n> Here's one like that for Wolfgang to test on musl.\n\nWorks fine.\n\nPeter Eisentraut:\n> We could turn it around and do\n> \n> #if defined(__linux__)\n> #if defined(__GLIBC__) || defined(__UCLIBC__ )\n> #define PS_USE_CLOBBER_ARGV\n> #else\n> #define PS_USE_NONE\n> #endif\n> #endif\n\nThis works as well.\n\nI also put together a PoC of what was mentioned in musl's mailing list: \nInstead of clobbering environ at all, exec yourself again with padded \nargv0. This works, too. Attached.\n\nBest,\n\nWolfgang", "msg_date": "Wed, 20 Mar 2024 10:39:20 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 10:39:20AM +0100, Wolfgang Walther wrote:\n> Peter Eisentraut:\n> > We could turn it around and do\n> > \n> > #if defined(__linux__)\n> > #if defined(__GLIBC__) || defined(__UCLIBC__ )\n> > #define PS_USE_CLOBBER_ARGV\n> > #else\n> > #define PS_USE_NONE\n> > #endif\n> > #endif\n> \n> This works as well.\n\nYes, I prefer this. I am worried the environ hackery will bite us\nsomeday and the cause will be hard to find.\n\n> I also put together a PoC of what was mentioned in musl's mailing list:\n> Instead of clobbering environ at all, exec yourself again with padded argv0.\n> This works, too. Attached.\n\nIt is hard to imagine why we would add an extra exec on every Linux\nserver start for this.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 20 Mar 2024 09:35:51 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 09:35:51AM -0400, Bruce Momjian wrote:\n> On Wed, Mar 20, 2024 at 10:39:20AM +0100, Wolfgang Walther wrote:\n> > I also put together a PoC of what was mentioned in musl's mailing list:\n> > Instead of clobbering environ at all, exec yourself again with padded argv0.\n> > This works, too. Attached.\n> \n> It is hard to imagine why we would add an extra exec on every Linux\n> server start for this.\n\nI guess we could conditionally exec only if we find we must, but then\nsuch exec cases would be rare and rarely tested.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 20 Mar 2024 09:37:30 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n> I guess we could conditionally exec only if we find we must, but then\n> such exec cases would be rare and rarely tested.\n\nI think you might be seriously underestimating how often musl is used. 
\nAlpine Linux uses musl and is very widespread in the container world \nbecause of smaller image size.\n\nThe library/postgres docker image has been pulled about 8 billion times \nsince 2014 [1]. While we can't really tell how many of those pulled the \nalpine variant of the image, comparing the alpine [2] and ubuntu/debian \n[3,4] base images gives a rough estimate of >50% using alpine in general.\n\nThis is certainly not rare.\n\nBut yeah, buildfarm coverage for musl would be good, I agree. Maybe even \ndirectly in CI?\n\nBest,\n\nWolfgang\n\n[1]: https://hub.docker.com/v2/repositories/library/postgres\n[2]: https://hub.docker.com/v2/repositories/library/alpine\n[3]: https://hub.docker.com/v2/repositories/library/ubuntu\n[4]: https://hub.docker.com/v2/repositories/library/debian\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:24:58 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, 2024-03-20 at 15:24 +0100, Wolfgang Walther wrote:\n> I think you might be seriously underestimating how often musl is used. \n> Alpine Linux uses musl and is very widespread in the container world \n> because of smaller image size\n\nThe last time I looked, its collation support didn't work at all...\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:28:53 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Wed, 2024-03-20 at 15:24 +0100, Wolfgang Walther wrote:\n>> I think you might be seriously underestimating how often musl is used. \n>> Alpine Linux uses musl and is very widespread in the container world \n>> because of smaller image size\n\n> The last time I looked, its collation support didn't work at all...\n\nI think the same is true of some of the BSDen, so that's not a\nlarge impediment to us. But in any case, if somebody wants\nAlpine or musl to be considered a supported platform, they'd\nbest step up and run a buildfarm animal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Mar 2024 10:36:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n> On Wed, Mar 20, 2024 at 10:39:20AM +0100, Wolfgang Walther wrote:\n>> Peter Eisentraut:\n>>> We could turn it around and do\n>>>\n>>> #if defined(__linux__)\n>>> #if defined(__GLIBC__) || defined(__UCLIBC__ )\n>>> #define PS_USE_CLOBBER_ARGV\n>>> #else\n>>> #define PS_USE_NONE\n>>> #endif\n>>> #endif\n>>\n>> This works as well.\n> \n> Yes, I prefer this. I am worried the environ hackery will bite us\n> someday and the cause will be hard to find.\n\nWell, the environ hackery already bit and it sure was hard to find. But \nthis approach would still clobber environ happily... which is undefined \nbehavior. But certainly the opt-in to known-to-be-good libc variants is \na better approach than before.\n\nBetween this and \"stop clobbering at LD_LIBRARY_PATH\", I prefer the \nlatter, though.\n\n>> I also put together a PoC of what was mentioned in musl's mailing list:\n>> Instead of clobbering environ at all, exec yourself again with padded argv0.\n>> This works, too. 
Attached.\n> \n> It is hard to imagine why we would add an extra exec on every Linux\n> server start for this.\n\nWould this be a problem? For a running server this would happen only \nonce when the postmaster starts up, AFAICT.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:37:07 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Laurenz Albe:\n> On Wed, 2024-03-20 at 15:24 +0100, Wolfgang Walther wrote:\n>> I think you might be seriously underestimating how often musl is used.\n>> Alpine Linux uses musl and is very widespread in the container world\n>> because of smaller image size\n> \n> The last time I looked, its collation support didn't work at all...\n\nIIUC, using icu collations should work. I didn't extensively try, \nthough. But yeah - musl itself doesn't do it, knowingly so.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:43:05 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Tom Lane:\n> But in any case, if somebody wants\n> Alpine or musl to be considered a supported platform, they'd\n> best step up and run a buildfarm animal.\n\nYeah, I was already thinking about that. But I guess we'd need to first \nmake the test suite pass on musl. i.e. $subject, but there are also some \nsmaller issues after that, before the full test suite will pass.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:46:39 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On 20.03.24 15:37, Wolfgang Walther wrote:\n>> It is hard to imagine why we would add an extra exec on every Linux\n>> server start for this.\n> \n> Would this be a problem? For a running server this would happen only \n> once when the postmaster starts up, AFAICT.\n\nI wonder if it would cause issues with systemd or similar, if the PID of \nthe running process is not the one that systemd started. If so, there \nis probably a workaround, but it would have to be analyzed.\n\n\n\n", "msg_date": "Wed, 20 Mar 2024 19:37:08 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Peter Eisentraut:\n>> Would this be a problem? For a running server this would happen only \n>> once when the postmaster starts up, AFAICT.\n> \n> I wonder if it would cause issues with systemd or similar, if the PID of \n> the running process is not the one that systemd started.  If so, there \n> is probably a workaround, but it would have to be analyzed.\n\nI don't think that exec even creates a new PID. 
The current process is \nreplaced, so the PID stays the same.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 20 Mar 2024 20:10:01 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 03:24:58PM +0100, Wolfgang Walther wrote:\n> Bruce Momjian:\n> > I guess we could conditionally exec only if we find we must, but then\n> > such exec cases would be rare and rarely tested.\n> \n> I think you might be seriously underestimating how often musl is used.\n> Alpine Linux uses musl and is very widespread in the container world because\n> of smaller image size.\n> \n> The library/postgres docker image has been pulled about 8 billion times\n> since 2014 [1]. While we can't really tell how many of those pulled the\n> alpine variant of the image, comparing the alpine [2] and ubuntu/debian\n> [3,4] base images gives a rough estimate of >50% using alpine in general.\n\nUh, what is the current behavior of Postgres on musl? It just fails if\nthe process title is longer than argv[0] plus the environment space to\nthe LD_ environment variable, and then linking fails for certain\nextensions? If there are many downloads, why would we only be getting\nthis report now?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:18:39 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n>> The library/postgres docker image has been pulled about 8 billion times\n>> since 2014 [1]. While we can't really tell how many of those pulled the\n>> alpine variant of the image, comparing the alpine [2] and ubuntu/debian\n>> [3,4] base images gives a rough estimate of >50% using alpine in general.\n> \n> Uh, what is the current behavior of Postgres on musl? It just fails if\n> the process title is longer than argv[0] plus the environment space to\n> the LD_ environment variable, and then linking fails for certain\n> extensions? If there are many downloads, why would we only be getting\n> this report now?\n\nThe process title works fine. It's just the way how space is cleared for \nthe process title, that is causing problems elsewhere.\n\nThe thing that is broken when running postgres on alpine/musl is, to put \nlibpq in a custom path and use LD_LIBRARY_PATH to find it when loading \nlibpqwalreceiver (+ some contrib modules). Nobody does that, especially \nnot in a container environment where postgres is likely the only thing \nrunning in that container, so there is no point in using any custom \nlibrary paths or anything - the image is built once and made to work, \nand everybody else is just using that working image.\n\nThe much more practical problem is that the test suite doesn't run, \nbecause it makes use of LD_LIBRARY_PATH for that purpose. 
In the past, \nthe packagers for alpine only disabled the failing tests, but IIRC they \nhave given up on that and just disabled the whole test suite by now.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 20 Mar 2024 20:29:21 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Wed, Mar 20, 2024 at 08:29:21PM +0100, Wolfgang Walther wrote:\n> Bruce Momjian:\n> > > The library/postgres docker image has been pulled about 8 billion times\n> > > since 2014 [1]. While we can't really tell how many of those pulled the\n> > > alpine variant of the image, comparing the alpine [2] and ubuntu/debian\n> > > [3,4] base images gives a rough estimate of >50% using alpine in general.\n> > \n> > Uh, what is the current behavior of Postgres on musl? It just fails if\n> > the process title is longer than argv[0] plus the environment space to\n> > the LD_ environment variable, and then linking fails for certain\n> > extensions? If there are many downloads, why would we only be getting\n> > this report now?\n> \n> The process title works fine. It's just the way how space is cleared for the\n> process title, that is causing problems elsewhere.\n> \n> The thing that is broken when running postgres on alpine/musl is, to put\n> libpq in a custom path and use LD_LIBRARY_PATH to find it when loading\n> libpqwalreceiver (+ some contrib modules). Nobody does that, especially not\n> in a container environment where postgres is likely the only thing running\n> in that container, so there is no point in using any custom library paths or\n> anything - the image is built once and made to work, and everybody else is\n> just using that working image.\n> \n> The much more practical problem is that the test suite doesn't run, because\n> it makes use of LD_LIBRARY_PATH for that purpose. In the past, the packagers\n> for alpine only disabled the failing tests, but IIRC they have given up on\n> that and just disabled the whole test suite by now.\n\nThanks, that is very clear.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:42:16 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Thu, Mar 21, 2024 at 2:35 AM Bruce Momjian <[email protected]> wrote:\n> Yes, I prefer this. I am worried the environ hackery will bite us\n> someday and the cause will be hard to find.\n\nSome speculation on how widespread this trick is: as mentioned, it\nseems to come from sendmail (I didn't spend the time to find a repo\nthat has ancient versions, so this is someone's random snapshot repo,\nbut it's referring to pretty old dead systems):\n\nhttps://github.com/Distrotech/sendmail/blob/master/sendmail/conf.c#L2436\n\nThat was once ubiquitous, back in the day. 
One of the most widespread\nenvon-clobberers these days must be openssh:\n\nhttps://github.com/openssh/openssh-portable/blob/86bdd3853f4d32c85e295e6216a2fe0953ad93f0/openbsd-compat/setproctitle.c#L69\n\nAnd funnily enough, googling LD_LIBRARY_PATH and openssh brings up a\nfew unsolved/unanswered questions about mysterious breakage (though I\ndidn't see any that mentioned musl by name and there could be other\nexplanations, *shrug*).\n\nThere is also Chromium/Chrome:\n\nhttps://github.com/chromium/chromium/blob/main/base/process/set_process_title_linux.cc#L136\n\nThat code has some interesting commentary and points to a commit in\nLinux which mentions setproctitle() and making sure it still works\n(funny because setproctitle() isn't a function in any standard\nuserspace library AFAIK so I guess it just means the trick in\ngeneral), and also mentions the failure of attempts to get an official\nway to do this negotiated between the relevant projects.\n\nOf course we have to distinguish between the basic argv[] clobbering\ntrick which is barely even a trick, and the more advanced environ\nstealing trick which confuses musl. A very widespread user of the\nbasic trick would be systemd, which tries to use the prctl() if it can\nto get a much bigger window of memory to write on, but otherwise falls\nback to accepting a small one. I guess we'd do the same if we could,\nie if a future Linux version didn't require CAP_SYS_RESOURCES to do\nit:\n\nhttps://github.com/systemd/systemd/blob/8810b782a17050d7f7a870b975f09e8a690b7bea/src/basic/argv-util.c\n\nAnyway, it looks like there is plenty of will out there to keep this\nworking, albeit in a weird semi-supported state whose cruftiness is\nundeniable.\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:26:26 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro:\n> Of course we have to distinguish between the basic argv[] clobbering\n> trick which is barely even a trick, and the more advanced environ\n> stealing trick which confuses musl.\n\nRight. The latter not only confuses musl, but also makes \n/proc/<pid>/environ return garbage. This is also mentioned at the bottom \nof main.c, which has a workaround for the specific case of UBSan \ndepending on that. This is kind of funny: Because we are relying on \nundefined behavior regarding the modification of environ, we need a \nworkaround for the \"UndefinedBehaviorSanitizer\" - I guess by failing \nwithout this workaround, it wanted to tell us something..\n\nThis happens on glibc, too.\n\nSo summarizing:\n\n1. The simple approach is to use PS_USE_CLOBBER_ARGV on Linux only for \nglibc and other known-to-be-good-and-identifiable libc variants, \notherwise default to PS_USE_NONE. This will not only keep the problem \nfor /proc/../environ for glibc users, but also disable ps status for \nmusl entirely. Considering that probably the biggest use-case for musl \nis to run postgres in containers, it's quite likely to actually run more \nthan just one cluster on a single machine. In this case... ps status \nwould be especially handy to identify which cluster a process belongs to.\n\n2. The next proposal was to stop clobbering environ once LD_LIBRARY_PATH \n/ LD_PRELOAD is found to keep those intact. This will keep ps status \nsupport on musl, which is good. 
But the /proc/.../environ problem will \nstill be there, unchanged.\n\nBoth of those approaches rely on the undefined behavior of clobbering \nenviron.\n\n3. The logical consequence of this is, to stop clobbering environ and \nuse only the available argv space. However, this will quickly leave us \nwith a very small ps status buffer to work with, making the feature less \nuseful. Note, that this could happen theoretically by starting postgres \nwith the fewest arguments and environment possible, too. Not sure what \nthe minimal buffer size is that could be achieved that way. The point \nis: The buffer size is not guaranteed at all.\n\n4. The upstream (musl) suggestion of which I sent a PoC was to \"exec \nyourself with a bigger argv\". This works. I chose to pad argv0 with \ntrailing slashes. Those can safely be stripped away again, because any \nargv0 which would come with a trailing slash to start with, would not be \nthe current executable, but a directory - so would fail exec immediately \nanyway. This keeps /proc/.../environ intact and does not rely on \nundefined behavior. Additionally, we get a guaranteed ps buffer size of \n256, which is what we use on BSDs and Windows, too.\n\nI wonder why we actually fall back to PS_USE_NONE by default.. and how \nmuch of that is related to the environment clobbering to start with? \nCould we even use the exec-approach as the fallback in all other cases \nexcept BSDs and Windows and get rid of PS_USE_NONE? Clobbering only argv \nsure seems way safer to do than what we do right now.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 21 Mar 2024 21:16:46 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Here is what we could with this:\n\n> 2. The next proposal was to stop clobbering environ once LD_LIBRARY_PATH \n> / LD_PRELOAD is found to keep those intact.\n\nWe could backpatch this down to v12. This would be one step to make the \ntest suite pass on Alpine Linux with musl and ultimately allow setting \nup a buildfarm animal for that.\n\nIt does not solve the /proc/.../environ problem, but at least keeps ps \nstatus working on musl as it did before, so not a regression.\n\n> 4. The upstream (musl) suggestion of which I sent a PoC was to \"exec \n> yourself with a bigger argv\". \n\nWe could do this in HEAD now ...\n\n> Could we even use the exec-approach as the fallback in all other cases \n> except BSDs and Windows and get rid of PS_USE_NONE?\n\n... and then remove PS_USE_NONE at the beginning of the v18 cycle.\n\nThis would give a bit more time for those \"other systems\", which were \npreviously falling back PS_USE_NONE and would then clobber argv, too.\n\nOpinions?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 21 Mar 2024 21:30:00 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 9:30 AM <[email protected]> wrote:\n> > 4. The upstream (musl) suggestion of which I sent a PoC was to \"exec\n> > yourself with a bigger argv\".\n>\n> We could do this in HEAD now ...\n\nJust a thought: if we want to go this way, do we need a new exec call?\n We already control the initial exec in pg_ctl.c.\n\n> > Could we even use the exec-approach as the fallback in all other cases\n> > except BSDs and Windows and get rid of PS_USE_NONE?\n>\n> ... 
and then remove PS_USE_NONE at the beginning of the v18 cycle.\n>\n> This would give a bit more time for those \"other systems\", which were\n> previously falling back PS_USE_NONE and would then clobber argv, too.\n\nRIght. It's unspecified by POSIX whether ps shows changes to those\nstrings (and there are systems that don't), but it can't hurt to do so\nanyway, and it'd be better than having a PS_USE_NONE code path that is\nuntested. I dimly recall that it turned out that PS_USE_NONE was\nactually broken for a while without anyone noticing.\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:42:52 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Just a thought: if we want to go this way, do we need a new exec call?\n> We already control the initial exec in pg_ctl.c.\n\nI'm resistant to assuming the postmaster is launched through pg_ctl.\nsystemd, for example, might well prefer not to do that, not to\nmention all the troglodytes still using 1990s launch scripts.\n\nA question that seems worth debating in this thread is how much\nupdating the process title is even worth nowadays. It feels like\na hangover from before we had pg_stat_activity and other monitoring\nsupport. So I don't feel a huge need to support it on musl.\nThe previously-suggested patch to whitelist glibc and variants,\nand otherwise fall back to PS_USE_NONE, seems like it might be\nthe appropriate amount of effort.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:02:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n> The previously-suggested patch to whitelist glibc and variants,\n> and otherwise fall back to PS_USE_NONE, seems like it might be\n> the appropriate amount of effort.\n\nWhat about meeting musl halfway: clobber argv, but only clobber\nenviron for the libcs known to tolerate that? Then musl might see\ntruncation at 30-60 characters or whatever it is, but that's probably\nenough to see your cluster_name and backend type/user name which is\npretty useful information.\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:20:11 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Thu, Mar 21, 2024 at 7:02 PM Tom Lane <[email protected]> wrote:\n\n> Thomas Munro <[email protected]> writes:\n> > Just a thought: if we want to go this way, do we need a new exec call?\n> > We already control the initial exec in pg_ctl.c.\n>\n> I'm resistant to assuming the postmaster is launched through pg_ctl.\n> systemd, for example, might well prefer not to do that, not to\n> mention all the troglodytes still using 1990s launch scripts.\n>\n> A question that seems worth debating in this thread is how much\n> updating the process title is even worth nowadays. It feels like\n> a hangover from before we had pg_stat_activity and other monitoring\n> support. 
So I don't feel a huge need to support it on musl.\n> The previously-suggested patch to whitelist glibc and variants,\n> and otherwise fall back to PS_USE_NONE, seems like it might be\n> the appropriate amount of effort.\n>\n>\n>\n\n\n+1\n\ncheers\n\nandrew\n\nOn Thu, Mar 21, 2024 at 7:02 PM Tom Lane <[email protected]> wrote:Thomas Munro <[email protected]> writes:\n> Just a thought: if we want to go this way, do we need a new exec call?\n>  We already control the initial exec in pg_ctl.c.\n\nI'm resistant to assuming the postmaster is launched through pg_ctl.\nsystemd, for example, might well prefer not to do that, not to\nmention all the troglodytes still using 1990s launch scripts.\n\nA question that seems worth debating in this thread is how much\nupdating the process title is even worth nowadays.  It feels like\na hangover from before we had pg_stat_activity and other monitoring\nsupport.  So I don't feel a huge need to support it on musl.\nThe previously-suggested patch to whitelist glibc and variants,\nand otherwise fall back to PS_USE_NONE, seems like it might be\nthe appropriate amount of effort.\n\n                     +1cheersandrew", "msg_date": "Thu, 21 Mar 2024 19:23:45 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n>> The previously-suggested patch to whitelist glibc and variants,\n>> and otherwise fall back to PS_USE_NONE, seems like it might be\n>> the appropriate amount of effort.\n\n> What about meeting musl halfway: clobber argv, but only clobber\n> environ for the libcs known to tolerate that? Then musl might see\n> truncation at 30-60 characters or whatever it is, but that's probably\n> enough to see your cluster_name and backend type/user name which is\n> pretty useful information.\n\nNo real objection here. I do wonder about the point you (or somebody)\nmade upthread that we don't have any testing of the PS_USE_NONE case;\nbut that could be addressed some other way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:30:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:42:52AM +1300, Thomas Munro wrote:\n> On Fri, Mar 22, 2024 at 9:30 AM <[email protected]> wrote:\n> > > 4. The upstream (musl) suggestion of which I sent a PoC was to \"exec\n> > > yourself with a bigger argv\".\n> >\n> > We could do this in HEAD now ...\n> \n> Just a thought: if we want to go this way, do we need a new exec call?\n> We already control the initial exec in pg_ctl.c.\n> \n> > > Could we even use the exec-approach as the fallback in all other cases\n> > > except BSDs and Windows and get rid of PS_USE_NONE?\n> >\n> > ... and then remove PS_USE_NONE at the beginning of the v18 cycle.\n> >\n> > This would give a bit more time for those \"other systems\", which were\n> > previously falling back PS_USE_NONE and would then clobber argv, too.\n> \n> RIght. It's unspecified by POSIX whether ps shows changes to those\n> strings (and there are systems that don't), but it can't hurt to do so\n> anyway, and it'd be better than having a PS_USE_NONE code path that is\n> untested. 
I dimly recall that it turned out that PS_USE_NONE was\n> actually broken for a while without anyone noticing.\n\nActually, I was thinking the opposite. Since the musl libc is widely\nused, it will be tested, and I don't want to disable process display\nupdates for such a common platform.\n\nI suggest we use the #ifdef test to continue our existing behavior for\nthe libraries we know about, like glibc, and use the LD_* process title\ntruncation hack for libc's we don't recognize.\n\nAttached is a prototype patch which implements this based on previous\npatches.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 21 Mar 2024 20:07:14 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 12:20:11PM +1300, Thomas Munro wrote:\n> On Fri, Mar 22, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n> > The previously-suggested patch to whitelist glibc and variants,\n> > and otherwise fall back to PS_USE_NONE, seems like it might be\n> > the appropriate amount of effort.\n> \n> What about meeting musl halfway: clobber argv, but only clobber\n> environ for the libcs known to tolerate that? Then musl might see\n> truncation at 30-60 characters or whatever it is, but that's probably\n> enough to see your cluster_name and backend type/user name which is\n> pretty useful information.\n\nI just posted a prototype patch to implement this.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 21 Mar 2024 20:08:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Hi,\n\nOn 2024-03-21 21:16:46 +0100, Wolfgang Walther wrote:\n> Right. The latter not only confuses musl, but also makes /proc/<pid>/environ\n> return garbage. This is also mentioned at the bottom of main.c, which has a\n> workaround for the specific case of UBSan depending on that. This is kind of\n> funny: Because we are relying on undefined behavior regarding the\n> modification of environ, we need a workaround for the\n> \"UndefinedBehaviorSanitizer\" - I guess by failing without this workaround,\n> it wanted to tell us something..\n\nI don't think that's quite a fair description. Ubsan is basically doing\nundefined things itself, so it's turtles all the way down.\n\n\n> So summarizing:\n\nFWIW, independent of which fix we go with, I think we need a buildfarm animal\nusing musl. Even better if one of the CI tasks can be made to use musl as\nwell.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Mar 2024 17:19:48 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\nSent from my iPad\n\n> On Mar 22, 2024, at 10:49 AM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n>> On 2024-03-21 21:16:46 +0100, Wolfgang Walther wrote:\n>> Right. The latter not only confuses musl, but also makes /proc/<pid>/environ\n>> return garbage. This is also mentioned at the bottom of main.c, which has a\n>> workaround for the specific case of UBSan depending on that. 
This is kind of\n>> funny: Because we are relying on undefined behavior regarding the\n>> modification of environ, we need a workaround for the\n>> \"UndefinedBehaviorSanitizer\" - I guess by failing without this workaround,\n>> it wanted to tell us something..\n> \n> I don't think that's quite a fair description. Ubsan is basically doing\n> undefined things itself, so it's turtles all the way down.\n> \n> \n>> So summarizing:\n> \n> FWIW, independent of which fix we go with, I think we need a buildfarm animal\n> using musl. Even better if one of the CI tasks can be made to use musl as\n> well.\n\n\nWe had one till 3 months ago. It’s on my list to recreate.\n\nCheers\n\nAndrew\n\n\n", "msg_date": "Fri, 22 Mar 2024 16:42:26 +1030", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On Mar 22, 2024, at 10:49 AM, Andres Freund <[email protected]> wrote:\n>> FWIW, independent of which fix we go with, I think we need a buildfarm animal\n>> using musl. Even better if one of the CI tasks can be made to use musl as\n>> well.\n\n> We had one till 3 months ago. It’s on my list to recreate.\n\nHow was it passing? The issue discussed in this thread has surely\nbeen there for a long time, and Wolfgang mentioned that he sees\nothers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 02:15:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "\n\n> On Mar 22, 2024, at 4:45 PM, Tom Lane <[email protected]> wrote:\n> \n> Andrew Dunstan <[email protected]> writes:age\n>>> On Mar 22, 2024, at 10:49 AM, Andres Freund <[email protected]> wrote:\n>>> FWIW, independent of which fix we go with, I think we need a buildfarm animal\n>>> using musl. Even better if one of the CI tasks can be made to use musl as\n>>> well.\n> \n>> We had one till 3 months ago. It’s on my list to recreate.\n> \n> How was it passing? The issue discussed in this thread has surely\n> been there for a long time, and Wolfgang mentioned that he sees\n> others.\n> \n> \n\n\nThe buildfarm client has a switch that delays running regression tests until after the install stages.\n\nCheers \n\nAndrew\n\n", "msg_date": "Fri, 22 Mar 2024 17:32:19 +1030", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Andres Freund:\n> FWIW, independent of which fix we go with, I think we need a buildfarm animal\n> using musl. Even better if one of the CI tasks can be made to use musl as\n> well.\n\nI am already working with Andrew to set up a buildfarm animal to run \nAlpine Linux/musl. I can look into the CI task as well. Are you \nsuggesting to change an existing task to run with Alpine/musl or to add \na new task for it? It would be docker image based for sure.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 22 Mar 2024 08:55:52 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Andrew Dunstan:\n>> On Mar 22, 2024, at 4:45 PM, Tom Lane <[email protected]> wrote:\n>> How was it passing? 
The issue discussed in this thread has surely\n>> been there for a long time, and Wolfgang mentioned that he sees\n>> others.\n> \n> The buildfarm client has a switch that delays running regression tests until after the install stages.\n\nHm. So while that switch makes the animal pass the build, it did hide \nexactly this problem. Not sure whether this switch should be used at \nall, then. Was this switch only implemented for the specific case of \nAlpine/musl or is there a different reason for it, as well?\n\nThe other issues I had been seeing were during make check-world, but not \nmake check. Those were things around setlocale() / /bin/locale, IIRC. \nNot sure whether all of the tests are run by the buildfarm?\n\nBest,\n\nWolfgang\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:02:04 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Tom Lane:\n> Thomas Munro <[email protected]> writes:\n>> Just a thought: if we want to go this way, do we need a new exec call?\n>> We already control the initial exec in pg_ctl.c.\n> \n> I'm resistant to assuming the postmaster is launched through pg_ctl.\n> systemd, for example, might well prefer not to do that, not to\n> mention all the troglodytes still using 1990s launch scripts.\n\nRight, the systemd example in the docs is not using pg_ctl.\n\nBut, it should be easily possible to have:\n- pg_ctl call postgres with a padded argv0\n- postgres call itself with padding, if it wasn't called with that already\n\nThis way, there would be no additional exec call when started through \npg_ctl, but one more call when started directly.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:12:47 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n> I suggest we use the #ifdef test to continue our existing behavior for\n> the libraries we know about, like glibc, and use the LD_* process title\n> truncation hack for libc's we don't recognize.\n> \n> Attached is a prototype patch which implements this based on previous\n> patches.\n\nThe condition to check for linux/glibc in your patch is slightly off:\n\n #if ! defined(__linux__) || (! defined(__GLIBC__) && \ndefined(__UCLIBC__ ))\n\nshould be\n\n #if defined(__linux__) && ! (defined(__GLIBC__) || defined(__UCLIBC__ ))\n\nWith the latter, it passes tests with musl.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:33:38 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Wolfgang Walther:\n> The other issues I had been seeing were during make check-world, but not \n> make check. Those were things around setlocale() / /bin/locale, IIRC. 
\n> Not sure whether all of the tests are run by the buildfarm?\n\nAh, those other tests only fail when building --with-icu, but Andrew's \nanimal didn't do that.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:41:39 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 4:02 AM Wolfgang Walther <[email protected]>\nwrote:\n\n> Andrew Dunstan:\n> >> On Mar 22, 2024, at 4:45 PM, Tom Lane <[email protected]> wrote:\n> >> How was it passing? The issue discussed in this thread has surely\n> >> been there for a long time, and Wolfgang mentioned that he sees\n> >> others.\n> >\n> > The buildfarm client has a switch that delays running regression tests\n> until after the install stages.\n>\n> Hm. So while that switch makes the animal pass the build, it did hide\n> exactly this problem. Not sure whether this switch should be used at\n> all, then. Was this switch only implemented for the specific case of\n> Alpine/musl or is there a different reason for it, as well?\n>\n\nAlpine was the main motivation, but it's also probably useful on Macs with\nSIP enabled.\n\nISTR raising the Alpine issue back then (2018) but I can't find a reference\nnow.\n\ncheers\n\nandrew\n\nOn Fri, Mar 22, 2024 at 4:02 AM Wolfgang Walther <[email protected]> wrote:Andrew Dunstan:\n>> On Mar 22, 2024, at 4:45 PM, Tom Lane <[email protected]> wrote:\n>> How was it passing?  The issue discussed in this thread has surely\n>> been there for a long time, and Wolfgang mentioned that he sees\n>> others.\n> \n> The buildfarm client has a switch that delays running regression tests until after the install stages.\n\nHm. So while that switch makes the animal pass the build, it did hide \nexactly this problem. Not sure whether this switch should be used at \nall, then. Was this switch only implemented for the specific case of \nAlpine/musl or is there a different reason for it, as well?Alpine was the main motivation, but it's also probably useful on Macs with SIP enabled.ISTR raising the Alpine issue back then (2018) but I can't find a reference now.cheersandrew", "msg_date": "Fri, 22 Mar 2024 08:10:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 09:33:38AM +0100, [email protected] wrote:\n> Bruce Momjian:\n> > I suggest we use the #ifdef test to continue our existing behavior for\n> > the libraries we know about, like glibc, and use the LD_* process title\n> > truncation hack for libc's we don't recognize.\n> > \n> > Attached is a prototype patch which implements this based on previous\n> > patches.\n> \n> The condition to check for linux/glibc in your patch is slightly off:\n> \n> #if ! defined(__linux__) || (! defined(__GLIBC__) && defined(__UCLIBC__ ))\n> \n> should be\n> \n> #if defined(__linux__) && ! (defined(__GLIBC__) || defined(__UCLIBC__ ))\n> \n> With the latter, it passes tests with musl.\n\nYes, my logic was wrong. 
Not sure what I was thinking, frankly.\n\nI am not a big fan of negating a complex conditional, but would rather\npass the negation into the conditional, new patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 22 Mar 2024 09:36:19 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Fri, Mar 22, 2024 at 09:36:19AM -0400, Bruce Momjian wrote:\n> On Fri, Mar 22, 2024 at 09:33:38AM +0100, [email protected] wrote:\n> > Bruce Momjian:\n> > > I suggest we use the #ifdef test to continue our existing behavior for\n> > > the libraries we know about, like glibc, and use the LD_* process title\n> > > truncation hack for libc's we don't recognize.\n> > > \n> > > Attached is a prototype patch which implements this based on previous\n> > > patches.\n> > \n> > The condition to check for linux/glibc in your patch is slightly off:\n> > \n> > #if ! defined(__linux__) || (! defined(__GLIBC__) && defined(__UCLIBC__ ))\n> > \n> > should be\n> > \n> > #if defined(__linux__) && ! (defined(__GLIBC__) || defined(__UCLIBC__ ))\n> > \n> > With the latter, it passes tests with musl.\n> \n> Yes, my logic was wrong. Not sure what I was thinking, frankly.\n> \n> I am not a big fan of negating a complex conditional, but would rather\n> pass the negation into the conditional, new patch attached.\n\nWith no one \"hoping this patch dies in a fire\"*, I have updated it with\nmore details, which I now think is committable to master. Is this\nsomething to backpatch? Seems too rare a bug to me.\n\n* Robert Haas, https://www.postgresql.org/message-id/CA%2BTgmoYsyrCNmg%2BYh6rgP7K8r-bYPjCeF1tPxENRFwD4VZAZvw%40mail.gmail.com\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 22 Mar 2024 15:44:28 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:26 AM Thomas Munro <[email protected]> wrote:\n> Some speculation on how widespread this trick is: as mentioned, it\n\nJust one more, which I just ran into by accident while looking for\nsomething else, which I'm posting just in case this thread ever gets\nused to try to convince musl hackers to change their end to support\nit:\n\nhttps://github.com/openzfs/zfs/blob/master/lib/libzutil/os/linux/zutil_setproctitle.c#L158\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:20:47 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n> With no one \"hoping this patch dies in a fire\"*, I have updated it with\n> more details, which I now think is committable to master. Is this\n> something to backpatch? Seems too rare a bug to me.\n\nI would like to see this backpatched to be able to run the regression \ntests easily on all stable branches.\n\nI have taken your's/Thomas' patch and extended it with a few more taking \nin many of the ideas in this thread:\n\n0001 Don't clobber LD_*\nThis is the patch you posted. This applies cleanly all the way down to \nv12. 
This fixes the bug and allows running most of the tests with musl - \nyeah!\nI also confirmed, that this will not create practical problems with \nlibrary/postgres docker image, where this is likely used the most. While \n\"postgres\" is called by default without any arguments here, plenty of \nenvironment variables are passed. The docker image does use LD_PRELOAD \nto trick initdb, but that's not set when running the postmaster, so not \na problem here.\nThis use-case also shows why the proposed patch to still partially \nclobber environ at this stage is better than to not clobber environ at \nall - in this case, the docker image would essentially have no ps status \nat all by default.-\n\n0002 Allow setting PS_USE_NONE via CPPFLAGS\nThis was proposed by Andrew and applies cleanly down to v12. Thus, it \ncould be backpatched, too. First and foremost this would allow setting a \nbuildfarm animal to use this flag to make sure this code path is \nactually build/tested at all. This is something that Thomas and Tom \nhinted at.\n\n0003 Don't ever clobber environ again\nThis is the approach I had previously posted as a PoC. This would not be \nbackpatched, but I suggested this could go into v17 now. This avoids the \nundefined behavior and sets the table to eventually set ps status via \nargv by default and remove PS_USE_NONE later.\nCompared to the PoC patch, I decided not to pad argv[0], because this \nwill break the ps status display for the postmaster itself. Instead exec \nis called with an additional argument consisting of exactly 255 spaces.\nI also tried avoiding the additional exec-call if postgres was called \nvia pg_ctl, as suggested by Peter. This quickly turned out to be more \ninvasive than I would have liked this to be, though.\nThe current approach works very well, the environment doesn't need to be \ncopied anymore and the workaround for /proc/<pid>/environ in main.c can \ngo away, too.\n\n0004 Default to PS_USE_CLOBBER_ARGV\nThis changes the default to display ps status on all other systems, too. \nThis could potentially go in now as well, or be delayed to the beginning \nof the v18 cycle. In the unlikely event that this breaks something on a \nplatform not considered here and we get a bug report, we can easily \nadvise to compile with CPPFLAGS=-DPS_USE_NONE, which is still there at \nthis stage.\n\n0005 Remove PS_USE_NONE\nHowever, if no reports come in and no problems are detected with 0004, \nthen this can be entirely removed. This for \"later\", whenever that is.\n\nBest,\n\nWolfgang", "msg_date": "Mon, 25 Mar 2024 22:58:30 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Mon, Mar 25, 2024 at 10:58:30PM +0100, [email protected] wrote:\n> Bruce Momjian:\n> > With no one \"hoping this patch dies in a fire\"*, I have updated it with\n> > more details, which I now think is committable to master. Is this\n> > something to backpatch? Seems too rare a bug to me.\n> \n> I would like to see this backpatched to be able to run the regression tests\n> easily on all stable branches.\n\nYou want to risk destabilizing Postgres by backpatching something this\ncomplex so the regression tests can be run on all stable branches? I\nthink you are overestimating our desire to take on risk.\n\nAlso, in my patch, the parentheses here:\n\n\t#if defined(__linux__) && (! defined(__GLIBC__) && ! 
defined(__UCLIBC__))\n\nare unnecessary so they should be removed:\n\n\t#if defined(__linux__) && ! defined(__GLIBC__) && ! defined(__UCLIBC__)\n\nI am only willing to apply my patch, and only to master. Other\ncommitters might be more willing.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Mar 2024 18:35:06 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On 22.03.24 20:44, Bruce Momjian wrote:\n> +\t\t\t\t * linking (dlopen) might fail. Here, we truncate the update\n> +\t\t\t\t * of the process title when either of two important dynamic\n> +\t\t\t\t * linking environment variables are set. Musl does not\n> +\t\t\t\t * define any compiler symbols, so we have to do this for\n> +\t\t\t\t * any Linux libc we don't know is safe.\n> +\t\t\t\t */\n> +\t\t\t\tif (strstr(environ[i], \"LD_LIBRARY_PATH=\") == environ[i] ||\n> +\t\t\t\t\tstrstr(environ[i], \"LD_PRELOAD=\") == environ[i])\n\nWhat determines which variables require this treatment?\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 23:46:09 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:46:09PM +0100, Peter Eisentraut wrote:\n> On 22.03.24 20:44, Bruce Momjian wrote:\n> > +\t\t\t\t * linking (dlopen) might fail. Here, we truncate the update\n> > +\t\t\t\t * of the process title when either of two important dynamic\n> > +\t\t\t\t * linking environment variables are set. Musl does not\n> > +\t\t\t\t * define any compiler symbols, so we have to do this for\n> > +\t\t\t\t * any Linux libc we don't know is safe.\n> > +\t\t\t\t */\n> > +\t\t\t\tif (strstr(environ[i], \"LD_LIBRARY_PATH=\") == environ[i] ||\n> > +\t\t\t\t\tstrstr(environ[i], \"LD_PRELOAD=\") == environ[i])\n> \n> What determines which variables require this treatment?\n\nThomas Munro came up with that part of the patch. I just combined his\npatch with the macro test.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Mar 2024 18:51:09 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 26, 2024 at 11:46 AM Peter Eisentraut <[email protected]> wrote:\n> What determines which variables require this treatment?\n\nThat came from me peeking at their code:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGKNK5V8XwwJJZm36s3EUy8V51xu4XiE8%3D26n%3DWq3OGd4A%40mail.gmail.com\n\nI had originally proposed to avoid anything beginning \"LD_\" but Tom\nsuggested being more specific. I doubt LD_PRELOAD can really hurt you\nthough (the linker probably only needs the value at the start by\ndefinition, not at later dlopen() time (?)). I dunno. 
If you're\nasking if there is any standard or whatever supplying these names, the\nSystem V or at least ELF standards talk about LD_LIBRARY_PATH (though\nthose standards don't know/care what happens after userspace takes\nover control of the environment concept, they just talk about how the\nworld is created when you exec a process, so they AFAICS they don't\naddress this clobbering stuff, and AFAIK other LD_XXX stuff is\nprobably implementation specific).\n\n\n", "msg_date": "Tue, 26 Mar 2024 11:54:59 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Mon, Mar 25, 2024 at 10:58:30PM +0100, [email protected] wrote:\n>> I would like to see this backpatched to be able to run the regression tests\n>> easily on all stable branches.\n\n> You want to risk destabilizing Postgres by backpatching something this\n> complex so the regression tests can be run on all stable branches? I\n> think you are overestimating our desire to take on risk.\n\nIf we want a buildfarm animal testing this platform, we kind of need\nto support it on all branches. Having said that, I agree with you\nthat we are looking for a minimalist fix not a maximalist one.\nI think the 0001 patch is about right, but the rest seem to be solving\nproblems we don't have.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Mar 2024 19:14:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Tue, Mar 26, 2024 at 11:46 AM Peter Eisentraut <[email protected]> wrote:\n>> What determines which variables require this treatment?\n\n> I had originally proposed to avoid anything beginning \"LD_\" but Tom\n> suggested being more specific. I doubt LD_PRELOAD can really hurt you\n> though (the linker probably only needs the value at the start by\n> definition, not at later dlopen() time (?)).\n\nOh, good point. So we could simplify the patch by only looking for\nLD_LIBRARY_PATH.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Mar 2024 19:15:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Mon, Mar 25, 2024 at 07:14:25PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Mon, Mar 25, 2024 at 10:58:30PM +0100, [email protected] wrote:\n> >> I would like to see this backpatched to be able to run the regression tests\n> >> easily on all stable branches.\n> \n> > You want to risk destabilizing Postgres by backpatching something this\n> > complex so the regression tests can be run on all stable branches? I\n> > think you are overestimating our desire to take on risk.\n> \n> If we want a buildfarm animal testing this platform, we kind of need\n> to support it on all branches. 
Having said that, I agree with you\n> that we are looking for a minimalist fix not a maximalist one.\n> I think the 0001 patch is about right, but the rest seem to be solving\n> problems we don't have.\n\nI could support the minimalist patch applied to all branches.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Mar 2024 19:21:48 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "I wrote:\n> Thomas Munro <[email protected]> writes:\n>> I had originally proposed to avoid anything beginning \"LD_\" but Tom\n>> suggested being more specific. I doubt LD_PRELOAD can really hurt you\n>> though (the linker probably only needs the value at the start by\n>> definition, not at later dlopen() time (?)).\n\n> Oh, good point. So we could simplify the patch by only looking for\n> LD_LIBRARY_PATH.\n\nI looked at the musl source code you identified and confirmed that\nonly the LD_LIBRARY_PATH string is remembered in a static variable;\nLD_PRELOAD is only accessed locally in that initialization function.\nSo we only need to do the attached. (I failed to resist the\ntemptation to rewrite the comments.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 25 Mar 2024 19:43:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 26, 2024 at 12:43 PM Tom Lane <[email protected]> wrote:\n> I looked at the musl source code you identified and confirmed that\n> only the LD_LIBRARY_PATH string is remembered in a static variable;\n> LD_PRELOAD is only accessed locally in that initialization function.\n> So we only need to do the attached. (I failed to resist the\n> temptation to rewrite the comments.)\n\nLGTM.\n\n\n", "msg_date": "Tue, 26 Mar 2024 12:49:55 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Hi,\n\nOn 2024-03-22 08:55:52 +0100, Wolfgang Walther wrote:\n> Andres Freund:\n> > FWIW, independent of which fix we go with, I think we need a buildfarm animal\n> > using musl. Even better if one of the CI tasks can be made to use musl as\n> > well.\n>\n> I am already working with Andrew to set up a buildfarm animal to run Alpine\n> Linux/musl. I can look into the CI task as well. Are you suggesting to\n> change an existing task to run with Alpine/musl or to add a new task for it?\n> It would be docker image based for sure.\n\nI'd rather adapt one of the existing tasks, to avoid increasing CI costs\nunduly.\n\nThe way we currently run CI for testing of not-yet-merged patches runs\nall tasks other than macos as full VMs, that turned out to be faster &\ncheaper.\n\nFWIW, except for one small issue, building postgres against musl works on\ndebian and the tests pass if I install first.\n\n\nThe small problem mentioned above is that on debian linux/fs.h isn't available\nwhen building with musl, which in turn causes src/bin/pg_upgrade/file.c to\nfail to compile. 
I assume that's not the case on \"fully musl\" distro?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:14:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On Tue, Mar 26, 2024 at 12:49:55PM +1300, Thomas Munro wrote:\n> On Tue, Mar 26, 2024 at 12:43 PM Tom Lane <[email protected]> wrote:\n> > I looked at the musl source code you identified and confirmed that\n> > only the LD_LIBRARY_PATH string is remembered in a static variable;\n> > LD_PRELOAD is only accessed locally in that initialization function.\n> > So we only need to do the attached. (I failed to resist the\n> > temptation to rewrite the comments.)\n> \n> LGTM.\n\n+1\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Mar 2024 20:53:05 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "On 26.03.24 00:43, Tom Lane wrote:\n> I wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> I had originally proposed to avoid anything beginning \"LD_\" but Tom\n>>> suggested being more specific. I doubt LD_PRELOAD can really hurt you\n>>> though (the linker probably only needs the value at the start by\n>>> definition, not at later dlopen() time (?)).\n> \n>> Oh, good point. So we could simplify the patch by only looking for\n>> LD_LIBRARY_PATH.\n> \n> I looked at the musl source code you identified and confirmed that\n> only the LD_LIBRARY_PATH string is remembered in a static variable;\n> LD_PRELOAD is only accessed locally in that initialization function.\n> So we only need to do the attached. (I failed to resist the\n> temptation to rewrite the comments.)\n\nYeah, I was more looking for a comment for posterity for *why* we need \nto preserve this variable in particular. The updated comment looks \nreasonable.\n\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:19:09 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Bruce Momjian:\n> You want to risk destabilizing Postgres by backpatching something this\n> complex so the regression tests can be run on all stable branches? I\n> think you are overestimating our desire to take on risk.\n\nI specifically wrote about backpatching the first (and maybe second) \npatch. None of that is risky.\n\nPatches 3-5 were not meant for backpatching at all.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:36:40 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Tom Lane:\n> If we want a buildfarm animal testing this platform, we kind of need\n> to support it on all branches. Having said that, I agree with you\n> that we are looking for a minimalist fix not a maximalist one.\n> I think the 0001 patch is about right, but the rest seem to be solving\n> problems we don't have.\n\nThe second patch potentially solves the problem of PS_USE_NONE not being \ntested. 
Of course you could also set up a buildfarm animal on some other \nplatform, which is sure to fall through to PS_USE_NONE, but that seems \nto have not worked in the past:\n\nThomas Munro:\n> I dimly recall that it turned out that PS_USE_NONE was\n> actually broken for a while without anyone noticing\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:43:54 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "The need to do $subject came up in [1]. Moving this to a separate \ndiscussion on -hackers, because there are more issues to solve than just \nthe LD_LIBRARY_PATH problem.\n\nAndres Freund:\n> FWIW, except for one small issue, building postgres against musl works on\n> debian and the tests pass if I install first.\n> \n> \n> The small problem mentioned above is that on debian linux/fs.h isn't available\n> when building with musl, which in turn causes src/bin/pg_upgrade/file.c to\n> fail to compile. I assume that's not the case on \"fully musl\" distro?\n\nCorrect, I have not seen this before on Alpine.\n\nHere is my progress setting up a buildfarm animal to run on Alpine Linux \nand the issues I found, so far:\n\nThe animal runs in a docker container via GitHub Actions in [2]. Right \nnow it's still running with --test, until I get the credentials to \nactivate it.\n\nI tried to enable everything (except systemd, because Alpine doesn't \nhave it) and run all tests. The LDAP tests are failing right now, but \nthat is likely something that I need to fix in the Dockerfile - it's \nfailing to start the slapd, IIRC. There are other issues, though - all \nof them have open pull requests in that repo [3].\n\nI also had to skip the recovery check. Andrew mentioned that he had to \ndo that, too, when he was still running his animal on Alpine. Not sure \nwhat this is about, yet.\n\nBuilding --with-icu fails two tests. One of them (001_initdb) is fixed \nby having the \"locale\" command in your PATH, which is not the case on \nAlpine by default. I assume this will not break on your debian/musl \nbuild, Andres - but it will also probably not return any sane values, \nbecause it will run glibc's locale command.\nI haven't looked into that in detail, yet, but I think the other test \n(icu/010_database) fails because it expects that setlocale(LC_COLLATE, \n<illegal_value>) throws an error. I think it doesn't do that on musl, \nbecause LC_COLLATE is not implemented.\nThose failing tests are not \"just failing\", but probably mean that we \nneed to do something about how we deal with locale/setlocale on musl.\n\nThe last failure is about building --with-nls. This fails with something \nlike:\n\nld: src/port/strerror.c:72:(.text+0x2d8): undefined reference to \n`libintl_gettext'\n\nOf course, gettext-dev is installed, libintl.so is available in /usr/lib \nand it also contains the symbol. So not sure what's happening here.\n\nAndres, did you build --with-icu and/or --with-nls on debian/musl? 
Did \nyou run the recovery tests?\n\nBest,\n\nWolfgang\n\n[1]: \nhttps://postgr.es/m/fddd1cd6-dc16-40a2-9eb5-d7fef2101488%40technowledgy.de\n[2]: \nhttps://github.com/technowledgy/postgresql-buildfarm-alpine/actions/workflows/run.yaml\n[3]: https://github.com/technowledgy/postgresql-buildfarm-alpine/pulls\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:22:28 +0100", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Building with musl in CI and the build farm" }, { "msg_contents": "[email protected] writes:\n> The second patch potentially solves the problem of PS_USE_NONE not being \n> tested. Of course you could also set up a buildfarm animal on some other \n> platform, which is sure to fall through to PS_USE_NONE, but that seems \n> to have not worked in the past:\n\n> Thomas Munro:\n>> I dimly recall that it turned out that PS_USE_NONE was\n>> actually broken for a while without anyone noticing\n\nI think what Thomas is recollecting is this:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL_15_BR [0fb6954aa] 2022-03-27 12:57:46 -0400\nBranch: REL_14_STABLE Release: REL_14_3 [3f7a59c59] 2022-03-27 12:57:52 -0400\nBranch: REL_13_STABLE Release: REL_13_7 [9016a2a3d] 2022-03-27 12:57:57 -0400\n\n Fix breakage of get_ps_display() in the PS_USE_NONE case.\n \n Commit 8c6d30f21 caused this function to fail to set *displen\n in the PS_USE_NONE code path. If the variable's previous value\n had been negative, that'd lead to a memory clobber at some call\n sites. We'd managed not to notice due to very thin test coverage\n of such configurations, but this appears to explain buildfarm member\n lorikeet's recent struggles.\n \n Credit to Andrew Dunstan for spotting the problem. Back-patch\n to v13 where the bug was introduced.\n \n Discussion: https://postgr.es/m/[email protected]\n\n\nThe problem wasn't lack of coverage, it was that the failure was\nintermittent and erratic enough to be very hard to diagnose.\nI think that's more bad luck than anything else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 10:35:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 26.03.24 00:43, Tom Lane wrote:\n>> I looked at the musl source code you identified and confirmed that\n>> only the LD_LIBRARY_PATH string is remembered in a static variable;\n>> LD_PRELOAD is only accessed locally in that initialization function.\n>> So we only need to do the attached. (I failed to resist the\n>> temptation to rewrite the comments.)\n\n> Yeah, I was more looking for a comment for posterity for *why* we need \n> to preserve this variable in particular. The updated comment looks \n> reasonable.\n\nOK, pushed after a bit more comment-fiddling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 11:45:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with musl libc because libpq.so can't be\n loaded" }, { "msg_contents": "Here's an update on the progress to run musl (Alpine Linux) in the \nbuildfarm.\n\nWolfgang Walther:\n> The animal runs in a docker container via GitHub Actions in [2]. Right \n> now it's still running with --test, until I get the credentials to \n> activate it.\n\nThe animals have been activated and are reporting now. 
Thanks, Andrew!\n\n\n> I tried to enable everything (except systemd, because Alpine doesn't \n> have it) and run all tests. The LDAP tests are failing right now, but \n> that is likely something that I need to fix in the Dockerfile - it's \n> failing to start the slapd, IIRC. There are other issues, though - all \n> of them have open pull requests in that repo [3].\n\nldap tests are enabled, just a missing package.\n\n\n> I also had to skip the recovery check. Andrew mentioned that he had to \n> do that, too, when he was still running his animal on Alpine. Not sure \n> what this is about, yet.\n\nThis was about a missing init process in the docker image. Without an \ninit process reaping zombie processes, the recovery tests end up with \nsome supposed-to-be-terminated backends still running and can't start \nthem up again. Fixed by adding a minimal init process with \"tinit\".\n\n\n> Building --with-icu fails two tests. One of them (001_initdb) is fixed \n> by having the \"locale\" command in your PATH, which is not the case on \n> Alpine by default. I assume this will not break on your debian/musl \n> build, Andres - but it will also probably not return any sane values, \n> because it will run glibc's locale command.\n> I haven't looked into that in detail, yet, but I think the other test \n> (icu/010_database) fails because it expects that setlocale(LC_COLLATE, \n> <illegal_value>) throws an error. I think it doesn't do that on musl, \n> because LC_COLLATE is not implemented.\n> Those failing tests are not \"just failing\", but probably mean that we \n> need to do something about how we deal with locale/setlocale on musl.\n\nI still need to look into this in depth.\n\n\n> The last failure is about building --with-nls. This fails with something \n> like:\n> \n> ld: src/port/strerror.c:72:(.text+0x2d8): undefined reference to \n> `libintl_gettext'\n> \n> Of course, gettext-dev is installed, libintl.so is available in /usr/lib \n> and it also contains the symbol. So not sure what's happening here.\n\nThis is an Alpine Linux packaging issue. Theoretically, it could be made \nto work by introducing some configure/meson flag like \"--with-gettext\" \nor so, to prefer gettext's libintl over the libc-builtin. However, NixOS \n/ nixpkgs with its pkgsMusl overlay manages to solve this issue just \nfine, builds with --enable-nls and gettext work. Thus, I conclude this \nis best solved upstream in Alpine Linux.\n\nTLDR: The only real issue which is still open from PostgreSQL's side is \naround locales and ICU - certainly the pain point in musl. Will look \ninto it further.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 30 Mar 2024 15:05:19 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "About building one of the CI tasks with musl:\n\nAndres Freund:\n> I'd rather adapt one of the existing tasks, to avoid increasing CI costs unduly.\n\nI looked into this and I think the only task that could be changed is \nthe SanityCheck. This is because this builds without any additional \nfeatures enabled. I guess that makes sense, because otherwise those \ndependencies would first have to be built with musl-gcc as well.\n\n\n> FWIW, except for one small issue, building postgres against musl works on debian and the tests pass if I install first.\n\nAfter the fix for LD_LIBRARY_PATH this now works as expected without \ninstalling first. 
I confirmed it works on debian with CC=musl-gcc.\n\n\n> The small problem mentioned above is that on debian linux/fs.h isn't available\n> when building with musl, which in turn causes src/bin/pg_upgrade/file.c to\n> fail to compile.\n\nAccording to [1], this can be worked around by linking some folders:\n\nln -s /usr/include/linux /usr/include/x86_64-linux-musl/\nln -s /usr/include/asm-generic /usr/include/x86_64-linux-musl/\nln -s /usr/include/x86_64-linux-gnu/asm /usr/include/x86_64-linux-musl/\n\nPlease find a patch to use musl-gcc in SanityCheck attached. Logs from \nthe CI run are in [2]. It has this in the configure phase:\n\n[13:19:52.712] Using 'CC' from environment with value: 'ccache musl-gcc'\n[13:19:52.712] C compiler for the host machine: ccache musl-gcc (gcc \n10.2.1 \"cc (Debian 10.2.1-6) 10.2.1 20210110\")\n[13:19:52.712] C linker for the host machine: musl-gcc ld.bfd 2.35.2\n[13:19:52.712] Using 'CC' from environment with value: 'ccache musl-gcc'\n\nSo meson picks up musl-gcc properly. I also double checked that without \nthe links above, the build does indeed fail with the linux/fs.h error.\n\nI assume the installation of musl-tools should be done in the \npg-vm-images repo instead of the additional script here?\n\nBest,\n\nWolfgang\n\n[1]: \nhttps://debian-bugs-dist.debian.narkive.com/VlFkLigg/bug-789789-musl-fails-to-compile-stuff-that-depends-on-kernel-headers\n[2]: https://cirrus-ci.com/task/5741892590108672", "msg_date": "Sun, 31 Mar 2024 15:34:23 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "On Wed, Mar 27, 2024 at 11:27 AM Wolfgang Walther\n<[email protected]> wrote:\n> The animal runs in a docker container via GitHub Actions in [2].\n\nGreat idea :-)\n\n\n", "msg_date": "Wed, 3 Apr 2024 13:00:01 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "On 31.03.24 15:34, [email protected] wrote:\n>> I'd rather adapt one of the existing tasks, to avoid increasing CI \n>> costs unduly.\n> \n> I looked into this and I think the only task that could be changed is \n> the SanityCheck.\n\nI think SanityCheck should run a simple, \"average\" environment, like the \ncurrent Debian one. Otherwise, niche problems with musl or multi-arch \nor whatever will throw off the entire build pipeline.\n\n\n", "msg_date": "Thu, 4 Apr 2024 15:56:25 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "Peter Eisentraut:\n> On 31.03.24 15:34, [email protected] wrote:\n>>> I'd rather adapt one of the existing tasks, to avoid increasing CI \n>>> costs unduly.\n>>\n>> I looked into this and I think the only task that could be changed is \n>> the SanityCheck.\n> \n> I think SanityCheck should run a simple, \"average\" environment, like the \n> current Debian one.  Otherwise, niche problems with musl or multi-arch \n> or whatever will throw off the entire build pipeline.\n\nAll the errors/problems I have seen so far, while setting up the \nbuildfarm animal on Alpine Linux, have been way beyond what SanityCheck \ndoes. Problems only appeared in the tests suites, of which sanity check \nonly runs *very* basic ones. 
I don't have much experience with the \n\"cross\" setup, that \"musl on debian\" essentially is, though.\n\nAll those things are certainly out of scope for CI - they are tested in \nthe build farm instead.\n\nI do agree: SanityCheck doesn't feel like the right place to put this. \nBut on the other side.. if it really fails to *build* with musl, then it \nshouldn't make a difference whether you will be notified about that \nimmediately or later in the CI pipeline. It certainly needs the fewest \nadditional resources to put it there.\n\nI'm not sure what Andres meant with \"adopting one of the existing \ntasks\". It could fit as another step into the \"Linux - Debian Bullseye - \nAutoconf\" task, too. A bit similar to how the meson task build for 32 \nand 64bit. This would still not be an entirely new task like I proposed \ninitially (to run in Docker).\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 4 Apr 2024 16:11:56 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "Wolfgang Walther <[email protected]> writes:\n> Peter Eisentraut:\n>> I think SanityCheck should run a simple, \"average\" environment, like the \n>> current Debian one.  Otherwise, niche problems with musl or multi-arch \n>> or whatever will throw off the entire build pipeline.\n\n> I do agree: SanityCheck doesn't feel like the right place to put this. \n> But on the other side.. if it really fails to *build* with musl, then it \n> shouldn't make a difference whether you will be notified about that \n> immediately or later in the CI pipeline. It certainly needs the fewest \n> additional resources to put it there.\n\nThat is not the concern here. What I think Peter is worried about,\nand certainly what I'm worried about, is that a breakage in\nSanityCheck comprehensively breaks all CI testing for all Postgres\ndevelopers. One buildfarm member that's failing does not halt\nprogress altogether, so it's not even in the same ballpark of\nbeing as critical. So I agree with Peter that SanityCheck had\nbetter use a very common, vanilla environment.\n\nTo be blunt, I do not think we need to test musl in the CI pipeline.\nI see it as one of the niche platforms that the buildfarm exists\nto test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 10:36:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "Tom Lane:\n> That is not the concern here. What I think Peter is worried about,\n> and certainly what I'm worried about, is that a breakage in\n> SanityCheck comprehensively breaks all CI testing for all Postgres\n> developers.\n\nYou'd have to commit a failing patch first to break CI for all other \ndevelopers. If you're only going to commit patches that pass those CI \ntasks, then this is not going to happen. Then it only becomes a question \nof how much feedback *you* get from a single CI run of your own patch.\n\n> To be blunt, I do not think we need to test musl in the CI pipeline.\n> I see it as one of the niche platforms that the buildfarm exists\n> to test.\n\nI don't really have an opinion on this. I'm fine with having musl in the \nbuildfarm only. I don't expect the core build itself to fail with musl \nanyway, this has been working fine for years. 
Andres asked for it to be \nadded to CI, so maybe he sees more value on top of just \"building with \nmusl\"?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 4 Apr 2024 17:01:32 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "Wolfgang Walther <[email protected]> writes:\n>> That is not the concern here. What I think Peter is worried about,\n>> and certainly what I'm worried about, is that a breakage in\n>> SanityCheck comprehensively breaks all CI testing for all Postgres\n>> developers.\n\n> You'd have to commit a failing patch first to break CI for all other \n> developers.\n\nNo, what I'm more worried about is some change in the environment\ncausing the build to start failing. When that happens, it'd better\nbe an environment that many of us are familiar with and can test/fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 11:19:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building with musl in CI and the build farm" }, { "msg_contents": "Tom Lane:\n>> You'd have to commit a failing patch first to break CI for all other\n>> developers.\n> \n> No, what I'm more worried about is some change in the environment\n> causing the build to start failing. When that happens, it'd better\n> be an environment that many of us are familiar with and can test/fix.\n\nThe way I understand how this work is, that the images for the VMs in \nwhich those CI tasks run, are not just dynamically updated - but are \nactually tested before they are used in CI. So the environment doesn't \njust change suddenly.\n\nSee e.g. [1] for a pull request to the repo containing those images to \nupdate the linux debian image from bullseye to bookworm. This is exactly \nthe image we're talking about. Before this image is used in postgres CI, \nit's tested and confirmed that it actually works there. If one of the \njobs was using musl - that would be tested as well. So this job would \nnot just suddenly start failing for everybody.\n\nI do see the \"familiarity\" argument for the SanityCheck task, but for a \ndifferent reason: Even though it's unlikely for this job to fail for \nmusl specific reasons - if you're not familiar with musl and can't \neasily test it locally, you might not be able to tell immediately \nwhether it's musl specific or not. If musl was run in one of the later \njobs, it's much different: You see all tests failing - alright, not musl \nspecific. You see only the musl test failing - yeah, musl problem. This \nshould give developers much more confidence looking at the results.\n\nBest,\n\nWolfgang\n\n[1]: https://github.com/anarazel/pg-vm-images/pull/91\n\n\n", "msg_date": "Thu, 4 Apr 2024 21:09:30 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Building with musl in CI and the build farm" } ]
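To make the LD_LIBRARY_PATH hazard from the musl thread above concrete, here is a minimal, self-contained C sketch of the general idea only — it is not the committed ps_status.c change, and relocate_environ_for_ps() is an invented name used purely for illustration. The point it shows: when the environment is moved into freshly allocated storage so the original area can be reused for the process title, the original "LD_LIBRARY_PATH=..." string has to be left untouched, because musl's dynamic loader remembers a pointer into that very storage and consults it again at dlopen() time.

#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>

extern char **environ;

/* Hypothetical helper, loosely modeled on the approach discussed above. */
static void
relocate_environ_for_ps(void)
{
    int         i,
                nenv;
    char      **new_environ;

    for (nenv = 0; environ[nenv] != NULL; nenv++)
        ;
    new_environ = malloc((nenv + 1) * sizeof(char *));
    if (new_environ == NULL)
        return;                 /* give up and keep the original environment */

    for (i = 0; i < nenv; i++)
    {
        if (strncmp(environ[i], "LD_LIBRARY_PATH=", 16) == 0)
            new_environ[i] = environ[i];    /* musl's loader points here: do not copy or overwrite */
        else
            new_environ[i] = strdup(environ[i]);
    }
    new_environ[nenv] = NULL;
    environ = new_environ;

    /*
     * Only the storage behind the strdup'd entries may now be reused.  The
     * real fix also has to keep the preserved string's bytes out of the
     * region later overwritten for the ps display; that bookkeeping is
     * omitted here.
     */
}

int
main(void)
{
    relocate_environ_for_ps();
    return 0;
}

The other directions floated in the thread — re-exec'ing the postmaster with padded arguments, or building with -DPS_USE_NONE — sidestep the special case entirely, at the price of an extra exec call or of losing the ps status display.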
[ { "msg_contents": "Hello Team,\n\nHope everyone is doing well here.\n\nI am writing this email to understand an issue I'm facing when fetching data in our Java application. We are using PostgreSQL JDBC Driver version 42.6.0.\n\nIssue:\n\nWe are encountering an issue where the double precision data type in PostgreSQL is giving some intermittent results when fetching data. For example, in the database the value is 40, but sometimes this value is fetched as 40.0. Similarly, for a value of 0.0005, it is being fetched as 0.00050, resulting in extra trailing zeros.\n\nWhile debugging, it seems like this issue is caused by the different data formats, such as Text and Binary. There is some logic in the PgResultSet class that converts values based on this data format.\n\nExample:\n\nBelow is an example where we are getting different data formats for the same table:\n\nText Format: [Field(id,FLOAT8,8,T), Field(client_id,FLOAT8,8,T), Field(create_ts,TIMESTAMP,8,T), ...]\n\nBinary Format: [Field(id,FLOAT8,8,B), Field(client_id,FLOAT8,8,B), ...] (notice some format changes)\n\nWe are not sure why different formats are coming for the same table.\n\nSchema:\n\nBelow is the schema for the table used:\n\nSQL\n\n \n\n CREATE TABLE IF NOT EXISTS SUBMISSION_QUEUE(\n ID DOUBLE PRECISION,\n CLIENT_ID DOUBLE PRECISION,\n OCODE VARCHAR(20) NOT NULL,\n PAYLOAD_TYPE VARCHAR(20),\n REPOSITORY VARCHAR(16),\n SUB_REPOSITORY VARCHAR(20),\n FORCE_GENERATION_FLAG BOOLEAN,\nIS_JMX_CALL BOOLEAN,\nINSTANCE_ID DOUBLE PRECISION,\nCREATE_TS TIMESTAMP(6) NOT NULL,\n);\n\nRequest:\n\nTeam, would it be possible to give some insight on this issue? Any help would be greatly appreciated.\n\nThanks,\nHello Team,\nHope everyone is doing well here.\nI am writing this email to understand an issue I'm facing when fetching data in our Java application. We are using PostgreSQL JDBC Driver version 42.6.0.\nIssue:\nWe are encountering an issue where the double precision data type in PostgreSQL is giving some intermittent results when fetching data. For example, in the database the value is 40, but sometimes this value is fetched as 40.0. Similarly, for a value of 0.0005, it is being fetched as 0.00050, resulting in extra trailing zeros.\nWhile debugging, it seems like this issue is caused by the different data formats, such as Text and Binary. There is some logic in the PgResultSet class that converts values based on this data format.\nExample:\nBelow is an example where we are getting different data formats for the same table:\nText Format: [Field(id,FLOAT8,8,T), Field(client_id,FLOAT8,8,T), Field(create_ts,TIMESTAMP,8,T), ...]\nBinary Format: [Field(id,FLOAT8,8,B), Field(client_id,FLOAT8,8,B), ...] (notice some format changes)\nWe are not sure why different formats are coming for the same table.\nSchema:\nBelow is the schema for the table used:\nSQL\n \n CREATE TABLE IF NOT EXISTS SUBMISSION_QUEUE(\n  ID               DOUBLE PRECISION,\n  CLIENT_ID           DOUBLE PRECISION,\n  OCODE VARCHAR(20) NOT NULL,\n  PAYLOAD_TYPE        VARCHAR(20),\n  REPOSITORY VARCHAR(16),\n  SUB_REPOSITORY          VARCHAR(20),\n  FORCE_GENERATION_FLAG   BOOLEAN,\nIS_JMX_CALL BOOLEAN,\nINSTANCE_ID           DOUBLE PRECISION,\nCREATE_TS         TIMESTAMP(6) NOT NULL,\n);\n\nRequest:\nTeam, would it be possible to give some insight on this issue? 
Any help would be greatly appreciated.\nThanks,", "msg_date": "Sat, 16 Mar 2024 20:40:29 +0530", "msg_from": "Rahul Uniyal <[email protected]>", "msg_from_op": true, "msg_subject": "Java : Postgres double precession issue with different data format\n text and binary " }, { "msg_contents": "Hi,\n\nOn 03/16/24 11:10, Rahul Uniyal wrote:\n> We are encountering an issue where the double precision data type\n> in PostgreSQL is giving some intermittent results when fetching data.\n> For example, in the database the value is 40, but sometimes this value\n> is fetched as 40.0. Similarly, for a value of 0.0005, it is being\n> fetched as 0.00050, resulting in extra trailing zeros.\n\nAs a first observation, the column names in your schema suggest that\nthese columns are being used as IDs of some kind, for which a float type\nwould be an unusual choice. Unless something in your situation requires it,\nyou might consider changing to integer types for IDs.\n\nThat said, you may have found something interesting in how JDBC handles\nthe float8 type in text vs. binary format, but comparing the results\nof conversion to decimal string is not the most direct way to\ninvestigate it.\n\nIt would be clearer to compare the raw bits of the values.\n\nFor example, with SELECT float8send(ID) FROM SUBMISSION_QUEUE,\nyou should see \\x4044000000000000 if ID is 40, and you should see\n\\x3f40624dd2f1a9fc if ID is 0.0005.\n\nLikewise, on the Java side,\nLong.toHexString(Double.doubleToLongBits(id)) should also show you\n4044000000000000 for the value 40, and 3f40624dd2f1a9fc for the\nvalue 0.0005.\n\nIf you end up finding that the text/binary transmission format\nsometimes causes the Java value not to have the same bits as the\nPostgreSQL value, that information could be of interest on the\npgsql-jdbc list.\n\nRegards,\nChapman Flack\n\n\n", "msg_date": "Mon, 18 Mar 2024 13:23:14 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Java : Postgres double precession issue with different data\n format text and binary" } ]
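Restating the bit-level check suggested in the reply just above as a small self-contained program — written in C purely for illustration, not as part of the JDBC driver; the Java-side equivalent is Long.toHexString(Double.doubleToLongBits(...)) and the SQL-side equivalent is float8send(), both already quoted above. The text and binary wire formats of a float8 carry the same IEEE-754 value, so if the raw bits agree on both ends, a "40" versus "40.0" difference is only a matter of how the value is rendered back to decimal.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Reinterpret a double's storage as its raw IEEE-754 bit pattern. */
static uint64_t
double_bits(double v)
{
    uint64_t    bits;

    memcpy(&bits, &v, sizeof(bits));
    return bits;
}

int
main(void)
{
    /* Both print 4044000000000000: 40 and 40.0 are the same float8 value. */
    printf("%016llx\n", (unsigned long long) double_bits(40));
    printf("%016llx\n", (unsigned long long) double_bits(40.0));

    /* Prints 3f40624dd2f1a9fc, matching the float8send(0.0005) value quoted above. */
    printf("%016llx\n", (unsigned long long) double_bits(0.0005));
    return 0;
}

If the bits match on the server and in the application, any remaining discrepancy lies in the driver's text-versus-binary result rendering rather than in the stored data, which is the kind of finding worth raising on the pgsql-jdbc list mentioned above.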
[ { "msg_contents": "Hackers,\n\nThe jsonpath doc[1] has an excellent description of the format of strings, but for unquoted path keys, it simply says:\n\n> Member accessor that returns an object member with the specified key. If the key name matches some named variable starting with $ or does not meet the JavaScript rules for an identifier, it must be enclosed in double quotes to make it a string literal.\n\nI went looking for the JavaScript rules for an identifier and found this in the MDN docs[2]:\n\n> In JavaScript, identifiers can contain Unicode letters, $, _, and digits (0-9), but may not start with a digit. An identifier differs from a string in that a string is data, while an identifier is part of the code. In JavaScript, there is no way to convert identifiers to strings, but sometimes it is possible to parse strings into identifiers.\n\n\nHowever, the Postgres parsing of jsonpath keys appears to follow the same rules as strings, allowing backslash escapes:\n\ndavid=# select '$.fo\\u00f8 == $x'::jsonpath;\n jsonpath -------------------\n ($.\"foø\" == $\"x\")\n\nThis would seem to contradict the documentation. Is this behavior required by the SQL standard? Do the docs need updating? Or should the code actually follow the JSON identifier behavior?\n\nThanks,\n\nDavid\n\nPS: Those excellent docs on strings mentions support for \\v, but the grammar in the right nav of https://www.json.org/json-en.html does not. Another bonus feature?\n\n[1]: https://www.postgresql.org/docs/16/datatype-json.html#DATATYPE-JSONPATH\n[2]: https://developer.mozilla.org/en-US/docs/Glossary/Identifier\n\n\n\n", "msg_date": "Sat, 16 Mar 2024 14:39:20 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Mar 16, 2024, at 14:39, David E. Wheeler <[email protected]> wrote:\n\n> I went looking for the JavaScript rules for an identifier and found this in the MDN docs[2]:\n> \n>> In JavaScript, identifiers can contain Unicode letters, $, _, and digits (0-9), but may not start with a digit. An identifier differs from a string in that a string is data, while an identifier is part of the code. In JavaScript, there is no way to convert identifiers to strings, but sometimes it is possible to parse strings into identifiers.\n\nCoda: Dollar signs don’t work at all outside double-quoted string identifiers:\n\ndavid=# select '$.$foo'::jsonpath;\nERROR: syntax error at or near \"$foo\" of jsonpath input\nLINE 1: select '$.$foo'::jsonpath;\n ^\n\ndavid=# select '$.f$oo'::jsonpath;\nERROR: syntax error at or near \"$oo\" of jsonpath input\nLINE 1: select '$.f$oo'::jsonpath;\n ^\n\ndavid=# select '$.\"$foo\"'::jsonpath;\n jsonpath \n----------\n $.\"$foo\"\n\nThis, too, contradicts the MDM definition an identifier (and some quick browser tests).\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sat, 16 Mar 2024 16:33:09 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "Hi David,\n\nOn 2024-03-16 19:39 +0100, David E. Wheeler wrote:\n> The jsonpath doc[1] has an excellent description of the format of\n> strings, but for unquoted path keys, it simply says:\n> \n> > Member accessor that returns an object member with the specified\n> > key. 
If the key name matches some named variable starting with $ or\n> > does not meet the JavaScript rules for an identifier, it must be\n> > enclosed in double quotes to make it a string literal.\n> \n> I went looking for the JavaScript rules for an identifier and found\n> this in the MDN docs[2]:\n> \n> > In JavaScript, identifiers can contain Unicode letters, $, _, and\n> > digits (0-9), but may not start with a digit. An identifier differs\n> > from a string in that a string is data, while an identifier is part\n> > of the code. In JavaScript, there is no way to convert identifiers\n> > to strings, but sometimes it is possible to parse strings into\n> > identifiers.\n> \n> \n> However, the Postgres parsing of jsonpath keys appears to follow the\n> same rules as strings, allowing backslash escapes:\n> \n> david=# select '$.fo\\u00f8 == $x'::jsonpath;\n> jsonpath -------------------\n> ($.\"foø\" == $\"x\")\n> \n> This would seem to contradict the documentation. Is this behavior\n> required by the SQL standard? Do the docs need updating? Or should the\n> code actually follow the JSON identifier behavior?\n\nThat quoted MDN page does not give the whole picture. ECMAScript and JS\ndo allow Unicode escape sequences in identifier names:\n\nhttps://262.ecma-international.org/#sec-identifier-names\nhttps://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Lexical_grammar#identifiers\n\n> PS: Those excellent docs on strings mentions support for \\v, but the\n> grammar in the right nav of https://www.json.org/json-en.html does\n> not. Another bonus feature?\n\nYou refer to that sentence: \"Other special backslash sequences include\nthose recognized in JSON strings: \\b, \\f, \\n, \\r, \\t, \\v for various\nASCII control characters, and \\uNNNN for a Unicode character identified\nby its 4-hex-digit code point.\"\n\nMentioning JSON and \\v in the same sentence is wrong: JavaScript allows\nthat escape in strings but JSON doesn't. I think the easiest is to just\nreplace \"JSON\" with \"JavaScript\" in that sentence to make it right. The\nparagraph also already says \"embedded string literals follow JavaScript/\nECMAScript conventions\", so mentioning JSON seems unnecessary to me.\n\nThe last sentence also mentions backslash escapes \\xNN and \\u{N...} as\ndeviations from JSON when in fact those are valid escape sequences from\nECMA-262: https://262.ecma-international.org/#prod-HexEscapeSequence\nSo I think it makes sense to reword the entire backslash part of the\nparagraph and remove references to JSON entirely. The attached patch\ndoes that and also formats the backslash escapes as a bulleted list for\nreadability.\n\n> [1]: https://www.postgresql.org/docs/16/datatype-json.html#DATATYPE-JSONPATH\n> [2]: https://developer.mozilla.org/en-US/docs/Glossary/Identifier\n\nOn 2024-03-16 21:33 +0100, David E. Wheeler wrote:\n> On Mar 16, 2024, at 14:39, David E. Wheeler <[email protected]>\n> wrote:\n> \n> > I went looking for the JavaScript rules for an identifier and found\n> > this in the MDN docs[2]:\n> > \n> >> In JavaScript, identifiers can contain Unicode letters, $, _, and\n> >> digits (0-9), but may not start with a digit. An identifier differs\n> >> from a string in that a string is data, while an identifier is part\n> >> of the code. 
In JavaScript, there is no way to convert identifiers\n> >> to strings, but sometimes it is possible to parse strings into\n> >> identifiers.\n> \n> Coda: Dollar signs don’t work at all outside double-quoted string\n> identifiers:\n> \n> david=# select '$.$foo'::jsonpath;\n> ERROR: syntax error at or near \"$foo\" of jsonpath input\n> LINE 1: select '$.$foo'::jsonpath;\n> ^\n> \n> david=# select '$.f$oo'::jsonpath;\n> ERROR: syntax error at or near \"$oo\" of jsonpath input\n> LINE 1: select '$.f$oo'::jsonpath;\n> ^\n> \n> david=# select '$.\"$foo\"'::jsonpath;\n> jsonpath \n> ----------\n> $.\"$foo\"\n> \n> This, too, contradicts the MDM definition an identifier (and some\n> quick browser tests).\n\nThe first case ($.$foo) is in line with the restriction on member\naccessors that you quoted first.\n\nThe error message 'syntax error at or near \"$oo\" of jsonpath input' for\nthe second case ($.f$oo), however, looks as if the scanner identifies\n'$oo' as a variable instead of contiuing the scan of identifier (f$oo)\nfor the member accessor. Looks like a bug to me because a variable\ndoesn't even make sense in that place.\n\nWhat works though, besides double quoting, is escaping the dollar sign:\n\n\tregress=# select '$.\\u0024foo'::jsonpath;\n\t jsonpath\n\t----------\n\t $.\"$foo\"\n\t(1 row)\n\nAnd we've come full circle :)\n\n-- \nErik", "msg_date": "Sun, 17 Mar 2024 20:12:03 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Mar 17, 2024, at 15:12, Erik Wienhold <[email protected]> wrote:\n\n> Hi David,\n\nHey Erik. Thanks for the detailed reply and patch!\n\n> So I think it makes sense to reword the entire backslash part of the\n> paragraph and remove references to JSON entirely. The attached patch\n> does that and also formats the backslash escapes as a bulleted list for\n> readability.\n\nAh, it’s JavaScript format, not JSON! This does clarify things quite nicely, thank you. Happy to add my review once it’s in a commit fest.\n\n> The first case ($.$foo) is in line with the restriction on member\n> accessors that you quoted first.\n\nHuh, that’s now how I read it. Here it is again:\n\n>> Member accessor that returns an object member with the specified\n>> key. If the key name matches some named variable starting with $ or\n>> does not meet the JavaScript rules for an identifier, it must be\n>> enclosed in double quotes to make it a string literal.\n\n\nNote that in my example `$foo` does not match a variable. I mean it looks like a variable, but none is used here. I guess it’s being conservative because it might be used in one of the functions, like jsonb_path_exists(), to which variables might be passed.\n\n> The error message 'syntax error at or near \"$oo\" of jsonpath input' for\n> the second case ($.f$oo), however, looks as if the scanner identifies\n> '$oo' as a variable instead of contiuing the scan of identifier (f$oo)\n> for the member accessor. Looks like a bug to me because a variable\n> doesn't even make sense in that place.\n\nRight. 
Maybe the docs should be updated to say that a literal dollar sign isn’t supported in identifiers, unlike in JavaScript, except through escapes like this:\n\n> What works though, besides double quoting, is escaping the dollar sign:\n> \n> regress=# select '$.\\u0024foo'::jsonpath;\n> jsonpath\n> ----------\n> $.\"$foo\"\n> (1 row)\n> \n> And we've come full circle :)\n\n🎉\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Sun, 17 Mar 2024 15:50:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On 2024-03-17 20:50 +0100, David E. Wheeler wrote:\n> On Mar 17, 2024, at 15:12, Erik Wienhold <[email protected]> wrote:\n> > So I think it makes sense to reword the entire backslash part of the\n> > paragraph and remove references to JSON entirely. The attached patch\n> > does that and also formats the backslash escapes as a bulleted list for\n> > readability.\n> \n> Ah, it’s JavaScript format, not JSON! This does clarify things quite\n> nicely, thank you. Happy to add my review once it’s in a commit fest.\n\nThanks. https://commitfest.postgresql.org/48/4899/\n\n> > The first case ($.$foo) is in line with the restriction on member\n> > accessors that you quoted first.\n> \n> Huh, that’s now how I read it. Here it is again:\n> \n> >> Member accessor that returns an object member with the specified\n> >> key. If the key name matches some named variable starting with $ or\n> >> does not meet the JavaScript rules for an identifier, it must be\n> >> enclosed in double quotes to make it a string literal.\n> \n> \n> Note that in my example `$foo` does not match a variable. I mean it\n> looks like a variable, but none is used here. I guess it’s being\n> conservative because it might be used in one of the functions, like\n> jsonb_path_exists(), to which variables might be passed.\n\nI had the same reasoning while writing my first reply but scrapped that\npart because I found it obvious: That jsonpath is parsed before calling\njsonb_path_exists() and therefore the parser has no context about any\nvariables, which might not even be hardcoded but may result from a\nquery.\n\n> > The error message 'syntax error at or near \"$oo\" of jsonpath input' for\n> > the second case ($.f$oo), however, looks as if the scanner identifies\n> > '$oo' as a variable instead of contiuing the scan of identifier (f$oo)\n> > for the member accessor. Looks like a bug to me because a variable\n> > doesn't even make sense in that place.\n> \n> Right. Maybe the docs should be updated to say that a literal dollar\n> sign isn’t supported in identifiers, unlike in JavaScript, except\n> through escapes like this:\n\nUnfortunately, I don't have access to that part of the SQL spec. So I\ndon't know how the jsonpath grammar is specified.\n\nI had a look into Oracle, MySQL, and SQLite docs to see what they\nimplement:\n\n* Oracle requires the unquoted field names to match [A-Za-z][A-Za-z0-9]*\n (see \"object steps\"). 
It also supports variables.\n https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-path-expressions.html\n\n* MySQL refers to ECMAScript identifiers but does not say anything about\n variables: https://dev.mysql.com/doc/refman/8.3/en/json.html#json-path-syntax\n\n* SQLite skimps on details and does not document a grammar:\n https://sqlite.org/json1.html#path_arguments\n But it looks as if it strives for compatibility with MySQL and our dear\n Postgres: https://sqlite.org/src/doc/json-in-core/doc/json-enhancements.md\n\nAlso checked git log src/backend/utils/adt/jsonpath_scan.l for some\ninsights but haven't found any yet.\n\n-- \nErik\n\n\n", "msg_date": "Mon, 18 Mar 2024 01:09:50 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Mar 17, 2024, at 20:09, Erik Wienhold <[email protected]> wrote:\n> \n> On 2024-03-17 20:50 +0100, David E. Wheeler wrote:\n>> On Mar 17, 2024, at 15:12, Erik Wienhold <[email protected]> wrote:\n>>> So I think it makes sense to reword the entire backslash part of the\n>>> paragraph and remove references to JSON entirely. The attached patch\n>>> does that and also formats the backslash escapes as a bulleted list for\n>>> readability.\n>> \n>> Ah, it’s JavaScript format, not JSON! This does clarify things quite\n>> nicely, thank you. Happy to add my review once it’s in a commit fest.\n> \n> Thanks. https://commitfest.postgresql.org/48/4899/\n\nApplies cleanly, `make -C doc/src/sgml check` runs without error. Doc improvement welcome and much clearer than before.\n\n> I had the same reasoning while writing my first reply but scrapped that\n> part because I found it obvious: That jsonpath is parsed before calling\n> jsonb_path_exists() and therefore the parser has no context about any\n> variables, which might not even be hardcoded but may result from a\n> query.\n\nRight, there’s a chicken/egg problem.\n\n> Unfortunately, I don't have access to that part of the SQL spec. So I\n> don't know how the jsonpath grammar is specified.\n\nSeems quite logical; I think it should be documented, but I’d also be interested to know what the 2016 and 2023 standards say, exactly.\n\n> Also checked git log src/backend/utils/adt/jsonpath_scan.l for some\n> insights but haven't found any yet.\n\nEverybody’s taking shortcuts relative to the standard, AFAICT. For example, jsonpath_scan.l matches unqouted identifiers with these two regular expressions:\n\n<xnq>{other}+\n<xnq>\\/\\*\n<xnq,xq,xvq>\\\\.\n\nPlus the backslash escapes. {other} is defined as:\n\n/* \"other\" means anything that's not special, blank, or '\\' or '\"' */\nother [^\\?\\%\\$\\.\\[\\]\\{\\}\\(\\)\\|\\&\\!\\=\\<\\>\\@\\#\\,\\*:\\-\\+\\/\\\\\\\" \\t\\n\\r\\f]\n\nWhich is waaaay more liberal than the ECMA standard[1], by my reading, but the MSDN[2] description is quite succinct (thanks for the links!):\n\n> In JavaScript, identifiers are commonly made of alphanumeric characters, underscores (_), and dollar signs ($). Identifiers are not allowed to start with numbers. However, JavaScript identifiers are not only limited to ASCII — many Unicode code points are allowed as well. Namely, any character in the ID_Start category can start an identifier, while any character in the ID_Continue category can appear after the first character.\n\n\nID_Start[3] and ID_Continue[4] point to the unicode standard codes lister, nether of which reference Emoji. 
Sure enough, in Safari:\n\n> x = {\"🎉\": true}\n< {🎉: true}\n> x.🎉\n< SyntaxError: Invalid character '\\ud83c’\n\nBut in Postgres jsonpath:\n\ndavid=# select '$.🎉'::jsonpath;\n jsonpath \n----------\n $.\"🎉\"\n\nIf the MSDN references to ID_Start and ID_Continue are correct, then the Postgres path parser is being overly-liberal. Maybe that’s totally fine? Not sure what should be documented and what’s not worth it.\n\nAside: I’m only digging into these details because I’m busy porting the path parser, so trying to figure out where to be compatible and where not to. So far I’m rejecting '$' (but allowing '\\$' and '\\u0024') but taking advantage of the unicode support in Go to specifically validate against ID_Start and ID_Continue.\n\nBest,\n\nDavid\n\n[1] https://262.ecma-international.org/#sec-identifier-names\n[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Lexical_grammar#identifiers\n[3] https://util.unicode.org/UnicodeJsps/list-unicodeset.jsp?a=%5Cp%7BID_Start%7D\n[4] https://util.unicode.org/UnicodeJsps/list-unicodeset.jsp?a=%5Cp%7BID_Continue%7D\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:28:06 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On 17.03.24 20:12, Erik Wienhold wrote:\n> Mentioning JSON and \\v in the same sentence is wrong: JavaScript allows\n> that escape in strings but JSON doesn't. I think the easiest is to just\n> replace \"JSON\" with \"JavaScript\" in that sentence to make it right. The\n> paragraph also already says \"embedded string literals follow JavaScript/\n> ECMAScript conventions\", so mentioning JSON seems unnecessary to me.\n> \n> The last sentence also mentions backslash escapes \\xNN and \\u{N...} as\n> deviations from JSON when in fact those are valid escape sequences from\n> ECMA-262:https://262.ecma-international.org/#prod-HexEscapeSequence\n> So I think it makes sense to reword the entire backslash part of the\n> paragraph and remove references to JSON entirely. The attached patch\n> does that and also formats the backslash escapes as a bulleted list for\n> readability.\n\nI have committed this patch, and backpatched it, as a bug fix, because \nthe existing description was wrong. To keep the patch minimal for \nbackpatching, I didn't do the conversion to a list. I'm not sure I like \nthat anyway, because it tends to draw more attention to that part over \nthe surrounding parts, which didn't seem appropriate in this case. But \nanyway, if you have any more non-bug-fix editing in this area, which \nwould then target PG18, please send more patches.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:46:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On 18.03.24 01:09, Erik Wienhold wrote:\n>>> The error message 'syntax error at or near \"$oo\" of jsonpath input' for\n>>> the second case ($.f$oo), however, looks as if the scanner identifies\n>>> '$oo' as a variable instead of contiuing the scan of identifier (f$oo)\n>>> for the member accessor. Looks like a bug to me because a variable\n>>> doesn't even make sense in that place.\n>> Right. Maybe the docs should be updated to say that a literal dollar\n>> sign isn’t supported in identifiers, unlike in JavaScript, except\n>> through escapes like this:\n> Unfortunately, I don't have access to that part of the SQL spec. 
So I\n> don't know how the jsonpath grammar is specified.\n\nThe SQL spec says that <JSON path identifier> corresponds to Identifier \nin ECMAScript.\n\nBut it also says,\n\n A <JSON path identifier> is classified as follows.\n\n Case:\n\n a) A <JSON path identifier> that is a <dollar sign> is a <JSON path\n context variable>.\n\n b) A <JSON path identifier> that begins with <dollar sign> is a\n <JSON path named variable>.\n\n c) Otherwise, a <JSON path identifier> is a <JSON path key name>.\n\nDoes this help? I wasn't following all the discussion to see if there \nis anything wrong with the implementation.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:51:20 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Apr 24, 2024, at 05:51, Peter Eisentraut <[email protected]> wrote:\n\n> A <JSON path identifier> is classified as follows.\n> \n> Case:\n> \n> a) A <JSON path identifier> that is a <dollar sign> is a <JSON path\n> context variable>.\n> \n> b) A <JSON path identifier> that begins with <dollar sign> is a\n> <JSON path named variable>.\n> \n> c) Otherwise, a <JSON path identifier> is a <JSON path key name>.\n> \n> Does this help? I wasn't following all the discussion to see if there is anything wrong with the implementation.\n\nYes, it does, as it ties the special meaning of the dollar sign to the *beginning* of an expression. So it makes sense that this would be an error:\n\ndavid=# select '$.$foo'::jsonpath;\nERROR: syntax error at or near \"$foo\" of jsonpath input\nLINE 1: select '$.$foo'::jsonpath;\n ^\n\nBut I’m less sure when a dollar sign is used in the *middle* (or end) of a json path identifier:\n\ndavid=# select '$.xx$foo'::jsonpath;\nERROR: syntax error at or near \"$foo\" of jsonpath input\nLINE 1: select '$.xx$foo'::jsonpath;\n ^\n\nPerhaps that should be valid?\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 07:52:37 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Apr 24, 2024, at 05:46, Peter Eisentraut <[email protected]> wrote:\n\n> I have committed this patch, and backpatched it, as a bug fix, because the existing description was wrong. To keep the patch minimal for backpatching, I didn't do the conversion to a list. I'm not sure I like that anyway, because it tends to draw more attention to that part over the surrounding parts, which didn't seem appropriate in this case. But anyway, if you have any more non-bug-fix editing in this area, which would then target PG18, please send more patches.\n\nMakes sense, that level of detail gets into the weeks so maybe doesn’t need to be quite so prominent as a list. Thank you!\n\nDavid\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 07:53:33 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On 2024-04-24 13:52 +0200, David E. Wheeler wrote:\n> On Apr 24, 2024, at 05:51, Peter Eisentraut <[email protected]> wrote:\n> \n> > A <JSON path identifier> is classified as follows.\n> > \n> > Case:\n> > \n> > a) A <JSON path identifier> that is a <dollar sign> is a <JSON\n> > path context variable>.\n> > \n> > b) A <JSON path identifier> that begins with <dollar sign> is a\n> > <JSON path named variable>.\n> > \n> > c) Otherwise, a <JSON path identifier> is a <JSON path key name>.\n> > \n> > Does this help? 
I wasn't following all the discussion to see if\n> > there is anything wrong with the implementation.\n\nThanks Peter! But what is the definition of the entire path expression?\nPerhaps something like:\n\n <JSON path> ::= <JSON path identifier> { \".\" <JSON path identifier> }\n\nThat would imply that \"$.$foo\" is a valid path that accesses a variable\nmember (but I guess the path evaluation is also specified somewhere).\n\nDoes it say anything about double-quoted accessors? In table 8.25[1] we\nallow member accessor .\"$varname\" and it says \"If the key name matches\nsome named variable starting with $ or does not meet the JavaScript\nrules for an identifier, it must be enclosed in double quotes to make it\na string literal.\"\n\nWhat bugs me about this description, after reading it a couple of times,\nis that it's not clear what is meant by .\"$varname\". It could mean two\nthings: (1) the double-quoting masks $varname in order to not interpret\nthose characters as a variable or (2) an interpolated string that\nresolves $varname and yields a dynamic member accessor.\n\nThe current implementation supports (1), i.e., .\"$foo\" does not refer to\nvariable foo but the actual property \"$foo\":\n\n => select jsonb_path_query('{\"$foo\":123,\"bar\":456}', '$.\"$foo\"', '{\"foo\":\"bar\"}');\n jsonb_path_query\n ------------------\n 123\n (1 row)\n\nUnder case (2) I'd expect that query to return 456 (because $foo\nresolves to \"bar\"). (Similar to how psql would resolve :'foo' to\n'bar'.)\n\nVariables already work in array accessors and table 8.25 says that \"The\nspecified index can be an integer, as well as an expression returning a\nsingle numeric value [...]\". A variable is such an expression.\n\n => select jsonb_path_query('[2,3,5]', '$[$i]', '{\"i\":1}');\n jsonb_path_query\n ------------------\n 3\n (1 row)\n\nSo I'd expect a similar behavior for member accessors as well when\nseeing .\"$varname\" in the same table.\n\n> Yes, it does, as it ties the special meaning of the dollar sign to the\n> *beginning* of an expression. So it makes sense that this would be an\n> error:\n> \n> david=# select '$.$foo'::jsonpath;\n> ERROR: syntax error at or near \"$foo\" of jsonpath input\n> LINE 1: select '$.$foo'::jsonpath;\n> ^\n> But I’m less sure when a dollar sign is used in the *middle* (or end)\n> of a json path identifier:\n> \n> david=# select '$.xx$foo'::jsonpath;\n> ERROR: syntax error at or near \"$foo\" of jsonpath input\n> LINE 1: select '$.xx$foo'::jsonpath;\n> ^\n> Perhaps that should be valid?\n\nYes, I think so. That would be case C in the spec excerpt provided by\nPeter. So it's just a key name that happens to contain (but not start\nwith) the dollar sign.\n\n[1] https://www.postgresql.org/docs/current/datatype-json.html#TYPE-JSONPATH-ACCESSORS\n\n-- \nErik\n\n\n", "msg_date": "Wed, 24 Apr 2024 21:22:03 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Escapes in jsonpath Idents" }, { "msg_contents": "On Apr 24, 2024, at 3:22 PM, Erik Wienhold <[email protected]> wrote:\n\n> Thanks Peter! 
But what is the definition of the entire path expression?\n> Perhaps something like:\n> \n> <JSON path> ::= <JSON path identifier> { \".\" <JSON path identifier> }\n> \n> That would imply that \"$.$foo\" is a valid path that accesses a variable\n> member (but I guess the path evaluation is also specified somewhere).\n\nI read it as “if it starts with a dollar sign, it’s a variable and not a path identifier”, and I assume any `.foo` expression is a path identifier.\n\n> What bugs me about this description, after reading it a couple of times,\n> is that it's not clear what is meant by .\"$varname\". It could mean two\n> things: (1) the double-quoting masks $varname in order to not interpret\n> those characters as a variable or (2) an interpolated string that\n> resolves $varname and yields a dynamic member accessor.\n\nMy understanding is that if it’s in double quotes it’s never anything other than a string (whether a string literal or a path identifier string literal). IOW, variables don’t interpolate inside strings.\n\n> Under case (2) I'd expect that query to return 456 (because $foo\n> resolves to \"bar\"). (Similar to how psql would resolve :'foo' to\n> 'bar'.)\n\nYes, I suspect this is the correct interpretation, but agree the wording could use some massaging, especially since path identifiers cannot start with a dollar sign anyway. Perhaps:\n\n\"If the key name starts with $ or does not meet the JavaScript rules for an identifier, it must be enclosed in double quotes to make it a string literal.\"\n\n> Variables already work in array accessors and table 8.25 says that \"The\n> specified index can be an integer, as well as an expression returning a\n> single numeric value [...]\". A variable is such an expression.\n> \n> => select jsonb_path_query('[2,3,5]', '$[$i]', '{\"i\":1}');\n> jsonb_path_query\n> ------------------\n> 3\n> (1 row)\n> \n> So I'd expect a similar behavior for member accessors as well when\n> seeing .\"$varname\" in the same table.\n\nOh, interesting point! Now I wonder if the standard has this inconsistency (and is aware of it).\n\n> Yes, I think so. That would be case C in the spec excerpt provided by\n> Peter. So it's just a key name that happens to contain (but not start\n> with) the dollar sign.\n\nExactly. It also matches the doc you quote above. Something would have to change in src/backend/utils/adt/jsonpath_scan.l to fix that, but that file makes my eyes water, so I’m not gonna take a stab at it. :-)\n\nD\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 16:30:52 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Escapes in jsonpath Idents" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], I noticed that the backtrace_functions GUC code\ndoes its own string parsing and uses another extra variable\nbacktrace_function_list to store the processed form of\nbacktrace_functions GUC.\n\nI think the code can be simplified a bit by using\nSplitIdentifierString like in the attached patch. With this,\nbacktrace_function_list variable and assign_backtrace_functions() will\ngo away.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACXG1wzrb8%2B5HPNd8Qjr1h8GYkW-ijWhMYr2Y8_DzOB-%3Dg%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 17 Mar 2024 12:01:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Simplify backtrace_functions GUC code" }, { "msg_contents": "On 2024-Mar-17, Bharath Rupireddy wrote:\n\n> I think the code can be simplified a bit by using\n> SplitIdentifierString like in the attached patch. With this,\n> backtrace_function_list variable and assign_backtrace_functions() will\n> go away.\n\nDid you read the discussion that led to the current coding? What part\nof it is no longer valid, in such a way that you want to make the code\nlook like an approach that was rejected back then?\n\nhttps://www.postgresql.org/message-id/flat/35beac83-bf15-9d79-05c4-2dccd0834993%402ndquadrant.com#4dc9ccec753c0d99369be9d53bf24476\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 17 Mar 2024 14:07:42 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify backtrace_functions GUC code" } ]
[ { "msg_contents": "Hi,\n\nWe are pleased to announce the Release Management Team (RMT) (cc'd)\nfor the PostgreSQL 17 release:\n- Robert Haas\n- Heikki Linnakangas\n- Michael Paquier\n\nYou can find information about the responsibilities of the RMT here:\nhttps://wiki.postgresql.org/wiki/Release_Management_Team\n\nAdditionally, the RMT has set the feature freeze to be **April 8, 2024\nat 0:00 AoE** (see [1]). This is the last time to commit features for\nPostgreSQL 17. In other words, no new PostgreSQL 17 feature can be\ncommitted after April 8, 2024 at 0:00 AoE. As mentioned last year in\n[2], this uses the \"standard\" feature freeze date/time.\n\nYou can track open items for the PostgreSQL 17 release here:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\nPlease let us know if you have any questions.\n\nOn behalf of the PG17 RMT,\n\nMichael\n\n[1]: https://en.wikipedia.org/wiki/Anywhere_on_Earth\n[2]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 11:50:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Dear Michael,\n\n> We are pleased to announce the Release Management Team (RMT) (cc'd)\n> for the PostgreSQL 17 release:\n> - Robert Haas\n> - Heikki Linnakangas\n> - Michael Paquier\n\nThanks for managing the release of PostgreSQL to proceed the right direction!\n\n> You can track open items for the PostgreSQL 17 release here:\n> https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\nI think the entry can be closed:\n\n```\npg_upgrade with --link mode failing upgrade of publishers\nCommit: 29d0a77fa660\nOwner: Amit Kapila\n```\n\nThe reported failure was only related with the test script, not the feature.\nThe issue has already been analyzed and the fix patch was pushed as f17529b710.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/ \n\n\n\n", "msg_date": "Mon, 18 Mar 2024 03:49:24 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Mar 18, 2024 at 03:49:24AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> I think the entry can be closed:\n> \n> ```\n> pg_upgrade with --link mode failing upgrade of publishers\n> Commit: 29d0a77fa660\n> Owner: Amit Kapila\n> ```\n> \n> The reported failure was only related with the test script, not the feature.\n> The issue has already been analyzed and the fix patch was pushed as f17529b710.\n\nIf you think that this is OK, and as far as I can see this looks OK on\nthe thread, then this open item should be moved under \"resolved before\n17beta1\", mentioning the commit involved in the fix. Please see [1]\nfor examples.\n\n[1]: https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#resolved_before_16beta1\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 14:16:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Dear Michael,\n\n> If you think that this is OK, and as far as I can see this looks OK on\n> the thread, then this open item should be moved under \"resolved before\n> 17beta1\", mentioning the commit involved in the fix. Please see [1]\n> for examples.\n\nOK, I understood that I could wait checking from you. 
Thanks.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/ \n\n\n\n", "msg_date": "Mon, 18 Mar 2024 07:09:10 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Mar 18, 2024 at 12:39 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > If you think that this is OK, and as far as I can see this looks OK on\n> > the thread, then this open item should be moved under \"resolved before\n> > 17beta1\", mentioning the commit involved in the fix. Please see [1]\n> > for examples.\n>\n> OK, I understood that I could wait checking from you. Thanks.\n>\n\nI don't think there is a need to wait here. The issue being tracked\nwas fixed, so I have updated the open items page accordingly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Mar 2024 08:49:30 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 18 Mar 2024 at 15:50, Michael Paquier <[email protected]> wrote:\n> Additionally, the RMT has set the feature freeze to be **April 8, 2024\n> at 0:00 AoE** (see [1]). This is the last time to commit features for\n> PostgreSQL 17. In other words, no new PostgreSQL 17 feature can be\n> committed after April 8, 2024 at 0:00 AoE. As mentioned last year in\n> [2], this uses the \"standard\" feature freeze date/time.\n\nSomeone asked me about this, so thought it might be useful to post here.\n\nTo express this as UTC, It's:\n\npostgres=# select '2024-04-08 00:00-12:00' at time zone 'UTC';\n timezone\n---------------------\n 2024-04-08 12:00:00\n\nOr, time remaining, relative to now:\n\nselect '2024-04-08 00:00-12:00' - now();\n\nDavid\n\n> [1]: https://en.wikipedia.org/wiki/Anywhere_on_Earth\n> [2]: https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Thu, 4 Apr 2024 15:10:27 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Thu, Apr 04, 2024 at 03:10:27PM +1300, David Rowley wrote:\n> Someone asked me about this, so thought it might be useful to post here.\n\nI've received the same question.\n\n> To express this as UTC, It's:\n> \n> postgres=# select '2024-04-08 00:00-12:00' at time zone 'UTC';\n> timezone\n> ---------------------\n> 2024-04-08 12:00:00\n> \n> Or, time remaining, relative to now:\n> \n> select '2024-04-08 00:00-12:00' - now();\n\nAnd, as of the moment of typing this email, I get:\n=# select '2024-04-08 00:00-12:00' - now() as time_remaining;\n time_remaining\n-----------------\n 13:10:35.688134\n(1 row)\n\nSo there is just a bit more than half a day remaining before the\nfeature freeze is in effect.\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 07:50:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> And, as of the moment of typing this email, I get:\n> =# select '2024-04-08 00:00-12:00' - now() as time_remaining;\n> time_remaining\n> -----------------\n> 13:10:35.688134\n> (1 row)\n>\n> So there is just a bit more than half a day remaining before the\n> feature freeze is in effect.\n\nOK, so feature freeze is now in 
effect, then.\n\nIn the future, let's use GMT for these deadlines. There have got to be\na lot more people who can easily understand when a certain GMT\ntimestamp falls in their local timezone than there are people who can\neasily understand when a certain AOE timestamp falls in their local\ntimezone.\n\nAnd maybe we need to think of a way to further mitigate this crush of\nlast minute commits. e.g. In the last week, you can't have more\nfeature commits, or more lines of insertions in your commits, than you\ndid in the prior 3 weeks combined. I don't know. I think this mad rush\nof last-minute commits is bad for the project.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 09:26:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n\nI was just about to pen an angry screed along the same lines.\nThe commit flux over the past couple days, and even the last\ntwelve hours, was flat-out ridiculous. These patches weren't\nready a week ago, and I doubt they were ready now.\n\nThe RMT should feel free to exercise their right to require\nrevert \"early and often\", or we are going to be shipping a\nvery buggy release.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 09:43:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 9:26 AM Robert Haas <[email protected]> wrote:\n>\n> On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> > And, as of the moment of typing this email, I get:\n> > =# select '2024-04-08 00:00-12:00' - now() as time_remaining;\n> > time_remaining\n> > -----------------\n> > 13:10:35.688134\n> > (1 row)\n> >\n> > So there is just a bit more than half a day remaining before the\n> > feature freeze is in effect.\n>\n> OK, so feature freeze is now in effect, then.\n>\n> In the future, let's use GMT for these deadlines. There have got to be\n> a lot more people who can easily understand when a certain GMT\n> timestamp falls in their local timezone than there are people who can\n> easily understand when a certain AOE timestamp falls in their local\n> timezone.\n>\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n\nWhat if we pick the actual feature freeze time randomly? That is,\nstarting on March 15th (or whenever but more than a week before), each\nnight someone from RMT generates a random number between $current_day\nand April 8th. 
If the number matches $current_day, that day at\nmidnight is the feature freeze.\n\n- Melanie\n\n\n", "msg_date": "Mon, 8 Apr 2024 10:26:58 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:27 AM Melanie Plageman\n<[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 9:26 AM Robert Haas <[email protected]> wrote:\n> > On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> > > And, as of the moment of typing this email, I get:\n> > > =# select '2024-04-08 00:00-12:00' - now() as time_remaining;\n> > > time_remaining\n> > > -----------------\n> > > 13:10:35.688134\n> > > (1 row)\n> > >\n> > > So there is just a bit more than half a day remaining before the\n> > > feature freeze is in effect.\n> >\n> > OK, so feature freeze is now in effect, then.\n> >\n> > In the future, let's use GMT for these deadlines. There have got to be\n> > a lot more people who can easily understand when a certain GMT\n> > timestamp falls in their local timezone than there are people who can\n> > easily understand when a certain AOE timestamp falls in their local\n> > timezone.\n> >\n> > And maybe we need to think of a way to further mitigate this crush of\n> > last minute commits. e.g. In the last week, you can't have more\n> > feature commits, or more lines of insertions in your commits, than you\n> > did in the prior 3 weeks combined. I don't know. I think this mad rush\n> > of last-minute commits is bad for the project.\n>\n> What if we pick the actual feature freeze time randomly? That is,\n> starting on March 15th (or whenever but more than a week before), each\n> night someone from RMT generates a random number between $current_day\n> and April 8th. If the number matches $current_day, that day at\n> midnight is the feature freeze.\n>\n\nUnfortunately many humans are hardwired towards procrastination and\nlast minute heroics (it's one reason why deadline driven development\nworks even though in the long run it tends to be bad practice), and\nhistorically was one of the driving factors in why we started doing\ncommitfests in the first place (you should have seen the mad dash of\ncommits before we had that process), so ISTM that it's not completely\navoidable...\n\nThat said, are you suggesting that the feature freeze deadline be\nrandom, and also held in secret by the RMT, only to be announced after\nthe freeze time has passed? This feels weird, but might apply enough\ndeadline related pressure while avoiding last minute shenanigans.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Mon, 8 Apr 2024 10:41:17 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 2024-04-08 at 09:26 -0400, Robert Haas wrote:\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n\nI found that there are a lot of people who can only get going with a\npressing deadline. But that's just an observation, not an excuse.\n\nI don't know if additional rules will achieve anything here. 
This can\nonly improve with buy-in from the committers, and that cannot be forced.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 08 Apr 2024 16:42:05 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\n\n> On 8 Apr 2024, at 17:26, Melanie Plageman <[email protected]> wrote:\n> \n> What if we pick the actual feature freeze time randomly? That is,\n> starting on March 15th (or whenever but more than a week before), each\n> night someone from RMT generates a random number between $current_day\n> and April 8th. If the number matches $current_day, that day at\n> midnight is the feature freeze.\n\nBut this implies that actual date is not publicly known before feature freeze is in effect. Do I understand idea correctly?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 8 Apr 2024 17:42:39 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On 08/04/2024 16:43, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> And maybe we need to think of a way to further mitigate this crush of\n>> last minute commits. e.g. In the last week, you can't have more\n>> feature commits, or more lines of insertions in your commits, than you\n>> did in the prior 3 weeks combined. I don't know. I think this mad rush\n>> of last-minute commits is bad for the project.\n> \n> I was just about to pen an angry screed along the same lines.\n> The commit flux over the past couple days, and even the last\n> twelve hours, was flat-out ridiculous. These patches weren't\n> ready a week ago, and I doubt they were ready now.\n> \n> The RMT should feel free to exercise their right to require\n> revert \"early and often\", or we are going to be shipping a\n> very buggy release.\n\n\nCan you elaborate, which patches you think were not ready? Let's make \nsure to capture any concrete concerns in the Open Items list.\n\nI agree the last-minute crunch felt more intense than in previous years. \nI'm guilty myself; I crunched the \"direct SSL\" patches in. My rationale \nfor that one: It's been in a pretty settled state for a long time. There \nhasn't been any particular concerns about the design or the \nimplementation. I haven't committed it sooner because I was not \ncomfortable with the lack of tests, especially the libpq parts. So I \nmade a last minute dash to write the tests so that I'm comfortable with \nit, and I restructured the commits so that the tests and refactorings \ncome first. The resulting code changes look the same they have for a \nlong time, and I didn't uncover any significant new issues while doing \nthat. I would not have felt comfortable committing it otherwise.\n\nYeah, I should have done that sooner, but realistically, there's nothing \nlike a looming deadline as a motivator. One idea to avoid the mad rush \nin the future would be to make the feature freeze deadline more \nprogressive. 
For example:\n\nApril 1: If you are still working on a feature that you still want to \ncommit, you must explicitly flag it in the commitfest as \"I plan to \ncommit this very soon\".\n\nApril 4: You must give a short status update about anything that you \nhaven't committed yet, with an explanation of how you plan to proceed \nwith it.\n\nApril 5-8: Mandatory daily status updates, explicit approval by the \ncommitfest manager needed each day to continue working on it.\n\nApril 8: Hard feature freeze deadline\n\nThis would also give everyone more visibility, so that we're not all \nsurprised by the last minute flood of commits.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 17:42:41 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:41 AM Robert Treat <[email protected]> wrote:\n>\n> On Mon, Apr 8, 2024 at 10:27 AM Melanie Plageman\n> <[email protected]> wrote:\n> > On Mon, Apr 8, 2024 at 9:26 AM Robert Haas <[email protected]> wrote:\n> > > On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> > > > And, as of the moment of typing this email, I get:\n> > > > =# select '2024-04-08 00:00-12:00' - now() as time_remaining;\n> > > > time_remaining\n> > > > -----------------\n> > > > 13:10:35.688134\n> > > > (1 row)\n> > > >\n> > > > So there is just a bit more than half a day remaining before the\n> > > > feature freeze is in effect.\n> > >\n> > > OK, so feature freeze is now in effect, then.\n> > >\n> > > In the future, let's use GMT for these deadlines. There have got to be\n> > > a lot more people who can easily understand when a certain GMT\n> > > timestamp falls in their local timezone than there are people who can\n> > > easily understand when a certain AOE timestamp falls in their local\n> > > timezone.\n> > >\n> > > And maybe we need to think of a way to further mitigate this crush of\n> > > last minute commits. e.g. In the last week, you can't have more\n> > > feature commits, or more lines of insertions in your commits, than you\n> > > did in the prior 3 weeks combined. I don't know. I think this mad rush\n> > > of last-minute commits is bad for the project.\n> >\n> > What if we pick the actual feature freeze time randomly? That is,\n> > starting on March 15th (or whenever but more than a week before), each\n> > night someone from RMT generates a random number between $current_day\n> > and April 8th. If the number matches $current_day, that day at\n> > midnight is the feature freeze.\n> >\n>\n> Unfortunately many humans are hardwired towards procrastination and\n> last minute heroics (it's one reason why deadline driven development\n> works even though in the long run it tends to be bad practice), and\n> historically was one of the driving factors in why we started doing\n> commitfests in the first place (you should have seen the mad dash of\n> commits before we had that process), so ISTM that it's not completely\n> avoidable...\n>\n> That said, are you suggesting that the feature freeze deadline be\n> random, and also held in secret by the RMT, only to be announced after\n> the freeze time has passed? This feels weird, but might apply enough\n> deadline related pressure while avoiding last minute shenanigans.\n\nBasically, yes. The RMT would find out each day whether or not that\nday is the feature freeze but not tell anyone. 
Then they would push\nsome kind of feature freeze tag (do we already do a feature freeze\ntag? I didn't think so) at 11:59 PM (in some timezone) that evening\nand all commits that are feature commits after that are reverted.\n\nI basically thought it would be a way for people to know that they\nneed to have their work done before April but keep them from waiting\nuntil the actual last minute. The rationale for doing it this way is\nit requires way less human involvement than some of the other methods\nproposed. Figuring out how many commits each committer is allowed to\ndo based on past number of LOC,etc like Robert's suggestion sounds\nlike a lot of work. I was trying to think of a simple way to beat the\nnatural propensity people have toward procrastination.\n\nBut, an approach like Heikki suggested [1] with check-ins and\nstaggered deadlines is certainly much more principled. It just sounds\nlike it will require a lot of enforcement and oversight. And it might\nbe hard to ensure it doesn't end up being enforced only for some\npeople and not others. However, I suppose everyone is saying a mindset\nshift is needed. And you can't usually shift a mindset with a\ntechnical solution like I proposed (despite what Libertarians might\ntell you about carbon offsets).\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/a5485b74-059a-4ea0-b445-7c393d6fe0de%40iki.fi\n\n\n", "msg_date": "Mon, 8 Apr 2024 10:52:28 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 18:42, Heikki Linnakangas <[email protected]> wrote:\n\n> On 08/04/2024 16:43, Tom Lane wrote:\n> > Robert Haas <[email protected]> writes:\n> >> And maybe we need to think of a way to further mitigate this crush of\n> >> last minute commits. e.g. In the last week, you can't have more\n> >> feature commits, or more lines of insertions in your commits, than you\n> >> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> >> of last-minute commits is bad for the project.\n> >\n> > I was just about to pen an angry screed along the same lines.\n> > The commit flux over the past couple days, and even the last\n> > twelve hours, was flat-out ridiculous. These patches weren't\n> > ready a week ago, and I doubt they were ready now.\n> >\n> > The RMT should feel free to exercise their right to require\n> > revert \"early and often\", or we are going to be shipping a\n> > very buggy release.\n>\n>\n> Can you elaborate, which patches you think were not ready? Let's make\n> sure to capture any concrete concerns in the Open Items list.\n>\n> I agree the last-minute crunch felt more intense than in previous years.\n> I'm guilty myself; I crunched the \"direct SSL\" patches in. My rationale\n> for that one: It's been in a pretty settled state for a long time. There\n> hasn't been any particular concerns about the design or the\n> implementation. I haven't commit tit sooner because I was not\n> comfortable with the lack of tests, especially the libpq parts. So I\n> made a last minute dash to write the tests so that I'm comfortable with\n> it, and I restructured the commits so that the tests and refactorings\n> come first. The resulting code changes look the same they have for a\n> long time, and I didn't uncover any significant new issues while doing\n> that. 
I would not have felt comfortable committing it otherwise.\n>\n> Yeah, I should have done that sooner, but realistically, there's nothing\n> like a looming deadline as a motivator. One idea to avoid the mad rush\n> in the future would be to make the feature freeze deadline more\n> progressive. For example:\n>\n> April 1: If you are still working on a feature that you still want to\n> commit, you must explicitly flag it in the commitfest as \"I plan to\n> commit this very soon\".\n>\n> April 4: You must give a short status update about anything that you\n> haven't committed yet, with an explanation of how you plan to proceed\n> with it.\n>\n> April 5-8: Mandatory daily status updates, explicit approval by the\n> commitfest manager needed each day to continue working on it.\n>\n> April 8: Hard feature freeze deadline\n>\n> This would also give everyone more visibility, so that we're not all\n> surprised by the last minute flood of commits.\n>\n\nIMO the fact that people struggle to work on patches, and make them better,\netc. is an immense blessing for the Postgres community. Is the peak of\ncommits really a big problem provided we have 6 months before actual\nrelease? I doubt March patches tend to be worse than the November ones.\n\nPeople are different, so are the ways they feel motivation and inspiration.\nThis could be easily broken with bureaucratic decisions some of them, like\nproposed counting of lines of code vs period of time look even little bit\nrepressive.\n\nLet's remain an open community, support inspiration in each other, and\ndon't build an over-regulated corporation. I feel that Postgres will win if\npeople feel less limited by formal rules. Personally, I believe RMT has\nenough experience and authority to stabilize and interact with authors if\nquestions arise.\n\nRegards,\nPavel Borisov\n\nOn Mon, 8 Apr 2024 at 18:42, Heikki Linnakangas <[email protected]> wrote:On 08/04/2024 16:43, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> And maybe we need to think of a way to further mitigate this crush of\n>> last minute commits. e.g. In the last week, you can't have more\n>> feature commits, or more lines of insertions in your commits, than you\n>> did in the prior 3 weeks combined. I don't know. I think this mad rush\n>> of last-minute commits is bad for the project.\n> \n> I was just about to pen an angry screed along the same lines.\n> The commit flux over the past couple days, and even the last\n> twelve hours, was flat-out ridiculous.  These patches weren't\n> ready a week ago, and I doubt they were ready now.\n> \n> The RMT should feel free to exercise their right to require\n> revert \"early and often\", or we are going to be shipping a\n> very buggy release.\n\n\nCan you elaborate, which patches you think were not ready? Let's make \nsure to capture any concrete concerns in the Open Items list.\n\nI agree the last-minute crunch felt more intense than in previous years. \nI'm guilty myself; I crunched the \"direct SSL\" patches in. My rationale \nfor that one: It's been in a pretty settled state for a long time. There \nhasn't been any particular concerns about the design or the \nimplementation. I haven't commit tit sooner because I was not \ncomfortable with the lack of tests, especially the libpq parts. So I \nmade a last minute dash to write the tests so that I'm comfortable with \nit, and I restructured the commits so that the tests and refactorings \ncome first. 
The resulting code changes look the same they have for a \nlong time, and I didn't uncover any significant new issues while doing \nthat. I would not have felt comfortable committing it otherwise.\n\nYeah, I should have done that sooner, but realistically, there's nothing \nlike a looming deadline as a motivator. One idea to avoid the mad rush \nin the future would be to make the feature freeze deadline more \nprogressive. For example:\n\nApril 1: If you are still working on a feature that you still want to \ncommit, you must explicitly flag it in the commitfest as \"I plan to \ncommit this very soon\".\n\nApril 4: You must give a short status update about anything that you \nhaven't committed yet, with an explanation of how you plan to proceed \nwith it.\n\nApril 5-8: Mandatory daily status updates, explicit approval by the \ncommitfest manager needed each day to continue working on it.\n\nApril 8: Hard feature freeze deadline\n\nThis would also give everyone more visibility, so that we're not all \nsurprised by the last minute flood of commits.IMO the fact that people struggle to work on patches, and make them better, etc. is an immense blessing for the Postgres community. Is the peak of commits really a big problem provided we have 6 months before actual release? I doubt March patches tend to be worse than the November ones.People are different, so are the ways they feel motivation and inspiration. This could be easily broken with bureaucratic decisions some of them, like proposed counting of lines of code vs period of time look even little bit repressive. Let's remain an open community, support inspiration in each other, and don't build an over-regulated corporation. I feel that Postgres will win if people feel less limited by formal rules. Personally, I believe RMT has enough experience and authority to stabilize and interact with authors if questions arise. Regards,Pavel Borisov", "msg_date": "Mon, 8 Apr 2024 18:57:31 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 08/04/2024 16:43, Tom Lane wrote:\n>> I was just about to pen an angry screed along the same lines.\n>> The commit flux over the past couple days, and even the last\n>> twelve hours, was flat-out ridiculous. These patches weren't\n>> ready a week ago, and I doubt they were ready now.\n\n> Can you elaborate, which patches you think were not ready? Let's make \n> sure to capture any concrete concerns in the Open Items list.\n\n[ shrug... ] There were fifty-some commits on the last day,\nsome of them quite large, and you want me to finger particular ones?\nI can't even have read them all yet.\n\n> Yeah, I should have done that sooner, but realistically, there's nothing \n> like a looming deadline as a motivator. One idea to avoid the mad rush \n> in the future would be to make the feature freeze deadline more \n> progressive. 
For example:\n> April 1: If you are still working on a feature that you still want to \n> commit, you must explicitly flag it in the commitfest as \"I plan to \n> commit this very soon\".\n> April 4: You must give a short status update about anything that you \n> haven't committed yet, with an explanation of how you plan to proceed \n> with it.\n> April 5-8: Mandatory daily status updates, explicit approval by the \n> commitfest manager needed each day to continue working on it.\n> April 8: Hard feature freeze deadline\n\n> This would also give everyone more visibility, so that we're not all \n> surprised by the last minute flood of commits.\n\nPerhaps something like that could help, but it seems like a lot\nof mechanism. I think the real problem is just that committers\nneed to re-orient their thinking a little. We must get *less*\nwilling to commit marginal patches, not more so, as the deadline\napproaches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 10:59:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Pavel Borisov <[email protected]> writes:\n> IMO the fact that people struggle to work on patches, and make them better,\n> etc. is an immense blessing for the Postgres community. Is the peak of\n> commits really a big problem provided we have 6 months before actual\n> release? I doubt March patches tend to be worse than the November ones.\n\nYes, it's a problem, and yes the average quality of last-minute\npatches is visibly worse than that of patches committed in a less\nhasty fashion. We have been through this every year for the last\ncouple decades, seems like, and we keep re-learning that lesson\nthe hard way. I'm just distressed at our utter failure to learn\nfrom experience.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 11:05:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\n\nOn 4/8/24 16:59, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> On 08/04/2024 16:43, Tom Lane wrote:\n>>> I was just about to pen an angry screed along the same lines.\n>>> The commit flux over the past couple days, and even the last\n>>> twelve hours, was flat-out ridiculous. These patches weren't\n>>> ready a week ago, and I doubt they were ready now.\n> \n>> Can you elaborate, which patches you think were not ready? Let's make \n>> sure to capture any concrete concerns in the Open Items list.\n> \n> [ shrug... ] There were fifty-some commits on the last day,\n> some of them quite large, and you want me to finger particular ones?\n> I can't even have read them all yet.\n> \n>> Yeah, I should have done that sooner, but realistically, there's nothing \n>> like a looming deadline as a motivator. One idea to avoid the mad rush \n>> in the future would be to make the feature freeze deadline more \n>> progressive. 
For example:\n>> April 1: If you are still working on a feature that you still want to \n>> commit, you must explicitly flag it in the commitfest as \"I plan to \n>> commit this very soon\".\n>> April 4: You must give a short status update about anything that you \n>> haven't committed yet, with an explanation of how you plan to proceed \n>> with it.\n>> April 5-8: Mandatory daily status updates, explicit approval by the \n>> commitfest manager needed each day to continue working on it.\n>> April 8: Hard feature freeze deadline\n> \n>> This would also give everyone more visibility, so that we're not all \n>> surprised by the last minute flood of commits.\n> \n> Perhaps something like that could help, but it seems like a lot\n> of mechanism. I think the real problem is just that committers\n> need to re-orient their thinking a little. We must get *less*\n> willing to commit marginal patches, not more so, as the deadline\n> approaches.\n> \n\nFor me the main problem with the pre-freeze crush is that it leaves\npretty much no practical chance to do meaningful review/testing, and\nsome of the patches likely went through significant changes (at least\njudging by the number of messages and patch versions in the associated\nthreads). That has to have a cost later ...\n\nThat being said, I'm not sure the \"progressive deadline\" proposed by\nHeikki would improve that materially, and it seems like a lot of effort\nto maintain etc. And even if someone updates the CF app with all the\ndetails, does it even give others sufficient opportunity to review the\nnew patch versions? I don't think so. (It anything, it does not seem\nfair to expect others to do last-minute reviews under pressure.)\n\nMaybe it'd be better to start by expanding the existing rule about not\ncommitting patches introduced for the first time in the last CF. What if\nwe said that patches in the last CF must not go through significant\nchanges, and if they do it'd mean the patch is moved to the next CF?\nPerhaps with exceptions to be granted by the RMT when appropriate?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Apr 2024 17:21:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On 4/8/24 11:05, Tom Lane wrote:\n> Pavel Borisov <[email protected]> writes:\n>> IMO the fact that people struggle to work on patches, and make them better,\n>> etc. is an immense blessing for the Postgres community. Is the peak of\n>> commits really a big problem provided we have 6 months before actual\n>> release? I doubt March patches tend to be worse than the November ones.\n> \n> Yes, it's a problem, and yes the average quality of last-minute\n> patches is visibly worse than that of patches committed in a less\n> hasty fashion. We have been through this every year for the last\n> couple decades, seems like, and we keep re-learning that lesson\n> the hard way. 
I'm just distressed at our utter failure to learn\n> from experience.\n\n\nI don't dispute that we could do better, and this is just a simplistic \nlook based on \"number of commits per day\", but the attached does put it \nin perspective to some extent.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 Apr 2024 11:28:37 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Hi,\n\nOn 2024-04-08 09:26:09 -0400, Robert Haas wrote:\n> On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n\nI don't think it's very useful to paint a very broad brush here,\nunfortunately. Some will just polish commits until the last minute, until the\nthe dot's on the i's really shine, others will continue picking up more CF\nentries until the freeze is reached, others will push half baked stuff. Of\ncourse there will be an increased commit rate, but it does looks like there\nwas some stuff that looked somewhat rickety.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 8 Apr 2024 08:29:48 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 4/8/24 16:59, Tom Lane wrote:\n> > Heikki Linnakangas <[email protected]> writes:\n> >> On 08/04/2024 16:43, Tom Lane wrote:\n> >>> I was just about to pen an angry screed along the same lines.\n> >>> The commit flux over the past couple days, and even the last\n> >>> twelve hours, was flat-out ridiculous. These patches weren't\n> >>> ready a week ago, and I doubt they were ready now.\n> >\n> >> Can you elaborate, which patches you think were not ready? Let's make\n> >> sure to capture any concrete concerns in the Open Items list.\n> >\n> > [ shrug... ] There were fifty-some commits on the last day,\n> > some of them quite large, and you want me to finger particular ones?\n> > I can't even have read them all yet.\n> >\n> >> Yeah, I should have done that sooner, but realistically, there's nothing\n> >> like a looming deadline as a motivator. One idea to avoid the mad rush\n> >> in the future would be to make the feature freeze deadline more\n> >> progressive. 
For example:\n> >> April 1: If you are still working on a feature that you still want to\n> >> commit, you must explicitly flag it in the commitfest as \"I plan to\n> >> commit this very soon\".\n> >> April 4: You must give a short status update about anything that you\n> >> haven't committed yet, with an explanation of how you plan to proceed\n> >> with it.\n> >> April 5-8: Mandatory daily status updates, explicit approval by the\n> >> commitfest manager needed each day to continue working on it.\n> >> April 8: Hard feature freeze deadline\n> >\n> >> This would also give everyone more visibility, so that we're not all\n> >> surprised by the last minute flood of commits.\n> >\n> > Perhaps something like that could help, but it seems like a lot\n> > of mechanism. I think the real problem is just that committers\n> > need to re-orient their thinking a little. We must get *less*\n> > willing to commit marginal patches, not more so, as the deadline\n> > approaches.\n> >\n>\n> For me the main problem with the pre-freeze crush is that it leaves\n> pretty much no practical chance to do meaningful review/testing, and\n> some of the patches likely went through significant changes (at least\n> judging by the number of messages and patch versions in the associated\n> threads). That has to have a cost later ...\n>\n> That being said, I'm not sure the \"progressive deadline\" proposed by\n> Heikki would improve that materially, and it seems like a lot of effort\n> to maintain etc. And even if someone updates the CF app with all the\n> details, does it even give others sufficient opportunity to review the\n> new patch versions? I don't think so. (It anything, it does not seem\n> fair to expect others to do last-minute reviews under pressure.)\n>\n> Maybe it'd be better to start by expanding the existing rule about not\n> committing patches introduced for the first time in the last CF.\n\nI don't think adding more hurdles about what to include into the next\nrelease is a good solution. Why the March CF, and not earlier? Or\nlater? How about unregistered patches? Changes to the docs? Bug fixes?\nThe March CF already has a submission deadline of \"before march\", so\nthat already puts a soft limit on the patches considered for the april\ndeadline.\n\n> What if\n> we said that patches in the last CF must not go through significant\n> changes, and if they do it'd mean the patch is moved to the next CF?\n\nI also think there is already a big issue with a lack of interest in\ngetting existing patches from non-committers committed, reducing the\nset of patches that could be considered is just cheating the numbers\nand discouraging people from contributing. For one, I know I have\nmotivation issues keeping up with reviewing other people's patches\nwhen none (well, few, as of this CF) of my patches get reviewed\nmaterially and committed. 
I don't see how shrinking the window of\nopportunity for significant review from 9 to 7 months is going to help\nthere.\n\nSo, I think that'd be counter-productive, as this would get the\nperverse incentive to band-aid over (architectural) issues to limit\nchurn inside the patch, rather than fix those issues in a way that's\nappropriate for the project as a whole.\n\n-Matthias\n\n\n", "msg_date": "Mon, 8 Apr 2024 17:48:37 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 19:48, Matthias van de Meent <\[email protected]> wrote:\n\n> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]>\n> wrote:\n> >\n> >\n> >\n> > On 4/8/24 16:59, Tom Lane wrote:\n> > > Heikki Linnakangas <[email protected]> writes:\n> > >> On 08/04/2024 16:43, Tom Lane wrote:\n> > >>> I was just about to pen an angry screed along the same lines.\n> > >>> The commit flux over the past couple days, and even the last\n> > >>> twelve hours, was flat-out ridiculous. These patches weren't\n> > >>> ready a week ago, and I doubt they were ready now.\n> > >\n> > >> Can you elaborate, which patches you think were not ready? Let's make\n> > >> sure to capture any concrete concerns in the Open Items list.\n> > >\n> > > [ shrug... ] There were fifty-some commits on the last day,\n> > > some of them quite large, and you want me to finger particular ones?\n> > > I can't even have read them all yet.\n> > >\n> > >> Yeah, I should have done that sooner, but realistically, there's\n> nothing\n> > >> like a looming deadline as a motivator. One idea to avoid the mad rush\n> > >> in the future would be to make the feature freeze deadline more\n> > >> progressive. For example:\n> > >> April 1: If you are still working on a feature that you still want to\n> > >> commit, you must explicitly flag it in the commitfest as \"I plan to\n> > >> commit this very soon\".\n> > >> April 4: You must give a short status update about anything that you\n> > >> haven't committed yet, with an explanation of how you plan to proceed\n> > >> with it.\n> > >> April 5-8: Mandatory daily status updates, explicit approval by the\n> > >> commitfest manager needed each day to continue working on it.\n> > >> April 8: Hard feature freeze deadline\n> > >\n> > >> This would also give everyone more visibility, so that we're not all\n> > >> surprised by the last minute flood of commits.\n> > >\n> > > Perhaps something like that could help, but it seems like a lot\n> > > of mechanism. I think the real problem is just that committers\n> > > need to re-orient their thinking a little. We must get *less*\n> > > willing to commit marginal patches, not more so, as the deadline\n> > > approaches.\n> > >\n> >\n> > For me the main problem with the pre-freeze crush is that it leaves\n> > pretty much no practical chance to do meaningful review/testing, and\n> > some of the patches likely went through significant changes (at least\n> > judging by the number of messages and patch versions in the associated\n> > threads). That has to have a cost later ...\n> >\n> > That being said, I'm not sure the \"progressive deadline\" proposed by\n> > Heikki would improve that materially, and it seems like a lot of effort\n> > to maintain etc. And even if someone updates the CF app with all the\n> > details, does it even give others sufficient opportunity to review the\n> > new patch versions? I don't think so. 
(It anything, it does not seem\n> > fair to expect others to do last-minute reviews under pressure.)\n> >\n> > Maybe it'd be better to start by expanding the existing rule about not\n> > committing patches introduced for the first time in the last CF.\n>\n> I don't think adding more hurdles about what to include into the next\n> release is a good solution. Why the March CF, and not earlier? Or\n> later? How about unregistered patches? Changes to the docs? Bug fixes?\n> The March CF already has a submission deadline of \"before march\", so\n> that already puts a soft limit on the patches considered for the april\n> deadline.\n>\n> > What if\n> > we said that patches in the last CF must not go through significant\n> > changes, and if they do it'd mean the patch is moved to the next CF?\n>\n> I also think there is already a big issue with a lack of interest in\n> getting existing patches from non-committers committed, reducing the\n> set of patches that could be considered is just cheating the numbers\n> and discouraging people from contributing. For one, I know I have\n> motivation issues keeping up with reviewing other people's patches\n> when none (well, few, as of this CF) of my patches get reviewed\n> materially and committed. I don't see how shrinking the window of\n> opportunity for significant review from 9 to 7 months is going to help\n> there.\n>\n> So, I think that'd be counter-productive, as this would get the\n> perverse incentive to band-aid over (architectural) issues to limit\n> churn inside the patch, rather than fix those issues in a way that's\n> appropriate for the project as a whole.\n>\n\nI second your opinion, Mattias! I also feel that the management of the\nreview of other contibutor's patches participation is much more important\nfor a community as a whole. And this could make process of patches\nproposals and improvement running, while motivating participation (in all\nthree roles of contributors, reviewers and committers), not vice versa.\n\nRegards,\nPavel.\n\nOn Mon, 8 Apr 2024 at 19:48, Matthias van de Meent <[email protected]> wrote:On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 4/8/24 16:59, Tom Lane wrote:\n> > Heikki Linnakangas <[email protected]> writes:\n> >> On 08/04/2024 16:43, Tom Lane wrote:\n> >>> I was just about to pen an angry screed along the same lines.\n> >>> The commit flux over the past couple days, and even the last\n> >>> twelve hours, was flat-out ridiculous.  These patches weren't\n> >>> ready a week ago, and I doubt they were ready now.\n> >\n> >> Can you elaborate, which patches you think were not ready? Let's make\n> >> sure to capture any concrete concerns in the Open Items list.\n> >\n> > [ shrug... ]  There were fifty-some commits on the last day,\n> > some of them quite large, and you want me to finger particular ones?\n> > I can't even have read them all yet.\n> >\n> >> Yeah, I should have done that sooner, but realistically, there's nothing\n> >> like a looming deadline as a motivator. One idea to avoid the mad rush\n> >> in the future would be to make the feature freeze deadline more\n> >> progressive. 
For example:\n> >> April 1: If you are still working on a feature that you still want to\n> >> commit, you must explicitly flag it in the commitfest as \"I plan to\n> >> commit this very soon\".\n> >> April 4: You must give a short status update about anything that you\n> >> haven't committed yet, with an explanation of how you plan to proceed\n> >> with it.\n> >> April 5-8: Mandatory daily status updates, explicit approval by the\n> >> commitfest manager needed each day to continue working on it.\n> >> April 8: Hard feature freeze deadline\n> >\n> >> This would also give everyone more visibility, so that we're not all\n> >> surprised by the last minute flood of commits.\n> >\n> > Perhaps something like that could help, but it seems like a lot\n> > of mechanism.  I think the real problem is just that committers\n> > need to re-orient their thinking a little.  We must get *less*\n> > willing to commit marginal patches, not more so, as the deadline\n> > approaches.\n> >\n>\n> For me the main problem with the pre-freeze crush is that it leaves\n> pretty much no practical chance to do meaningful review/testing, and\n> some of the patches likely went through significant changes (at least\n> judging by the number of messages and patch versions in the associated\n> threads). That has to have a cost later ...\n>\n> That being said, I'm not sure the \"progressive deadline\" proposed by\n> Heikki would improve that materially, and it seems like a lot of effort\n> to maintain etc. And even if someone updates the CF app with all the\n> details, does it even give others sufficient opportunity to review the\n> new patch versions? I don't think so. (It anything, it does not seem\n> fair to expect others to do last-minute reviews under pressure.)\n>\n> Maybe it'd be better to start by expanding the existing rule about not\n> committing patches introduced for the first time in the last CF.\n\nI don't think adding more hurdles about what to include into the next\nrelease is a good solution. Why the March CF, and not earlier? Or\nlater? How about unregistered patches? Changes to the docs? Bug fixes?\nThe March CF already has a submission deadline of \"before march\", so\nthat already puts a soft limit on the patches considered for the april\ndeadline.\n\n> What if\n> we said that patches in the last CF must not go through significant\n> changes, and if they do it'd mean the patch is moved to the next CF?\n\nI also think there is already a big issue with a lack of interest in\ngetting existing patches from non-committers committed, reducing the\nset of patches that could be considered is just cheating the numbers\nand discouraging people from contributing. For one, I know I have\nmotivation issues keeping up with reviewing other people's patches\nwhen none (well, few, as of this CF) of my patches get reviewed\nmaterially and committed. I don't see how shrinking the window of\nopportunity for significant review from 9 to 7 months is going to help\nthere.\n\nSo, I think that'd be counter-productive, as this would get the\nperverse incentive to band-aid over (architectural) issues to limit\nchurn inside the patch, rather than fix those issues in a way that's\nappropriate for the project as a whole.I second your opinion, Mattias! I also feel that the management of the review of other contibutor's patches participation is much more important for a community as a whole. 
And this could make process of patches proposals and improvement running, while motivating participation (in all three roles of contributors, reviewers and committers), not vice versa.Regards,Pavel.", "msg_date": "Mon, 8 Apr 2024 19:56:44 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On 2024-Apr-08, Robert Haas wrote:\n\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n\nAnother idea is to run a patch triage around mid March 15th, with the\nintention of punting patches to the next cycle early enough. But rather\nthan considering each patch in its own merits, consider the responsible\n_committers_ and the load that they are reasonably expected to handle:\ndetermine which patches each committer deems his or her responsibility\nfor the rest of that March commitfest, and punt all the rest. That way\nwe have a reasonably vetted amount of effort that each committer is\nallowed to spend for the remainder of that commitfest. Excesses should\nbe obvious enough and discouraged.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 8 Apr 2024 18:07:25 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On 4/8/24 8:29 AM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2024-04-08 09:26:09 -0400, Robert Haas wrote:\r\n>> On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\r\n>> And maybe we need to think of a way to further mitigate this crush of\r\n>> last minute commits. e.g. In the last week, you can't have more\r\n>> feature commits, or more lines of insertions in your commits, than you\r\n>> did in the prior 3 weeks combined. I don't know. I think this mad rush\r\n>> of last-minute commits is bad for the project.\r\n> \r\n> I don't think it's very useful to paint a very broad brush here,\r\n> unfortunately. Some will just polish commits until the last minute, until the\r\n> the dot's on the i's really shine, others will continue picking up more CF\r\n> entries until the freeze is reached, others will push half baked stuff. Of\r\n> course there will be an increased commit rate, but it does looks like there\r\n> was some stuff that looked somewhat rickety.\r\n\r\nI agree with Andres here (though decline to comment on the rickety-ness \r\nof the patches). I think overcoming human nature to be more proactive at \r\na deadline is at least a NP-hard problem. This won't change if we adjust \r\ndeadlines. 
I think it's better to ensure we're enforcing best practices \r\nfor commits, and maybe that's a separate review to have.\r\n\r\nAs mentioned in different contexts, we do have several safeguards for a \r\nrelease:\r\n\r\n* We have a (fairly long) beta period; this allows us to remove patches \r\nprior to GA and get in further testing.\r\n* We have a RMT that (as Tom mentioned) can use its powers early and \r\noften to remove patches that are not up to our quality levels.\r\n* We can evaluate the quality of the commits coming in and coach folks \r\non what to do better.\r\n\r\nI understand that a revert is costly, particularly the longer a commit \r\nstays in, and I do 100% agree we should maintain the high commit bar we \r\nhave and not rush things in just so \"they're in for feature freeze and \r\nwe'll clean them up for beta.\" That's where best practices come in.\r\n\r\nI tend to judge the release by the outcome: once we go GA, how buggy is \r\nthe release? Did something during the release cycle (e.g. a sloppy \r\ncommit during feature freeze, lack of testing) lead to a bug that \r\nwarranted an out-of-cycle release? And yes, how we commit things at \r\nfeature freeze / through the beta impact that - we should ensure we're \r\nstill committing things that we would have committed at a least hectic \r\ntime, but understand that the deadline is still a strong motivator co \r\ncomplete the work.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Mon, 8 Apr 2024 12:23:14 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 11:48 AM Matthias van de Meent\n<[email protected]> wrote:\n> I also think there is already a big issue with a lack of interest in\n> getting existing patches from non-committers committed, reducing the\n> set of patches that could be considered is just cheating the numbers\n> and discouraging people from contributing. For one, I know I have\n> motivation issues keeping up with reviewing other people's patches\n> when none (well, few, as of this CF) of my patches get reviewed\n> materially and committed. I don't see how shrinking the window of\n> opportunity for significant review from 9 to 7 months is going to help\n> there.\n>\n> So, I think that'd be counter-productive, as this would get the\n> perverse incentive to band-aid over (architectural) issues to limit\n> churn inside the patch, rather than fix those issues in a way that's\n> appropriate for the project as a whole.\n\nI don't think you're wrong about any of this, but I don't think Tom\nand I are wrong to be upset about the volume of last-minute commits,\neither. There's a lot of this stuff that could have been committed a\nmonth ago, or two months ago, or six months ago, and it just wasn't. A\ncertain amount of that is, as Heikki says, understandable and\nexpected. People procrastinate. But, if too many people procrastinate\ntoo much, then it becomes a problem, and if you don't do anything\nabout that problem then, well, you still have one.\n\nI don't want more barriers to getting stuff committed here either, but\nI also don't want somebody whose 100-line patch is basically unchanged\nsince last July to commit it 19 hours before the feature freeze\ndeadline[1]. That's just making everyone's life more difficult. 
If\nthat patch happens to have been submitted by a non-committer, that\nnon-committer waited an extra 9 months for the commit, not knowing\nwhether it would actually happen, which like you say is demotivating.\nAnd if it was the committer's own patch then it was probably going in\nsooner or later, barring objections, so basically, they just deprived\nthe project of 9 months of in-tree testing that the patch could have\nhad basically for free. There's simply no world in which this kind of\nbehavior is actually helpful to committers, non-committers, reviews,\nor the project in general.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] This is a fictional example, I made up these numbers without\nchecking anything, but I think it's probably not far off some of what\nactually happened.\n\n\n", "msg_date": "Mon, 8 Apr 2024 12:24:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Tue, Apr 9, 2024 at 12:30 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-04-08 09:26:09 -0400, Robert Haas wrote:\n> > On Sun, Apr 7, 2024 at 6:50 PM Michael Paquier <[email protected]> wrote:\n> > And maybe we need to think of a way to further mitigate this crush of\n> > last minute commits. e.g. In the last week, you can't have more\n> > feature commits, or more lines of insertions in your commits, than you\n> > did in the prior 3 weeks combined. I don't know. I think this mad rush\n> > of last-minute commits is bad for the project.\n>\n> Some will just polish commits until the last minute, until the\n> the dot's on the i's really shine, others will continue picking up more CF\n> entries until the freeze is reached, others will push half baked stuff.\n\nI agree with this part.\n\nAside from considering how to institute some rules for mitigating the\nlast-minute rush, it might also be a good idea to consider how to\nimprove testing the new commits during beta. FWIW in each year, after\nfeature freeze I personally pick some new features that I didn't get\ninvolved with during the development and do intensive reviews in\nApril. It might be good if more people did things like that. That\nmight help finding half baked features earlier and improve the quality\nin general. So for example, we list features that could require more\nreviews (e.g. because of its volume, complexity, and a span of\ninfluence etc.) and we do intensive reviews for these items. Each item\nshould be reviewed by other than the author and the committer. We may\nwant to set aside a specific period for intensive testing.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 01:34:09 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\nOn 4/8/24 17:48, Matthias van de Meent wrote:\n> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]> wrote:\n>>\n>> ...\n>>\n>> For me the main problem with the pre-freeze crush is that it leaves\n>> pretty much no practical chance to do meaningful review/testing, and\n>> some of the patches likely went through significant changes (at least\n>> judging by the number of messages and patch versions in the associated\n>> threads). 
That has to have a cost later ...\n>>\n>> That being said, I'm not sure the \"progressive deadline\" proposed by\n>> Heikki would improve that materially, and it seems like a lot of effort\n>> to maintain etc. And even if someone updates the CF app with all the\n>> details, does it even give others sufficient opportunity to review the\n>> new patch versions? I don't think so. (It anything, it does not seem\n>> fair to expect others to do last-minute reviews under pressure.)\n>>\n>> Maybe it'd be better to start by expanding the existing rule about not\n>> committing patches introduced for the first time in the last CF.\n> \n> I don't think adding more hurdles about what to include into the next\n> release is a good solution. Why the March CF, and not earlier? Or\n> later? How about unregistered patches? Changes to the docs? Bug fixes?\n> The March CF already has a submission deadline of \"before march\", so\n> that already puts a soft limit on the patches considered for the april\n> deadline.\n> \n>> What if\n>> we said that patches in the last CF must not go through significant\n>> changes, and if they do it'd mean the patch is moved to the next CF?\n> \n> I also think there is already a big issue with a lack of interest in\n> getting existing patches from non-committers committed, reducing the\n> set of patches that could be considered is just cheating the numbers\n> and discouraging people from contributing. For one, I know I have\n> motivation issues keeping up with reviewing other people's patches\n> when none (well, few, as of this CF) of my patches get reviewed\n> materially and committed. I don't see how shrinking the window of\n> opportunity for significant review from 9 to 7 months is going to help\n> there.\n> \n\nI 100% understand how frustrating the lack of progress can be, and I\nagree we need to do better. I tried to move a number of stuck patches\nthis CF, and I hope (and plan) to do more of this in the future.\n\nBut I don't quite see how would this rule modification change the\nsituation for non-committers. AFAIK we already have the rule that\n(complex) patches should not appear in the last CF for the first time,\nand I'd argue that a substantial rework of a complex patch is not that\nfar from a completely new patch. Sure, the reworks may be based on a\nthorough review, so there's a lot of nuance. But still, is it possible\nto properly review if it gets reworked at the very end of the CF?\n\n> So, I think that'd be counter-productive, as this would get the\n> perverse incentive to band-aid over (architectural) issues to limit\n> churn inside the patch, rather than fix those issues in a way that's\n> appropriate for the project as a whole.\n> \n\nSurely those architectural shortcomings should be identified in a review\n- which however requires time to do properly, and thus is an argument\nfor ensuring there's enough time for such review (which is in direct\nconflict with the last-minute crush, IMO).\n\nOnce again, I 100% agree we need to do better in handling patches from\nnon-committers, absolutely no argument there. But does it require this\nlast-minute crush? I don't think so, it can't be at the expense of\nproper review and getting it right. A complex patch needs to be\nsubmitted early in the cycle, not in the last CF. 
If it's submitted\nearly, but does not receive enough interest/reviews, I think we need to\nfix & improve that - not to rework/push it at the very end.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Apr 2024 20:15:20 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Hi,\n\nOn 4/8/24 14:15, Tomas Vondra wrote:\n>I think we need to\n> fix & improve that - not to rework/push it at the very end.\n>\n\nThis is going to be very extreme...\n\nEither a patch is ready for merge or it isn't - when 2 or more \nCommitters agree on it then it can be merged - the policy have to be \ndiscussed of course.\n\nHowever, development happens all through out the year, so having to wait \nfor potential feedback during CommitFests windows can stop development \nduring the other months - I'm talking about non-Committers here...\n\nHaving patches on -hackers@ is best, but maybe there is a hybrid model \nwhere they exists in pull requests, tested through CfBot, and merged \nwhen ready - no matter what month it is.\n\nPull requests could still have labels that ties them to a \"CommitFest\" \nto \"high-light\" them, but it would make it easier to have a much clearer \ncut-off date for feature freeze.\n\nAnd, pull request labels are easily changed.\n\nMarch is seeing a lot of last-minute changes...\n\nJust something to think about.\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 15:00:04 -0400", "msg_from": "Jesper Pedersen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 20:15, Tomas Vondra <[email protected]> wrote:\n> I 100% understand how frustrating the lack of progress can be, and I\n> agree we need to do better. I tried to move a number of stuck patches\n> this CF, and I hope (and plan) to do more of this in the future.\n>\n> But I don't quite see how would this rule modification change the\n> situation for non-committers.\n\nThe problem that I feel I'm seeing is that committers mostly seem to\nmaterially review large patchsets in the last two commit fests. This\nmight be not based in reality, but it is definitely how it feels to\nme. If someone has stats on this, feel free to share.\n\nI'll sketch a situation: There's a big patch that some non-committer\nsubmitted that has been sitting on the mailinglist for 6 months or\nmore, only being reviewed by other non-committers, which the submitter\nquickly addresses. Then in the final commit fest it is finally\nreviewed by a committer, and they say it requires significant changes.\nRight now, the submitter can decide to quickly address those changes,\nand hope to keep the attention of this committer, to hopefully get it\nmerged before the deadline after probably a few more back and forths.\nBut this new rule would mean that those significant changes would be a\nreason not to put it in the upcoming release. Which I expect would\nfirst of all really feel like a slap in the face to the submitter,\nbecause it's not their fault that those changes were only requested in\nthe last commit fest. 
This would then likely result in the submitter\nnot following up quickly (why make time right at this moment, if\nyou're suddenly going to have another full year), which would then\ncause the committer to lose context of the patch and thus interest in\nthe patch. And then finally getting into the exact same situation next\nyear in the final commit fest, when some other committer didn't agree\nwith the redesign of the first one and request a new one pushing it\nanother year.\n\nSo yeah, I definitely agree with Matthias. I definitely feel like his\nrule would seriously impact contributions made by non-committers.\n\nMaybe a better solution to this problem would be to spread impactful\nreviews by committers more evenly throughout the year. Then there\nwouldn't be such a rush to address them in the last commit fest.\n\n\n", "msg_date": "Mon, 8 Apr 2024 21:32:14 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 3:32 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Maybe a better solution to this problem would be to spread impactful\n> reviews by committers more evenly throughout the year. Then there\n> wouldn't be such a rush to address them in the last commit fest.\n\nSpreading activity of all sorts more evenly through the year should\ndefinitely be the goal, I think. It's just not exactly clear how we\ncan do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 15:49:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\n\nOn 4/8/24 21:32, Jelte Fennema-Nio wrote:\n> On Mon, 8 Apr 2024 at 20:15, Tomas Vondra <[email protected]> wrote:\n>> I 100% understand how frustrating the lack of progress can be, and I\n>> agree we need to do better. I tried to move a number of stuck patches\n>> this CF, and I hope (and plan) to do more of this in the future.\n>>\n>> But I don't quite see how would this rule modification change the\n>> situation for non-committers.\n> \n> The problem that I feel I'm seeing is that committers mostly seem to\n> materially review large patchsets in the last two commit fests. This\n> might be not based in reality, but it is definitely how it feels to\n> me. If someone has stats on this, feel free to share.\n> \n> I'll sketch a situation: There's a big patch that some non-committer\n> submitted that has been sitting on the mailinglist for 6 months or\n> more, only being reviewed by other non-committers, which the submitter\n> quickly addresses. Then in the final commit fest it is finally\n> reviewed by a committer, and they say it requires significant changes.\n> Right now, the submitter can decide to quickly address those changes,\n> and hope to keep the attention of this committer, to hopefully get it\n> merged before the deadline after probably a few more back and forths.\n> But this new rule would mean that those significant changes would be a\n> reason not to put it in the upcoming release. Which I expect would\n> first of all really feel like a slap in the face to the submitter,\n> because it's not their fault that those changes were only requested in\n> the last commit fest. 
This would then likely result in the submitter\n> not following up quickly (why make time right at this moment, if\n> you're suddenly going to have another full year), which would then\n> cause the committer to lose context of the patch and thus interest in\n> the patch. And then finally getting into the exact same situation next\n> year in the final commit fest, when some other committer didn't agree\n> with the redesign of the first one and request a new one pushing it\n> another year.\n> \n\nFWIW I have no doubt this problem is very real. It has never been easy\nto get stuff reviewed/committed, and I doubt it improved in last couple\nyears, considering how the traffic on pgsql-hackers exploded :-(\n\n> So yeah, I definitely agree with Matthias. I definitely feel like his\n> rule would seriously impact contributions made by non-committers.\n> \n> Maybe a better solution to this problem would be to spread impactful\n> reviews by committers more evenly throughout the year. Then there\n> wouldn't be such a rush to address them in the last commit fest.\n\nRight. I think that's mostly what I was aiming for, although I haven't\nmade it very clear/explicit. But yeah, if the consequence of the \"rule\"\nwas that some of the patches are neglected entirely, that'd be pretty\nterrible - both for the project and for the contributors.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Apr 2024 00:34:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\nOn 2024-04-08 Mo 12:07, Alvaro Herrera wrote:\n> On 2024-Apr-08, Robert Haas wrote:\n>\n>> And maybe we need to think of a way to further mitigate this crush of\n>> last minute commits. e.g. In the last week, you can't have more\n>> feature commits, or more lines of insertions in your commits, than you\n>> did in the prior 3 weeks combined. I don't know. I think this mad rush\n>> of last-minute commits is bad for the project.\n> Another idea is to run a patch triage around mid March 15th, with the\n> intention of punting patches to the next cycle early enough. But rather\n> than considering each patch in its own merits, consider the responsible\n> _committers_ and the load that they are reasonably expected to handle:\n> determine which patches each committer deems his or her responsibility\n> for the rest of that March commitfest, and punt all the rest. That way\n> we have a reasonably vetted amount of effort that each committer is\n> allowed to spend for the remainder of that commitfest. Excesses should\n> be obvious enough and discouraged.\n>\n\nI quite like the triage idea. But I think there's also a case for being \nmore a bit more flexible with those patches we don't throw out. A case \nclose to my heart: I'd have been very sad if the NESTED piece of \nJSON_TABLE hadn't made the cut, which it did with a few hours to spare, \nand I would not have been alone, far from it. 
I'd have been happy to \ngive Amit a few more days or a week if he needed it, for a significant \nheadline feature.\n\nI know there will be those who say it's the thin end of the wedge and \nrulez is rulez, but this is my view.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 18:50:05 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> I quite like the triage idea. But I think there's also a case for being \n> more a bit more flexible with those patches we don't throw out. A case \n> close to my heart: I'd have been very sad if the NESTED piece of \n> JSON_TABLE hadn't made the cut, which it did with a few hours to spare, \n> and I would not have been alone, far from it. I'd have been happy to \n> give Amit a few more days or a week if he needed it, for a significant \n> headline feature.\n\n> I know there will be those who say it's the thin end of the wedge and \n> rulez is rulez, but this is my view.\n\nYou've certainly been around the project long enough to remember the\ntimes in the past when we let the schedule slip for \"one more big\npatch\". It just about never worked out well, so I'm definitely in\nfavor of a hard deadline. The trick is to control the tendency to\npush in patches that are only almost-ready in order to nominally\nmeet the deadline. (I don't pretend to be immune from that\ntemptation myself, but I think I resisted it better than some\nthis year.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 19:26:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 20:15, Tomas Vondra <[email protected]> wrote:\n>\n>\n> On 4/8/24 17:48, Matthias van de Meent wrote:\n>> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]> wrote:\n>>>\n>>> Maybe it'd be better to start by expanding the existing rule about not\n>>> committing patches introduced for the first time in the last CF.\n>>\n>> I don't think adding more hurdles about what to include into the next\n>> release is a good solution. Why the March CF, and not earlier? Or\n>> later? How about unregistered patches? Changes to the docs? Bug fixes?\n>> The March CF already has a submission deadline of \"before march\", so\n>> that already puts a soft limit on the patches considered for the april\n>> deadline.\n>>\n>>> What if\n>>> we said that patches in the last CF must not go through significant\n>>> changes, and if they do it'd mean the patch is moved to the next CF?\n>>\n>> I also think there is already a big issue with a lack of interest in\n>> getting existing patches from non-committers committed, reducing the\n>> set of patches that could be considered is just cheating the numbers\n>> and discouraging people from contributing. For one, I know I have\n>> motivation issues keeping up with reviewing other people's patches\n>> when none (well, few, as of this CF) of my patches get reviewed\n>> materially and committed. I don't see how shrinking the window of\n>> opportunity for significant review from 9 to 7 months is going to help\n>> there.\n>>\n>\n> I 100% understand how frustrating the lack of progress can be, and I\n> agree we need to do better. 
I tried to move a number of stuck patches\n> this CF, and I hope (and plan) to do more of this in the future.\n\nThanks in advance.\n\n> But I don't quite see how would this rule modification change the\n> situation for non-committers. AFAIK we already have the rule that\n> (complex) patches should not appear in the last CF for the first time,\n> and I'd argue that a substantial rework of a complex patch is not that\n> far from a completely new patch. Sure, the reworks may be based on a\n> thorough review, so there's a lot of nuance. But still, is it possible\n> to properly review if it gets reworked at the very end of the CF?\n\nPossible? Probably, if there is a good shared understanding of why the\nprevious patch version's approach didn't work well, and if the next\napproach is well understood as well. But it's not likely, that I'll\nagree with.\n\nBut the main issue I have with your suggestion is that the March\ncommitfest effectively contains all new patches starting from January,\nand the leftovers not committed by February. If we start banning all\nnew patches and those with significant reworks from the March\ncommitfest, then combined with the lack of CF in May there wouldn't be\nany chance for new patches in the first half of the year, and an\neffective block on rewrites for 6 months- the next CF is only in July.\nSure, there is a bit of leeway there, as some patches get committed\nbefore the commitfest they're registered in is marked active, but our\ndevelopment workflow is already quite hostile to incidental\ncontributor's patches [^0], and increasing the periods in which\nauthors shouldn't expect their patches to be reviewed due to a major\nrelease that's planned in >5 months is probably not going to help the\ncase.\n\n> > So, I think that'd be counter-productive, as this would get the\n> > perverse incentive to band-aid over (architectural) issues to limit\n> > churn inside the patch, rather than fix those issues in a way that's\n> > appropriate for the project as a whole.\n> >\n>\n> Surely those architectural shortcomings should be identified in a review\n> - which however requires time to do properly, and thus is an argument\n> for ensuring there's enough time for such review (which is in direct\n> conflict with the last-minute crush, IMO).\n>\n> Once again, I 100% agree we need to do better in handling patches from\n> non-committers, absolutely no argument there. But does it require this\n> last-minute crush? I don't think so, it can't be at the expense of\n> proper review and getting it right.\n\nAgreed on this, 100%, but as mentioned above, the March commitfest\ncontains more than just \"last minute crushes\" [^1]. I don't think we\nshould throw out the baby with the bathwater here.\n\n> A complex patch needs to be\n> submitted early in the cycle, not in the last CF. If it's submitted\n> early, but does not receive enough interest/reviews, I think we need to\n> fix & improve that - not to rework/push it at the very end.\n\nAgree on all this, too.\n\n-Matthias\n\n\n[^0] (see e.g. the EXPLAIN SERIALIZE patch thread [0], where the\noriginal author did not have the time capacity to maintain the\npatchset over months:\nhttps://www.postgresql.org/message-id/[email protected]\n\n[^1] Are there metrics on how many of the committed patches this CF\nwere new, only registered this CF, and if they're more or less\ndistributed to the feature freeze when compared to longer-running\npatchess? 
I can probably build these statistics by extracting the data\nfrom the webpages, but that's quite tedious.\nA manual count gets me 68 new patches (~50% of all committed\nregistered patches); distribution comparison isn't in my time budget.\n\n\n", "msg_date": "Tue, 9 Apr 2024 11:25:06 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On 4/9/24 11:25, Matthias van de Meent wrote:\n> On Mon, 8 Apr 2024 at 20:15, Tomas Vondra <[email protected]> wrote:\n>>\n>>\n>> On 4/8/24 17:48, Matthias van de Meent wrote:\n>>> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra <[email protected]> wrote:\n>>>>\n>>>> Maybe it'd be better to start by expanding the existing rule about not\n>>>> committing patches introduced for the first time in the last CF.\n>>>\n>>> I don't think adding more hurdles about what to include into the next\n>>> release is a good solution. Why the March CF, and not earlier? Or\n>>> later? How about unregistered patches? Changes to the docs? Bug fixes?\n>>> The March CF already has a submission deadline of \"before march\", so\n>>> that already puts a soft limit on the patches considered for the april\n>>> deadline.\n>>>\n>>>> What if\n>>>> we said that patches in the last CF must not go through significant\n>>>> changes, and if they do it'd mean the patch is moved to the next CF?\n>>>\n>>> I also think there is already a big issue with a lack of interest in\n>>> getting existing patches from non-committers committed, reducing the\n>>> set of patches that could be considered is just cheating the numbers\n>>> and discouraging people from contributing. For one, I know I have\n>>> motivation issues keeping up with reviewing other people's patches\n>>> when none (well, few, as of this CF) of my patches get reviewed\n>>> materially and committed. I don't see how shrinking the window of\n>>> opportunity for significant review from 9 to 7 months is going to help\n>>> there.\n>>>\n>>\n>> I 100% understand how frustrating the lack of progress can be, and I\n>> agree we need to do better. I tried to move a number of stuck patches\n>> this CF, and I hope (and plan) to do more of this in the future.\n> \n> Thanks in advance.\n> \n>> But I don't quite see how would this rule modification change the\n>> situation for non-committers. AFAIK we already have the rule that\n>> (complex) patches should not appear in the last CF for the first time,\n>> and I'd argue that a substantial rework of a complex patch is not that\n>> far from a completely new patch. Sure, the reworks may be based on a\n>> thorough review, so there's a lot of nuance. But still, is it possible\n>> to properly review if it gets reworked at the very end of the CF?\n> \n> Possible? Probably, if there is a good shared understanding of why the\n> previous patch version's approach didn't work well, and if the next\n> approach is well understood as well. But it's not likely, that I'll\n> agree with.\n> \n> But the main issue I have with your suggestion is that the March\n> commitfest effectively contains all new patches starting from January,\n> and the leftovers not committed by February. 
If we start banning all\n> new patches and those with significant reworks from the March\n> commitfest, then combined with the lack of CF in May there wouldn't be\n> any chance for new patches in the first half of the year, and an\n> effective block on rewrites for 6 months- the next CF is only in July.\n> Sure, there is a bit of leeway there, as some patches get committed\n> before the commitfest they're registered in is marked active, but our\n> development workflow is already quite hostile to incidental\n> contributor's patches [^0], and increasing the periods in which\n> authors shouldn't expect their patches to be reviewed due to a major\n> release that's planned in >5 months is probably not going to help the\n> case.\n> \n\nBut I don't think I suggested to ban such patches from the March CF\nentirely. Surely those patches can still be submitted, reviewed, and\neven reworked in the last CF. All I said is it might be better to treat\nthose patches as not committable by default. Sorry if that wasn't clear\nenough ...\n\nWould this be an additional restriction? I'm not quite sure. Surely if\nthe intent is to only commit patches that we agree are in \"sufficiently\ngood\" shape, and getting them into such shape requires time & reviews,\nthis would not be a significant change.\n\nFWIW I'm not a huge fan of hard unbreakable rules, so there should be\nsome leeway when justified, but the bar would be somewhat higher (clear\nconsensus, RMT having a chance to say no, ...).\n\n>>> So, I think that'd be counter-productive, as this would get the\n>>> perverse incentive to band-aid over (architectural) issues to limit\n>>> churn inside the patch, rather than fix those issues in a way that's\n>>> appropriate for the project as a whole.\n>>>\n>>\n>> Surely those architectural shortcomings should be identified in a review\n>> - which however requires time to do properly, and thus is an argument\n>> for ensuring there's enough time for such review (which is in direct\n>> conflict with the last-minute crush, IMO).\n>>\n>> Once again, I 100% agree we need to do better in handling patches from\n>> non-committers, absolutely no argument there. But does it require this\n>> last-minute crush? I don't think so, it can't be at the expense of\n>> proper review and getting it right.\n> \n> Agreed on this, 100%, but as mentioned above, the March commitfest\n> contains more than just \"last minute crushes\" [^1]. I don't think we\n> should throw out the baby with the bathwater here.\n> \n>> A complex patch needs to be\n>> submitted early in the cycle, not in the last CF. If it's submitted\n>> early, but does not receive enough interest/reviews, I think we need to\n>> fix & improve that - not to rework/push it at the very end.\n> \n> Agree on all this, too.\n> \n\nOK\n\n> -Matthias\n> \n> \n> [^0] (see e.g. the EXPLAIN SERIALIZE patch thread [0], where the\n> original author did not have the time capacity to maintain the\n> patchset over months:\n> https://www.postgresql.org/message-id/[email protected]\n> \n\nYeah, this is terrible :-(\n\n> [^1] Are there metrics on how many of the committed patches this CF\n> were new, only registered this CF, and if they're more or less\n> distributed to the feature freeze when compared to longer-running\n> patchess? 
I can probably build these statistics by extracting the data\n> from the webpages, but that's quite tedious.\n> A manual count gets me 68 new patches (~50% of all committed\n> registered patches); distribution comparison isn't in my time budget.\n\nI suppose there are, or would be possible to get from the CF app. And\nperhaps it'd be good to look at some of this, I think a lot of the\ndiscussion here is based on very subjective perceptions of how the\nprocess works.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Apr 2024 12:18:32 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "\nOn 2024-04-08 Mo 19:26, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> I quite like the triage idea. But I think there's also a case for being\n>> more a bit more flexible with those patches we don't throw out. A case\n>> close to my heart: I'd have been very sad if the NESTED piece of\n>> JSON_TABLE hadn't made the cut, which it did with a few hours to spare,\n>> and I would not have been alone, far from it. I'd have been happy to\n>> give Amit a few more days or a week if he needed it, for a significant\n>> headline feature.\n>> I know there will be those who say it's the thin end of the wedge and\n>> rulez is rulez, but this is my view.\n> You've certainly been around the project long enough to remember the\n> times in the past when we let the schedule slip for \"one more big\n> patch\". It just about never worked out well, so I'm definitely in\n> favor of a hard deadline. The trick is to control the tendency to\n> push in patches that are only almost-ready in order to nominally\n> meet the deadline. (I don't pretend to be immune from that\n> temptation myself, but I think I resisted it better than some\n> this year.)\n>\n> \t\t\t\n\n\nIf we want to change how things are working I suspect we probably need \nsomething a lot more radical than any of the suggestions I've seen \nfloating around. I don't know what that might be, but ISTM we're not \nthinking boldly enough.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 08:27:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 09:32:14PM +0200, Jelte Fennema-Nio wrote:\n> I'll sketch a situation: There's a big patch that some non-committer\n> submitted that has been sitting on the mailinglist for 6 months or\n> more, only being reviewed by other non-committers, which the submitter\n> quickly addresses. Then in the final commit fest it is finally\n> reviewed by a committer, and they say it requires significant changes.\n> Right now, the submitter can decide to quickly address those changes,\n> and hope to keep the attention of this committer, to hopefully get it\n> merged before the deadline after probably a few more back and forths.\n> But this new rule would mean that those significant changes would be a\n> reason not to put it in the upcoming release. Which I expect would\n> first of all really feel like a slap in the face to the submitter,\n> because it's not their fault that those changes were only requested in\n> the last commit fest. 
This would then likely result in the submitter\n> not following up quickly (why make time right at this moment, if\n> you're suddenly going to have another full year), which would then\n> cause the committer to lose context of the patch and thus interest in\n> the patch. And then finally getting into the exact same situation next\n> year in the final commit fest, when some other committer didn't agree\n> with the redesign of the first one and request a new one pushing it\n> another year.\n\nYes, I can see this happening. :-(\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:11:57 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:41:17AM -0400, Robert Treat wrote:\n> Unfortunately many humans are hardwired towards procrastination and\n> last minute heroics (it's one reason why deadline driven development\n> works even though in the long run it tends to be bad practice), and\n> historically was one of the driving factors in why we started doing\n> commitfests in the first place (you should have seen the mad dash of\n> commits before we had that process), so ISTM that it's not completely\n> avoidable...\n> \n> That said, are you suggesting that the feature freeze deadline be\n> random, and also held in secret by the RMT, only to be announced after\n> the freeze time has passed? This feels weird, but might apply enough\n> deadline related pressure while avoiding last minute shenanigans.\n\nCommitting code is a hard job, no question. However, committers have to\ngive up the idea that they should wait for brilliant ideas before\nfinalizing patches. If you come up with a better idea later, great, but\ndon't wait to finalize patches.\n\nI used to write college papers much too late because I expected some\nbrilliant idea to come to me, and it rarely happened. I learned to\nwrite the paper with the ideas I had, and if I come up with a better\nidea later, I can add it to the end.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:16:27 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Tue, Apr 9, 2024 at 5:16 PM Bruce Momjian <[email protected]> wrote:\n> Committing code is a hard job, no question. However, committers have to\n> give up the idea that they should wait for brilliant ideas before\n> finalizing patches. If you come up with a better idea later, great, but\n> don't wait to finalize patches.\n\nCompletely agreed. If your ideas are too bad, you should just give up.\nBut if they're basically fine and you're just waiting for inspiration\nto strike from above, you might as well get on with it. If it turns\nout that you've broken something, well, that sucks, but it's still\nbetter to get that out of the way in January ... or November ... or\nAugust ... 
rather than waiting until March or April.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 09:51:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" }, { "msg_contents": "On Mon, 8 Apr 2024 at 16:26, Robert Haas <[email protected]> wrote:\n\n> And maybe we need to think of a way to further mitigate this crush of\n> last minute commits. e.g. In the last week, you can't have more\n> feature commits, or more lines of insertions in your commits, than you\n> did in the prior 3 weeks combined. I don't know. I think this mad rush\n> of last-minute commits is bad for the project.\n>\n\nI think some part of this rush of commits could also be explained as a form\nof entrainment[1]. Only patches reasonably close to commit will get picked\nup with extra attention to get them ready before the deadline. After the\nrelease hammer drops, the pool of remaining patches will have few patches\nclose to commit remaining. And to make matters worse the attention of\nworking on them will be spread thinner. When repeated, this pattern can be\nself reinforcing.\n\nIf this hypothesis is true, maybe some forces could be introduced to\ncounteract this natural tendency. I don't have any bright ideas on how\nexactly yet.\n\nAnts\n\n[1] Emergent synchronization of interacting oscillators, see:\nhttps://en.wikipedia.org/wiki/Injection_locking#Entrainment\nhttps://en.wikipedia.org/wiki/Entrainment_(biomusicology)\n\n", "msg_date": "Wed, 10 Apr 2024 17:37:28 +0300", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Release Management Team & Feature Freeze" } ]
[ { "msg_contents": "I was looking at the documentation index this morning[1], and I can't\nhelp feeling like there are some parts of it that are over-emphasized\nand some parts that are under-emphasized. I'm not sure what we can do\nabout this exactly, but I thought it worth writing an email and seeing\nwhat other people think.\n\nThe two sections of the documentation that seem really\nunder-emphasized to me are the GUC documentation and the SQL\nreference. The GUC documentation is all buried under \"20. Server\nConfiguration\" and the SQL command reference is under \"I. SQL\ncommands\". For reasons that I don't understand, all chapters except\nfor those in \"VI. Reference\" are numbered, but the chapters in that\nsection have Roman numerals instead.\n\nI don't know what other people's experience is, but for me, wanting to\nknow what a command does or what a setting does is extremely common.\nTherefore, I think these chapters are disproportionately important and\nshould be emphasized more. In the case of the GUC reference, one idea\nI have is to split up \"III. Server Administration\". My proposal is\nthat we divide it into three sections. The first would be called \"III.\nServer Installation\" and would cover chapters 16 (installation from\nbinaries) through 19 (server setup and operation). The second would be\ncalled \"IV. Server Configuration\" -- so every section that's currently\na subsection of \"server configuration\" would become a top-level\nchapter. The third division would be \"V. Server Administration,\" and\nwould cover the current chapters 21-33. This is probably far from\nperfect, but it seems like a relatively simple change and better than\nwhat we have now.\n\nI don't know what to do about \"I. SQL commands\". It's obviously\nimpractical to promote that to a top-level section, because it's got a\nzillion sub-pages which I don't think we want in the top-level\ndocumentation index. But having it as one of several unnumbered\nchapters interposed between 51 and 52 doesn't seem great either.\n\nThe stuff that I think is over-emphasized is as follows: (a) chapters\n1-3, the tutorial; (b) chapters 4-6, which are essentially a\ncontinuation of the tutorial, and not at all similar to chapters 8-11\nwhich are chalk-full of detailed technical information; (c) chapters\n43-46, one per procedural language; perhaps these could just be\ndemoted to sub-sections of chapter 42 on procedural languages; (d)\nchapters 47 (server programming interface), 50 (replication progress\ntracking), and 51 (archive modules), all of which are important to\ndocument but none of which seem important enough to put them in the\ntop-level documentation index; and (e) large parts of section \"VII.\nInternals,\" which again contain tons of stuff of very marginal\ninterest. The first ~4 chapters of the internals section seem like\nthey might be mainstream enough to justify the level of prominence\nthat we give them, but the rest has got to be of interest to a tiny\nminority of readers.\n\nI think it might be possible to consolidate the internals section by\ngrouping a bunch of existing entries together by category. Basically,\nafter the first few chapters, you've got stuff that is of interest to\nC programmers writing core or extension code; and you've got\nexplainers on things like GEQO and index op-classes and support\nfunctions which might be of interest even to non-programmers. 
I think\nfor example that we don't need separate top-level chapters on writing\nprocedural language handlers, FDWs, tablesample methods, custom scan\nproviders, table access methods, index access methods, and WAL\nresource managers. Some or all of those could be grouped under a\nsingle chapter, perhaps, e.g. Using PostgreSQL Extensibility\nInterfaces.\n\nThoughts? I realize that this topic is HIGHLY prone to ENDLESS\nbikeshedding, and it's inevitable that not everybody is going to\nagree. But I hope we can agree that it's completely silly that it's\nvastly easier to find the documentation about the backup manifest\nformat than it is to find the documentation on CREATE TABLE or\nshared_buffers, and if we can agree on that, then perhaps we can agree\non some way to make things better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/docs/16/index.html\n\n\n", "msg_date": "Mon, 18 Mar 2024 10:11:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "documentation structure" }, { "msg_contents": "On Mon, 18 Mar 2024 at 15:12, Robert Haas <[email protected]> wrote:\n\nI'm not going into detail about the other docs comments, I don't have\nmuch of an opinion either way on the mentioned sections. You make good\narguments; yet I don't usually use those sections of the docs but\nrather do code searches.\n\n> I don't know what to do about \"I. SQL commands\". It's obviously\n> impractical to promote that to a top-level section, because it's got a\n> zillion sub-pages which I don't think we want in the top-level\n> documentation index. But having it as one of several unnumbered\n> chapters interposed between 51 and 52 doesn't seem great either.\n\nCould \"SQL Commands\" be a top-level construct, with subsections for\nSQL/DML, SQL/DDL, SQL/Transaction management, and PG's\nextensions/administrative/misc features? I sometimes find myself\ntrying to mentally organize what SQL commands users can use vs those\naccessible to database owners and administrators, which is not\ncurrently organized as such in the SQL Commands section.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 18 Mar 2024 15:54:45 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:55 AM Matthias van de Meent\n<[email protected]> wrote:\n> > I don't know what to do about \"I. SQL commands\". It's obviously\n> > impractical to promote that to a top-level section, because it's got a\n> > zillion sub-pages which I don't think we want in the top-level\n> > documentation index. But having it as one of several unnumbered\n> > chapters interposed between 51 and 52 doesn't seem great either.\n>\n> Could \"SQL Commands\" be a top-level construct, with subsections for\n> SQL/DML, SQL/DDL, SQL/Transaction management, and PG's\n> extensions/administrative/misc features? I sometimes find myself\n> trying to mentally organize what SQL commands users can use vs those\n> accessible to database owners and administrators, which is not\n> currently organized as such in the SQL Commands section.\n\nYeah, I wondered about that, too. Or for example you could group all\nCREATE commands together, all ALTER commands together, all DROP\ncommands together, etc. 
But I can't really see a future in such\nschemes, because having a single page that links to the reference\ndocumentation for every single command we have in alphabetical order\nis incredibly handy, or at least I have found it so. So my feeling -\nat least at present - is that it's more fruitful to look into cutting\ndown the amount of clutter that appears in the top-level documentation\nindex, and maybe finding ways to make important sections like the SQL\nreference more prominent.\n\nGiven how much documentation we have, it's just not going to be\npossible to make everything that matters conveniently visible at the\ntop level. I think if people have to click down a level for the SQL\nreference, that's fine, as long as the link they need to click on is\nreasonably visible. What annoys me about the present structure is that\nit isn't. You don't get any visual clue that the \"SQL Commands\" page\nwith ~100 subpages is more important than \"51. Archive Modules\" or\n\"33. Regression Tests\" or \"58. Writing a Procedural Language Handler,\"\nbut it totally is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 13:21:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:12 AM Robert Haas <[email protected]> wrote:\n\n> I was looking at the documentation index this morning[1], and I can't\n> help feeling like there are some parts of it that are over-emphasized\n> and some parts that are under-emphasized. I'm not sure what we can do\n> about this exactly, but I thought it worth writing an email and seeing\n> what other people think.\n>\n\nI agree, and my usage patterns of the docs are similar.\n\nAs the project progresses and more features are added and tacked on to\nexisting docs, things can get\nmurky or buried. I imagine that web access and search logs could paint a\npicture of documentation usage.\n\nI don't know what other people's experience is, but for me, wanting to\n> know what a command does or what a setting does is extremely common.\n> Therefore, I think these chapters are disproportionately important and\n> should be emphasized more. In the case of the GUC reference, one idea\n>\n\n+1\n\nI have is to split up \"III. Server Administration\". My proposal is\n> that we divide it into three sections. The first would be called \"III.\n> Server Installation\" and would cover chapters 16 (installation from\n> binaries) through 19 (server setup and operation). The second would be\n> called \"IV. Server Configuration\" -- so every section that's currently\n> a subsection of \"server configuration\" would become a top-level\n>\nchapter. The third division would be \"V. Server Administration,\" and\n> would cover the current chapters 21-33. This is probably far from\n\n\nI like all of those.\n\n\n> I don't know what to do about \"I. SQL commands\". It's obviously\n> impractical to promote that to a top-level section, because it's got a\n> zillion sub-pages which I don't think we want in the top-level\n> documentation index. But having it as one of several unnumbered\n> chapters interposed between 51 and 52 doesn't seem great either.\n>\n\nI think it'd be easier to read if current \"VI. Reference\" came right after\n\"Server Administration\",\nahead of \"Client Interfaces\" and \"Server Programming\", which are of\ninterest to a much smaller\nsubset of users.\n\nAlso if the subchapters were numbered like the rest of them. 
I don't think\nthe roman numerals are\nparticularly helpful.\n\nThe stuff that I think is over-emphasized is as follows: (a) chapters\n> 1-3, the tutorial; (b) chapters 4-6, which are essentially a\n\n...\n\nAlso +1\n\nThoughts? I realize that this topic is HIGHLY prone to ENDLESS\n> bikeshedding, and it's inevitable that not everybody is going to\n> agree. But I hope we can agree that it's completely silly that it's\n> vastly easier to find the documentation about the backup manifest\n> format than it is to find the documentation on CREATE TABLE or\n> shared_buffers, and if we can agree on that, then perhaps we can agree\n> on some way to make things better.\n>\n\nImpossible to please everyone, but I'm sure we can improve things.\n\nI've contributed to different parts of the docs over the years, and would\nbe happy\nto help with this work.\n\nRoberto", "msg_date": "Mon, 18 Mar 2024 15:06:38 -0400", "msg_from": "Roberto Mello <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, 2024-03-18 at 10:11 -0400, Robert Haas wrote:\n> The two sections of the documentation that seem really\n> under-emphasized to me are the GUC documentation and the SQL\n> reference. The GUC documentation is all buried under \"20. Server\n> Configuration\" and the SQL command reference is under \"I. SQL\n> commands\". For reasons that I don't understand, all chapters except\n> for those in \"VI. Reference\" are numbered, but the chapters in that\n> section have Roman numerals instead.\n\nThat last fact is very odd indeed and could be easily fixed.\n\n> I don't know what other people's experience is, but for me, wanting to\n> know what a command does or what a setting does is extremely common.\n> Therefore, I think these chapters are disproportionately important and\n> should be emphasized more. In the case of the GUC reference, one idea\n> I have is to split up \"III. Server Administration\". My proposal is\n> that we divide it into three sections. The first would be called \"III.\n> Server Installation\" and would cover chapters 16 (installation from\n> binaries) through 19 (server setup and operation). The second would be\n> called \"IV. Server Configuration\" -- so every section that's currently\n> a subsection of \"server configuration\" would become a top-level\n> chapter. The third division would be \"V. Server Administration,\" and\n> would cover the current chapters 21-33. This is probably far from\n> perfect, but it seems like a relatively simple change and better than\n> what we have now.\n\nI'm fine with splitting up \"Server Administration\" into three sections\nlike you propose.\n\n> I don't know what to do about \"I. SQL commands\". It's obviously\n> impractical to promote that to a top-level section, because it's got a\n> zillion sub-pages which I don't think we want in the top-level\n> documentation index. But having it as one of several unnumbered\n> chapters interposed between 51 and 52 doesn't seem great either.\n\nI think that both the GUCs and the SQL reference could be top-level\nsections.
For the GUCs there is an obvious split in sub-chapters,\nand the SQL reference could be a top-level section without any chapters\nunder it.\n\n> The stuff that I think is over-emphasized is as follows: (a) chapters\n> 1-3, the tutorial; (b) chapters 4-6, which are essentially a\n> continuation of the tutorial, and not at all similar to chapters 8-11\n> which are chalk-full of detailed technical information; (c) chapters\n> 43-46, one per procedural language; perhaps these could just be\n> demoted to sub-sections of chapter 42 on procedural languages; (d)\n> chapters 47 (server programming interface), 50 (replication progress\n> tracking), and 51 (archive modules), all of which are important to\n> document but none of which seem important enough to put them in the\n> top-level documentation index; and (e) large parts of section \"VII.\n> Internals,\" which again contain tons of stuff of very marginal\n> interest. The first ~4 chapters of the internals section seem like\n> they might be mainstream enough to justify the level of prominence\n> that we give them, but the rest has got to be of interest to a tiny\n> minority of readers.\n\nI disagree that the tutorial is over-emphasized.\n\nI also disagree that chapters 4 to 6 are a continuation of the tutorial.\nOr at least, they shouldn't be.\nWhen I am looking for a documentation reference on something like\nsecurity considerations of SECURITY DEFINER functions, my first\nimpulse is to look in chapter 5 (Data Definition) or in chapter 38\n(Extending SQL), and I am surprised to find it discussed in the\nSQL reference of CREATE FUNCTION.\n\nAnother case in point is the \"Notes\" section for CREATE VIEW. Why is\nthat not somewhere under \"Data Definition\"?\n\nFor me, the reference should be terse and focused on the syntax.\n\nChanging that is probably a lost cause by now, but I feel that we need\nnot encourage that development any more by playing down the earlier\nchapters.\n\n> I think it might be possible to consolidate the internals section by\n> grouping a bunch of existing entries together by category. Basically,\n> after the first few chapters, you've got stuff that is of interest to\n> C programmers writing core or extension code; and you've got\n> explainers on things like GEQO and index op-classes and support\n> functions which might be of interest even to non-programmers. I think\n> for example that we don't need separate top-level chapters on writing\n> procedural language handlers, FDWs, tablesample methods, custom scan\n> providers, table access methods, index access methods, and WAL\n> resource managers. Some or all of those could be grouped under a\n> single chapter, perhaps, e.g. Using PostgreSQL Extensibility\n> Interfaces.\n\nI have no strong feelings about that.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 18 Mar 2024 22:40:07 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2024-03-18 at 10:11 -0400, Robert Haas wrote:\n>> I don't know what to do about \"I. SQL commands\". It's obviously\n>> impractical to promote that to a top-level section, because it's got a\n>> zillion sub-pages which I don't think we want in the top-level\n>> documentation index. But having it as one of several unnumbered\n>> chapters interposed between 51 and 52 doesn't seem great either.\n\n> I think that both the GUCs and the SQL reference could be top-level\n> sections. 
For the GUCs there is an obvious split in sub-chapters,\n> and the SQL reference could be a top-level section without any chapters\n> under it.\n\nI'd be in favor of promoting all three of the \"Reference\" things to\nthe top level, except that as Robert says, it seems likely that that\nwould end in having a hundred individual command reference pages\nvisible in the topmost table of contents. Also, if we manage to\nsuppress that, did we really make it any more prominent? Not sure.\n\nMaking \"SQL commands\" top-level with half a dozen subsections would\nsolve the visibility problem, but I'm not real eager to go there,\nbecause I foresee endless arguments about which subsection a given\ncommand goes in. Robert's point about wanting a single alphabetized\nlist is valid too (although you could imagine that being a list in an\nintroductory section, similar to what we have for system catalogs).\n\nThis might be a silly suggestion, but: could we just render the\n\"most important\" chapter titles in a larger font?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 18:51:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 18, 2024 at 6:51 PM Tom Lane <[email protected]> wrote:\n> This might be a silly suggestion, but: could we just render the\n> \"most important\" chapter titles in a larger font?\n\nIt's not the silliest suggestion ever -- you could have proposed\n<blink>! -- but I also suspect it's not the right answer. Of course,\nvarying the font size can be a great way of emphasizing some things\nmore than others, but it doesn't usually work out well to just take a\ndocument that was designed to be displayed in a uniform font size and\nenlarge bits of text here and there. You usually want to have some\nkind of overall plan of which font size is a single component.\n\nFor example, on a corporate home page, it's quite common to have two\nnav bars, the larger of which has entries that correspond to the\ncompany's product offerings and/or marketing materials, and the\nsmaller of which has \"utility functions\" like \"login\", \"contact us\",\nand \"search\". Font size can be an effective tool for emphasizing the\nrelative importance of one nav bar versus the other, but you don't\nstart by deciding which things are going to get displayed in a larger\nfont. You start with an overall idea of the layout and then the font\nsize flows out of that.\n\nJust riffing a bit, you could imagine adding a nav bar to our\ndocumentation, either across the top or along the side, that is always\nthere on every page of the documentation and contains those links that\nwe want to make sure are always visible. Necessarily, these must be\nlimited in number. Then on the home page you could have the whole\ntable of contents as we do today, and you use that to navigate to\neverything that isn't one of the quick links.\n\nOr you can imagine that the home page of our documentation isn't just\na tree view like it is today; it might instead be written in paragraph\nform. \"Welcome to the PostgreSQL documentation! If you're new here,\ncheck out our <link>tutorial</link>! Otherwise, you might be\ninterested in our <link>SQL reference</link>, our <link>configuration\nreference</link>, or our <link>banana plantation</link>. 
If none of\nthose sound like what you want, check out the <link>documentation\nindex</link>.\" Obviously in order to actually work, something like\nthis would need to be expanded into enough paragraphs to actually\ncover all of the important sections of the documentation, and probably\nnot mention banana plantations. Or maybe it wouldn't be just\nparagraphs, but a two-column table, with each row of the table having\na main title and link in the narrower lefthand column and a blurb with\nmore links in the wider righthand column.\n\nI'm sure there are a lot of other ways to do this, too. Our main\ndocumentation page is very old-school, and there are probably a bunch\nof ways to do better.\n\nBut I'm not sure how easy it would be to get agreement on something\nspecific, and I don't know how well our toolchain can support anything\nother than what we've already got. I've also learned from painful\nexperience that you can't fix bad content with good markup. I think it\nis worth spending some effort on trying to beat the existing format\ninto submission, promoting things that seem to deserve it and demoting\nthose that seem to deserve that. At some point, we'll probably reach a\npoint of diminishing returns, either because we all agree we've done\nas well as we can, or because we can't agree on what else to do, and\nmaybe at that point the only way to improve further is with better web\ndesign and/or a different documentation toolchain. But I think it's\nfairly clear that we're not at that point now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 19:34:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "> On 18 Mar 2024, at 22:40, Laurenz Albe <[email protected]> wrote:\n> On Mon, 2024-03-18 at 10:11 -0400, Robert Haas wrote:\n\n>> For reasons that I don't understand, all chapters except\n>> for those in \"VI. Reference\" are numbered, but the chapters in that\n>> section have Roman numerals instead.\n> \n> That last fact is very odd indeed and could be easily fixed.\n\nIt's actually not very odd, the reference section is using <reference> elements\nand we had missed the arabic numerals setting on those. The attached fixes\nthat for me. That being said, we've had roman numerals for the reference\nsection since forever (all the way down to the 7.2 docs online has it) so maybe\nit was intentional? Or no one managed to see it until Robert did, I've\ncertainly never noticed it until now.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 19 Mar 2024 10:05:46 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> It's actually not very odd, the reference section is using <reference> elements\n> and we had missed the arabic numerals setting on those. The attached fixes\n> that for me. That being said, we've had roman numerals for the reference\n> section since forever (all the way down to the 7.2 docs online has it) so maybe\n> it was intentional?\n\nI'm quite sure it *was* intentional. 
Maybe it was a bad idea, but\nit's not that way simply because nobody thought about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 09:50:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 18, 2024 at 10:12 AM Robert Haas <[email protected]> wrote:\n\n> I was looking at the documentation index this morning[1], and I can't\n> help feeling like there are some parts of it that are over-emphasized\n> and some parts that are under-emphasized. I'm not sure what we can do\n> about this exactly, but I thought it worth writing an email and seeing\n> what other people think.\n>\n> The two sections of the documentation that seem really\n> under-emphasized to me are the GUC documentation and the SQL\n> reference. The GUC documentation is all buried under \"20. Server\n> Configuration\" and the SQL command reference is under \"I. SQL\n> commands\". For reasons that I don't understand, all chapters except\n> for those in \"VI. Reference\" are numbered, but the chapters in that\n> section have Roman numerals instead.\n>\n> I don't know what other people's experience is, but for me, wanting to\n> know what a command does or what a setting does is extremely common.\n> Therefore, I think these chapters are disproportionately important and\n> should be emphasized more. In the case of the GUC reference, one idea\n> I have is to split up \"III. Server Administration\". My proposal is\n> that we divide it into three sections. The first would be called \"III.\n> Server Installation\" and would cover chapters 16 (installation from\n> binaries) through 19 (server setup and operation). The second would be\n> called \"IV. Server Configuration\" -- so every section that's currently\n> a subsection of \"server configuration\" would become a top-level\n> chapter. The third division would be \"V. Server Administration,\" and\n> would cover the current chapters 21-33. This is probably far from\n> perfect, but it seems like a relatively simple change and better than\n> what we have now.\n>\n> I don't know what to do about \"I. SQL commands\". It's obviously\n> impractical to promote that to a top-level section, because it's got a\n> zillion sub-pages which I don't think we want in the top-level\n> documentation index. But having it as one of several unnumbered\n> chapters interposed between 51 and 52 doesn't seem great either.\n>\n> The stuff that I think is over-emphasized is as follows: (a) chapters\n> 1-3, the tutorial; (b) chapters 4-6, which are essentially a\n> continuation of the tutorial, and not at all similar to chapters 8-11\n> which are chalk-full of detailed technical information; (c) chapters\n> 43-46, one per procedural language; perhaps these could just be\n> demoted to sub-sections of chapter 42 on procedural languages; (d)\n> chapters 47 (server programming interface), 50 (replication progress\n> tracking), and 51 (archive modules), all of which are important to\n> document but none of which seem important enough to put them in the\n> top-level documentation index; and (e) large parts of section \"VII.\n> Internals,\" which again contain tons of stuff of very marginal\n> interest. 
The first ~4 chapters of the internals section seem like\n> they might be mainstream enough to justify the level of prominence\n> that we give them, but the rest has got to be of interest to a tiny\n> minority of readers.\n>\n> I think it might be possible to consolidate the internals section by\n> grouping a bunch of existing entries together by category. Basically,\n> after the first few chapters, you've got stuff that is of interest to\n> C programmers writing core or extension code; and you've got\n> explainers on things like GEQO and index op-classes and support\n> functions which might be of interest even to non-programmers. I think\n> for example that we don't need separate top-level chapters on writing\n> procedural language handlers, FDWs, tablesample methods, custom scan\n> providers, table access methods, index access methods, and WAL\n> resource managers. Some or all of those could be grouped under a\n> single chapter, perhaps, e.g. Using PostgreSQL Extensibility\n> Interfaces.\n>\n> Thoughts? I realize that this topic is HIGHLY prone to ENDLESS\n> bikeshedding, and it's inevitable that not everybody is going to\n> agree. But I hope we can agree that it's completely silly that it's\n> vastly easier to find the documentation about the backup manifest\n> format than it is to find the documentation on CREATE TABLE or\n> shared_buffers, and if we can agree on that, then perhaps we can agree\n> on some way to make things better.\n>\n>\n>\n+many for improving the index.\n\nMy own pet docs peeve is a purely editorial one: func.sgml is a 30k line\nbeast, and I think there's a good case for splitting out at least the\nlarger chunks of it.\n\ncheers\n\nandrew\n", "msg_date": "Tue, 19 Mar 2024 17:39:39 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 18, 2024 at 5:40 PM Laurenz Albe <[email protected]> wrote:\n> I also disagree that chapters 4 to 6 are a continuation of the tutorial.\n> Or at least, they shouldn't be.\n> When I am looking for a documentation reference on something like\n> security considerations of SECURITY DEFINER functions, my first\n> impulse is to look in chapter 5 (Data Definition) or in chapter 38\n> (Extending SQL), and I am surprised to find it discussed in the\n> SQL reference of CREATE FUNCTION.\n\nI looked at this a bit more closely.
There's actually a lot of\ndetailed technical information in chapters 4 and 5, but chapter 6 is\nextremely short and mostly recapitulates chapter 2.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 10:32:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Tue, Mar 19, 2024 at 5:39 PM Andrew Dunstan <[email protected]> wrote:\n> +many for improving the index.\n\nHere's a series of four patches. Taken together, they cut down the\nnumber of numbered chapters from 76 to 68. I think we could easily\nsave that much again if I wrote a few more patches along similar\nlines, but I'm posting these first to see what people think.\n\n0001 removes the \"Installation from Binaries\" chapter. The whole thing\nis four sentences. I moved the most important information into the\n\"Installation from Source Code\" chapter and retitled it\n\"Installation\".\n\n0002 removes the \"Monitoring Disk Usage\" chapter by folding it into\nthe immediately-preceding \"Monitoring Database Activity\" chapter. I\nkind of feel like the \"Monitoring Disk Usage\" chapter might be in need\nof a bigger rewrite or just outright removal, but there's surely not\nenough content here to justify making it a top-level chapter.\n\n0003 merges all of the \"Internals\" chapters whose names are the names\nof built-in index access methods (Btree, Gin, etc.) into a single\nchapter called \"Built-In Index Access Methods\". All of these chapters\nhave a very similar structure and none of them are very long, so it\nmakes a lot of sense, at least in my mind, to consolidate them into\none.\n\n0004 merges the \"Generic WAL Records\" and \"Custom WAL Resource\nManagers\" chapter together, creating a new chapter called \"Write Ahead\nLogging for Extensions\".\n\nOverall, I think this achieves a minor but pleasant level of\nde-cluttering of the index. It's going to take a lot more than one\nmorning's work to produce a major improvement, but at least this is\nsomething.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 20 Mar 2024 12:43:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 12:43:08PM -0400, Robert Haas wrote:\n> Overall, I think this achieves a minor but pleasant level of\n> de-cluttering of the index. It's going to take a lot more than one\n> morning's work to produce a major improvement, but at least this is\n> something.\n\nI think this kind of doc structure review is long overdue.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 20 Mar 2024 13:35:42 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 1:35 PM Bruce Momjian <[email protected]> wrote:\n> On Wed, Mar 20, 2024 at 12:43:08PM -0400, Robert Haas wrote:\n> > Overall, I think this achieves a minor but pleasant level of\n> > de-cluttering of the index. 
It's going to take a lot more than one\n> > morning's work to produce a major improvement, but at least this is\n> > something.\n>\n> I think this kind of doc structure review is long overdue.\n\nThanks, Bruce!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:16:07 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 2024-Mar-20, Robert Haas wrote:\n\n> 0003 merges all of the \"Internals\" chapters whose names are the names\n> of built-in index access methods (Btree, Gin, etc.) into a single\n> chapter called \"Built-In Index Access Methods\". All of these chapters\n> have a very similar structure and none of them are very long, so it\n> makes a lot of sense, at least in my mind, to consolidate them into\n> one.\n\nI think you can achieve this with a much smaller patch that just changes\nthe outer tag in each file so that each file is a <sect1>, then create a\nsingle file that includes all of these plus an additional outer tag for\nthe <chapter> (or maybe just add the <chapter> in postgres.sgml). This\nhas the advantage that each AM continues to be a separate single file,\nand you still have your desired structure.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 20 Mar 2024 22:05:45 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 5:05 PM Alvaro Herrera <[email protected]> wrote:\n> I think you can achieve this with a much smaller patch that just changes\n> the outer tag in each file so that each file is a <sect1>, then create a\n> single file that includes all of these plus an additional outer tag for\n> the <chapter> (or maybe just add the <chapter> in postgres.sgml). This\n> has the advantage that each AM continues to be a separate single file,\n> and you still have your desired structure.\n\nRight, that could also be done, and not just for 0003. I just wasn't\nsure that was the right approach. It would mean that the division of\nthe SGML into files continues to reflect the original chapter\ndivisions rather than the current ones forever. In the short run\nthat's less churn, less back-patching pain, etc.; but in the long term\nit means you've got relics of a structure that doesn't exist any more\nsticking around forever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 17:21:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Mar 20, 2024 at 5:05 PM Alvaro Herrera <[email protected]> wrote:\n>> I think you can achieve this with a much smaller patch that just changes\n>> the outer tag in each file so that each file is a <sect1>, then create a\n>> single file that includes all of these plus an additional outer tag for\n>> the <chapter> (or maybe just add the <chapter> in postgres.sgml). This\n>> has the advantage that each AM continues to be a separate single file,\n>> and you still have your desired structure.\n\n> Right, that could also be done, and not just for 0003. I just wasn't\n> sure that was the right approach. It would mean that the division of\n> the SGML into files continues to reflect the original chapter\n> divisions rather than the current ones forever. 
In the short run\n> that's less churn, less back-patching pain, etc.; but in the long term\n> it means you've got relics of a structure that doesn't exist any more\n> sticking around forever.\n\nI'd say that a separate file per AM is a good thing regardless.\nElsewhere in this same thread are grumblings about how big func.sgml\nis; why would you think it good to start down that same path for the\nAM documentation?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Mar 2024 17:25:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 5:25 PM Tom Lane <[email protected]> wrote:\n> I'd say that a separate file per AM is a good thing regardless.\n> Elsewhere in this same thread are grumblings about how big func.sgml\n> is; why would you think it good to start down that same path for the\n> AM documentation?\n\nWell, I suppose I thought it was a good idea because (1) we don't seem\nto have any existing precedent for file-per-sect1 rather than\nfile-per-chapter and (2) all of the per-AM files combined are less\nthan 20% of the size of func.sgml.\n\nBut, OK, if you want to establish a new paradigm here, sure. I see two\nways to do it. We can either put the <chapter> tag directly in\npostgres.sgml, or I can still create a new indextypes.sgml and put\n&btree; etc. inside of it. Which way do you prefer?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:23:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Well, I suppose I thought it was a good idea because (1) we don't seem\n> to have any existing precedent for file-per-sect1 rather than\n> file-per-chapter and (2) all of the per-AM files combined are less\n> than 20% of the size of func.sgml.\n\nWe have done (1) in places, eg. json.sgml, array.sgml,\nrangetypes.sgml, rowtypes.sgml, and the bulk of extend.sgml is split\nout into xaggr, xfunc, xindex, xoper, xtypes. I'd be the first to\nconcede it's a bit haphazard, but it's not like there's no precedent.\n\nAs for (2), func.sgml likely should have been split years ago.\n\n> But, OK, if you want to establish a new paradigm here, sure. I see two\n> ways to do it. We can either put the <chapter> tag directly in\n> postgres.sgml, or I can still create a new indextypes.sgml and put\n> &btree; etc. inside of it. Which way do you prefer?\n\nI'd follow the extend.sgml precedent: have a file corresponding to the\nchapter and containing any top-level text we need, then that includes\na file per sect1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:38:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 9:38 AM Tom Lane <[email protected]> wrote:\n> I'd follow the extend.sgml precedent: have a file corresponding to the\n> chapter and containing any top-level text we need, then that includes\n> a file per sect1.\n\nOK, here's a new patch set. I've revised 0003 and 0004 to use this\napproach, and I've added a new 0005 that does essentially the same\nthing for the PL chapters.\n\n0001 and 0002 are changed. 
Should 0002 use the include-an-entity\napproach as well?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Mar 2024 10:31:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 10:31 AM Robert Haas <[email protected]> wrote:\n> 0001 and 0002 are changed. Should 0002 use the include-an-entity\n> approach as well?\n\nWoops. I meant to say that 0001 and 0002 are *unchanged*.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 12:16:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 2024-Mar-21, Robert Haas wrote:\n\n> On Thu, Mar 21, 2024 at 9:38 AM Tom Lane <[email protected]> wrote:\n> > I'd follow the extend.sgml precedent: have a file corresponding to the\n> > chapter and containing any top-level text we need, then that includes\n> > a file per sect1.\n> \n> OK, here's a new patch set. I've revised 0003 and 0004 to use this\n> approach, \n\nGreat, thanks. Looking at the index in the PDF after (only) 0003, we\nnow have this structure\n\n62. Table Access Method Interface Definition ....................................................... 2475\n63. Index Access Method Interface Definition ....................................................... 2476\n63.1. Basic API Structure for Indexes .......................................................... 2476\n63.2. Index Access Method Functions .......................................................... 2479\n63.3. Index Scanning ................................................................................ 2485\n63.4. Index Locking Considerations ............................................................. 2486\n63.5. Index Uniqueness Checks .................................................................. 2487\n63.6. Index Cost Estimation Functions ......................................................... 2489\n64. Generic WAL Records ................................................................................. 2492\n65. Custom WAL Resource Managers ................................................................. 2494\n66. Built-in Index Access Methods ...................................................................... 2496\n\nwhich is a bit odd: why are the two WAL chapters in the middle of the\nchapters 62 and 63 talking about AMs? Maybe put 66 right after 63\ninstead. Also, is it really better to have 62/63 first and 66\nlater? It sounds to me like 66 is more user-oriented and the other two\nare developer-oriented, so I'm inclined to suggest putting them the\nother way around, but I'm not really sure about this. (Also, starting\nchapter 66 straight with 66.1 BTree without any intro text looks a bit\nodd; maybe one short introductory paragraph is sufficient?)\n\n> and I've added a new 0005 that does essentially the same\n> thing for the PL chapters.\n\nI was looking at the PL chapters earlier today too, wondering whether\nthis would be valuable; but I worry that there are too many\nsub-sub-sections there, so it could end up being a bit messy. I didn't\nlook at the resulting output though.\n\n> 0001 and 0002 are [un]changed. 
Should 0002 use the include-an-entity\n> approach as well?\n\nShrug, I wouldn't, doesn't look worth it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n", "msg_date": "Thu, 21 Mar 2024 17:42:58 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 12:43 PM Alvaro Herrera <[email protected]> wrote:\n> which is a bit odd: why are the two WAL chapters in the middle of the\n> chapters 62 and 63 talking about AMs? Maybe put 66 right after 63\n> instead. Also, is it really better to have 62/63 first and 66\n> later? It sounds to me like 66 is more user-oriented and the other two\n> are developer-oriented, so I'm inclined to suggest putting them the\n> other way around, but I'm not really sure about this. (Also, starting\n> chapter 66 straight with 66.1 BTree without any intro text looks a bit\n> odd; maybe one short introductory paragraph is sufficient?)\n\nI had similar thoughts. I think that we should consider some changes\nto the chapter ordering, but I didn't want to try to change too many\nthings all at once, since somebody only has to hate one thing about\nthe patch to sink the whole thing.\n\nBut since you brought it up, what I've been thinking about is that the\nwhole division into parts might need to be rethought a bit. I feel\nlike \"VII. Internals\" is a mix of about four different kinds of\ncontent. First, the biggest portion of it is information about\ndeveloping certain kinds of C extensions -- all the \"Writing a\nWhatever\" chapters, the \"Whatever Access Method Interface Definition\"\nchapters, \"Generic WAL Records\", \"Custom WAL Resource Managers\", and\nall the index-related chapters. Second, we've got some information\nthat I think is mainly of interest to people developing PostgreSQL\nitself, namely, \"PostgreSQL Coding Conventions\", \"Native Language\nSupport\", and \"System Catalog Declarations and Initial Contents\". You\n*might* care about these if you're developing extensions, or even if\nyou're not a developer at all, but then again you might not. Third,\nwe've got some reference material, namely \"System Catalogs\", \"System\nViews\", and perhaps \"Frontend/Backend Protocol\". I distinguish these\nfrom the previous two categories because I think you could care about\nthis stuff as a random user, or a developer of products that\ninteroperate with PostgreSQL but don't link with it or share any\ncommon code. Finally, there's just a bunch of random bits and bobs\nthat we've decided to document here for one reason or another, either\nbecause somebody else did a bunch of the work, like \"Overview of\nPostgreSQL Internals\", or because some developer did something and\nsomeone said \"hey, that should be documented!\", like \"Backup Manifest\nFormat.\"\n\nSo my first thought is to pull out the stuff that's mainly for\nPostgreSQL core developers and move it to an appendix. I propose we\ncreate an appendix called \"Developer Guide\" and that it absorb the\nexisting appendix I, \"The Source Code Repository\", possibly all or\npart of appendix J, \"Documentation\", and the chapters from \"VII.\nInternals\" that are mostly of developer interest. I think that\npossibly some of what's in \"J. 
Documentation\" should actually be moved\ninto the \"Installation\" chapter where we talk about building the\nsource code, because it doesn't make much sense to document the build\ntool chain in one part of the documentation and the documentation\ntoolchain someplace else entirely, but \"J.6. Style Guide\" is developer\ninformation, not build instructions.\n\nMy second thought is that the stuff from \"VII. Internals\" that I\ncategorized as reference material should move into section \"VI.\nReference\". I think we should also consider moving appendix F,\n\"Additional Supplied Modules and Extensions,\" and appendix G,\n\"Additional Supplied Programs\" to the reference section. However,\nprior to doing that, I think that appendix G needs some cleanup or\npossibly we should just find a way to remove it outright. We're\nshipping an appendix G with two major subsections, one of which is\ncompletely empty and has been since v14, and the other of which\ncontains only two things. I think we should just remove the empty\nsub-section entirely. I'm not sure what to do about the only with only\n2 things in it (vacuumlo and oid2name). Would it be a bad idea to just\nmerge those bits into the client applications reference section?\n\nMy third thought is about what to do with the material in \"VII.\nInternals\" that is about developing specific kind of extensions, like,\nsay, \"Writing a Foreign Data Wrapper.\" If you look at \"V. Server\nProgramming\", you see that we actually have some very similar sections\nthere, like chapter 47, \"Background Worker Processes\" and chapter 50,\n\"Archive Modules\". I think it's not very clear in the current\nstructure where topics that are relevant for extension developers\nshould go under \"Server Programming\" or under \"Internals\", and it\nlooks to me like different people have just done different things and\nit's all a bit haphazard. One idea is to decide that the correct\nanswer is \"Server Programming\" and move all of the internals chapters\nthat fall into this category over to there. I don't think this is the\nright answer, because that section also contains information about a\nbunch of stuff that's strictly SQL-level, like rules and triggers. So\nwhat I think we should do is create either [A] a new top-level part,\njust before or just after what's currently called \"VI. Reference\" or\n[B] a new appendix or [C] a new \"Reference\" section, that is\nspecifically for documentation of server APIs intended for extension\nuse. And then all the chapters under \"V. Server Programming\" or \"VII.\nInternals\" that are documenting APIs would get moved there.\n\nIf we adopted all of the patches that I proposed in my previous email\nand all of the suggestions that I just dropped in the preceding wall\nof text, then the internals section would be left with only these\nchapters:\n\n- Overview of PostgreSQL Internals\n- Genetic Query Optimizer\n- Database Physical Storage\n- Transaction Processing\n- How the Planner Uses Statistics\n- Backup Manifest Format\n\nA lot of those chapters are pretty dated and maybe not that useful in\n2024, but this email is already long enough and full of\nsufficiently-aggressive proposals that I'm not inclined to opine too\nmuch further on what we might want to do if and when we've done\neverything I just proposed. 
For now, suffice it to say that I think we\nmight choose to either rewrite and expand some of these to make them\nmore useful, or demote some of them to some less prominent place in\nthe documentation, or just delete some of them entirely; but we can\nfigure that out if and when we get there.\n\n> I was looking at the PL chapters earlier today too, wondering whether\n> this would be valuable; but I worry that there are too many\n> sub-sub-sections there, so it could end up being a bit messy. I didn't\n> look at the resulting output though.\n\nThat thought occurred to me as well. I certainly think that if we\nperform the sort of aggressive purging of the top-level index for\nwhich I'm advocating, there are going to be some people who are grumpy\nthat the stuff they're trying to find isn't where it used to be, or\nwho legitimately had trouble finding the content that they want. It\nseems to me that if you're looking for the documentation on one of the\nindividual procedural languages and you don't see it, you'll try\nclicking on \"Procedural Languages\" and then you'll find that it's now\nunder there. Now, what's maybe a bit unfortunate is that chapter\nindexes only show two levels of section headings, and the PL/pgsql\nchapter in particular has a lot of <sect2> items. If those get demoted\nto <sect3> as I am proposing, they won't show up the chapter index any\nmore. I do think there's a possibility that this could be a problem\nfor someone.\n\nOn the other hand, the table of contents for\nhttps://www.postgresql.org/docs/devel/plpgsql.html is so long right\nnow that it doesn't fit on the page, so maybe losing a level of\nsubsections won't be so bad. Alternately, maybe we could revise the\nstructure of the section a bit to ameliorate the problem. It seems to\nme that most of the first-level section headers are actually pretty\nclear about what you're likely to find underneath. If you're looking\nfor WHILE and you see a section called \"Control Structures\", it seems\nlike you're chances of guessing that WHILE will be underneath that\nsection are pretty good. The major exception that I see is 45.2,\n\"Basic Statements,\" which isn't very clear about what might be covered\nthere. But what if we split that apart into separate sections called\n\"Assignment\", \"Executing SQL\", and \"Doing Nothing at All\"? And maybe\nwe'd even pull \"Returning\" out of \"Control Structures\" as well. I\nthink that would be clear enough for people to find what they need\nwithout the extra level of headers.\n\n(For the sake of completeness, let me note that PL/python and PL/perl\nhave a few <sect2> headings as well, but I don't think it would create\na problem for users if all of those got changed to <sect3>. PL/Tcl has\nno <sect2> headings.)\n\nI'm not sure if this kind of rearrangement is actually necessary or\nnot; but my point here is that if we think that people will have\nproblems or we find out that they actually did have problems, we can\nlook at doing this kind of stuff to compensate. What I don't think we\nshould do is decide that the only workable solution is to keep having\nso many separate chapters at the top level. We're way, way beyond the\npoint where you can easily find anything on that page, and trying to\nemphasize everything just ends up emphasizing nothing. 
We need to push\nin a direction where every chapter and every appendix is expected to\nhave a large amount of content under it, so that the top-level index\nbecomes a way of finding the kind of content you want (SQL reference\npages, extension APIs, built-in SQL-callable functions, whatever) and\nthen you use that page to find the specific content that you want\nwithin that category.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 14:29:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 9:43 AM Robert Haas <[email protected]> wrote:\n\n> On Tue, Mar 19, 2024 at 5:39 PM Andrew Dunstan <[email protected]>\n> wrote:\n> > +many for improving the index.\n>\n> Here's a series of four patches.\n\n\nI reviewed the most recent set of 5 patches.\n\n\n> Taken together, they cut down the\n> number of numbered chapters from 76 to 68. I think we could easily\n> save that much again if I wrote a few more patches along similar\n> lines, but I'm posting these first to see what people think.\n>\n> 0001 removes the \"Installation from Binaries\" chapter. The whole thing\n> is four sentences. I moved the most important information into the\n> \"Installation from Source Code\" chapter and retitled it\n> \"Installation\".\n>\n\nMakes sense\n\n\n> 0002 removes the \"Monitoring Disk Usage\" chapter by folding it into\n> the immediately-preceding \"Monitoring Database Activity\" chapter. I\n> kind of feel like the \"Monitoring Disk Usage\" chapter might be in need\n> of a bigger rewrite or just outright removal, but there's surely not\n> enough content here to justify making it a top-level chapter.\n>\n\nJust going to note that the section on the cumulative statistics views\nbeing a single page is still a strongly bothersome issue here. Though the\nquick fix actually requires upgrading the section to chapter status...\n\nMaybe we can stub out that section in the \"Monitoring Database Activity\"\nchapter and move that entire section after \"System Views\" in the Internals\npart?\n\nI agree with subordinating Monitoring Disk Usage.\n\n\n> 0003 merges all of the \"Internals\" chapters whose names are the names\n> of built-in index access methods (Btree, Gin, etc.) into a single\n> chapter called \"Built-In Index Access Methods\". 
All of these chapters\n> have a very similar structure and none of them are very long, so it\n> makes a lot of sense, at least in my mind, to consolidate them into\n> one.\n>\n\nOne of the more impactful and wanted improvements, IMO.\n\n\n> 0004 merges the \"Generic WAL Records\" and \"Custom WAL Resource\n> Managers\" chapter together, creating a new chapter called \"Write Ahead\n> Logging for Extensions\".\n>\n>\nThe positioning of this and the preceding Built-in Index Access Methods\nchapter seem like they should be switched.\n\nIf this sticks we should add an introductory paragraph for the chapter.\n\nand I've added a new 0005 that does essentially the same\n> thing for the PL chapters.\n>\n\nThe following page needs to be reworded to take the new structure into\naccount:\n\nhttps://www.postgresql.org/docs/current/xfunc-pl.html\n\nNot having pl/pgsql appear on the main ToC seems like a loss but the others\nmake sense and a special exception for it probably isn't warranted.\n\nMaybe \"pl/pgsql and Other Procedural Languages\" as the title?\n\nDavid J.\n\nOn Wed, Mar 20, 2024 at 9:43 AM Robert Haas <[email protected]> wrote:On Tue, Mar 19, 2024 at 5:39 PM Andrew Dunstan <[email protected]> wrote:\n> +many for improving the index.\n\nHere's a series of four patches.I reviewed the most recent set of 5 patches.  Taken together, they cut down the\nnumber of numbered chapters from 76 to 68. I think we could easily\nsave that much again if I wrote a few more patches along similar\nlines, but I'm posting these first to see what people think.\n\n0001 removes the \"Installation from Binaries\" chapter. The whole thing\nis four sentences. I moved the most important information into the\n\"Installation from Source Code\" chapter and retitled it\n\"Installation\".Makes sense\n\n0002 removes the \"Monitoring Disk Usage\" chapter by folding it into\nthe immediately-preceding \"Monitoring Database Activity\" chapter. I\nkind of feel like the \"Monitoring Disk Usage\" chapter might be in need\nof a bigger rewrite or just outright removal, but there's surely not\nenough content here to justify making it a top-level chapter.Just going to note that the section on the cumulative statistics views being a single page is still a strongly bothersome issue here.  Though the quick fix actually requires upgrading the section to chapter status...Maybe we can stub out that section in the \"Monitoring Database Activity\" chapter and move that entire section after \"System Views\" in the Internals part?I agree with subordinating Monitoring Disk Usage.\n\n0003 merges all of the \"Internals\" chapters whose names are the names\nof built-in index access methods (Btree, Gin, etc.) into a single\nchapter called \"Built-In Index Access Methods\". 
All of these chapters\nhave a very similar structure and none of them are very long, so it\nmakes a lot of sense, at least in my mind, to consolidate them into\none.One of the more impactful and wanted improvements, IMO.\n\n0004 merges the \"Generic WAL Records\" and \"Custom WAL Resource\nManagers\" chapter together, creating a new chapter called \"Write Ahead\nLogging for Extensions\".\nThe positioning of this and the preceding Built-in Index Access Methods chapter seem like they should be switched.If this sticks we should add an introductory paragraph for the chapter.and I've added a new 0005 that does essentially the samething for the PL chapters.The following page needs to be reworded to take the new structure into account:https://www.postgresql.org/docs/current/xfunc-pl.htmlNot having pl/pgsql appear on the main ToC seems like a loss but the others make sense and a special exception for it probably isn't warranted.Maybe \"pl/pgsql and Other Procedural Languages\" as the title?David J.", "msg_date": "Thu, 21 Mar 2024 15:31:37 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:30 AM Robert Haas <[email protected]> wrote:\n\n>\n> My second thought is that the stuff from \"VII. Internals\" that I\n> categorized as reference material should move into section \"VI.\n> Reference\". I think we should also consider moving appendix F,\n> \"Additional Supplied Modules and Extensions,\" and appendix G,\n> \"Additional Supplied Programs\" to the reference section.\n>\n>\nFor \"VI. Reference\" I propose the following Chapters:\n\nSQL Commands\nPL/pgSQL\nCumulative Statistics Views\nSystem Views\nSystem Catalogs\nClient Applications\nServer Applications\nModules and Extensions\n\n-- Remove Appendix G (Programs) altogether and just note for the two that\nare listed that they are in contrib as opposed to core.\n\n-- The PostgreSQL qualifier doesn't seem helpful and once you add the\nadditional chapters its unusual presence stands out even more.\n\n-- PL/pgSQL gets its own reference chapter since we wrote it. Stuff like\nPerl and Python have entire books that the user can consult as reference\nmaterial for those languages.\n\nDavid J.\n\nOn Thu, Mar 21, 2024 at 11:30 AM Robert Haas <[email protected]> wrote:\nMy second thought is that the stuff from \"VII. Internals\" that I\ncategorized as reference material should move into section \"VI.\nReference\". I think we should also consider moving appendix F,\n\"Additional Supplied Modules and Extensions,\" and appendix G,\n\"Additional Supplied Programs\" to the reference section.For \"VI. Reference\" I propose the following Chapters:SQL CommandsPL/pgSQLCumulative Statistics ViewsSystem ViewsSystem CatalogsClient ApplicationsServer ApplicationsModules and Extensions-- Remove Appendix G (Programs) altogether and just note for the two that are listed that they are in contrib as opposed to core.-- The PostgreSQL qualifier doesn't seem helpful and once you add the additional chapters its unusual presence stands out even more.-- PL/pgSQL gets its own reference chapter since we wrote it.  Stuff like Perl and Python have entire books that the user can consult as reference material for those languages.David J.", "msg_date": "Thu, 21 Mar 2024 15:57:05 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 19.03.24 14:50, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>> It's actually not very odd, the reference section is using <reference> elements\n>> and we had missed the arabic numerals setting on those. The attached fixes\n>> that for me. That being said, we've had roman numerals for the reference\n>> section since forever (all the way down to the 7.2 docs online has it) so maybe\n>> it was intentional?\n> \n> I'm quite sure it *was* intentional. Maybe it was a bad idea, but\n> it's not that way simply because nobody thought about it.\n\nLooks to me it was just that way because it's the default setting of the \nstylesheets.\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 00:33:22 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 20.03.24 17:43, Robert Haas wrote:\n> 0001 removes the \"Installation from Binaries\" chapter. The whole thing\n> is four sentences. I moved the most important information into the\n> \"Installation from Source Code\" chapter and retitled it\n> \"Installation\".\n\nBut this separation was explicitly added a few years ago, because most \npeople just want to read about the binaries.\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 00:37:19 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 21.03.24 15:31, Robert Haas wrote:\n> On Thu, Mar 21, 2024 at 9:38 AM Tom Lane <[email protected]> wrote:\n>> I'd follow the extend.sgml precedent: have a file corresponding to the\n>> chapter and containing any top-level text we need, then that includes\n>> a file per sect1.\n> \n> OK, here's a new patch set. I've revised 0003 and 0004 to use this\n> approach, and I've added a new 0005 that does essentially the same\n> thing for the PL chapters.\n\nI'm highly against this. If I want to read about PL/Python, why should \nI have to wade through PL/Perl and PL/Tcl?\n\nI think, abstractly, in a book, PL/Python should be a chapter of its \nown. Just like GiST should be a chapter of its own. Because they are \nself-contained topics.\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 00:40:38 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "> On 22 Mar 2024, at 00:33, Peter Eisentraut <[email protected]> wrote:\n> \n> On 19.03.24 14:50, Tom Lane wrote:\n>> Daniel Gustafsson <[email protected]> writes:\n>>> It's actually not very odd, the reference section is using <reference> elements\n>>> and we had missed the arabic numerals setting on those. The attached fixes\n>>> that for me. That being said, we've had roman numerals for the reference\n>>> section since forever (all the way down to the 7.2 docs online has it) so maybe\n>>> it was intentional?\n>> I'm quite sure it *was* intentional. Maybe it was a bad idea, but\n>> it's not that way simply because nobody thought about it.\n> \n> Looks to me it was just that way because it's the default setting of the stylesheets.\n\nThat's quite possible. 
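\n\nIf we did decide to change it, I'd expect the fix to be a one-line\n
addition to our XSLT customization layer, in the same spirit as the\n
other numbering overrides we already carry, something along these lines\n
(untested, and I haven't re-verified that reference.autolabel is the\n
parameter that actually applies to <reference> elements, so treat this\n
as a sketch of the idea rather than the actual patch I attached):\n
\n
  <!-- (sketch) label the reference part with arabic numerals -->\n
  <xsl:param name=\"reference.autolabel\">1</xsl:param>\n
\n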
I don't have strong opinions on whether we should\nchange, or keep it the way it is.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 01:12:30 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 01:12:30AM +0100, Daniel Gustafsson wrote:\n> > On 22 Mar 2024, at 00:33, Peter Eisentraut <[email protected]> wrote:\n> > \n> > On 19.03.24 14:50, Tom Lane wrote:\n> >> Daniel Gustafsson <[email protected]> writes:\n> >>> It's actually not very odd, the reference section is using <reference> elements\n> >>> and we had missed the arabic numerals setting on those. The attached fixes\n> >>> that for me. That being said, we've had roman numerals for the reference\n> >>> section since forever (all the way down to the 7.2 docs online has it) so maybe\n> >>> it was intentional?\n> >> I'm quite sure it *was* intentional. Maybe it was a bad idea, but\n> >> it's not that way simply because nobody thought about it.\n> > \n> > Looks to me it was just that way because it's the default setting of the stylesheets.\n> \n> That's quite possible. I don't have strong opinions on whether we should\n> change, or keep it the way it is.\n\nIf we can't justify why it should be different, it should be like the\nsurrounding sections.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 21 Mar 2024 20:14:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 6:32 PM David G. Johnston\n<[email protected]> wrote:\n> Just going to note that the section on the cumulative statistics views being a single page is still a strongly bothersome issue here. Though the quick fix actually requires upgrading the section to chapter status...\n\nYeah, I've been bothered by this in the past, too. I'm not very keen\nto start promoting things to the top-level, though. I think we need a\nmore thoughtful fix than that.\n\nOne question I have is why all of these views are documented here\nrather than in chapter 53, \"System Views,\" because surely they are\nsystem views. I feel like if our documentation index weren't a mile\nlong and if you could easily find the entry for \"System Views,\" that's\nwhere you would naturally look for these details. I don't think it's\nnatural for a user to expect that most of the system views are going\nto be documented in section VII, chapter 53 but one particular kind is\ngoing to be documented in section III, chapter 27, under a chapter\ntitle that gives no hint that it will document any views.\n\n> Maybe \"pl/pgsql and Other Procedural Languages\" as the title?\n\nI guess I have a hard time seeing this as an improvement. It would\nhelp someone who knows that plpgsql exists but doesn't know that it\nfalls into the general category called procedural languages, but I\nsuspect that's not a very common confusion. 
I think it's better to\nkeep the chapter titles short and to the point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 08:32:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 7:37 PM Peter Eisentraut <[email protected]> wrote:\n> On 20.03.24 17:43, Robert Haas wrote:\n> > 0001 removes the \"Installation from Binaries\" chapter. The whole thing\n> > is four sentences. I moved the most important information into the\n> > \"Installation from Source Code\" chapter and retitled it\n> > \"Installation\".\n>\n> But this separation was explicitly added a few years ago, because most\n> people just want to read about the binaries.\n\nI really doubt that this is true. I've been installing software on\nUNIX-like operating systems for more than 30 years now, and I don't\nthink there's been a single time when I have ever consulted the\ndocumentation for a software package to find the download location for\nthat package. When I first started out, everything was ftp rather than\nwww, so you went to ftp.whatever.{com,org,net,gov,edu} and tried to\ndownload the distribution bundle, and then you untarred it and ran\nconfigure and make. Then you read the README or the documentation or\nwhatever afterward. These days, I think what people do is either (a)\nuse their package manager to install PostgreSQL and then come to the\ndocumentation afterward to find out how to use it or (b) do a search\nfor \"PostgreSQL download\" and click on whatever comes up. I'm not\nsaying there's never been a user who made use of this section of the\ndocumentation to find the download location, but surely the normal\nthing to do if you come to www.postgresql.org and you want to download\nthe software is to click \"Download\" on the nav bar, not\n\"Documentation,\" then a specific version, then chapter 16, then the\nexact same download link that's already there on the nav bar.\n\nI do agree that it is very questionable whether \"Installation from\nSource Code\" is of sufficient interest to ordinary users to justify\nincluding it in \"III. Server Administration.\" Most people, probably\nincluding many extension developers, are only going to install the\nbinary packages. But the solution to that isn't to have a\nfour-sentence chapter telling me about a download location that I\nlikely found long before I looked at the documentation, and that I can\ncertainly find very easily without needing the documentation. Rather,\nwhat we should do if we think that installing from source code is of\nmarginal interest is move it to an appendix. As I said to Alvaro\nyesterday, I think that a \"Developer Guide\" appendix could be a good\nplace to house a number of things that currently have toplevel\nchapters but don't really need them because they're only of interest\nto a small minority of users. This might be another thing that could\ngo there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 08:50:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 22.03.24 13:50, Robert Haas wrote:\n> On Thu, Mar 21, 2024 at 7:37 PM Peter Eisentraut <[email protected]> wrote:\n>> On 20.03.24 17:43, Robert Haas wrote:\n>>> 0001 removes the \"Installation from Binaries\" chapter. The whole thing\n>>> is four sentences. 
I moved the most important information into the\n>>> \"Installation from Source Code\" chapter and retitled it\n>>> \"Installation\".\n>>\n>> But this separation was explicitly added a few years ago, because most\n>> people just want to read about the binaries.\n> \n> I really doubt that this is true.\n\nHere is the thread: \nhttps://www.postgresql.org/message-id/flat/CABUevExRCf8waYOsrCO-QxQL50XGapMf5dnWScOXj7X%3DMXW--g%40mail.gmail.com\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:35:47 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Mar 21, 2024 at 7:40 PM Peter Eisentraut <[email protected]> wrote:\n> I'm highly against this. If I want to read about PL/Python, why should\n> I have to wade through PL/Perl and PL/Tcl?\n>\n> I think, abstractly, in a book, PL/Python should be a chapter of its\n> own. Just like GiST should be a chapter of its own. Because they are\n> self-contained topics.\n\nOn the other hand, in a book, chapters tend to be of relatively\nuniform length. People don't usually write a book with some chapters\nthat are 100+ pages long, and others that are a single page, or even\njust a couple of sentences. I mean, I'm sure it's been done, but it's\nnot a normal way to write a book.\n\nAnd I don't believe that if someone were writing a physical book about\nPostgreSQL from scratch, they'd ever end up with a top-level chapter\nthat looks anything like our GiST chapter. All of the index AM\nchapters are quite obviously clones of each other, and they're all\nquite short. Surely you'd make them sections within a chapter, not\nentire chapters.\n\nI do agree that PL/pgsql is more arguable. I can imagine somebody\nwriting a book about PostgreSQL and choosing to make that topic into a\nwhole chapter.\n\nHowever, I also think that people don't make decisions about what\nshould be a chapter in a vacuum. If you've got 100 people writing a\nbook together, which is essentially what we actually do have, and each\nof those people makes decisions in isolation about what is worthy of\nbeing a chapter, then you end up with exactly the kind of mess that we\nnow have. Some chapters are long and some are short. Some are\nwell-written and some are poorly written. Some are updated regularly\nand others have hardly been touched in a decade. Books have editors to\nstraighten out those kinds of inconsistencies so that there's some\nuniformity to the product as a whole.\n\nThe problem with that, of course, is that it invites bike-shedding. As\nyou say, every decision that is reflected in our documentation was\nmade for some reason, and most of them will have been made by\nprominent, active committers. So discussions about how to improve\nthings can easily bog down even when people agree on the overall\ngoals, simply because few individual changes find consensus. I hope\nthat doesn't happen here, because I think most people who have\ncommented so far agree that there is a problem here and that we should\ntry to fix it. 
Let's not let the perfect be the enemy of the good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:59:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 9:35 AM Peter Eisentraut <[email protected]> wrote:\n> >> But this separation was explicitly added a few years ago, because most\n> >> people just want to read about the binaries.\n> >\n> > I really doubt that this is true.\n>\n> Here is the thread:\n> https://www.postgresql.org/message-id/flat/CABUevExRCf8waYOsrCO-QxQL50XGapMf5dnWScOXj7X%3DMXW--g%40mail.gmail.com\n\nSorry. I didn't mean to dispute the point that the section was added a\nfew years ago, nor the point that most people just want to read about\nthe binaries. I am confident that both of those things are true. What\nI do want to dispute is that having a four-sentence chapter in the\ndocumentation index that tells people something they can find much\nmore easily without using the documentation at all is a good plan. I\nagree with the concern that Magnus expressed on the thread, i.e:\n\n> It's kind of strange that if you start your PostgreSQL journey by reading our instructions, you get nothing useful about installing PostgreSQL from binary packages other than \"go ask somebody else about it\".\n\nBut I don't agree that this was the right way to address that problem.\nI think it would have been better to just add the download link to the\nexisting installation chapter. That's actually what we had in chapter\n18, \"Installation from Source Code on Windows\", since removed. But for\nsome reason we decided that on non-Windows platforms, it needed a\nwhole new chapter rather than an extra sentence in the existing one. I\nthink that's massively overkill.\n\nAlternately, I think it would be reasonable to address the concern by\njust moving all the stuff about building from source code to an\nappendix, and assume people can figure out how to download the\nsoftware without us needing to say anything in the documentation at\nall. What was weird about the state before that patch, IMHO, was that\nwe both talked about building from source code and didn't talk about\nbinary packages. That can be addressed either by adding a mention of\nbinary packages, or by deemphasizing the idea of installing from\nsource code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:10:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 7:10 AM Robert Haas <[email protected]> wrote:\n\n>\n> That's actually what we had in chapter\n> 18, \"Installation from Source Code on Windows\", since removed. But for\n> some reason we decided that on non-Windows platforms, it needed a\n> whole new chapter rather than an extra sentence in the existing one. I\n> think that's massively overkill.\n>\n>\nI agree with the premise that we should have a single chapter, in the main\ndocumentation flow, named \"Installation\". It should cover the\narchitectural overview and point people to where they can find the stuff\nthey need to install PostgreSQL in the various ways available to them. I\nagree with moving the source installation material to the appendix. 
None\n
of the sections under Installation would then actually detail how to\n
install the software since that isn't something the project itself handles\n
but has delegated to packagers for the vast majority of cases and the\n
source install details are in the appendix for the one \"supported\"\n
mechanism that most people do not use.\n
\n
David J.\n
\n
", "msg_date": "Fri, 22 Mar 2024 08:50:21 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:50 AM David G. Johnston\n
<[email protected]> wrote:\n
> On Fri, Mar 22, 2024 at 7:10 AM Robert Haas <[email protected]> wrote:\n
>> That's actually what we had in chapter\n
>> 18, \"Installation from Source Code on Windows\", since removed. But for\n
>> some reason we decided that on non-Windows platforms, it needed a\n
>> whole new chapter rather than an extra sentence in the existing one. I\n
>> think that's massively overkill.\n
>\n
> I agree with the premise that we should have a single chapter, in the main documentation flow, named \"Installation\". It should cover the architectural overview and point people to where they can find the stuff they need to install PostgreSQL in the various ways available to them. I agree with moving the source installation material to the appendix. None of the sections under Installation would then actually detail how to install the software since that isn't something the project itself handles but has delegated to packagers for the vast majority of cases and the source install details are in the appendix for the one \"supported\" mechanism that most people do not use.\n
\n
Hmm, that's not quite the same as my position. I'm fine with either\n
moving the installation from source material to an appendix, or\n
leaving it where it is. But I'm strongly against continuing to have a\n
chapter with four sentences in it that says to use the same download\n
link that is on the main navigation bar of every page on the\n
postgresql.org web site. We're never going to get the chapter index\n
down to a reasonable size if we insist on having chapters that have a\n
totally de minimis amount of content.\n
\n
So my feeling is that if we keep the installation from source material\n
where it is, then we can make it also mention the download link, just\n
as we used to do in the installation-on-windows chapter. 
But if we\nbanish installation from source to the appendixes, then we shouldn't\nkeep a whole chapter in the main documentation to tell people\nsomething that is anyway obvious. I don't really think that material\nneeds to be there at all, but if we want to have it, surely we can\nfind someplace to put it such that it doesn't require a whole chapter\nto say that and nothing else. It could for example go at the beginning\nof the \"Server Setup and Operation\" chapter, for instance; if that\nwere the first chapter of section III, I think that would be natural\nenough.\n\nI notice that you say that the \"Installation\" section should \"cover\nthe architectural overview and point people to where they can find the\nstuff they need to install PostgreSQL in the various ways available to\nthem\" so maybe you're not imagining a four-sentence chapter, either.\nBut this project is going to be impossible unless we stick to limited\ngoals. We can, and should, rewrite some sections of the documentation\nto be more useful; but if we try to do that as part of the same\nproject that aims to tidy up the index, the chances of us getting\nstuck in an endless bikeshedding loop go from \"high\" to \"certain\". So\nI don't want to hypothesize the existence of an installation chapter\nthat isn't any of the things we have today. Let's try to get the\nthings we have into places that make sense, and then consider other\nimprovements separately.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:32:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024, 09:32 Robert Haas <[email protected]> wrote:\n\n>\n>\n> I notice that you say that the \"Installation\" section should \"cover\n> the architectural overview and point people to where they can find the\n> stuff they need to install PostgreSQL in the various ways available to\n> them\" so maybe you're not imagining a four-sentence chapter, either.\n>\n\nFair point but I posit that new users are looking for a chapter named\nInstallation in the documentation. At least the ones willing to read\ndocumentation. Having two of them isn't needed but having zero doesn't\nmake sense either.\n\nThe current proposal does that so I'm ok as-is but it can be further\nimproved by moving source install talk elsewhere and having the\ninstallation chapter redirect the reader there for details. I'm not\nconcerned with how long or short the resultant installation chapter is.\n\nDavid J.\n\n>\n>\n\nOn Fri, Mar 22, 2024, 09:32 Robert Haas <[email protected]> wrote:\nI notice that you say that the \"Installation\" section should \"cover\nthe architectural overview and point people to where they can find the\nstuff they need to install PostgreSQL in the various ways available to\nthem\" so maybe you're not imagining a four-sentence chapter, either.Fair point but I posit that new users are looking for a chapter named Installation in the documentation.  At least the ones willing to read documentation.  Having two of them isn't needed but having zero doesn't make sense either.The current proposal does that so I'm ok as-is but it can be further improved by moving source install talk elsewhere and having the installation chapter redirect the reader there for details.  I'm not concerned with how long or short the resultant installation chapter is.David J.", "msg_date": "Fri, 22 Mar 2024 09:55:24 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 08:32:14AM -0400, Robert Haas wrote:\n> On Thu, Mar 21, 2024 at 6:32 PM David G. Johnston\n> <[email protected]> wrote:\n> > Just going to note that the section on the cumulative statistics views being a single page is still a strongly bothersome issue here. Though the quick fix actually requires upgrading the section to chapter status...\n> \n> Yeah, I've been bothered by this in the past, too. I'm not very keen\n> to start promoting things to the top-level, though. I think we need a\n> more thoughtful fix than that.\n> \n> One question I have is why all of these views are documented here\n> rather than in chapter 53, \"System Views,\" because surely they are\n> system views. I feel like if our documentation index weren't a mile\n> long and if you could easily find the entry for \"System Views,\" that's\n> where you would naturally look for these details. I don't think it's\n> natural for a user to expect that most of the system views are going\n> to be documented in section VII, chapter 53 but one particular kind is\n> going to be documented in section III, chapter 27, under a chapter\n\nWell, until this commit in 2022, the system views were _under_ the\nsystem catalogs chapter:\n\n\tcommit 64d364bb39c\n\tAuthor: Bruce Momjian <[email protected]>\n\tDate: Thu Jul 14 16:07:12 2022 -0400\n\t\n\t doc: move system views section to its own chapter\n\t\n\t Previously it was inside the system catalogs chapter.\n\t\n\t Reported-by: Peter Smith\n\t\n\t Discussion: https://postgr.es/m/CAHut+PsMc18QP60D+L0hJBOXrLQT5m88yVaCDyxLq34gfPHsow@mail.gmail.com\n\t\n\t Backpatch-through: 15\n\nThe thread contains more discussion the issue, and I think it still needs help:\n\n\thttps://www.postgresql.org/message-id/flat/CAHut%2BPsMc18QP60D%2BL0hJBOXrLQT5m88yVaCDyxLq34gfPHsow%40mail.gmail.com\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:35:17 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 1:35 PM Bruce Momjian <[email protected]> wrote:\n> > One question I have is why all of these views are documented here\n> > rather than in chapter 53, \"System Views,\" because surely they are\n> > system views. I feel like if our documentation index weren't a mile\n> > long and if you could easily find the entry for \"System Views,\" that's\n> > where you would naturally look for these details. I don't think it's\n> > natural for a user to expect that most of the system views are going\n> > to be documented in section VII, chapter 53 but one particular kind is\n> > going to be documented in section III, chapter 27, under a chapter\n>\n> Well, until this commit in 2022, the system views were _under_ the\n> system catalogs chapter:\n\nEven before that commit, the statistics collector views were\ndocumented in a completely separate part of the documentation from all\nof the other system views.\n\nI think that commit was a good idea, even though it made the top-level\ndocumentation index bigger, because in v14, the \"System Catalogs\"\nchapter looks like this:\n\n...\n52.61. pg_ts_template\n52.62. pg_type\n52.63. pg_user_mapping\n52.64. System Views\n52.65. pg_available_extensions\n52.66. pg_available_extension_versions\n52.67. 
pg_backend_memory_contexts\n...\n\nIf you were actually looking for the section called \"System Views\",\nyou weren't likely to see it here unless you already knew it was\nthere, because it was 64 items into a 97-item list. Having one of\nthese two sections inside the other just doesn't work at all. We could\nhave alternatively chosen to have one chapter with two <sect1> tags\ninside of it, but I think what you actually did was perfectly fine.\nIMHO, \"System Views\" is important enough (and big enough) that giving\nit its own chapter is perfectly reasonable.\n\nBut that all seems like a separate question from why we have the\nstatistic collector views in a completely different part of the\ndocumentation from the rest of the system views. My guess is that it's\njust kind of a historical accident, but maybe there was some other\nlogic to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:19:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 02:19:29PM -0400, Robert Haas wrote:\n> If you were actually looking for the section called \"System Views\",\n> you weren't likely to see it here unless you already knew it was\n> there, because it was 64 items into a 97-item list. Having one of\n> these two sections inside the other just doesn't work at all. We could\n> have alternatively chosen to have one chapter with two <sect1> tags\n> inside of it, but I think what you actually did was perfectly fine.\n> IMHO, \"System Views\" is important enough (and big enough) that giving\n> it its own chapter is perfectly reasonable.\n> \n> But that all seems like a separate question from why we have the\n> statistic collector views in a completely different part of the\n> documentation from the rest of the system views. My guess is that it's\n> just kind of a historical accident, but maybe there was some other\n> logic to it.\n\nI assume statistics collector views are in \"Monitoring Database\nActivity\" because that is their purpose.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:59:44 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:19 AM Robert Haas <[email protected]> wrote:\n\n> On Fri, Mar 22, 2024 at 1:35 PM Bruce Momjian <[email protected]> wrote:\n>\n> But that all seems like a separate question from why we have the\n> statistic collector views in a completely different part of the\n> documentation from the rest of the system views. My guess is that it's\n> just kind of a historical accident, but maybe there was some other\n> logic to it.\n>\n>\nThe details under-pinning the cumulative statistics subsystem are\ndefinitely large enough to warrant their own subsection. And it isn't like\nplacing them into the monitoring chapter is wrong and aside from a couple\nof views those under System Views don't fit into what we've defined as\nmonitoring. I don't have any desire to lump them under the generic system\nviews; which itself could probably use a level of categorization since the\nnature of pg_locks and pg_cursors is decidedly different than pg_indexes\nand pg_config. 
This all becomes more appealing to work on once we solve\nthe problem of all sect2 entries being placed on a single page.\n\nI struggled for a long while where I'd always look for pg_stat_activity\nunder system views instead of monitoring. Amending my prior suggestion in\nlight of this I would suggest we move the Cumulative Statistics Views into\nReference but as its own Chapter, not part of System Views, and change its\nname to \"Monitoring Views\" (going more generalized here feels like a win to\nme). I'd move pg_locks, pg_cursors, pg_backend_memory_contexts,\npg_prepared_*, pg_shmem_allocations, and pg_replication_*. Those all have\nthe same general monitoring nature to them compared to the others that\nbasically provide details regarding schema and static or session\nconfiguration.\n\nThe original server admin monitoring section can go into detail regarding\nCumulative Statistics versus other kinds of monitoring. We can use\nsection ordering to fulfill logical grouping desires until we are able to\nmake section3 entries appear on their own pages.\n\nDavid J.\n\nOn Fri, Mar 22, 2024 at 11:19 AM Robert Haas <[email protected]> wrote:On Fri, Mar 22, 2024 at 1:35 PM Bruce Momjian <[email protected]> wrote:\nBut that all seems like a separate question from why we have the\nstatistic collector views in a completely different part of the\ndocumentation from the rest of the system views. My guess is that it's\njust kind of a historical accident, but maybe there was some other\nlogic to it.The details under-pinning the cumulative statistics subsystem are definitely large enough to warrant their own subsection. And it isn't like placing them into the monitoring chapter is wrong and aside from a couple of views those under System Views don't fit into what we've defined as monitoring.  I don't have any desire to lump them under the generic system views; which itself could probably use a level of categorization since the nature of pg_locks and pg_cursors is decidedly different than pg_indexes and pg_config.  This all becomes more appealing to work on once we solve the problem of all sect2 entries being placed on a single page.I struggled for a long while where I'd always look for pg_stat_activity under system views instead of monitoring.  Amending my prior suggestion in light of this I would suggest we move the Cumulative Statistics Views into Reference but as its own Chapter, not part of System Views, and change its name to \"Monitoring Views\" (going more generalized here feels like a win to me). I'd move pg_locks, pg_cursors, pg_backend_memory_contexts, pg_prepared_*, pg_shmem_allocations, and pg_replication_*.  Those all have the same general monitoring nature to them compared to the others that basically provide details regarding schema and static or session configuration.The original server admin monitoring section can go into detail regarding Cumulative Statistics versus other kinds of monitoring.  We can use section ordering to fulfill logical grouping desires until we are able to make section3 entries appear on their own pages.David J.", "msg_date": "Fri, 22 Mar 2024 12:06:18 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 2:59 PM Bruce Momjian <[email protected]> wrote:\n> I assume statistics collector views are in \"Monitoring Database\n> Activity\" because that is their purpose.\n\nWell, yes. 
:-)\n\nBut the point is that all other statistics views are in a single\nsection regardless of their purpose. We don't document pg_roles in the\n\"Database Roles\" chapter, for example.\n\nAnd on the flip side, pg_locks and pg_replication_origin_status are\nalso for monitoring database activity, but they're in the \"System\nViews\" chapter anyway. The only system views that are in \"Monitoring\nDatabase Activity\" rather than \"System Views\" are the ones where the\nname starts with \"pg_stat_\".\n\nSo the reason you state is why these views are under \"Monitoring\nDatabase Activity\" rather than a chapter chosen at random. But it\ndoesn't really explain why they're separate from the other system\nviews at all. That seems to be a pretty much random choice, AFAICT.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:13:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 03:13:29PM -0400, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 2:59 PM Bruce Momjian <[email protected]> wrote:\n> > I assume statistics collector views are in \"Monitoring Database\n> > Activity\" because that is their purpose.\n> \n> Well, yes. :-)\n> \n> But the point is that all other statistics views are in a single\n> section regardless of their purpose. We don't document pg_roles in the\n> \"Database Roles\" chapter, for example.\n> \n> And on the flip side, pg_locks and pg_replication_origin_status are\n> also for monitoring database activity, but they're in the \"System\n> Views\" chapter anyway. The only system views that are in \"Monitoring\n> Database Activity\" rather than \"System Views\" are the ones where the\n> name starts with \"pg_stat_\".\n> \n> So the reason you state is why these views are under \"Monitoring\n> Database Activity\" rather than a chapter chosen at random. But it\n> doesn't really explain why they're separate from the other system\n> views at all. That seems to be a pretty much random choice, AFAICT.\n\nI agree and they should be with the other views. I was just explaining\nwhy, at the time, I didn't touch them.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:17:48 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:17 PM Bruce Momjian <[email protected]> wrote:\n> I agree and they should be with the other views. I was just explaining\n> why, at the time, I didn't touch them.\n\nAh, OK. That makes total sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:20:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 22.03.24 14:59, Robert Haas wrote:\n> And I don't believe that if someone were writing a physical book about\n> PostgreSQL from scratch, they'd ever end up with a top-level chapter\n> that looks anything like our GiST chapter. All of the index AM\n> chapters are quite obviously clones of each other, and they're all\n> quite short. Surely you'd make them sections within a chapter, not\n> entire chapters.\n> \n> I do agree that PL/pgsql is more arguable. 
I can imagine somebody\n> writing a book about PostgreSQL and choosing to make that topic into a\n> whole chapter.\n\nYeah, I think there is probably a range of of things from pretty obvious \nto mostly controversial.\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:26:20 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 22.03.24 15:10, Robert Haas wrote:\n> Sorry. I didn't mean to dispute the point that the section was added a\n> few years ago, nor the point that most people just want to read about\n> the binaries. I am confident that both of those things are true. What\n> I do want to dispute is that having a four-sentence chapter in the\n> documentation index that tells people something they can find much\n> more easily without using the documentation at all is a good plan.\n\nI think a possible problem we need to consider with these proposals to \ncombine chapters is that they could make the chapters themselves too \ndeep and harder to navigate. For example, if we combined the \ninstallation from source and binaries chapters, the structure of the new \nchapter would presumably be\n\n<chapter> Installation\n <sect1> Installation from Binaries\n <sect1> Installation from Source\n <sect2> Requirements\n <sect2> Getting the Source\n <sect2> Building and Installation with Autoconf and Make\n <sect2> Building and Installation with Meson\netc.\n\nThis would mean that the entire \"Installation from Source\" part would be \nrendered on a single HTML page.\n\nThe rendering can be adjusted to some degree, but then we also need to \nmake sure any new chunking makes sense in other chapters. (And it might \nalso change a bunch of externally known HTML links.)\n\nI think maybe more could also be done at the top-level structure, too. \nRight now, we have <book> -> <part> -> <chapter>. We could add <set> on \ntop of that.\n\nWe could also play with CSS or JavaScript to make the top-level table of \ncontents more navigable, with collapsing subsections or whatever.\n\nWe could also render additional tables of contents or indexes, so there \nis more than one way to navigate into the content from the top.\n\nWe could also build better search.\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:40:52 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:40 AM Peter Eisentraut <[email protected]> wrote:\n> I think a possible problem we need to consider with these proposals to\n> combine chapters is that they could make the chapters themselves too\n> deep and harder to navigate. For example, if we combined the\n> installation from source and binaries chapters, the structure of the new\n> chapter would presumably be\n\nI agree with this in theory, but in practice I think the patches that\nI posted don't have this issue to a degree that is problematic, and I\nposted some specific proposals on adjustments that we could make to\nameliorate the problem if other people feel differently.\n\n> I think maybe more could also be done at the top-level structure, too.\n> Right now, we have <book> -> <part> -> <chapter>. 
We could add <set> on\n> top of that.\n>\n> We could also play with CSS or JavaScript to make the top-level table of\n> contents more navigable, with collapsing subsections or whatever.\n>\n> We could also render additional tables of contents or indexes, so there\n> is more than one way to navigate into the content from the top.\n>\n> We could also build better search.\n\nThese are all reasonable ideas. I think some better CSS and JavaScript\ncould definitely help, and I also wondered whether the entrypoint to\nthe documentation has to be the index page, or whether it could maybe\nbe a page we've crafted specifically for that purpose, that might\ninclude some text as well as a bunch of links.\n\nBut that having been said, I don't believe that any of those ideas (or\nanything else we do) will obviate the need for some curation of the\ntoplevel index. If you're going to add another level, as you propose\nin the first point, you still need to make decisions about which\nthings properly go at which levels. If you're going to allow for\ncollapsing subsections, you still want the overall tree in which\nsubsections are be expanded and collapsed to make logical sense. If\nyou have multiple ways to navigate to the content, one of them will\nprobably be still the index, and it should be good. And good search is\ngood, but it shouldn't be the only convenient way to find the content.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 12:01:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "OK, so I'm coming back to this thread after giving it a few days to\ncool off. My last series of patches proposed to do five things:\n\n1. Merge the four-sentence \"Installation from Binaries\" chapter back\ninto \"Installation from Source\". I thought this was a slam-dunk, but\nPeter pointed out that exactly the opposite of this was done a few\nyears ago to create the \"Installation from Binaries\" chapter in the\nfirst place. Based on subsequent discussion, what I'm now inclined to\ndo is come up with a new proposal that involves moving the information\nabout compiling from source to an appendix. So never mind about this\none for now.\n\n2. Demote \"Monitoring Disk Usage\" from a chapter on its own to a\nsection of the \"Monitoring Database Activity\" chapter. I haven't seen\nany objections to this, and I'd like to move ahead with it.\n\n3. Merge the separate chapters on various built-in index AMs into one.\nPeter didn't think this was a good idea, but Tom and Alvaro's comments\nfocused on how to do it mechanically, and on whether the chapters\nneeded to be reordered afterwards, which I took to mean that they were\nOK with the basic concept. David Johnston was also clearly in favor of\nit. So I'd like to move ahead with this one, too.\n\n4. Consolidate the \"Generic WAL Records\" and \"Custom WAL Resource\nManagers\" chapters, which cover related topics, into a single one. I\ndidn't see anyone object to this, but David Johnston pointed out that\nthe patch I posted was a few bricks short of a load, because it really\nneeded to put some introductory text into the new chapter. I'll study\nthis a bit more and propose a new patch that does the same thing a bit\nmore carefully than my previous version did.\n\n5. Consolidate all of the procedural language chapters into one. This\nwas clearly the most controversial part of the proposal. 
I'm going to\nlay this one aside for now and possibly come back to it at a later\ntime.\n\nI hope that this way of proceeding makes sense to people.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Mar 2024 09:40:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Mar 29, 2024 at 9:40 AM Robert Haas <[email protected]> wrote:\n> 2. Demote \"Monitoring Disk Usage\" from a chapter on its own to a\n> section of the \"Monitoring Database Activity\" chapter. I haven't seen\n> any objections to this, and I'd like to move ahead with it.\n>\n> 3. Merge the separate chapters on various built-in index AMs into one.\n> Peter didn't think this was a good idea, but Tom and Alvaro's comments\n> focused on how to do it mechanically, and on whether the chapters\n> needed to be reordered afterwards, which I took to mean that they were\n> OK with the basic concept. David Johnston was also clearly in favor of\n> it. So I'd like to move ahead with this one, too.\n\nI committed these two patches.\n\n> 4. Consolidate the \"Generic WAL Records\" and \"Custom WAL Resource\n> Managers\" chapters, which cover related topics, into a single one. I\n> didn't see anyone object to this, but David Johnston pointed out that\n> the patch I posted was a few bricks short of a load, because it really\n> needed to put some introductory text into the new chapter. I'll study\n> this a bit more and propose a new patch that does the same thing a bit\n> more carefully than my previous version did.\n\nHere is a new version of this patch. I think this is v18 material at\nthis point, absent an outcry to the contrary. Sometimes we're flexible\nabout doc patches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 Apr 2024 11:11:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:40 AM Peter Eisentraut <[email protected]> wrote:\n> I think a possible problem we need to consider with these proposals to\n> combine chapters is that they could make the chapters themselves too\n> deep and harder to navigate.\n\nI looked into various options for further combining chapters and/or\nappendixes and found that this is indeed a huge problem. For example,\nI had thought of creating a Developer Information chapter in the\nappendix and moving various existing chapters and appendixes inside of\nit, but that means that the <sect1> elements in those chapters get\ndemoted to <sect2>, and what used to be a whole chapter or appendix\nbecomes a <sect1>. And since you get one HTML page per <sect1>, that\nmeans that instead of a bunch of individual HTML pages of very\npleasant length, you suddenly get one very long HTML page that is,\nexactly as you say, hard to navigate.\n\n> The rendering can be adjusted to some degree, but then we also need to\n> make sure any new chunking makes sense in other chapters. (And it might\n> also change a bunch of externally known HTML links.)\n\nI looked into this and I'm unclear how much customization is possible.\nI gather that the current scheme comes from having chunk.section.depth\nof 1, and I guess you can change that to 2 to get an HTML page per\n<sect2>, but it seems like it would take a LOT of restructuring to\nmake that work. 
It would be much easier if you could vary this across\ndifferent parts of the documentation; for instance, if you could say,\nwell, in this particular chapter or appendix, I want\nchunk.section.depth of 2, but elsewhere 1, that would be quite handy,\nbut after several hours reading various things about DocBook on the\nInternet, I was still unable to determine conclusively whether this\nwas possible. There's an interesting comment in\nstylesheet-speedup-xhtml.xsl that says \"Since we set a fixed\n$chunk.section.depth, we can do away with a bunch of complicated XPath\nsearches for the previous and next sections at various levels.\" That\nsounds like it's suggesting that it is in fact possible for this\nsetting to vary, but I don't know if that's true, or how to do it, and\nit sounds like there might be performance consequences, too.\n\n> I think maybe more could also be done at the top-level structure, too.\n> Right now, we have <book> -> <part> -> <chapter>. We could add <set> on\n> top of that.\n\nDoes this let you create structures of non-uniform depth? i.e. is\nthere a way that we can group some chapters into sets while leaving\nothers as standalone chapters, or somesuch?\n\nI'm not 100% confident that non-uniform depth (either via <set> or via\nchunk.section.depth or via some other mechanism) is a good idea.\nThere's a sort of uniformity to our navigation right now that does\nhave some real appeal. The downside, though, is that if you want\nsomething to be a single HTML page, it's got to either be a chapter\n(or appendix) by itself with no sections inside of it, or it's got to\nbe a <sect1> inside of a chapter, and so anything that's long enough\nthat it should be an HTML page by itself can never be more than one\nlevel below the index. And that seems to make it quite difficult to\nkeep the index small.\n\nWithout some kind of variable-depth structure, the only other ways\nthat I can see to improve things are:\n\n1. Make chunk.section.depth 2 and restructure the entire documentation\nuntil the results look reasonable. This might be possible but I bet\nit's difficult. We have, at present, chapters of *wildly* varying\nlength, from a few sentences to many, many pages. That is perhaps a\nbad thing; you most likely wouldn't do that in a printed book. But\nfixing it is a huge project. We don't necessarily have the same amount\nof content about each topic, and there isn't necessarily a way of\ngrouping related topics together that produces units of relatively\nuniform length. I think it's sensible to try to make improvements\nwhere we can, by pushing stuff down that's short and not that\nimportant, but finding our way to a chunk.section.depth=2 world that\nfeels good to most people compared to what we have today seems like\nit's going to be challening.\n\n2. Replace the current index with a custom index or landing page of\nsome kind. Or keep the current index and add a new landing page\nalongside it. Something that isn't derived automatically from the\ndocumentation structure but is created by hand.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 12:01:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Apr 5, 2024 at 9:01 AM Robert Haas <[email protected]> wrote:\n\n>\n> > The rendering can be adjusted to some degree, but then we also need to\n> > make sure any new chunking makes sense in other chapters. 
(And it might\n> > also change a bunch of externally known HTML links.)\n>\n> I looked into this and I'm unclear how much customization is possible.\n>\n>\nHere is a link to my attempt at this a couple of years ago. It basically\n\"abuses\" refentry.\n\nhttps://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.com\n\nI never did dive into the man page or PDF dynamics of this\nparticular change but it seemed to solve HTML pagination without negative\nconsequences and with minimal risk of unintended consequences since only\nthe markup on the pages we want to alter is changed, not global\nconfiguration.\n\nDavid J.\n\nOn Fri, Apr 5, 2024 at 9:01 AM Robert Haas <[email protected]> wrote:\n> The rendering can be adjusted to some degree, but then we also need to\n> make sure any new chunking makes sense in other chapters.  (And it might\n> also change a bunch of externally known HTML links.)\n\nI looked into this and I'm unclear how much customization is possible.Here is a link to my attempt at this a couple of years ago.  It basically \"abuses\" refentry.https://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.comI never did dive into the man page or PDF dynamics of this particular change but it seemed to solve HTML pagination without negative consequences and with minimal risk of unintended consequences since only the markup on the pages we want to alter is changed, not global configuration.David J.", "msg_date": "Fri, 5 Apr 2024 09:14:56 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Apr 5, 2024 at 12:15 PM David G. Johnston\n<[email protected]> wrote:\n> Here is a link to my attempt at this a couple of years ago. It basically \"abuses\" refentry.\n>\n> https://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.com\n>\n> I never did dive into the man page or PDF dynamics of this particular change but it seemed to solve HTML pagination without negative consequences and with minimal risk of unintended consequences since only the markup on the pages we want to alter is changed, not global configuration.\n\nHmm, but it seems like that might have generated some man page entries\nthat we don't want?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 12:18:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Apr 5, 2024 at 9:18 AM Robert Haas <[email protected]> wrote:\n\n> On Fri, Apr 5, 2024 at 12:15 PM David G. Johnston\n> <[email protected]> wrote:\n> > Here is a link to my attempt at this a couple of years ago. 
It\n> basically \"abuses\" refentry.\n> >\n> >\n> https://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.com\n> >\n> > I never did dive into the man page or PDF dynamics of this particular\n> change but it seemed to solve HTML pagination without negative consequences\n> and with minimal risk of unintended consequences since only the markup on\n> the pages we want to alter is changed, not global configuration.\n>\n> Hmm, but it seems like that might have generated some man page entries\n> that we don't want?\n>\n\nIf so (didn't check) maybe just remove them in post?\n\nDavid J.\n\nOn Fri, Apr 5, 2024 at 9:18 AM Robert Haas <[email protected]> wrote:On Fri, Apr 5, 2024 at 12:15 PM David G. Johnston\n<[email protected]> wrote:\n> Here is a link to my attempt at this a couple of years ago.  It basically \"abuses\" refentry.\n>\n> https://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.com\n>\n> I never did dive into the man page or PDF dynamics of this particular change but it seemed to solve HTML pagination without negative consequences and with minimal risk of unintended consequences since only the markup on the pages we want to alter is changed, not global configuration.\n\nHmm, but it seems like that might have generated some man page entries\nthat we don't want?\nIf so (didn't check) maybe just remove them in post?David J.", "msg_date": "Fri, 5 Apr 2024 09:22:36 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 05.04.24 17:11, Robert Haas wrote:\n>> 4. Consolidate the \"Generic WAL Records\" and \"Custom WAL Resource\n>> Managers\" chapters, which cover related topics, into a single one. I\n>> didn't see anyone object to this, but David Johnston pointed out that\n>> the patch I posted was a few bricks short of a load, because it really\n>> needed to put some introductory text into the new chapter. I'll study\n>> this a bit more and propose a new patch that does the same thing a bit\n>> more carefully than my previous version did.\n> \n> Here is a new version of this patch. I think this is v18 material at\n> this point, absent an outcry to the contrary. Sometimes we're flexible\n> about doc patches.\n\nLooks good to me. 
I think this could go into PG17.\n\n\n", "msg_date": "Mon, 8 Apr 2024 16:15:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Mar 20, 2024 at 5:40 AM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> +many for improving the index.\n>\n> My own pet docs peeve is a purely editorial one: func.sgml is a 30k line beast, and I think there's a good case for splitting out at least the larger chunks of it.\n>\n\nI think I successfully reduced func.sgml from 311322 lines to 13167 lines.\n(base-commit: 93582974315174d544592185d797a2b44696d1e5)\n\nwriting a patch would be unreviewable.\nkey gotcha is put the contents between opening `<sect1>` and closing\n`</sect1>` (both inclusive)\ninto a new file.\nin func.sgml, using `&entity` to refernce the new file.\nalso update filelist.sgml\n\nhere is how I do it:\n\nI found out these build html files are the biggest one:\ndoc/src/sgml/html/functions-string.html\ndoc/src/sgml/html/functions-matching.html\ndoc/src/sgml/html/functions-datetime.html\ndoc/src/sgml/html/functions-json.html\ndoc/src/sgml/html/functions-aggregate.html\ndoc/src/sgml/html/functions-info.html\ndoc/src/sgml/html/functions-admin.html\n\nso create these new sgml files hold corrspedoning content:\nfunc-string.sgml\nfunc-matching.sgml\nfunc-datetime.sgml\nfunc-json.sgml\nfunc-aggregate.sgml\nfunc-info.sgml\nfunc-admin.sgml\n\nbased on funs.sgml structure pattern:\n<sect1 id=\"functions-string\">\nnext section1 line number:\n<sect1 id=\"functions-binarystring\">\n\n<sect1 id=\"functions-matching\">\nnext section1 line number:\n<sect1 id=\"functions-formatting\">\n\n<sect1 id=\"functions-datetime\">\nnext section1 line number:\n<sect1 id=\"functions-enum\">\n\n<sect1 id=\"functions-json\">\nnext section1 line number:\n<sect1 id=\"functions-sequence\">\n\n<sect1 id=\"functions-aggregate\">\nnext section1 line number:\n<sect1 id=\"functions-window\">\n\n<sect1 id=\"functions-info\">\nnext section1 line number:\n<sect1 id=\"functions-admin\">\n\n<sect1 id=\"functions-admin\">\nnext section1 line number:\n<sect1 id=\"functions-trigger\">\n------------------------------------\nstep1: pipe the relative line range contents to new sgml files.\n(example: line 2407 to line 4177 include all the content correspond to\nfunctions-string.html)\n\nsed -n '2407,4177 p' func.sgml > func-string.sgml\nsed -n '5328,7756 p' func.sgml > func-matching.sgml\nsed -n '8939,11122 p' func.sgml > func-datetime.sgml\nsed -n '15498,19348 p' func.sgml > func-json.sgml\nsed -n '21479,22896 p' func.sgml > func-aggregate.sgml\nsed -n '24257,27896 p' func.sgml > func-info.sgml\nsed -n '27898,30579 p' func.sgml > func-admin.sgml\n\nstep2:\nin place delete these line ranges in func.sgml\nsed --in-place \"2407,4177d ; 5328,7756d ; 8939,11122d ; 15498,19348d\n; 21479,22896d ; 24257,27896d ; 27898,30579d\" \\\n func.sgml\nreference: https://unix.stackexchange.com/questions/676210/matching-multiple-ranges-with-sed-range-expressions\n https://www.gnu.org/software/sed/manual/sed.html#Command_002dLine-Options\n\nstep3:\nput following lines into relative position in func.sgml:\n(based on above structure pattern, quickly location line position)\n\n`\n&func-string\n&func-matching\n&func-datetime\n&func-json\n&func-aggregate\n&func-info\n&func-admin\n`\n\nstep4: update filelist.sgml:\ndiff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml\nindex 3fb0709f..0b78a361 100644\n--- a/doc/src/sgml/filelist.sgml\n+++ 
b/doc/src/sgml/filelist.sgml\n@@ -18,6 +18,13 @@\n <!ENTITY ddl SYSTEM \"ddl.sgml\">\n <!ENTITY dml SYSTEM \"dml.sgml\">\n <!ENTITY func SYSTEM \"func.sgml\">\n+<!ENTITY func-string SYSTEM \"func-string.sgml\">\n+<!ENTITY func-matching SYSTEM \"func-matching.sgml\">\n+<!ENTITY func-datetime SYSTEM \"func-datetime.sgml\">\n+<!ENTITY func-json SYSTEM \"func-json.sgml\">\n+<!ENTITY func-aggregate SYSTEM \"func-aggregate.sgml\">\n+<!ENTITY func-info SYSTEM \"func-info.sgml\">\n+<!ENTITY func-admin SYSTEM \"func-admin.sgml\">\n <!ENTITY indices SYSTEM \"indices.sgml\">\n <!ENTITY json SYSTEM \"json.sgml\">\n <!ENTITY mvcc SYSTEM \"mvcc.sgml\">\n\n doc/src/sgml/filelist.sgml | 7 +\n doc/src/sgml/func-admin.sgml | 2682 +++++\n doc/src/sgml/func-aggregate.sgml | 1418 +++\n doc/src/sgml/func-datetime.sgml | 2184 ++++\n doc/src/sgml/func-info.sgml | 3640 ++++++\n doc/src/sgml/func-json.sgml | 3851 ++++++\n doc/src/sgml/func-matching.sgml | 2429 ++++\n doc/src/sgml/func-string.sgml | 1771 +++\n doc/src/sgml/func.sgml | 17979 +----------------------------\n\nwe can do it one by one, but it's still worth it.\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:15 AM Peter Eisentraut <[email protected]> wrote:\n> > Here is a new version of this patch. I think this is v18 material at\n> > this point, absent an outcry to the contrary. Sometimes we're flexible\n> > about doc patches.\n>\n> Looks good to me. I think this could go into PG17.\n\nHearing no objections, done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:06:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Hi,\n\nOn 2024-03-19 17:39:39 -0400, Andrew Dunstan wrote:\n> My own pet docs peeve is a purely editorial one: func.sgml is a 30k line\n> beast, and I think there's a good case for splitting out at least the\n> larger chunks of it.\n\nI think we should work on generating a lot of func.sgml. Particularly the\nsignature etc should just come from pg_proc.dat, it's pointlessly painful to\ngenerate that by hand. And for a lot of the functions we should probably move\nthe existing func.sgml comments to the description in pg_proc.dat.\n\nI suspect that we can't just generate all the documentation from pg_proc,\nbecause of xrefs etc. Although perhaps we could just strip those out for\npg_proc.\n\nWe'd need to add some more metadata to pg_proc, for grouping kinds of\nfunctions together. But that seems doable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Apr 2024 11:23:10 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I think we should work on generating a lot of func.sgml. Particularly the\n> signature etc should just come from pg_proc.dat, it's pointlessly painful to\n> generate that by hand. And for a lot of the functions we should probably move\n> the existing func.sgml comments to the description in pg_proc.dat.\n\nWhere are you going to get the examples and text descriptions from?\n(And no, I don't agree that the pg_description string should match\nwhat's in the docs. 
The description string has to be a short\none-liner in just about every case.)\n\nThis sounds to me like it would be a painful exercise with not a\nlot of benefit in the end.\n\nI do agree with Andrew that splitting func.sgml into multiple files\nwould be beneficial.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Apr 2024 15:05:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Tue, Apr 16, 2024 at 03:05:32PM -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think we should work on generating a lot of func.sgml. Particularly the\n> > signature etc should just come from pg_proc.dat, it's pointlessly painful to\n> > generate that by hand. And for a lot of the functions we should probably move\n> > the existing func.sgml comments to the description in pg_proc.dat.\n> \n> Where are you going to get the examples and text descriptions from?\n> (And no, I don't agree that the pg_description string should match\n> what's in the docs. The description string has to be a short\n> one-liner in just about every case.)\n> \n> This sounds to me like it would be a painful exercise with not a\n> lot of benefit in the end.\n\nMaybe we could _verify_ the contents of func.sgml against pg_proc.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 16 Apr 2024 15:29:29 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Hi,\n\nOn 2024-04-16 15:05:32 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think we should work on generating a lot of func.sgml. Particularly the\n> > signature etc should just come from pg_proc.dat, it's pointlessly painful to\n> > generate that by hand. And for a lot of the functions we should probably move\n> > the existing func.sgml comments to the description in pg_proc.dat.\n>\n> Where are you going to get the examples and text descriptions from?\n\nI think there's a few different way to do that. E.g. having long_desc, example\nfields in pg_proc.dat. Or having examples and description in a separate file\nand \"enriching\" that with auto-generated function signatures.\n\n\n> (And no, I don't agree that the pg_description string should match\n> what's in the docs. The description string has to be a short\n> one-liner in just about every case.)\n\nDefinitely shouldn't be the same in all cases, but I think there's a decent\nnumber of cases where they can be the same. 
The differences between the two is\noften minimal today.\n\nEntirely randomly chosen example:\n\n{ oid => '2825',\n descr => 'slope of the least-squares-fit linear equation determined by the (X, Y) pairs',\n proname => 'regr_slope', prokind => 'a', proisstrict => 'f',\n prorettype => 'float8', proargtypes => 'float8 float8',\n prosrc => 'aggregate_dummy' },\n\nand\n\n <row>\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm>\n <primary>regression slope</primary>\n </indexterm>\n <indexterm>\n <primary>regr_slope</primary>\n </indexterm>\n <function>regr_slope</function> ( <parameter>Y</parameter> <type>double precision</type>, <parameter>X</parameter> <type>double precision</type> )\n <returnvalue>double precision</returnvalue>\n </para>\n <para>\n Computes the slope of the least-squares-fit linear equation determined\n by the (<parameter>X</parameter>, <parameter>Y</parameter>)\n pairs.\n </para></entry>\n <entry>Yes</entry>\n </row>\n\n\nThe description is quite similar, the pg_proc entry lacks argument names. \n\n\n> This sounds to me like it would be a painful exercise with not a\n> lot of benefit in the end.\n\nI think the manual work for writing signatures in sgml is not insignificant,\nnor is the volume of sgml for them. Manually maintaining the signatures makes\nit impractical to significantly improve the presentation - which I don't think\nis all that great today.\n\nAnd the lack of argument names in the pg_proc entries is occasionally fairly\nannoying, because a \\df+ doesn't provide enough information to use functions.\n\nIt'd also be quite useful if clients could render more of the documentation\nfor functions. People are used to language servers providing full\ndocumentation for functions etc...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Apr 2024 13:17:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n> > This sounds to me like it would be a painful exercise with not a\n> > lot of benefit in the end.\n>\n> Maybe we could _verify_ the contents of func.sgml against pg_proc.\n>\n\nAll of the functions redefined in catalog/system_functions.sql complicate\nusing pg_proc.dat as a doc generator or source of validation. We'd probably\ndo better to validate against a live instance, and even then the benefit\nwouldn't be great.\n\n> This sounds to me like it would be a painful exercise with not a\n> lot of benefit in the end.\n\nMaybe we could _verify_ the contents of func.sgml against pg_proc.All of the functions redefined in catalog/system_functions.sql complicate using pg_proc.dat as a doc generator or source of validation. We'd probably do better to validate against a live instance, and even then the benefit wouldn't be great.", "msg_date": "Wed, 17 Apr 2024 02:46:53 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n\n> Definitely shouldn't be the same in all cases, but I think there's a decent\n> number of cases where they can be the same. 
The differences between the two is\n> often minimal today.\n>\n> Entirely randomly chosen example:\n>\n> { oid => '2825',\n> descr => 'slope of the least-squares-fit linear equation determined by the (X, Y) pairs',\n> proname => 'regr_slope', prokind => 'a', proisstrict => 'f',\n> prorettype => 'float8', proargtypes => 'float8 float8',\n> prosrc => 'aggregate_dummy' },\n>\n> and\n>\n> <row>\n> <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> <indexterm>\n> <primary>regression slope</primary>\n> </indexterm>\n> <indexterm>\n> <primary>regr_slope</primary>\n> </indexterm>\n> <function>regr_slope</function> ( <parameter>Y</parameter> <type>double precision</type>, <parameter>X</parameter> <type>double precision</type> )\n> <returnvalue>double precision</returnvalue>\n> </para>\n> <para>\n> Computes the slope of the least-squares-fit linear equation determined\n> by the (<parameter>X</parameter>, <parameter>Y</parameter>)\n> pairs.\n> </para></entry>\n> <entry>Yes</entry>\n> </row>\n>\n>\n> The description is quite similar, the pg_proc entry lacks argument names. \n>\n>\n>> This sounds to me like it would be a painful exercise with not a\n>> lot of benefit in the end.\n>\n> I think the manual work for writing signatures in sgml is not insignificant,\n> nor is the volume of sgml for them. Manually maintaining the signatures makes\n> it impractical to significantly improve the presentation - which I don't think\n> is all that great today.\n\nAnd it's very inconsistent. For example, some functions use <optional>\ntags for optional parameters, others use square brackets, and some use\n<literal>VARIADIC</literal> to indicate variadic parameters, others use\nellipses (sometimes in <optional> tags or brackets).\n\n> And the lack of argument names in the pg_proc entries is occasionally fairly\n> annoying, because a \\df+ doesn't provide enough information to use functions.\n\nI was also annoyed by this the other day (specifically wrt. the boolean\narguments to pg_ls_dir), and started whipping up a Perl script to parse\nfunc.sgml and generate missing proargnames values for pg_proc.dat, which\nis how I discovered the above. The script currently has a pile of hacky\nregexes to cope with that, so I'd be happy to submit a doc patch to turn\nit into actual markup to get rid of that, if people think that's a\nworhtwhile use of time and won't clash with any other plans for the\ndocumentation.\n\n> It'd also be quite useful if clients could render more of the documentation\n> for functions. People are used to language servers providing full\n> documentation for functions etc...\n\nA more user-friendly version of \\df+ (maybe spelled \\hf, for symmetry\nwith \\h for commands?) would certainly be nice.\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari\n\n\n", "msg_date": "Wed, 17 Apr 2024 12:07:24 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n> And it's very inconsistent. 
For example, some functions use <optional>\n> tags for optional parameters, others use square brackets, and some use\n> <literal>VARIADIC</literal> to indicate variadic parameters, others use\n> ellipses (sometimes in <optional> tags or brackets).\n\n\nHaving just written a couple of those functions, I wasn't able to find any\nguidance on how to document them with regards to <optional> vs [], etc.\nHaving such a thing would be helpful.\n\nWhile we're throwing out ideas, does it make sense to have function\nparameters and return values be things that can accept COMMENTs? Like so:\n\nCOMMENT ON FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [,\n...] ] ) ] ARGUMENT argname IS '....';\nCOMMENT ON FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [,\n...] ] ) ] RETURN VALUE IS '....';\n\nI don't think this is a great idea, but if we're going to auto-generate\ndocumentation then we've got to store the metadata somewhere, and\npg_proc.dat is already lacking relevant details.\n\nAnd it's very inconsistent.  For example, some functions use <optional>\ntags for optional parameters, others use square brackets, and some use\n<literal>VARIADIC</literal> to indicate variadic parameters, others use\nellipses (sometimes in <optional> tags or brackets).Having just written a couple of those functions, I wasn't able to find any guidance on how to document them with regards to <optional> vs [], etc. Having such a thing would be helpful.While we're throwing out ideas, does it make sense to have function parameters and return values be things that can accept COMMENTs? Like so:COMMENT ON FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] ARGUMENT argname IS '....';COMMENT ON FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] RETURN VALUE IS '....';I don't think this is a great idea, but if we're going to auto-generate documentation then we've got to store the metadata somewhere, and pg_proc.dat is already lacking relevant details.", "msg_date": "Wed, 17 Apr 2024 13:11:50 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Hi,\n\nOn 2024-04-17 02:46:53 -0400, Corey Huinker wrote:\n> > > This sounds to me like it would be a painful exercise with not a\n> > > lot of benefit in the end.\n> >\n> > Maybe we could _verify_ the contents of func.sgml against pg_proc.\n> >\n> \n> All of the functions redefined in catalog/system_functions.sql complicate\n> using pg_proc.dat as a doc generator or source of validation. We'd probably\n> do better to validate against a live instance, and even then the benefit\n> wouldn't be great.\n\nThere are 80 'CREATE OR REPLACE's in system_functions.sql, 1016 occurrences of\nfunc_table_entry in funcs.sgml and 3.3k functions in pg_proc. I'm not saying\nthat differences due to system_functions.sql wouldn't be annoying to deal\nwith, but it'd also be far from the end of the world.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Apr 2024 10:21:34 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Hi,\n\nOn 2024-04-17 12:07:24 +0100, Dagfinn Ilmari Manns�ker wrote:\n> Andres Freund <[email protected]> writes:\n> > I think the manual work for writing signatures in sgml is not insignificant,\n> > nor is the volume of sgml for them. 
Manually maintaining the signatures makes\n> > it impractical to significantly improve the presentation - which I don't think\n> > is all that great today.\n> \n> And it's very inconsistent. For example, some functions use <optional>\n> tags for optional parameters, others use square brackets, and some use\n> <literal>VARIADIC</literal> to indicate variadic parameters, others use\n> ellipses (sometimes in <optional> tags or brackets).\n\nThat seems almost inevitably the outcome of many people having to manually\ninfer the recommended semantics, for writing something boring but nontrivial,\nfrom a 30k line file.\n\n\n> > And the lack of argument names in the pg_proc entries is occasionally fairly\n> > annoying, because a \\df+ doesn't provide enough information to use functions.\n> \n> I was also annoyed by this the other day (specifically wrt. the boolean\n> arguments to pg_ls_dir),\n\nMy bane is regexp_match et al, I have given up on remembering the argument\norder.\n\n\n> and started whipping up a Perl script to parse func.sgml and generate\n> missing proargnames values for pg_proc.dat, which is how I discovered the\n> above.\n\nNice.\n\n\n> The script currently has a pile of hacky regexes to cope with that,\n> so I'd be happy to submit a doc patch to turn it into actual markup to get\n> rid of that, if people think that's a worhtwhile use of time and won't clash\n> with any other plans for the documentation.\n\nI guess it's a bit hard to say without knowing how voluminous the changes\nwould be. If we end up rewriting the whole file the tradeoff is less clear\nthan if it's a dozen inconsistent entries.\n\n\n> > It'd also be quite useful if clients could render more of the documentation\n> > for functions. People are used to language servers providing full\n> > documentation for functions etc...\n> \n> A more user-friendly version of \\df+ (maybe spelled \\hf, for symmetry\n> with \\h for commands?) would certainly be nice.\n\nIndeed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Apr 2024 10:28:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n\n> Hi,\n>\n> On 2024-04-17 12:07:24 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Andres Freund <[email protected]> writes:\n>> > I think the manual work for writing signatures in sgml is not insignificant,\n>> > nor is the volume of sgml for them.
the boolean\n>> arguments to pg_ls_dir),\n>\n> My bane is regexp_match et al, I have given up on remembering the argument\n> order.\n\nThere's a thread elsewhere about those specifically, but I can't be\nbothered to find the link right now.\n\n>> and started whipping up a Perl script to parse func.sgml and generate\n>> missing proargnames values for pg_proc.dat, which is how I discovered the\n>> above.\n>\n> Nice.\n>\n>> The script currently has a pile of hacky regexes to cope with that,\n>> so I'd be happy to submit a doc patch to turn it into actual markup to get\n>> rid of that, if people think that's a worhtwhile use of time and won't clash\n>> with any other plans for the documentation.\n>\n> I guess it's a bit hard to say without knowing how voluminious the changes\n> would be. If we end up rewriting the whole file the tradeoff is less clear\n> than if it's a dozen inconsistent entries.\n\nIt turned out to not be that many that used [] for optional parameters,\nsee the attached patch. \n\nI havent dealt with variadic yet, since the two styles are visually\ndifferent, not just markup (<optional>...</optional> renders as [...]).\n\nThe two styles for variadic are the what I call caller-style:\n\n concat ( val1 \"any\" [, val2 \"any\" [, ...] ] )\n format(formatstr text [, formatarg \"any\" [, ...] ])\n\nwhich shows more clearly how you'd call it, versus definition-style:\n\n num_nonnulls ( VARIADIC \"any\" )\n jsonb_extract_path ( from_json jsonb, VARIADIC path_elems text[] )\n\nwhich matches the CREATE FUNCTION statement. I don't have a strong\nopinion on which we should use, but we should be consistent.\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari", "msg_date": "Wed, 17 Apr 2024 19:37:21 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Apr 18, 2024 at 2:37 AM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> Andres Freund <[email protected]> writes:\n>\n> > Hi,\n> >\n> > On 2024-04-17 12:07:24 +0100, Dagfinn Ilmari Mannsåker wrote:\n> >> Andres Freund <[email protected]> writes:\n> >> > I think the manual work for writing signatures in sgml is not insignificant,\n> >> > nor is the volume of sgml for them. Manually maintaining the signatures makes\n> >> > it impractical to significantly improve the presentation - which I don't think\n> >> > is all that great today.\n> >>\n> >> And it's very inconsistent. For example, some functions use <optional>\n> >> tags for optional parameters, others use square brackets, and some use\n> >> <literal>VARIADIC</literal> to indicate variadic parameters, others use\n> >> ellipses (sometimes in <optional> tags or brackets).\n> >\n> > That seems almost inevitably the outcome of many people having to manually\n> > infer the recommended semantics, for writing something boring but nontrivial,\n> > from a 30k line file.\n>\n> As Corey mentioned elsethread, having a markup style guide (maybe a\n> comment at the top of the file?) would be nice.\n>\n> >> > And the lack of argument names in the pg_proc entries is occasionally fairly\n> >> > annoying, because a \\df+ doesn't provide enough information to use functions.\n> >>\n> >> I was also annoyed by this the other day (specifically wrt. 
the boolean\n> >> arguments to pg_ls_dir),\n> >\n> > My bane is regexp_match et al, I have given up on remembering the argument\n> > order.\n>\n> There's a thread elsewhere about those specifically, but I can't be\n> bothered to find the link right now.\n>\n> >> and started whipping up a Perl script to parse func.sgml and generate\n> >> missing proargnames values for pg_proc.dat, which is how I discovered the\n> >> above.\n> >\n> > Nice.\n> >\n> >> The script currently has a pile of hacky regexes to cope with that,\n> >> so I'd be happy to submit a doc patch to turn it into actual markup to get\n> >> rid of that, if people think that's a worhtwhile use of time and won't clash\n> >> with any other plans for the documentation.\n> >\n> > I guess it's a bit hard to say without knowing how voluminious the changes\n> > would be. If we end up rewriting the whole file the tradeoff is less clear\n> > than if it's a dozen inconsistent entries.\n>\n> It turned out to not be that many that used [] for optional parameters,\n> see the attached patch.\n>\n\nhi.\nI manually checked the html output. It looks good to me.\n\n\n", "msg_date": "Thu, 18 Apr 2024 21:28:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n> I havent dealt with variadic yet, since the two styles are visually\n> different, not just markup (<optional>...</optional> renders as [...]).\n>\n> The two styles for variadic are the what I call caller-style:\n>\n> concat ( val1 \"any\" [, val2 \"any\" [, ...] ] )\n> format(formatstr text [, formatarg \"any\" [, ...] ])\n>\n\nWhile this style is obviously clumsier for us to compose, it does avoid\nrelying on the user understanding what the word variadic means. Searching\nthrough online documentation of the python *args parameter, the word\nvariadic never comes up, the closest they get is \"variable length\nargument\". I realize that python is not SQL, but I think it's a good point\nof reference for what concepts the average reader is likely to know.\n\nLooking at the patch, I think it is good, though I'd consider doing some\nindentation for the nested <optional>s to allow the author to do more\nvisual tag-matching. The ']'s were sufficiently visually distinct that we\ndidn't really need or want nesting, but <optional> is just another tag to\nmy eyes in a sea of tags.\n\nI havent dealt with variadic yet, since the two styles are visually\ndifferent, not just markup (<optional>...</optional> renders as [...]).\n\nThe two styles for variadic are the what I call caller-style:\n\n   concat ( val1 \"any\" [, val2 \"any\" [, ...] ] )\n   format(formatstr text [, formatarg \"any\" [, ...] ])While this style is obviously clumsier for us to compose, it does avoid relying on the user understanding what the word variadic means. Searching through online documentation of the python *args parameter, the word variadic never comes up, the closest they get is \"variable length argument\". I realize that python is not SQL, but I think it's a good point of reference for what concepts the average reader is likely to know.Looking at the patch, I think it is good, though I'd consider doing some indentation for the nested <optional>s to allow the author to do more visual tag-matching. 
The ']'s were sufficiently visually distinct that we didn't really need or want nesting, but <optional> is just another tag to my eyes in a sea of tags.", "msg_date": "Thu, 18 Apr 2024 13:51:10 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Corey Huinker <[email protected]> writes:\n\n>>\n>> I havent dealt with variadic yet, since the two styles are visually\n>> different, not just markup (<optional>...</optional> renders as [...]).\n>>\n>> The two styles for variadic are the what I call caller-style:\n>>\n>>    concat ( val1 \"any\" [, val2 \"any\" [, ...] ] )\n>>    format(formatstr text [, formatarg \"any\" [, ...] ])\n>>\n>\n> While this style is obviously clumsier for us to compose, it does avoid\n> relying on the user understanding what the word variadic means. Searching\n> through online documentation of the python *args parameter, the word\n> variadic never comes up, the closest they get is \"variable length\n> argument\". I realize that python is not SQL, but I think it's a good point\n> of reference for what concepts the average reader is likely to know.\n\nYeah, we can't expect everyone wanting to call a built-in function to\nknow how they would define an equivalent one themselves. In that case I\npropose marking it up like this:\n\n    <function>format</function> (\n    <parameter>formatstr</parameter> <type>text</type>\n    <optional>, <parameter>formatarg</parameter> <type>\"any\"</type>\n    <optional>, ...</optional> </optional> )\n    <returnvalue>text</returnvalue>\n\n\n> Looking at the patch, I think it is good, though I'd consider doing some\n> indentation for the nested <optional>s to allow the author to do more\n> visual tag-matching. The ']'s were sufficiently visually distinct that we\n> didn't really need or want nesting, but <optional> is just another tag to\n> my eyes in a sea of tags.\n\nThe requisite nesting when there are multiple optional parameters makes\nit annoying to wrap and indent it \"properly\" per XML convention, but how\nabout something like this, with each parameter on a line of its own, and\nall the closing </optional> tags on one line?\n\n    <function>regexp_substr</function> (\n    <parameter>string</parameter> <type>text</type>,\n    <parameter>pattern</parameter> <type>text</type>\n    <optional>, <parameter>start</parameter> <type>integer</type>\n    <optional>, <parameter>N</parameter> <type>integer</type>\n    <optional>, <parameter>flags</parameter> <type>text</type>\n    <optional>, <parameter>subexpr</parameter> <type>integer</type>\n    </optional> </optional> </optional> </optional> )\n    <returnvalue>text</returnvalue>\n\nA lot of functions mostly follow this style, except they tend to put the\nfirst parameter on the same line of the function name, even when that\nmakes the line overly long.
I propose going the other way, with each\nparameter on a line of its own, even if the first one would fit after\nthe function name, except the whole parameter list fits after the\nfunction name.\n\nAlso, when there's only one optional argument, or they're independently\noptional, not nested, the </optional> tag should go on the same line as\nthe parameter.\n\n <function>substring</function> (\n <parameter>bits</parameter> <type>bit</type>\n <optional> <literal>FROM</literal> <parameter>start</parameter> <type>integer</type> </optional>\n <optional> <literal>FOR</literal> <parameter>count</parameter> <type>integer</type> </optional> )\n <returnvalue>bit</returnvalue>\n\n\nI'm not quite sure what to with things like json_object which have even\nmore complex nexting of optional parameters, but I do think the current\n200+ character lines are too long.\n\n- ilmari\n\n\n", "msg_date": "Thu, 18 Apr 2024 22:34:26 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n> Yeah, we can't expect everyone wanting to call a built-in function to\n> know how they would define an equivalent one themselves. In that case I\n> propos marking it up like this:\n>\n> <function>format</function> (\n> <parameter>formatstr</parameter> <type>text</type>\n> <optional>, <parameter>formatarg</parameter> <type>\"any\"</type>\n> <optional>, ...</optional> </optional> )\n> <returnvalue>text</returnvalue>\n>\n\nLooks good, but I guess I have to ask: is there a parameter-list tag out\nthere instead of (, and should we be using that?\n\n\n\n> The requisite nesting when there are multiple optional parameters makes\n> it annoying to wrap and indent it \"properly\" per XML convention, but how\n> about something like this, with each parameter on a line of its own, and\n> all the closing </optional> tags on one line?\n>\n> <function>regexp_substr</function> (\n> <parameter>string</parameter> <type>text</type>,\n> <parameter>pattern</parameter> <type>text</type>\n> <optional>, <parameter>start</parameter> <type>integer</type>\n> <optional>, <parameter>N</parameter> <type>integer</type>\n> <optional>, <parameter>flags</parameter> <type>text</type>\n> <optional>, <parameter>subexpr</parameter> <type>integer</type>\n> </optional> </optional> </optional> </optional> )\n> <returnvalue>text</returnvalue>\n>\n\nYes, that has an easy count-the-vertical, count-the-horizontal,\ndo-they-match flow to it.\n\n\n> A lot of functions mostly follow this style, except they tend to put the\n> first parameter on the same line of the function namee, even when that\n> makes the line overly long. I propose going the other way, with each\n> parameter on a line of its own, even if the first one would fit after\n> the function name, except the whole parameter list fits after the\n> function name.\n>\n\n+1\n\n\n>\n> Also, when there's only one optional argument, or they're independently\n> optional, not nested, the </optional> tag should go on the same line as\n> the parameter.\n>\n> <function>substring</function> (\n> <parameter>bits</parameter> <type>bit</type>\n> <optional> <literal>FROM</literal> <parameter>start</parameter>\n> <type>integer</type> </optional>\n> <optional> <literal>FOR</literal> <parameter>count</parameter>\n> <type>integer</type> </optional> )\n> <returnvalue>bit</returnvalue>\n>\n\n+1\n\nYeah, we can't expect everyone wanting to call a built-in function to\nknow how they would define an equivalent one themselves. 
In that case I\npropos marking it up like this:\n\n    <function>format</function> (\n    <parameter>formatstr</parameter> <type>text</type>\n    <optional>, <parameter>formatarg</parameter> <type>\"any\"</type>\n    <optional>, ...</optional> </optional> )\n    <returnvalue>text</returnvalue>Looks good, but I guess I have to ask: is there a parameter-list tag out there instead of (, and should we be using that? The requisite nesting when there are multiple optional parameters makes\nit annoying to wrap and indent it \"properly\" per XML convention, but how\nabout something like this, with each parameter on a line of its own, and\nall the closing </optional> tags on one line?\n\n    <function>regexp_substr</function> (\n    <parameter>string</parameter> <type>text</type>,\n    <parameter>pattern</parameter> <type>text</type>\n    <optional>, <parameter>start</parameter> <type>integer</type>\n    <optional>, <parameter>N</parameter> <type>integer</type>\n    <optional>, <parameter>flags</parameter> <type>text</type>\n    <optional>, <parameter>subexpr</parameter> <type>integer</type>\n    </optional> </optional> </optional> </optional> )\n    <returnvalue>text</returnvalue>Yes, that has an easy count-the-vertical, count-the-horizontal, do-they-match flow to it. A lot of functions mostly follow this style, except they tend to put the\nfirst parameter on the same line of the function namee, even when that\nmakes the line overly long. I propose going the other way, with each\nparameter on a line of its own, even if the first one would fit after\nthe function name, except the whole parameter list fits after the\nfunction name.+1 \n\nAlso, when there's only one optional argument, or they're independently\noptional, not nested, the </optional> tag should go on the same line as\nthe parameter.\n\n    <function>substring</function> (\n    <parameter>bits</parameter> <type>bit</type>\n    <optional> <literal>FROM</literal> <parameter>start</parameter> <type>integer</type> </optional>\n    <optional> <literal>FOR</literal> <parameter>count</parameter> <type>integer</type> </optional> )\n    <returnvalue>bit</returnvalue>+1", "msg_date": "Thu, 18 Apr 2024 20:57:17 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Wed, Apr 17, 2024 at 7:07 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n>\n> > It'd also be quite useful if clients could render more of the documentation\n> > for functions. People are used to language servers providing full\n> > documentation for functions etc...\n>\n> A more user-friendly version of \\df+ (maybe spelled \\hf, for symmetry\n> with \\h for commands?) 
would certainly be nice.\n>\n\nI think `\\hf` is useful.\notherwise people first need google to find out the function html page,\nthen need Ctrl + F to locate specific function entry.\n\nfor \\hf\nwe may need to offer a doc url link.\nbut currently many functions are unlinkable in the doc.\nAlso one section can have many sections.\nI guess just linking directly to a nearby position in a html page\nshould be fine.\n\n\nWe can also add a url for functions decorated as underscore\nlike mysql (https://dev.mysql.com/doc/refman/8.3/en/string-functions.html#function_concat).\nI am not sure it is an elegant solution.\n\n\n", "msg_date": "Fri, 19 Apr 2024 19:54:43 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Apr 15, 2024 at 1:00 PM jian he <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 5:40 AM Andrew Dunstan <[email protected]> wrote:\n> >\n> >\n> > +many for improving the index.\n> >\n> > My own pet docs peeve is a purely editorial one: func.sgml is a 30k line beast, and I think there's a good case for splitting out at least the larger chunks of it.\n> >\n>\n> I think I successfully reduced func.sgml from 311322 lines to 13167 lines.\n> (base-commit: 93582974315174d544592185d797a2b44696d1e5)\n>\n> writing a patch would be unreviewable.\n\nI've splitted it to7 patches.\neach patch split one <sect1> into separate new files.\n\n> func-string.sgml\n> func-matching.sgml\n> func-datetime.sgml\n> func-json.sgml\n> func-aggregate.sgml\n> func-info.sgml\n> func-admin.sgml\n\nthe above will be newly created files, each corresponding to related\nindividual patches.", "msg_date": "Sun, 28 Apr 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n> I've splitted it to7 patches.\n> each patch split one <sect1> into separate new files.\n>\n\nSeems like a good start. Looking at the diffs of these, I wonder if we\nwould be better off with a func/ directory, each function gets its own file\nin that dir, and either these files above include the individual files, or\nthe original func.sgml just becomes the organizer of all the functions.\nThat would allow us to do future reorganizations with minimal churn, make\nvalidation of this patch a bit more straightforward, and make it easier for\nfuture editors to find the function they need to edit.\n\nI've splitted it to7 patches.\neach patch split one <sect1> into separate new files.Seems like a good start. Looking at the diffs of these, I wonder if we would be better off with a func/ directory, each function gets its own file in that dir, and either these files above include the individual files, or the original func.sgml just becomes the organizer of all the functions. That would allow us to do future reorganizations with minimal churn, make validation of this patch a bit more straightforward, and make it easier for future editors to find the function they need to edit.", "msg_date": "Mon, 29 Apr 2024 01:17:39 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Apr 29, 2024 at 1:17 PM Corey Huinker <[email protected]> wrote:\n>>\n>> I've splitted it to7 patches.\n>> each patch split one <sect1> into separate new files.\n>\n>\n> Seems like a good start. 
Looking at the diffs of these, I wonder if we would be better off with a func/ directory, each function gets its own file in that dir, and either these files above include the individual files, or the original func.sgml just becomes the organizer of all the functions. That would allow us to do future reorganizations with minimal churn, make validation of this patch a bit more straightforward, and make it easier for future editors to find the function they need to edit.\n\nlooking back.\nThe patch is big. no convenient way to review/validate it.\nso I created a python script to automate it.\nwe can review the python script.\n(just googling around, I know little about python).\n\n* create new files for holding the content.\nfunc-string.sgml\nfunc-matching.sgml\nfunc-datetime.sgml\nfunc-json.sgml\nfunc-aggregate.sgml\nfunc-info.sgml\nfunc-admin.sgml\n\n* locate parts that need to copy paste to a newly created file, based\non line number.\nline number pattern is mentioned here:\nhttp://postgr.es/m/CACJufxEcMjjn-m6fpC2wXHsQbE5nyd%3Dxt6k-jDizBVUKK6O4KQ%40mail.gmail.com\n\n* insert placeholder string in func.sgml:\n&func-string;\n&func-matching;\n&func-datetime;\n&func-json;\n&func-aggregate;\n&func-info;\n&func-admin;\n\n* copy the parts to new files.\n\n* validate newly created file. (must only have 2 occurrences of \"sect1\").\n\n* delete the parts from func.sgml files, since they already copy to new files.\nsed --in-place \"2408,4180d ; 5330,7760d ; 8942,11127d ; 15502,19436d ;\n21567,22985d ; 24346,28017d ; 28020,30714d \" func.sgml\n\n* manually change doc/src/sgml/filelist.sgml\n <!ENTITY func SYSTEM \"func.sgml\">\n+<!ENTITY func-string SYSTEM \"func-string.sgml\">\n+<!ENTITY func-matching SYSTEM \"func-matching.sgml\">\n+<!ENTITY func-datetime SYSTEM \"func-datetime.sgml\">\n+<!ENTITY func-json SYSTEM \"func-json.sgml\">\n+<!ENTITY func-aggregate SYSTEM \"func-aggregate.sgml\">\n+<!ENTITY func-info SYSTEM \"func-info.sgml\">\n+<!ENTITY func-admin SYSTEM \"func-admin.sgml\">\n\n\n\n2 requirements.\n1. manual change doc/src/sgml/filelist.sgml as mentioned before;\n2. in python script, at line 35, i use\n\"os.chdir(\"/home/jian/Desktop/pg_src/src7/postgres/doc/src/sgml\")\"\nyou need to change to your \"doc/src/sgml\" directory accordingly.", "msg_date": "Mon, 15 Jul 2024 14:35:36 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">\n>\n> looking back.\n> The patch is big. no convenient way to review/validate it.\n>\n\nPerhaps we can break up the patches as follows:\n\n1. create the filelist.sgml entries, and create new files as you detailed,\nempty with func.sgml still managing the sections, but each section now has\nit's corresponding &func-something; The files are included, but they're\ncompletely empty.\n\n2 to 999+. one commit per function moved from func.sgml to it's\ncorresponding func-something.sgml.\n\nIt'll be a ton of commits, but each one will be very easy to review.\n\nAlternately, if we put each function in its own file, there would be a\nseries of commits, one per function like such:\n\n1. create the new func-my-function.sgml, copying the definition of the same\nname\n2. delete the definition in func.sgml, replaced with the &func-include;\n3. new entry in the filelist.\n\nThis approach looks (and IS) tedious, but it has several key advantages:\n\n1. Each one is very easy to review.\n2. Big reduction in future merge conflicts on func.sgml.\n3. 
location of a given functions docs is now trivial.\n4. separation of concerns with regard to content of function def vs\nplacement of same.\n5. Easy to ensure that all functions have an anchor.\n6. The effort can stall and be resumed at our own pace.\n\nPerhaps your python script can be adapted to this approach? I'm willing to\nreview, or collaborate, or both.\n\n", "msg_date": "Thu, 18 Jul 2024 16:16:37 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On 2024-07-18 Th 4:16 PM, Corey Huinker wrote:\n>\n>\n> looking back.\n> The patch is big. no convenient way to review/validate it.\n>\n>\n> Perhaps we can break up the patches as follows:\n>\n> 1. create the filelist.sgml entries, and create new files as you \n> detailed, empty with func.sgml still managing the sections, but each \n> section now has it's corresponding &func-something; The files are \n> included, but they're completely empty.\n>\n> 2 to 999+. one commit per function moved from func.sgml to it's \n> corresponding func-something.sgml.\n>\n> It'll be a ton of commits, but each one will be very easy to review.\n>\n> Alternately, if we put each function in its own file, there would be a \n> series of commits, one per function like such:\n>\n> 1. create the new func-my-function.sgml, copying the definition of the \n> same name\n> 2. delete the definition in func.sgml, replaced with the &func-include;\n> 3. new entry in the filelist.\n>\n> This approach looks (and IS) tedious, but it has several key advantages:\n>\n> 1. Each one is very easy to review.\n> 2. Big reduction in future merge conflicts on func.sgml.\n> 3. location of a given functions docs is now trivial.\n> 4. separation of concerns with regard to content of function def vs \n> placement of same.\n> 5. Easy to ensure that all functions have an anchor.\n> 6. The effort can stall and be resumed at our own pace.\n>\n> Perhaps your python script can be adapted to this approach? I'm \n> willing to review, or collaborate, or both.\n\n\nI'm opposed to having a separate file for every function. I think \nbreaking up func.sgml into one piece per sect1 is about right.
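\nThe mechanical part of that split is small, for what it's worth. Here is a rough,\nuntested sketch in Python (not the script posted upthread; it assumes every <sect1>\nin func.sgml has an id of the form functions-foo and that sect1s do not nest):\n\nimport re\nfrom pathlib import Path\n\nsgml = Path(\"doc/src/sgml\")\ntext = (sgml / \"func.sgml\").read_text()\npieces = []\npos = 0\nfor m in re.finditer(r'<sect1 id=\"functions-([a-z-]+)\">.*?</sect1>', text, re.S):\n    name = \"func-\" + m.group(1)\n    # copy the whole <sect1> into its own file ...\n    (sgml / (name + \".sgml\")).write_text(m.group(0))\n    # ... and leave an entity reference behind in func.sgml\n    pieces.append(text[pos:m.start()] + \"&\" + name + \";\")\n    pos = m.end()\npieces.append(text[pos:])\n(sgml / \"func.sgml\").write_text(\"\".join(pieces))\n# matching <!ENTITY func-foo SYSTEM \"func-foo.sgml\"> declarations still have to be\n# added to filelist.sgml by hand.\n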
If that \nproves cumbersome still we can look at breaking it up further, but let's \nstart with that.\n\nMore concretely, sometimes the bits that relate to a particular function \nare not contiguous. e.g. you might have an entry in a table for a \nfunction and then at the end Notes relating to that function. That make \ndoing a file per function not very practical.\n\nAlso, being able to view the context for your function documentation is \nuseful when editing.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-18 Th 4:16 PM, Corey Huinker\n wrote:\n\n\n\n\n\n\n looking back.\n The patch is big. no convenient way to review/validate it.\n\n\n Perhaps we can break up the patches as follows:\n\n\n1. create the filelist.sgml entries, and create new files\n as you detailed, empty with func.sgml still managing the\n sections, but each section now has it's corresponding\n &func-something; The files are included, but they're\n completely empty.\n\n\n2 to 999+. one commit per function moved from func.sgml\n to it's corresponding func-something.sgml.\n\n\nIt'll be a ton of commits, but each one will be very easy\n to review.\n\n\nAlternately, if we put each function in its own file,\n there would be a series of commits, one per function like\n such:\n\n\n1. create the new func-my-function.sgml, copying the\n definition of the same name\n2. delete the definition in func.sgml, replaced with the\n &func-include;\n3. new entry in the filelist.\n\n\nThis approach looks (and IS) tedious, but it has several\n key advantages:\n\n\n1. Each one is very easy to review.\n 2. Big reduction in future merge conflicts on func.sgml.\n3. location of a given functions docs is now trivial.\n4. separation of concerns with regard to content of\n function def vs placement of same.\n5. Easy to ensure that all functions have an anchor.\n6. The effort can stall and be resumed at our own pace.\n\n\nPerhaps your python script can be adapted to this\n approach? I'm willing to review, or collaborate, or both.\n\n\n\n\n\n\nI'm opposed to having a separate file for every function. I think\n breaking up func.sgml into one piece per sect1 is about right. If\n that proves cumbersome still we can look at breaking it up\n further, but let's start with that. \n\nMore concretely, sometimes the bits that relate to a particular\n function are not contiguous. e.g. you might have an entry in a\n table for a function and then at the end Notes relating to that\n function. That make doing a file per function not very practical.\nAlso, being able to view the context for your function\n documentation is useful when editing.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 18 Jul 2024 16:33:08 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> I'm opposed to having a separate file for every function.\n\nI also think that would be a disaster. It would result in a huge\nnumber of files which would make global editing (eg markup changes)\nreally painful, and it by no means flattens the document structure.\nSomewhere there will still need to be decisions that functions\nA, B, C go into one documentation section while D, E, F go somewhere\nelse.\n\n> I think \n> breaking up func.sgml into one piece per sect1 is about right. 
If that \n> proves cumbersome still we can look at breaking it up further, but let's \n> start with that.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:40:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "> I'm opposed to having a separate file for every function. I think\n> breaking up func.sgml into one piece per sect1 is about right. If that\n> proves cumbersome still we can look at breaking it up further, but\n> let's start with that.\n\nThat will create at least 30 func-xx.sgml files.\n\nt-ishii$ grep '<sect1' func.sgml|wc -l\n30\n\nI am afraid that's too many?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:06:22 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Thu, Jul 18, 2024 at 8:06 PM Tatsuo Ishii <[email protected]> wrote:\n\n> > I'm opposed to having a separate file for every function. I think\n> > breaking up func.sgml into one piece per sect1 is about right. If that\n> > proves cumbersome still we can look at breaking it up further, but\n> > let's start with that.\n>\n> That will create at least 30 func-xx.sgml files.\n>\n> t-ishii$ grep '<sect1' func.sgml|wc -l\n> 30\n>\n> I am afraid that's too many?\n>\n>\nThe premise and the resultant number of files both seem reasonable to me.\nI could get that number down to maybe 20 if pressed but I don't see any\nbenefit to doing so. I look at a page on the website that needs updating\nthen go open its source file. Nice and tidy.\n\nDavid J.\n\nOn Thu, Jul 18, 2024 at 8:06 PM Tatsuo Ishii <[email protected]> wrote:> I'm opposed to having a separate file for every function. I think\n> breaking up func.sgml into one piece per sect1 is about right. If that\n> proves cumbersome still we can look at breaking it up further, but\n> let's start with that.\n\nThat will create at least 30 func-xx.sgml files.\n\nt-ishii$ grep '<sect1' func.sgml|wc -l\n30\n\nI am afraid that's too many?The premise and the resultant number of files both seem reasonable to me.  I could get that number down to maybe 20 if pressed but I don't see any benefit to doing so. I look at a page on the website that needs updating then go open its source file.  Nice and tidy.David J.", "msg_date": "Thu, 18 Jul 2024 21:07:20 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Jul 19, 2024 at 12:07 PM David G. Johnston\n<[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 8:06 PM Tatsuo Ishii <[email protected]> wrote:\n>>\n>> > I'm opposed to having a separate file for every function. I think\n>> > breaking up func.sgml into one piece per sect1 is about right. If that\n>> > proves cumbersome still we can look at breaking it up further, but\n>> > let's start with that.\n>>\n>> That will create at least 30 func-xx.sgml files.\n>>\n>> t-ishii$ grep '<sect1' func.sgml|wc -l\n>> 30\n>>\n>> I am afraid that's too many?\n>>\n>\n> The premise and the resultant number of files both seem reasonable to me. I could get that number down to maybe 20 if pressed but I don't see any benefit to doing so. I look at a page on the website that needs updating then go open its source file. Nice and tidy.\n>\n\nhi. 
my python test.py script in [1] cut func.sgml from 31k lines to\n13k lines by putting some contents to these 7 new files.\n new file: func-admin.sgml\n new file: func-aggregate.sgml\n new file: func-datetime.sgml\n new file: func-info.sgml\n new file: func-json.sgml\n new file: func-matching.sgml\n new file: func-string.sgml\n\ndoc/src/sgml/filelist.sgml changes are addressed in the attached patch.\ncutting func.sgml into half is ok for me.\n\n\nin test.py , at line 35, i use\n\"os.chdir(\"/home/jian/Desktop/pg_src/src7/postgres/doc/src/sgml\")\"\nyou need to change to your corresponding \"doc/src/sgml\" directory.\noverall, you need apply the attached patch, change test.py line 35.\n\n\n\nmain gotcha is based on pattern mentioned in [2] and\nindex layout from https://www.postgresql.org/docs/devel/functions.html\n\n[1] https://postgr.es/m/CACJufxH%2BYi521QrncwnW4sFGOhPmJQpsmoJ%2BYnj%2BVpHu5wAahQ%40mail.gmail.com\\\n[2] http://postgr.es/m/CACJufxEcMjjn-m6fpC2wXHsQbE5nyd%3Dxt6k-jDizBVUKK6O4KQ%40mail.gmail.com", "msg_date": "Fri, 19 Jul 2024 14:55:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Thu, Jul 18, 2024 at 8:06 PM Tatsuo Ishii <[email protected]> wrote:\n>>> I'm opposed to having a separate file for every function. I think\n>>> breaking up func.sgml into one piece per sect1 is about right.\n\n>> That will create at least 30 func-xx.sgml files.\n>> I am afraid that's too many?\n\n> The premise and the resultant number of files both seem reasonable to me.\n\nI agree. The hundreds that would result from file-per-function, or\nanything close to that, would be too many. But I can deal with\nfile-per-sect1. For context, I count currently 167 sgml/*.sgml files\nplus 219 ref/*.sgml, so adding 30 more would be an 8% increase.\n\nDo we want to use a \"func-\" prefix on the file names? I could\nimagine dispensing with that as unnecessary; or another idea\ncould be to make a new subdirectory func/ for these.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:22:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "> Do we want to use a \"func-\" prefix on the file names? I could\n> imagine dispensing with that as unnecessary;\n\nIf we don't use the prefix and we generate new file names from sect1\ntag, we could have file name collision: for example, json.sgml because\nthere's sect1 tag \"functions-json\".\n\n> or another idea\n> could be to make a new subdirectory func/ for these.\n\n+1. Looks better than adding +30 files right under sgml directory.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 20 Jul 2024 05:00:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Jul 19, 2024 at 1:01 PM Tatsuo Ishii <[email protected]> wrote:\n\n> > Do we want to use a \"func-\" prefix on the file names? I could\n> > imagine dispensing with that as unnecessary;\n>\n> If we don't use the prefix and we generate new file names from sect1\n> tag, we could have file name collision: for example, json.sgml because\n> there's sect1 tag \"functions-json\".\n>\n> > or another idea\n> > could be to make a new subdirectory func/ for these.\n>\n> +1. 
Looks better than adding +30 files right under sgml directory.\n>\n>\n+many for placing these under a subdirectory.\n\nIMO the file name should match the ID of the sect1 element with the leading\n\"functions-\" removed, naming the directory \"functions\". Thus when viewing\nthe web page the corresponding sgml file is determinable.\n\nDavid J.\n\nOn Fri, Jul 19, 2024 at 1:01 PM Tatsuo Ishii <[email protected]> wrote:> Do we want to use a \"func-\" prefix on the file names?  I could\n> imagine dispensing with that as unnecessary;\n\nIf we don't use the prefix and we generate new file names from sect1\ntag, we could have file name collision: for example, json.sgml because\nthere's sect1 tag \"functions-json\".\n\n> or another idea\n> could be to make a new subdirectory func/ for these.\n\n+1. Looks better than adding +30 files right under sgml directory.+many for placing these under a subdirectory.IMO the file name should match the ID of the sect1 element with the leading \"functions-\" removed, naming the directory \"functions\".  Thus when viewing the web page the corresponding sgml file is determinable.David J.", "msg_date": "Fri, 19 Jul 2024 13:11:46 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> +many for placing these under a subdirectory.\n\n> IMO the file name should match the ID of the sect1 element with the leading\n> \"functions-\" removed, naming the directory \"functions\". Thus when viewing\n> the web page the corresponding sgml file is determinable.\n\nI'd go for shorter myself (ie \"func/\"), mainly due to the precedent\nof the existing subdirectory which is \"ref/\" not \"reference/\".\nIt's hardly a big deal though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jul 2024 16:17:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": ">> IMO the file name should match the ID of the sect1 element with the leading\n>> \"functions-\" removed, naming the directory \"functions\". Thus when viewing\n>> the web page the corresponding sgml file is determinable.\n> \n> I'd go for shorter myself (ie \"func/\"), mainly due to the precedent\n> of the existing subdirectory which is \"ref/\" not \"reference/\".\n> It's hardly a big deal though.\n\nI don't have strong preference neither but I agree that \"func/\" is\nmore consistent with existing subdirectory names.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 20 Jul 2024 09:47:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Fri, Jul 19, 2024 at 5:47 PM Tatsuo Ishii <[email protected]> wrote:\n\n> >> IMO the file name should match the ID of the sect1 element with the\n> leading\n> >> \"functions-\" removed, naming the directory \"functions\". Thus when\n> viewing\n> >> the web page the corresponding sgml file is determinable.\n> >\n> > I'd go for shorter myself (ie \"func/\"), mainly due to the precedent\n> > of the existing subdirectory which is \"ref/\" not \"reference/\".\n> > It's hardly a big deal though.\n>\n> I don't have strong preference neither but I agree that \"func/\" is\n> more consistent with existing subdirectory names.\n>\n>\nWorks for me. 
I was definitely on the fence between the two.\n\nDavid J.\n\nOn Fri, Jul 19, 2024 at 5:47 PM Tatsuo Ishii <[email protected]> wrote:>> IMO the file name should match the ID of the sect1 element with the leading\n>> \"functions-\" removed, naming the directory \"functions\".  Thus when viewing\n>> the web page the corresponding sgml file is determinable.\n> \n> I'd go for shorter myself (ie \"func/\"), mainly due to the precedent\n> of the existing subdirectory which is \"ref/\" not \"reference/\".\n> It's hardly a big deal though.\n\nI don't have strong preference neither but I agree that \"func/\" is\nmore consistent with existing subdirectory names.Works for me.  I was definitely on the fence between the two.David J.", "msg_date": "Fri, 19 Jul 2024 18:00:18 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "1. manually change line 4 in split_func_sgml.py and run the script.\n2. git apply v6-0001-all-filelist-for-directory-doc-src-sgml-func.patch\nnow you can \"ninja doc/src/sgml/html\"\nThe logic is simple, but I am using verbose variable names, that's why\nsplit_func_sgml.py size is large.\n\n\nadding a new directory: doc/src/sgml/func\nnew directory files: doc/src/sgml/func/allfiles.sgml and others.\nFor filenames, we can use \"func-pattern\" or just \"pattern\". right now,\nI use the prefix \"func\".\neach newly created file validates by: only 2 \"sect1\" exists, each file\nhas only \"<sect1 id=\" pattern.\n\n\nthe following ar content of doc/src/sgml/func/allfiles.sgml\n\n<!--\ndoc/src/sgml/func/allfiles.sgml\nPostgreSQL documentation\nComplete list of usable sgml source files in this directory.\n-->\n\n<!-- function references -->\n\n<!ENTITY func SYSTEM \"func.sgml\">\n<!ENTITY func-logical SYSTEM \"func-logical.sgml\">\n<!ENTITY func-comparison SYSTEM \"func-comparison.sgml\">\n<!ENTITY func-math SYSTEM \"func-math.sgml\">\n<!ENTITY func-string SYSTEM \"func-string.sgml\">\n<!ENTITY func-binarystring SYSTEM \"func-binarystring.sgml\">\n<!ENTITY func-bitstring SYSTEM \"func-bitstring.sgml\">\n<!ENTITY func-matching SYSTEM \"func-matching.sgml\">\n<!ENTITY func-formatting SYSTEM \"func-formatting.sgml\">\n<!ENTITY func-datetime SYSTEM \"func-datetime.sgml\">\n<!ENTITY func-enum SYSTEM \"func-enum.sgml\">\n<!ENTITY func-geometry SYSTEM \"func-geometry.sgml\">\n<!ENTITY func-net SYSTEM \"func-net.sgml\">\n<!ENTITY func-textsearch SYSTEM \"func-textsearch.sgml\">\n<!ENTITY func-uuid SYSTEM \"func-uuid.sgml\">\n<!ENTITY func-xml SYSTEM \"func-xml.sgml\">\n<!ENTITY func-json SYSTEM \"func-json.sgml\">\n<!ENTITY func-sequence SYSTEM \"func-sequence.sgml\">\n<!ENTITY func-conditional SYSTEM \"func-conditional.sgml\">\n<!ENTITY func-array SYSTEM \"func-array.sgml\">\n<!ENTITY func-range SYSTEM \"func-range.sgml\">\n<!ENTITY func-aggregate SYSTEM \"func-aggregate.sgml\">\n<!ENTITY func-window SYSTEM \"func-window.sgml\">\n<!ENTITY func-merge-support SYSTEM \"func-merge-support.sgml\">\n<!ENTITY func-subquery SYSTEM \"func-subquery.sgml\">\n<!ENTITY func-comparisons SYSTEM \"func-comparisons.sgml\">\n<!ENTITY func-srf SYSTEM \"func-srf.sgml\">\n<!ENTITY func-info SYSTEM \"func-info.sgml\">\n<!ENTITY func-admin SYSTEM \"func-admin.sgml\">\n<!ENTITY func-trigger SYSTEM \"func-trigger.sgml\">\n<!ENTITY func-event-triggers SYSTEM \"func-event-triggers.sgml\">\n<!ENTITY func-statistics SYSTEM \"func-statistics.sgml\">", "msg_date": "Mon, 22 Jul 2024 10:42:00 +0800", "msg_from": "jian he <[email 
protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Mon, Jul 22, 2024 at 10:42 AM jian he <[email protected]> wrote:\n>\nhi. this time everything should be ok!\n\n\nstep1. python3 split_func_sgml.py\nstep2. git apply\nv6-0001-all-filelist-for-directory-doc-src-sgml-func.patch (step2,\ncannot use \"git am\").\n\n\nwhat v7_split_func_sgml.py did:\n1. The new file only has 2 occurrences of \"sect1\"\n2. Each new sgml file has its own unique identifier, e.g. for\nfunc-logical.sgml unique string is \"<sect1 id=\"functions-logical\">\"\n3. sed copy, inplace delete command string will be printed out.\nyou can check the line number in func.sgml to verify the sed command.", "msg_date": "Tue, 20 Aug 2024 11:18:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "On Tue, Aug 20, 2024 at 11:18 AM jian he <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 10:42 AM jian he <[email protected]> wrote:\n> >\n> hi. this time everything should be ok!\n>\n>\n> step1. python3 split_func_sgml.py\n> step2. git apply\n> v6-0001-all-filelist-for-directory-doc-src-sgml-func.patch (step2,\n> cannot use \"git am\").\n>\n\nsorry. I mean the file attached in the previous mail.\nstep1. python3 v7_split_func_sgml.py\nstep2. git apply v7-0001-all-filelist-for-directory-doc-src-sgml-func.patch\n(step2, cannot use \"git am\").\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:37:53 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "turns out I hardcoded some line number information,\nwhich makes the script v7_split_func_sgml.py cannot apply anymore.\nthis time, it should be bullet-proof.\n\nsame as before:\nstep1. python3 v8_split_func_sgml.py\nstep2. git apply v8-0001-all-filelist-for-directory-doc-src-sgml-func.patch\n(step2, cannot use \"git am\").\n\nI don't know perl, but written in perl, I guess the logic will be the\nsame, based on line number, do the copy, delete on func.sgml.", "msg_date": "Mon, 30 Sep 2024 15:34:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" }, { "msg_contents": "In, 0001-func.sgml-Consistently-use-optional-to-indicate-opti.patch\n\n-<function>format</function>(<parameter>formatstr</parameter>\n<type>text</type> [, <parameter>formatarg</parameter>\n<type>\"any\"</type> [, ...] ])\n+<function>format</function>(<parameter>formatstr</parameter>\n<type>text</type> <optional>, <parameter>formatarg</parameter>\n<type>\"any\"</type> [, ...] </optional>)\n\ni change it further to\n+<function>format</function>(<parameter>formatstr</parameter>\n<type>text</type> <optional>, <parameter>formatarg</parameter>\n<type>\"any\"</type> <optional>, ...</optional> </optional>)\n\ni did these kind of change to <function>format</function>,\n<function>concat_ws</function>, <function>concat</function>\n\n\n\nI've rebased your patch,\nadded a commitfest entry: https://commitfest.postgresql.org/50/5278/\nit seems I cannot mark you as the author in commitfest.\nanyway, you ( Dagfinn Ilmari Mannsåker ) are the author of it.", "msg_date": "Mon, 30 Sep 2024 16:15:29 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: documentation structure" } ]
[ { "msg_contents": "Hello,\nI would like to make a feature request for existing implementation of connection object. What I would like to request you is as follows.\n\nI have configured primary and hot standby PostgreSQL server. The replication works fine with repmanager. All I need to achieve is to keep the client connection string using a common hostname (also without specifying multiple hosts) and not to change due to role change. To simplify my need the details are as follows.\n\nPrimary host: host1.home.network, IP: 192.168.0.2\nStandby host: host2.home.network, IP: 192.168.0.3 \nCommon host: myapphost.home.network: this resolves to IP: 192.168.0.2 and IP: 192.168.0.3\n\nThe connection string I want my applications to use is:\n\nuser@localnode:~> psql \"host=myapphost.home.network dbname=appdb sslmode=require target_session_attrs=primary\"\n\nBut when I test the connection multiple times, I have observe strange behaviour and I am not sure if that's the expectation. The client application uses the myapphost.home.network. By using \"target_session_attrs=primary\", my expectation is to connect always to primary node all the time, no matter which one it is due to manual role change, either host1 or host2.home.network\n\nThe reason of having two IPs to common hostname is, sometimes we manually perform the role switch using repmanager. And because of this, I do not want any changes to client connection and I expect attribute target_session_attrs will always hook client connection to primary node. But after testing few times, I get error:\n\nconnection to server at \"myapphost.home.network\" (192.168.0.3), port 5432 failed: server is in hot standby mode\n\nNow here my expectation is, despite myapphost.home.network resolves to two different servers (primary and slave/standby), because of using attribute target_session_attrs=primary, why my session does not get redirected to only primary host? And of course load_balance_hosts default value is disable. Furthermore, this also gives me flexibility to add new ore remove hosts from DNS entry myapphost.home.network.\n\nBased on above requirement, I would like to request you if it is possible to make an enhacenment to connection object when using target_session_attrs=primary, it will hook to always primary server and not to thrown an error saying next on in the list is server is in hot standby mode or read only etc…\n\nMany thanks in advance,\n\nRegards\nSameer Deshpande\nHello,I would like to make a feature request for existing implementation of connection object. What I would like to request you is as follows.I have configured primary and hot standby PostgreSQL server. The replication works fine with repmanager. All I need to achieve is to keep the client connection string using a common hostname (also without specifying multiple hosts) and not to change due to role change. To simplify my need the details are as follows.Primary host: host1.home.network, IP: 192.168.0.2Standby host: host2.home.network, IP: 192.168.0.3 Common host: myapphost.home.network: this resolves to IP: 192.168.0.2 and IP: 192.168.0.3The connection string I want my applications to use is:user@localnode:~> psql \"host=myapphost.home.network dbname=appdb sslmode=require target_session_attrs=primary\"But when I test the connection multiple times, I have observe strange behaviour and I am not sure if that's the expectation. The client application uses the myapphost.home.network. 
By using \"target_session_attrs=primary\", my expectation is to connect always to primary node all the time, no matter which one it is due to manual role change, either host1 or host2.home.networkThe reason of having two IPs to common hostname is, sometimes we manually perform the role switch using repmanager. And because of this, I do not want any changes to client connection and I expect attribute target_session_attrs will always hook client connection to primary node. But after testing few times, I get error:connection to server at \"myapphost.home.network\" (192.168.0.3), port 5432 failed: server is in hot standby modeNow here my expectation is, despite myapphost.home.network resolves to two different servers (primary and slave/standby), because of using attribute target_session_attrs=primary, why my session does not get redirected to only primary host? And of course load_balance_hosts default value is disable. Furthermore, this also gives me flexibility to add new ore remove hosts from DNS entry myapphost.home.network.Based on above requirement, I would like to request you if it is possible to make an enhacenment to connection object when using target_session_attrs=primary, it will hook to always primary server and not to thrown an error saying next on in the list is server is in hot standby mode or read only etc…Many thanks in advance,RegardsSameer Deshpande", "msg_date": "Mon, 18 Mar 2024 16:57:53 +0100", "msg_from": "\"Sameer M. Deshpande\" <[email protected]>", "msg_from_op": true, "msg_subject": "libpq behavior with hostname with multiple addresses and\n target_session_attrs=primary" } ]
[ { "msg_contents": "Hello Chapman,\n\nThanks for the reply and suggestion.\n\nBelow are my observations when i was debugging the code of postgres-jdbc driver for double precision data type.\n\n1- When the value in DB is 40 and fetched value is also 40\n A - In the QueryExecuterImpl class method - receiveFields() , we create Fields metadata \n\n private Field[] receiveFields() throws IOException {\n pgStream.receiveInteger4(); // MESSAGE SIZE\n int size = pgStream.receiveInteger2();\n Field[] fields = new Field[size];\n\n if (LOGGER.isLoggable(Level.FINEST)) {\n LOGGER.log(Level.FINEST, \" <=BE RowDescription({0})\", size);\n }\n\n for (int i = 0; i < fields.length; i++) {\n String columnLabel = pgStream.receiveCanonicalString();\n int tableOid = pgStream.receiveInteger4();\n short positionInTable = (short) pgStream.receiveInteger2();\n int typeOid = pgStream.receiveInteger4();\n int typeLength = pgStream.receiveInteger2();\n int typeModifier = pgStream.receiveInteger4();\n int formatType = pgStream.receiveInteger2();\n fields[i] = new Field(columnLabel,\n typeOid, typeLength, typeModifier, tableOid, positionInTable);\n fields[i].setFormat(formatType);\n\n LOGGER.log(Level.FINEST, \" {0}\", fields[i]);\n }\n\n return fields;\n }\n\nOutput of this method is - [Field(id,FLOAT8,8,T), Field(client_id,FLOAT8,8,T), Field(create_ts,TIMESTAMP,8,T), Field(force_generation_flag,VARCHAR,65535,T), Field(instance_id,FLOAT8,8,T), Field(is_jmx_call,VARCHAR,65535,T), Field(ocode,VARCHAR,65535,T), Field(payload_type,VARCHAR,65535,T), Field(repository,VARCHAR,65535,T), Field(sub_repository,VARCHAR,65535,T)]\n\n \n\n \n B- Then in the class PgResultSet , it calls the method \n public java.math.@Nullable BigDecimal getBigDecimal(@Positive int columnIndex) throws SQLException {\n return getBigDecimal(columnIndex, -1);\n }\n and then it calls the method \n @Pure\n private @Nullable Number getNumeric(\n int columnIndex, int scale, boolean allowNaN) throws SQLException {\n byte[] value = getRawValue(columnIndex);\n if (value == null) {\n return null;\n }\n\n if (isBinary(columnIndex)) {\n int sqlType = getSQLType(columnIndex);\n if (sqlType != Types.NUMERIC && sqlType != Types.DECIMAL) {\n Object obj = internalGetObject(columnIndex, fields[columnIndex - 1]);\n if (obj == null) {\n return null;\n }\n if (obj instanceof Long || obj instanceof Integer || obj instanceof Byte) {\n BigDecimal res = BigDecimal.valueOf(((Number) obj).longValue());\n res = scaleBigDecimal(res, scale);\n return res;\n }\n return toBigDecimal(trimMoney(String.valueOf(obj)), scale);\n } else {\n Number num = ByteConverter.numeric(value);\n if (allowNaN && Double.isNaN(num.doubleValue())) {\n return Double.NaN;\n }\n\n return num;\n }\n }\nSince the column format is text and not binary it converts the value to BigDecimal and give back the value as 40 .\n\n2- When the value in DB is 40 and fetched value is 40.0 (trailing zero)\n In this case the field metadata is -\n\n [Field(id,FLOAT8,8,B), Field(client_id,FLOAT8,8,B), Field(ocode,VARCHAR,65535,T), Field(payload_type,VARCHAR,65535,T), Field(repository,VARCHAR,65535,T), Field(sub_repository,VARCHAR,65535,T), Field(force_generation_flag,VARCHAR,65535,T), Field(is_jmx_call,VARCHAR,65535,T), Field(instance_id,FLOAT8,8,B), Field(create_ts,TIMESTAMP,8,B)] \n\nNow since the format is Binary Type hence in PgResultSet class and in Numeric method condition isBinary(columnIndex) is true.\nand it returns DOUBLE from there result in 40.0\n\nNow i am not sure for the same table and same column why we have 
two different format and this issue is intermittent.\n\nThanks,\n\nRahul \n\n> On 19-Mar-2024, at 1:02 AM, Rahul Uniyal <[email protected]> wrote:\n> \n\nHello Chapman,\n\nThanks for the reply and suggestion.\n\nBelow are my observations when i was debugging the code of postgres-jdbc driver for double precision data type.\n\n1- When the value in DB is 40 and fetched value is also 40\n     A - In the QueryExecuterImpl class method - receiveFields() , we create Fields metadata \n\n     private Field[] receiveFields() throws IOException {\n    pgStream.receiveInteger4(); // MESSAGE SIZE\n    int size = pgStream.receiveInteger2();\n    Field[] fields = new Field[size];\n\n    if (LOGGER.isLoggable(Level.FINEST)) {\n      LOGGER.log(Level.FINEST, \" <=BE RowDescription({0})\", size);\n    }\n\n    for (int i = 0; i < fields.length; i++) {\n      String columnLabel = pgStream.receiveCanonicalString();\n      int tableOid = pgStream.receiveInteger4();\n      short positionInTable = (short) pgStream.receiveInteger2();\n      int typeOid = pgStream.receiveInteger4();\n      int typeLength = pgStream.receiveInteger2();\n      int typeModifier = pgStream.receiveInteger4();\n      int formatType = pgStream.receiveInteger2();\n      fields[i] = new Field(columnLabel,\n          typeOid, typeLength, typeModifier, tableOid, positionInTable);\n      fields[i].setFormat(formatType);\n\n      LOGGER.log(Level.FINEST, \"        {0}\", fields[i]);\n    }\n\n    return fields;\n  }\n\nOutput of this method is - [Field(id,FLOAT8,8,T), Field(client_id,FLOAT8,8,T), Field(create_ts,TIMESTAMP,8,T), Field(force_generation_flag,VARCHAR,65535,T), Field(instance_id,FLOAT8,8,T), Field(is_jmx_call,VARCHAR,65535,T), Field(ocode,VARCHAR,65535,T), Field(payload_type,VARCHAR,65535,T), Field(repository,VARCHAR,65535,T), Field(sub_repository,VARCHAR,65535,T)]\n \n         \n     B- Then in the class PgResultSet , it calls the method  \n              public java.math.@Nullable BigDecimal getBigDecimal(@Positive int columnIndex) throws SQLException {\n                   return getBigDecimal(columnIndex, -1);\n                }\n      and then it calls the method \n       @Pure\n  private @Nullable Number getNumeric(\n      int columnIndex, int scale, boolean allowNaN) throws SQLException {\n    byte[] value = getRawValue(columnIndex);\n    if (value == null) {\n      return null;\n    }\n\n    if (isBinary(columnIndex)) {\n      int sqlType = getSQLType(columnIndex);\n      if (sqlType != Types.NUMERIC && sqlType != Types.DECIMAL) {\n        Object obj = internalGetObject(columnIndex, fields[columnIndex - 1]);\n        if (obj == null) {\n          return null;\n        }\n        if (obj instanceof Long || obj instanceof Integer || obj instanceof Byte) {\n          BigDecimal res = BigDecimal.valueOf(((Number) obj).longValue());\n          res = scaleBigDecimal(res, scale);\n          return res;\n        }\n        return toBigDecimal(trimMoney(String.valueOf(obj)), scale);\n      } else {\n        Number num = ByteConverter.numeric(value);\n        if (allowNaN && Double.isNaN(num.doubleValue())) {\n          return Double.NaN;\n        }\n\n        return num;\n      }\n    }\nSince the column format is text and not binary it converts the value to BigDecimal and give back the value as 40 .\n\n2- When the value in DB is 40 and fetched value is 40.0 (trailing zero)\n   In this case the field metadata is -\n   [Field(id,FLOAT8,8,B), Field(client_id,FLOAT8,8,B), Field(ocode,VARCHAR,65535,T), Field(payload_type,VARCHAR,65535,T), 
Field(repository,VARCHAR,65535,T), Field(sub_repository,VARCHAR,65535,T), Field(force_generation_flag,VARCHAR,65535,T), Field(is_jmx_call,VARCHAR,65535,T), Field(instance_id,FLOAT8,8,B), Field(create_ts,TIMESTAMP,8,B)] \n\nNow since the format is Binary Type hence in  PgResultSet  class and in Numeric method condition  isBinary(columnIndex) is true.\nand it returns  DOUBLE from there result in 40.0\n\nNow i am not sure for the same table and same column why we have two different format and this issue is intermittent.Thanks,Rahul On 19-Mar-2024, at 1:02 AM, Rahul Uniyal <[email protected]> wrote:", "msg_date": "Tue, 19 Mar 2024 01:22:00 +0530", "msg_from": "Rahul Uniyal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Java : Postgres double precession issue with different data\n format text and binary " }, { "msg_contents": "Hi Rahul,\n\nOn 03/18/24 15:52, Rahul Uniyal wrote:\n> Since the column format is text and not binary it converts the value\n> to BigDecimal and give back the value as 40 .\n> ...\n> Now since the format is Binary ... it returns DOUBLE from there\n> result in 40.0\n> \n> Now i am not sure for the same table and same column why we have two\n> different format and this issue is intermittent.\n\nI don't see, in this message or your earlier one, which public\nResultSet API method your Java client code is calling. It sounds as if\nyou are simply calling getObject, the flavor without a second parameter\nnarrowing the type, and you are finding that the object returned is\nsometimes of class Double and sometimes of class BigDecimal. Is that\naccurate? That would seem to be the nub of the issue.\n\nYou seem to have found that the class of the returned object is\ninfluenced by whether text or binary format was used on the wire.\nI will guess that would be worth reporting to the PGJDBC devs, using\nthe pgsql-jdbc list.\n\nThe question of why the driver might sometimes use one wire format\nand sometimes the other seems secondary. There may be some technical\nexplanation, but it would not be very interesting except as an\nimplementation detail, if it did not have this visible effect of\nchanging the returned object's class.\n\nFor the time being, I assume that if your Java code calls a more\nspecific method, such as getObject(..., BigDecimal.class) or\ngetObject(..., Double.class), or simply getDouble, you will get\nresults of the desired class whatever wire format is used.\n\nThe issue of the wire format influencing what class of object\ngetObject returns (when a specific class hasn't been requested)\nis probably worth raising on pgsql-jdbc.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 18 Mar 2024 20:26:33 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Java : Postgres double precession issue with different data\n format text and binary" }, { "msg_contents": "On 03/18/24 23:05, Rahul Uniyal wrote:\n> This is the public method of PgResultSet class . \n> Apart from which format type it is ,\n> getBigDecimal public method will get call\n\nThat is interesting. That's why it's important to include the basics\nwhen making a report, including which public API method your code\nis calling.\n\nAs you call getBigDecimal, you always will get a BigDecimal (or an\nexception). 
But, as you have shown, the implementation detail of\nthe on-the-wire format influences how the BigDecimal is constructed.\n\nWhen the wire format is text, you get a BigDecimal constructed\ndirectly from the on-the-wire string value.\n\nWhen the wire format is binary (and the type is not NUMERIC or\nDECIMAL), there first is an object constructed by internalGetObject,\nand (if that is not an instance of Long, Integer, or Byte), it gets\nconverted to a BigDecimal this way:\n\n return toBigDecimal(trimMoney(String.valueOf(obj)), scale);\n\nAssuming the object returned by internalGetObject is a Double, the\nreason for the divergent results is the String.valueOf in that\nconversion. BigDecimal and Double have different conventions for\nhow they are rendered as a String; the representation of a Double\nwill always have a decimal place:\n\njshell> new BigDecimal(\"40\")\n$1 ==> 40\njshell> Double.valueOf(\"40\")\n$2 ==> 40.0\n\nIf you construct a BigDecimal directly from the Double, you get\nthe one you expect:\n\njshell> new BigDecimal($2)\n$3 ==> 40\n\nBut if you take the String value of the Double first, the extra\ndecimal place shown in the Double's String value results in a\nBigDecimal with scale of 1 rather than 0:\n\njshell> new BigDecimal(String.valueOf($2))\n$4 ==> 40.0\n\njshell> $3.scale(); $4.scale()\n$5 ==> 0\n$6 ==> 1\n\nThis is very probably worth reporting on pgsql-jdbc.\n\nI also wonder why the trimMoney() is there. The method\nseems to exist to deal with a string that could have $\nin it, or parentheses instead of a negative sign, but\nI am unsure what kind of object could be obtained from\ninternalGetObject that would require such treatment.\n\nUnrelated to your case, I wonder also about the handling\nof NUMERIC or DECIMAL when the value is NaN. The\nByteConverter.numeric method is declared to return Number,\nand documented to return either a BigDecimal or Double.NaN.\nLikewise, PgResultSet.getNumeric can return Double.NaN\nin that case. But getBigDecimal contains an unconditional\ncast to BigDecimal, which seems like a ClassCastException\nwaiting to happen for the NaN case. (getBigDecimal clearly\ncan't return Double.NaN, but maybe a more-specific exception\nwould be more helpful than \"Double cannot be cast to BigInteger\"?)\n\nI've added pgsql-jdbc to the To: for this message, but it will\nprobably be delayed somewhat in moderation, as I am not\nsubscribed to that list.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 19 Mar 2024 11:02:38 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Java : Postgres double precession issue with different data\n format text and binary" } ]
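The scale difference analysed above comes entirely from routing the double through its string form before building the BigDecimal, not from anything PostgreSQL-specific, and it can be reproduced with any decimal type. A short illustration in Python, offered only as an analogy to the Java behaviour rather than driver code:

# A decimal parsed from the text "40" has scale 0, while one parsed from the
# string form of the float 40.0 keeps the trailing ".0" and gets scale 1.
from decimal import Decimal

print(Decimal("40"))                         # 40    (text wire-format path)
print(Decimal(str(40.0)))                    # 40.0  (double -> str -> decimal path)
print(Decimal("40") == Decimal(str(40.0)))   # True: equal value, different scale

On the Java side the distinction matters because BigDecimal.equals() is scale-sensitive while compareTo() is not, which is how the wire-format dependence becomes visible to applications.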
[ { "msg_contents": "hi.\nI think the \"X\" and \"-\" mean in this matrix [0] is not very intuitive.\nmainly because \"X\" tends to mean negative things in most cases.\nwe can write a sentence saying \"X\" means this, \"-\" means that.\n\nor maybe Check mark [1] and Cross mark [2] are more universal.\nand we can use these marks.\n\n\"Only for local objects\"\nis there any reference explaining \"local objects\"?\nI think local object means objects that only affect one single database?\n\n\n\n[0] https://www.postgresql.org/docs/current/event-trigger-matrix.html\n[1] https://en.wikipedia.org/wiki/Check_mark\n[2] https://en.wikipedia.org/wiki/X_mark\n\n\n", "msg_date": "Tue, 19 Mar 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "doc issues in event-trigger-matrix.html" }, { "msg_contents": "On Tue, Mar 19, 2024 at 08:00:00AM +0800, jian he wrote:\n> I think the \"X\" and \"-\" mean in this matrix [0] is not very intuitive.\n> mainly because \"X\" tends to mean negative things in most cases.\n> we can write a sentence saying \"X\" means this, \"-\" means that.\n> \n> or maybe Check mark [1] and Cross mark [2] are more universal.\n> and we can use these marks.\n> \n> \"Only for local objects\"\n> is there any reference explaining \"local objects\"?\n> I think local object means objects that only affect one single database?\n\nIt is true that in Japan the cross mark refers to a negation, and\nthat's the opposite in France: I would put a cross on a table in the\ncase where something is supported. I've never seen anybody complain\nabout the format of these tables, FWIW, but if these were to be\nchanged, the update should happen across the board for all the tables\nand not only one.\n\nUsing ASCII characters also has the advantage to make the maintenance\nclightly easier, in my opinion, because there is no translation effort\nbetween these special characters and their XML equivalent, like \"&lt\"\n<-> \"<\".\n--\nMichael", "msg_date": "Tue, 19 Mar 2024 10:14:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc issues in event-trigger-matrix.html" }, { "msg_contents": "> On 19 Mar 2024, at 02:14, Michael Paquier <[email protected]> wrote:\n> \n> On Tue, Mar 19, 2024 at 08:00:00AM +0800, jian he wrote:\n>> I think the \"X\" and \"-\" mean in this matrix [0] is not very intuitive.\n>> mainly because \"X\" tends to mean negative things in most cases.\n>> we can write a sentence saying \"X\" means this, \"-\" means that.\n>> \n>> or maybe Check mark [1] and Cross mark [2] are more universal.\n>> and we can use these marks.\n>> \n>> \"Only for local objects\"\n>> is there any reference explaining \"local objects\"?\n>> I think local object means objects that only affect one single database?\n\nThat's a bigger problem than the table representation, we never define what\n\"local object\" mean anywhere in the EVT docs. EV's are global for a database,\nbut not a cluster, so I assume what this means is that EVs for non-DDL commands\nlike COMMENT can only fire for a specific relation they are attached to and not\ndatabase wide?\n\n> It is true that in Japan the cross mark refers to a negation, and\n> that's the opposite in France: I would put a cross on a table in the\n> case where something is supported. 
I've never seen anybody complain\n> about the format of these tables, FWIW, but if these were to be\n> changed, the update should happen across the board for all the tables\n> and not only one.\n\nAFAICT we only have one other table with \"X\" denoting support, the \"Conflicting\nLock Modes\" table under Concurrency Control chapter, and there we simply leave\nthe \"not supported\" column empty instead of using a dash. Maybe the simple fix\nhere is to make these tables consistent by removing the dash from the event\ntrigger firing matrix?\n\nAs a sidenote, the table should gain a sentence explaining why the login column\nis missing to avoid confusion.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:34:32 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc issues in event-trigger-matrix.html" }, { "msg_contents": "On 19.03.24 10:34, Daniel Gustafsson wrote:\n>>> \"Only for local objects\"\n>>> is there any reference explaining \"local objects\"?\n>>> I think local object means objects that only affect one single database?\n> That's a bigger problem than the table representation, we never define what\n> \"local object\" mean anywhere in the EVT docs. EV's are global for a database,\n> but not a cluster, so I assume what this means is that EVs for non-DDL commands\n> like COMMENT can only fire for a specific relation they are attached to and not\n> database wide?\n\nI think we could replace this whole table by a few definitions:\n\n- \"Local objects\" are everything except \"global objects\".\n\n- \"Global objects\", for the purpose of event triggers, are databases, \ntablespaces, roles, role memberships, and parameter ACLs.\n\n- DDL commands are all commands except SELECT, INSERT, UPDATE, DELETE, \nMERGE.\n\n- Events triggers are supported for all DDL commands on all local objects.\n\nIs this table saying anything else?\n\nIs there any way to check if it's even correct? For example, it shows \nthat the event \"sql_​drop\" can fire for a few ALTER commands, but how is \nthis determined? If tomorrow someone changes ALTER DOMAIN to possibly \ndo a table rewrite, will they remember to update this table?\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 22:47:17 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc issues in event-trigger-matrix.html" }, { "msg_contents": "> On 21 Mar 2024, at 22:47, Peter Eisentraut <[email protected]> wrote:\n> \n> On 19.03.24 10:34, Daniel Gustafsson wrote:\n>>>> \"Only for local objects\"\n>>>> is there any reference explaining \"local objects\"?\n>>>> I think local object means objects that only affect one single database?\n>> That's a bigger problem than the table representation, we never define what\n>> \"local object\" mean anywhere in the EVT docs. EV's are global for a database,\n>> but not a cluster, so I assume what this means is that EVs for non-DDL commands\n>> like COMMENT can only fire for a specific relation they are attached to and not\n>> database wide?\n> \n> I think we could replace this whole table by a few definitions:\n\nSimply extending the \"Overview of Event Trigger Behavior\" section slightly\nmight even be enough?\n\n> If tomorrow someone changes ... 
will they remember to update this table?\n\nHighly unlikely.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 23:10:37 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc issues in event-trigger-matrix.html" }, { "msg_contents": "On Fri, Mar 22, 2024 at 5:47 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 19.03.24 10:34, Daniel Gustafsson wrote:\n> >>> \"Only for local objects\"\n> >>> is there any reference explaining \"local objects\"?\n> >>> I think local object means objects that only affect one single database?\n> > That's a bigger problem than the table representation, we never define what\n> > \"local object\" mean anywhere in the EVT docs. EV's are global for a database,\n> > but not a cluster, so I assume what this means is that EVs for non-DDL commands\n> > like COMMENT can only fire for a specific relation they are attached to and not\n> > database wide?\n>\n> I think we could replace this whole table by a few definitions:\n>\n> - \"Local objects\" are everything except \"global objects\".\n>\n> - \"Global objects\", for the purpose of event triggers, are databases,\n> tablespaces, roles, role memberships, and parameter ACLs.\n>\n> - DDL commands are all commands except SELECT, INSERT, UPDATE, DELETE,\n> MERGE.\n>\n> - Events triggers are supported for all DDL commands on all local objects.\n>\n> Is this table saying anything else?\n>\n> Is there any way to check if it's even correct? For example, it shows\n\ncomparing these two html files:\nhttps://www.postgresql.org/docs/devel/sql-commands.html\nhttps://www.postgresql.org/docs/devel/event-trigger-matrix.html\n\nsummary:\nall commands begin with \"CREATE\"\nexcept the following two are not supported by event trigger.\nCREATE TRANSFORM\nCREATE EVENT TRIGGER\n\ngenerally, one \"CREATE\" corresponds to one \"DROP\" and one \"ALTER\".\nbut I found there is more to \"CREATE\" than \"ALTER\". (i didn't bother why)\nthere is one more \"DROP\" than \"CREATE\",\nbecause of \"DROP ROUTINE\" and \"DROP OWNED\"\nalso\n\"CREATE TABLE\"\n\"CREATE TABLE AS\"\ncorresponds to one \"DROP TABLE\"\n\nother command not begin with \"CREATE\" supported by event trigger (per\nevent-trigger-matrix) are:\nCOMMENT\nGRANT Only for local objects\nIMPORT FOREIGN SCHEMA\nREFRESH MATERIALIZED VIEW\nREINDEX\nREVOKE\nSECURITY LABEL\nSELECT INTO\n\nall commands\nthat is not begin with \"CREATE\" | \"DROP\", \"ALTER\" (per sql-commands.html) are:\nABORT\nANALYZE\nBEGIN\nCALL\nCHECKPOINT\nCLOSE\nCLUSTER\nCOMMENT\nCOMMIT\nCOMMIT PREPARED\nCOPY\nDEALLOCATE\nDECLARE\nDELETE\nDISCARD\nDO\nEND\nEXECUTE\nEXPLAIN\nFETCH\nGRANT\nIMPORT FOREIGN SCHEMA\nINSERT\nLISTEN\nLOAD\nLOCK\nMERGE\nMOVE\nNOTIFY\nPREPARE\nPREPARE TRANSACTION\nREASSIGN OWNED\nREFRESH MATERIALIZED VIEW\nREINDEX\nRELEASE SAVEPOINT\nRESET\nREVOKE\nROLLBACK\nROLLBACK PREPARED\nROLLBACK TO SAVEPOINT\nSAVEPOINT\nSECURITY LABEL\nSELECT\nSELECT INTO\nSET\nSET CONSTRAINTS\nSET ROLE\nSET SESSION AUTHORIZATION\nSET TRANSACTION\nSHOW\nSTART TRANSACTION\nTRUNCATE\nUNLISTEN\nUPDATE\nVACUUM\nVALUES\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:58:51 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc issues in event-trigger-matrix.html" } ]
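The command-by-command comparison above was done by hand against the two rendered pages; once the two lists are pasted into plain-text files, the check reduces to a set difference. A minimal sketch, where the two file names are placeholders holding one command name per line copied from sql-commands.html and event-trigger-matrix.html:

# Set-difference sketch for the comparison above.  The input files are
# placeholders: one command name per line, pasted from the two doc pages.
from pathlib import Path

def load(path):
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

all_commands = load("sql_commands.txt")
evt_commands = load("event_trigger_commands.txt")

print("commands with no event-trigger support:", sorted(all_commands - evt_commands))
print("matrix entries missing from the command list:", sorted(evt_commands - all_commands))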
[ { "msg_contents": "Hi,\n\nI'm looking at an open proposal to introduce row-level security policy\ntemplates [0], and I have been making some progress on it.\n\nThe proposal aims to introduce templates for RLS policies, where the idea\nis to allow users to define policies as a template, and apply it to\nmultiple tables. The proposed benefit is that there is reduction in\nmanagement overhead as there are situations where policies are similar\nacross multiple tables.\n\nHowever, ever since I started working on this proposal, I noticed that\nthere are a few open questions I wanted to ask to existing contributors\nregarding how this functionality should be exposed.\n\nThere are two ways to address this proposal:\n\n1. Introduction of new keywords/statements\n\nExpected usage:\n\n-- create template\nCREATE RLS TEMPLATE rls_template\nUSING (user_id = current_user)\nWITH (SELECT);\n\n-- attach templates to tables\nALTER TABLE employees\nATTACH RLS TEMPLATE rls_template;\n\nALTER TABLE customers\nATTACH RLS TEMPLATE rls_template;\n\n-- alter template\nALTER RLS TEMPLATE rls_template\nWITH (SELECT, UPDATE);\n\nThis option is non-intrusive, and can possibly operate in complete\nisolation from existing row-level security logic, however, this also brings\nthe difficulty of introducing divergent behavior between normal RLS policy\ncreation and template creation as both of them would have a different SQL\nsyntax. This is undesired. This also requires users to learn the\nnewly-introduced syntax.\n\n2. Modifying existing CREATE POLICY logic (or introduce a new CREATE POLICY\nTEMPLATE statement)\n\nWe could consider adding a new statement called CREATE POLICY TEMPLATE with\nthe similar options but without the table name:\n\nCREATE POLICY TEMPLATE name\n [ AS { PERMISSIVE | RESTRICTIVE } ]\n [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]\n [ TO { role_name | PUBLIC | CURRENT_ROLE | CURRENT_USER | SESSION_USER\n} [, ...] ]\n [ USING ( using_expression ) ]\n [ WITH CHECK ( check_expression ) ]\n\nThe major challenge here is the construction of the qualifiers for the\npolicy, as the entire process [1] relies on a table ID, however, we don’t\nhave access to any table names in this statement.\n\nI also find the aspect of constructing qualifiers directly from the\nplain-text state less ideal, and I honestly have no clue if this is\npossible.\n\nor, we could integrate it in CREATE POLICY as an option (but in this case,\nthe table name is required, rendering the template creation\ntable-dependent):\n\nCREATE POLICY name ON table_name\n [ AS { PERMISSIVE | RESTRICTIVE } ]\n [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]\n [ TO { role_name | PUBLIC | CURRENT_ROLE | CURRENT_USER | SESSION_USER\n} [, ...] 
]\n    [ USING ( using_expression ) ]\n    [ WITH CHECK ( check_expression ) ]\n    [ TEMPLATE template_name ]\n\nWould love to hear any thoughts on the preferred way to introduce this\nfunctionality.\n\nApologies for any mistakes I might have made in the above statements, I'm\nfairly new to pgsql-hackers (this is my first post here!), and this is my\nfirst time taking a look at existing RLS logic, so I might be wrong on the\ninterpretation of qualifier expr constructions.\n\nRegards,\nAadhav\n\n[0]: https://wiki.postgresql.org/wiki/GSoC_2024#Row-level_security_templates\n[1]:\nhttps://github.com/postgres/postgres/blob/bb5604ba9e53e3a0fb9967f960e36cff4d36b0ab/src/backend/commands/policy.c#L633-L659\n\n", "msg_date": "Tue, 19 Mar 2024 11:53:19 +0530", "msg_from": "Aadhav Vignesh <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Introduce row-level security templates" } ]
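What the proposed templates would automate can be approximated client-side today by expanding a single policy definition over a list of tables. The sketch below only generates the CREATE POLICY text; the policy name, predicate and table names are illustrative assumptions rather than part of the proposed syntax:

# Emulating a "policy template" by generating per-table CREATE POLICY
# statements.  Names and the predicate are illustrative only.
template = {
    "name": "rls_user_owned",
    "command": "SELECT",
    "using": "user_id = current_user",
}
tables = ["employees", "customers"]

for tab in tables:
    print(f'CREATE POLICY {template["name"]} ON {tab} '
          f'FOR {template["command"]} USING ({template["using"]});')

It also shows the gap the proposal wants to close: policies generated this way drift independently after creation, whereas a genuine template could be altered in one place and affect every attached table.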
[ { "msg_contents": "Over in the thread discussing the addition of UUIDv7 support [0], there \nis some uncertainty about what timestamp precision one can expect from \ngettimeofday().\n\nUUIDv7 uses milliseconds since Unix epoch, but can optionally use up to \n12 additional bits of timestamp precision (see [1]), but it can also \njust use a counter instead of the extra precision. The current patch \nuses the counter method \"because of portability concerns\" (source code \ncomment).\n\nI feel that we don't actually have any information about this \nportability concern. Does anyone know what precision we can expect from \ngettimeofday()? Can we expect the full microsecond precision usually?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAAhFRxitJv=yoGnXUgeLB_O+M7J2BJAmb5jqAT9gZ3bij3uLDA@mail.gmail.com\n[1]: \nhttps://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#section-6.2-5.6.1\n\n\n", "msg_date": "Tue, 19 Mar 2024 09:28:37 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hi,\n\ncc: Andrey\n\n> Over in the thread discussing the addition of UUIDv7 support [0], there\n> is some uncertainty about what timestamp precision one can expect from\n> gettimeofday().\n>\n> UUIDv7 uses milliseconds since Unix epoch, but can optionally use up to\n> 12 additional bits of timestamp precision (see [1]), but it can also\n> just use a counter instead of the extra precision. The current patch\n> uses the counter method \"because of portability concerns\" (source code\n> comment).\n>\n> I feel that we don't actually have any information about this\n> portability concern. Does anyone know what precision we can expect from\n> gettimeofday()? Can we expect the full microsecond precision usually?\n\nSpecifically in the UUIDv7 application the goal is to generate not\nnecessarily time-precise UUIDs but rather do our best to get *unique*\nUUIDs. As I understand, this is the actual reason why the patch needs\ncounters.\n\nAs Linux man page puts it:\n\n\"\"\"\nThe time returned by gettimeofday() is affected by discontinuous jumps\nin the system time (e.g., if the system administrator manually\nchanges the system time).\n\"\"\"\n\nOn top of that MacOS man page says:\n\n\"\"\"\nThe resolution of the system clock is hardware dependent, and the time\nmay be updated continuously or in ``ticks.''\n\"\"\"\n\nOn Windows our gettimeofday() implementation is a wrapper for\nGetSystemTimePreciseAsFileTime(). The corresponding MSDN page [1] is\nsomewhat laconic.\n\nConsidering the number of environments PostgreSQL can run in (OS +\nhardware + virtualization technologies) and the fact that\nhardware/software changes I doubt that it's realistic to expect any\nparticular guarantees from gettimeofday() in the general case.\n\n[1]: https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime\n\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 19 Mar 2024 12:38:44 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" 
}, { "msg_contents": "On 19.03.24 10:38, Aleksander Alekseev wrote:\n> Considering the number of environments PostgreSQL can run in (OS +\n> hardware + virtualization technologies) and the fact that\n> hardware/software changes I doubt that it's realistic to expect any\n> particular guarantees from gettimeofday() in the general case.\n\nIf we want to be robust without any guarantees from gettimeofday(), then \narguably gettimeofday() is not the right underlying function to use for \nUUIDv7. I'm not arguing that, I think we can assume some reasonable \nbaseline for what gettimeofday() produces. But it would be good to get \nsome information about what that might be.\n\nBtw., here is util-linux saying\n\n /* Assume that the gettimeofday() has microsecond granularity */\n\nhttps://github.com/util-linux/util-linux/blob/master/libuuid/src/gen_uuid.c#L232\n\n\n\n", "msg_date": "Wed, 20 Mar 2024 07:35:22 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "On Wed, 20 Mar 2024 at 07:35, Peter Eisentraut <[email protected]> wrote:\n> If we want to be robust without any guarantees from gettimeofday(), then\n> arguably gettimeofday() is not the right underlying function to use for\n> UUIDv7.\n\nThere's also clock_gettime which exposes its resolution using clock_getres\n\n\n", "msg_date": "Wed, 20 Mar 2024 19:35:20 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "\n\n> On 19 Mar 2024, at 13:28, Peter Eisentraut <[email protected]> wrote:\n> \n> I feel that we don't actually have any information about this portability concern. Does anyone know what precision we can expect from gettimeofday()? Can we expect the full microsecond precision usually?\n\nAt PGConf.dev Hannu Krossing draw attention to pg_test_timing module. I’ve tried this module(slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into 64ns and 128ns buckets).\n\nI’ll add cc Hannu, and also pg_test_timing module authors Ants ang Greg. Maybe they can add some context.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 18 Jun 2024 10:47:52 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "I plan to send patch to pg_test_timing in a day or two\n\nthe underlying time precision on modern linux seems to be\n\n2 ns for some Intel CPUs\n10 ns for Zen4\n40 ns for ARM (Ampere)\n\n---\nHannu\n\n\n\n|\n\n\n\n\nOn Tue, Jun 18, 2024 at 7:48 AM Andrey M. Borodin <[email protected]>\nwrote:\n\n>\n>\n> > On 19 Mar 2024, at 13:28, Peter Eisentraut <[email protected]> wrote:\n> >\n> > I feel that we don't actually have any information about this\n> portability concern. Does anyone know what precision we can expect from\n> gettimeofday()? Can we expect the full microsecond precision usually?\n>\n> At PGConf.dev Hannu Krossing draw attention to pg_test_timing module. 
I’ve\n> tried this module(slightly modified to measure nanoseconds) on some\n> systems, and everywhere I found ~100ns resolution (95% of ticks fall into\n> 64ns and 128ns buckets).\n>\n> I’ll add cc Hannu, and also pg_test_timing module authors Ants ang Greg.\n> Maybe they can add some context.\n>\n>\n> Best regards, Andrey Borodin.\n\nI plan to send patch to pg_test_timing in a day or two the underlying time precision on modern linux seems to be2 ns for some Intel CPUs10 ns for Zen440 ns for ARM (Ampere)---Hannu| On Tue, Jun 18, 2024 at 7:48 AM Andrey M. Borodin <[email protected]> wrote:\n\n> On 19 Mar 2024, at 13:28, Peter Eisentraut <[email protected]> wrote:\n> \n> I feel that we don't actually have any information about this portability concern.  Does anyone know what precision we can expect from gettimeofday()?  Can we expect the full microsecond precision usually?\n\nAt PGConf.dev Hannu Krossing draw attention to pg_test_timing module. I’ve tried this module(slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into 64ns and 128ns buckets).\n\nI’ll add cc Hannu, and also pg_test_timing module authors Ants ang Greg. Maybe they can add some context.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Tue, 18 Jun 2024 17:08:57 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Here is the output of nanosecond-precision pg_test_timing on M1 Macbook Air\n\n/work/postgres/src/bin/pg_test_timing % ./pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 21.54 ns\nHistogram of timing durations:\n <= ns % of total running % count\n 0 49.1655 49.1655 68481688\n 1 0.0000 49.1655 0\n 3 0.0000 49.1655 0\n 7 0.0000 49.1655 0\n 15 0.0000 49.1655 0\n 31 0.0000 49.1655 0\n 63 50.6890 99.8545 70603742\n 127 0.1432 99.9976 199411\n 255 0.0015 99.9991 2065\n 511 0.0001 99.9992 98\n 1023 0.0001 99.9993 140\n 2047 0.0002 99.9995 284\n 4095 0.0000 99.9995 50\n 8191 0.0000 99.9996 65\n 16383 0.0002 99.9997 240\n 32767 0.0001 99.9998 128\n 65535 0.0001 99.9999 97\n 131071 0.0000 99.9999 58\n 262143 0.0000 100.0000 44\n 524287 0.0000 100.0000 22\n 1048575 0.0000 100.0000 7\n 2097151 0.0000 100.0000 2\nFirst 128 exact nanoseconds:\n 0 49.1655 49.1655 68481688\n 41 16.8964 66.0619 23534708\n 42 33.7926 99.8545 47069034\n 83 0.0835 99.9380 116362\n 84 0.0419 99.9799 58349\n 125 0.0177 99.9976 24700\n\nAs you see the 40 ns internal tick gets somehow blurred into\nnot-quite-40-ns timing step\n\nOn Linux / ARM Ampere where __builtin_readcyclecounter() works (it\ncompiles but crashes on Mac OS M1, I have not yet tested on Linux M1)\nthe tick is exactly 40 ns and I'd expect it to be the same on M1.\n\n\nOn Tue, Jun 18, 2024 at 5:08 PM Hannu Krosing <[email protected]> wrote:\n>\n> I plan to send patch to pg_test_timing in a day or two\n>\n> the underlying time precision on modern linux seems to be\n>\n> 2 ns for some Intel CPUs\n> 10 ns for Zen4\n> 40 ns for ARM (Ampere)\n>\n> ---\n> Hannu\n>\n>\n>\n> |\n>\n>\n>\n>\n> On Tue, Jun 18, 2024 at 7:48 AM Andrey M. Borodin <[email protected]> wrote:\n>>\n>>\n>>\n>> > On 19 Mar 2024, at 13:28, Peter Eisentraut <[email protected]> wrote:\n>> >\n>> > I feel that we don't actually have any information about this portability concern. Does anyone know what precision we can expect from gettimeofday()? 
Can we expect the full microsecond precision usually?\n>>\n>> At PGConf.dev Hannu Krossing draw attention to pg_test_timing module. I’ve tried this module(slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into 64ns and 128ns buckets).\n>>\n>> I’ll add cc Hannu, and also pg_test_timing module authors Ants ang Greg. Maybe they can add some context.\n>>\n>>\n>> Best regards, Andrey Borodin.\n\n\n", "msg_date": "Wed, 19 Jun 2024 12:55:10 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "On 18.06.24 07:47, Andrey M. Borodin wrote:\n> \n> \n>> On 19 Mar 2024, at 13:28, Peter Eisentraut <[email protected]> wrote:\n>>\n>> I feel that we don't actually have any information about this portability concern. Does anyone know what precision we can expect from gettimeofday()? Can we expect the full microsecond precision usually?\n> \n> At PGConf.dev Hannu Krossing draw attention to pg_test_timing module. I’ve tried this module(slightly modified to measure nanoseconds) on some systems, and everywhere I found ~100ns resolution (95% of ticks fall into 64ns and 128ns buckets).\n\nAFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't \nreally address the original question.\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:44:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't \n> really address the original question.\n\nIt's not exactly hard to make it do so (see attached).\n\nI tried this on several different machines, and my conclusion is that\ngettimeofday() reports full microsecond precision on any platform\nanybody is likely to be running PG on today. 
Even my one surviving\npet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like\nthis:\n\n$ ./pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 901.41 ns\nHistogram of timing durations:\n < us % of total count\n 1 10.46074 348148\n 2 89.51495 2979181\n 4 0.00574 191\n 8 0.00430 143\n 16 0.00691 230\n 32 0.00376 125\n 64 0.00012 4\n 128 0.00303 101\n 256 0.00027 9\n 512 0.00009 3\n 1024 0.00009 3\n\nI also modified pg_test_timing to measure nanoseconds not\nmicroseconds (second patch attached), and got this:\n\n$ ./pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 805.50 ns\nHistogram of timing durations:\n < ns % of total count\n 1 19.84234 739008\n 2 0.00000 0\n 4 0.00000 0\n 8 0.00000 0\n 16 0.00000 0\n 32 0.00000 0\n 64 0.00000 0\n 128 0.00000 0\n 256 0.00000 0\n 512 0.00000 0\n 1024 80.14013 2984739\n 2048 0.00078 29\n 4096 0.00658 245\n 8192 0.00290 108\n 16384 0.00252 94\n 32768 0.00250 93\n 65536 0.00016 6\n131072 0.00185 69\n262144 0.00008 3\n524288 0.00008 3\n1048576 0.00008 3\n\nconfirming that when the result changes it generally does so by 1usec.\n\nApplying just the second patch, I find that clock_gettime on this\nold hardware seems to be limited to 1us resolution, but on my more\nmodern machines (mac M1, x86_64) it can tick at 40ns or less.\nEven a raspberry pi 4 shows\n\n$ ./pg_test_timing \nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 69.12 ns\nHistogram of timing durations:\n < ns % of total count\n 1 0.00000 0\n 2 0.00000 0\n 4 0.00000 0\n 8 0.00000 0\n 16 0.00000 0\n 32 0.00000 0\n 64 37.59583 16317040\n 128 62.38568 27076131\n 256 0.01674 7265\n 512 0.00002 8\n 1024 0.00000 0\n 2048 0.00000 0\n 4096 0.00153 662\n 8192 0.00019 83\n 16384 0.00001 3\n 32768 0.00001 5\n\nsuggesting that the clock_gettime resolution is better than 64 ns.\n\nSo I concur with Hannu that it's time to adjust pg_test_timing to\nresolve nanoseconds not microseconds. I gather he's created a\npatch that does more than mine below, so I'll wait for that.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2024 12:36:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "(resending to list and other CC:s )\n\nHi Tom\n\nThis is my current patch which also adds running % and optionally uses\nfaster way to count leading zeros, though I did not see a change from\nthat.\n\nIt also bucketizes first 128 ns to get better overview of exact behaviour.\n\nWe may want to put reporting this behind a flag\n\n---\nHannu\n\nOn Wed, Jun 19, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n>\n> Peter Eisentraut <[email protected]> writes:\n> > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't\n> > really address the original question.\n>\n> It's not exactly hard to make it do so (see attached).\n>\n> I tried this on several different machines, and my conclusion is that\n> gettimeofday() reports full microsecond precision on any platform\n> anybody is likely to be running PG on today. 
Even my one surviving\n> pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like\n> this:\n>\n> $ ./pg_test_timing\n> Testing timing overhead for 3 seconds.\n> Per loop time including overhead: 901.41 ns\n> Histogram of timing durations:\n> < us % of total count\n> 1 10.46074 348148\n> 2 89.51495 2979181\n> 4 0.00574 191\n> 8 0.00430 143\n> 16 0.00691 230\n> 32 0.00376 125\n> 64 0.00012 4\n> 128 0.00303 101\n> 256 0.00027 9\n> 512 0.00009 3\n> 1024 0.00009 3\n>\n> I also modified pg_test_timing to measure nanoseconds not\n> microseconds (second patch attached), and got this:\n>\n> $ ./pg_test_timing\n> Testing timing overhead for 3 seconds.\n> Per loop time including overhead: 805.50 ns\n> Histogram of timing durations:\n> < ns % of total count\n> 1 19.84234 739008\n> 2 0.00000 0\n> 4 0.00000 0\n> 8 0.00000 0\n> 16 0.00000 0\n> 32 0.00000 0\n> 64 0.00000 0\n> 128 0.00000 0\n> 256 0.00000 0\n> 512 0.00000 0\n> 1024 80.14013 2984739\n> 2048 0.00078 29\n> 4096 0.00658 245\n> 8192 0.00290 108\n> 16384 0.00252 94\n> 32768 0.00250 93\n> 65536 0.00016 6\n> 131072 0.00185 69\n> 262144 0.00008 3\n> 524288 0.00008 3\n> 1048576 0.00008 3\n>\n> confirming that when the result changes it generally does so by 1usec.\n>\n> Applying just the second patch, I find that clock_gettime on this\n> old hardware seems to be limited to 1us resolution, but on my more\n> modern machines (mac M1, x86_64) it can tick at 40ns or less.\n> Even a raspberry pi 4 shows\n>\n> $ ./pg_test_timing\n> Testing timing overhead for 3 seconds.\n> Per loop time including overhead: 69.12 ns\n> Histogram of timing durations:\n> < ns % of total count\n> 1 0.00000 0\n> 2 0.00000 0\n> 4 0.00000 0\n> 8 0.00000 0\n> 16 0.00000 0\n> 32 0.00000 0\n> 64 37.59583 16317040\n> 128 62.38568 27076131\n> 256 0.01674 7265\n> 512 0.00002 8\n> 1024 0.00000 0\n> 2048 0.00000 0\n> 4096 0.00153 662\n> 8192 0.00019 83\n> 16384 0.00001 3\n> 32768 0.00001 5\n>\n> suggesting that the clock_gettime resolution is better than 64 ns.\n>\n> So I concur with Hannu that it's time to adjust pg_test_timing to\n> resolve nanoseconds not microseconds. I gather he's created a\n> patch that does more than mine below, so I'll wait for that.\n>\n> regards, tom lane\n>", "msg_date": "Thu, 20 Jun 2024 12:41:54 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "I also have a variant that uses the low-level CPU cycle counter\ndirectly (attached)\n\nIt currently only works on clang, as it is done using\n__builtin_readcyclecounter() in order to support both x64 and ARM.\n\nThis one is there to understand the overhead of the calculations when\ngoing from cycle counter to POSIX time struct\n\nThis works OK with Clang, but we should probably not integrate this\ndirectly into the code as it has some interesting corner cases. 
For\nexample Apple's clang does compile __builtin_readcyclecounter() but\ncrashes with unknown instruction when trying to run it.\n\nTherefore I have not integrated it into Makefile so if you want to use\nit, just copy it into src/bin/pg_test_timing and run\n\ncd src/bin/pgtest_timing\nmv pg_test_timing.c pg_test_timing.c.backup\ncp pg_test_cyclecounter.c pg_test_timing.c\nmake\nmv pg_test_timing pg_test_cyclecounter\nmv pg_test_timing.c.backup pg_test_timing.c\n\nIt gives output like\n\nTesting timing overhead for 3 seconds.\nTotal 25000001 ticks in 1000000073 ns, 24999999.175000 ticks per ns\nThis CPU is running at 24999999 ticks / second, will run test for 74999997 ticks\nloop_count 287130958Per loop time including overhead: 10.45 ns, min: 0\nticks (0.0 ns), same: 212854591\nTotal ticks in: 74999997, in: 3000000541 nr\nLog2 histogram of timing durations:\n < ticks ( < ns) % of total running % count\n 1 ( 40.0) 74.1315 74.1315 212854591\n 2 ( 80.0) 25.8655 99.9970 74267876\n 4 ( 160.0) 0.0000 99.9970 7\n 8 ( 320.0) 0.0000 99.9970 3\n 16 ( 640.0) 0.0000 99.9970 1\n 32 ( 1280.0) 0.0000 99.9971 27\n 64 ( 2560.0) 0.0012 99.9983 3439\n 128 ( 5120.0) 0.0016 99.9999 4683\n 256 ( 10240.0) 0.0001 100.0000 265\n 512 ( 20480.0) 0.0000 100.0000 37\n 1024 ( 40960.0) 0.0000 100.0000 23\n 2048 ( 81920.0) 0.0000 100.0000 6\nFirst 64 ticks --\n 0 ( 0.0) 74.1315 74.1315 212854591\n 1 ( 40.0) 25.8655 99.9970 74267876\n 2 ( 80.0) 0.0000 99.9970 2\n 3 ( 120.0) 0.0000 99.9970 5\n 4 ( 160.0) 0.0000 99.9970 2\n 6 ( 240.0) 0.0000 99.9983 1\n 13 ( 520.0) 0.0000 100.0000 1\n...\n 59 ( 2360.0) 0.0000 100.0000 140\n 60 ( 2400.0) 0.0001 100.0000 210\n 61 ( 2440.0) 0.0002 100.0000 497\n 62 ( 2480.0) 0.0002 100.0000 524\n 63 ( 2520.0) 0.0001 100.0000 391\n\n\nIf you run on some interesting hardware, please share the results.\n\nIf we her enough I will put them together in a spreadsheet and share\n\nI also attach my lightning talk slides here\n\n---\nHannu\n\nOn Thu, Jun 20, 2024 at 12:41 PM Hannu Krosing <[email protected]> wrote:\n>\n> (resending to list and other CC:s )\n>\n> Hi Tom\n>\n> This is my current patch which also adds running % and optionally uses\n> faster way to count leading zeros, though I did not see a change from\n> that.\n>\n> It also bucketizes first 128 ns to get better overview of exact behaviour.\n>\n> We may want to put reporting this behind a flag\n>\n> ---\n> Hannu\n>\n> On Wed, Jun 19, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n> >\n> > Peter Eisentraut <[email protected]> writes:\n> > > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't\n> > > really address the original question.\n> >\n> > It's not exactly hard to make it do so (see attached).\n> >\n> > I tried this on several different machines, and my conclusion is that\n> > gettimeofday() reports full microsecond precision on any platform\n> > anybody is likely to be running PG on today. 
Even my one surviving\n> > pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like\n> > this:\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 901.41 ns\n> > Histogram of timing durations:\n> > < us % of total count\n> > 1 10.46074 348148\n> > 2 89.51495 2979181\n> > 4 0.00574 191\n> > 8 0.00430 143\n> > 16 0.00691 230\n> > 32 0.00376 125\n> > 64 0.00012 4\n> > 128 0.00303 101\n> > 256 0.00027 9\n> > 512 0.00009 3\n> > 1024 0.00009 3\n> >\n> > I also modified pg_test_timing to measure nanoseconds not\n> > microseconds (second patch attached), and got this:\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 805.50 ns\n> > Histogram of timing durations:\n> > < ns % of total count\n> > 1 19.84234 739008\n> > 2 0.00000 0\n> > 4 0.00000 0\n> > 8 0.00000 0\n> > 16 0.00000 0\n> > 32 0.00000 0\n> > 64 0.00000 0\n> > 128 0.00000 0\n> > 256 0.00000 0\n> > 512 0.00000 0\n> > 1024 80.14013 2984739\n> > 2048 0.00078 29\n> > 4096 0.00658 245\n> > 8192 0.00290 108\n> > 16384 0.00252 94\n> > 32768 0.00250 93\n> > 65536 0.00016 6\n> > 131072 0.00185 69\n> > 262144 0.00008 3\n> > 524288 0.00008 3\n> > 1048576 0.00008 3\n> >\n> > confirming that when the result changes it generally does so by 1usec.\n> >\n> > Applying just the second patch, I find that clock_gettime on this\n> > old hardware seems to be limited to 1us resolution, but on my more\n> > modern machines (mac M1, x86_64) it can tick at 40ns or less.\n> > Even a raspberry pi 4 shows\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 69.12 ns\n> > Histogram of timing durations:\n> > < ns % of total count\n> > 1 0.00000 0\n> > 2 0.00000 0\n> > 4 0.00000 0\n> > 8 0.00000 0\n> > 16 0.00000 0\n> > 32 0.00000 0\n> > 64 37.59583 16317040\n> > 128 62.38568 27076131\n> > 256 0.01674 7265\n> > 512 0.00002 8\n> > 1024 0.00000 0\n> > 2048 0.00000 0\n> > 4096 0.00153 662\n> > 8192 0.00019 83\n> > 16384 0.00001 3\n> > 32768 0.00001 5\n> >\n> > suggesting that the clock_gettime resolution is better than 64 ns.\n> >\n> > So I concur with Hannu that it's time to adjust pg_test_timing to\n> > resolve nanoseconds not microseconds. I gather he's created a\n> > patch that does more than mine below, so I'll wait for that.\n> >\n> > regards, tom lane\n> >", "msg_date": "Thu, 20 Jun 2024 12:54:55 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" 
}, { "msg_contents": "Another thing I changed in reporting was to report <= ns instead of < ns\n\nThis was inspired by not wanting to report \"zero ns\" as \"< 1 ns\" and\neasiest was to change them all to <=\n\nOn Thu, Jun 20, 2024 at 12:41 PM Hannu Krosing <[email protected]> wrote:\n>\n> (resending to list and other CC:s )\n>\n> Hi Tom\n>\n> This is my current patch which also adds running % and optionally uses\n> faster way to count leading zeros, though I did not see a change from\n> that.\n>\n> It also bucketizes first 128 ns to get better overview of exact behaviour.\n>\n> We may want to put reporting this behind a flag\n>\n> ---\n> Hannu\n>\n> On Wed, Jun 19, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n> >\n> > Peter Eisentraut <[email protected]> writes:\n> > > AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't\n> > > really address the original question.\n> >\n> > It's not exactly hard to make it do so (see attached).\n> >\n> > I tried this on several different machines, and my conclusion is that\n> > gettimeofday() reports full microsecond precision on any platform\n> > anybody is likely to be running PG on today. Even my one surviving\n> > pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like\n> > this:\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 901.41 ns\n> > Histogram of timing durations:\n> > < us % of total count\n> > 1 10.46074 348148\n> > 2 89.51495 2979181\n> > 4 0.00574 191\n> > 8 0.00430 143\n> > 16 0.00691 230\n> > 32 0.00376 125\n> > 64 0.00012 4\n> > 128 0.00303 101\n> > 256 0.00027 9\n> > 512 0.00009 3\n> > 1024 0.00009 3\n> >\n> > I also modified pg_test_timing to measure nanoseconds not\n> > microseconds (second patch attached), and got this:\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 805.50 ns\n> > Histogram of timing durations:\n> > < ns % of total count\n> > 1 19.84234 739008\n> > 2 0.00000 0\n> > 4 0.00000 0\n> > 8 0.00000 0\n> > 16 0.00000 0\n> > 32 0.00000 0\n> > 64 0.00000 0\n> > 128 0.00000 0\n> > 256 0.00000 0\n> > 512 0.00000 0\n> > 1024 80.14013 2984739\n> > 2048 0.00078 29\n> > 4096 0.00658 245\n> > 8192 0.00290 108\n> > 16384 0.00252 94\n> > 32768 0.00250 93\n> > 65536 0.00016 6\n> > 131072 0.00185 69\n> > 262144 0.00008 3\n> > 524288 0.00008 3\n> > 1048576 0.00008 3\n> >\n> > confirming that when the result changes it generally does so by 1usec.\n> >\n> > Applying just the second patch, I find that clock_gettime on this\n> > old hardware seems to be limited to 1us resolution, but on my more\n> > modern machines (mac M1, x86_64) it can tick at 40ns or less.\n> > Even a raspberry pi 4 shows\n> >\n> > $ ./pg_test_timing\n> > Testing timing overhead for 3 seconds.\n> > Per loop time including overhead: 69.12 ns\n> > Histogram of timing durations:\n> > < ns % of total count\n> > 1 0.00000 0\n> > 2 0.00000 0\n> > 4 0.00000 0\n> > 8 0.00000 0\n> > 16 0.00000 0\n> > 32 0.00000 0\n> > 64 37.59583 16317040\n> > 128 62.38568 27076131\n> > 256 0.01674 7265\n> > 512 0.00002 8\n> > 1024 0.00000 0\n> > 2048 0.00000 0\n> > 4096 0.00153 662\n> > 8192 0.00019 83\n> > 16384 0.00001 3\n> > 32768 0.00001 5\n> >\n> > suggesting that the clock_gettime resolution is better than 64 ns.\n> >\n> > So I concur with Hannu that it's time to adjust pg_test_timing to\n> > resolve nanoseconds not microseconds. 
I gather he's created a\n> > patch that does more than mine below, so I'll wait for that.\n> >\n> > regards, tom lane\n> >\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:08:57 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> This is my current patch which also adds running % and optionally uses\n> faster way to count leading zeros, though I did not see a change from\n> that.\n\nI've not read the patch yet, but I did create a CF entry [1]\nto get some CI cycles on this. The cfbot complains [2] about\n\n[19:24:31.951] pg_test_timing.c: In function ‘output’:\n[19:24:31.951] pg_test_timing.c:229:11: error: format ‘%ld’ expects argument of type ‘long int’, but argument 3 has type ‘int64’ {aka ‘long long int’} [-Werror=format=]\n[19:24:31.951] 229 | printf(\"%*ld %*.4f %*.4f %*lld\\n\",\n[19:24:31.951] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[19:24:31.951] 230 | Max(8, len1), i,\n[19:24:31.951] | ~\n[19:24:31.951] | |\n[19:24:31.951] | int64 {aka long long int}\n\nwhich seems a bit confused, but anyway you cannot assume that int64 is\na match for \"%ld\", or \"%lld\" either. What we generally do for this\nelsewhere is to explicitly cast printf arguments to long long int.\n\nAlso there's this on Windows:\n\n[19:23:48.231] ../src/bin/pg_test_timing/pg_test_timing.c(162): warning C4067: unexpected tokens following preprocessor directive - expected a newline\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/48/5066/\n[2] http://cfbot.cputube.org/highlights/all.html#5066\n\n\n", "msg_date": "Fri, 21 Jun 2024 16:51:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "I wrote:\n> Hannu Krosing <[email protected]> writes:\n>> This is my current patch which also adds running % and optionally uses\n>> faster way to count leading zeros, though I did not see a change from\n>> that.\n\n> I've not read the patch yet, but I did create a CF entry [1]\n> to get some CI cycles on this. The cfbot complains [2] about\n> [ a couple of things ]\n\nHere's a cleaned-up code patch addressing the cfbot complaints\nand making the output logic a bit neater.\n\nI think this is committable code-wise, but the documentation needs\nwork, if not indeed a complete rewrite. The examples are now\nhorribly out of date, and it seems that the \"Clock Hardware and Timing\nAccuracy\" section is quite obsolete as well, since it suggests that\nthe best available accuracy is ~100ns.\n\nTBH I'm inclined to rip most of the OS-specific and hardware-specific\ninformation out of there, as it's not something we're likely to\nmaintain well even if we got it right for current reality.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Jul 2024 12:55:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "BTW, getting back to the original point of the thread: I duplicated\nHannu's result showing that on Apple M1 the clock tick seems to be\nabout 40ns. 
But look at what I got with the v2 patch on my main\nworkstation (full output attached):\n\n$ ./pg_test_timing\n...\nPer loop time including overhead: 16.60 ns\n...\nTiming durations less than 128 ns:\n ns % of total running % count\n 15 3.2738 3.2738 5914914\n 16 49.0772 52.3510 88668783\n 17 36.4662 88.8172 65884173\n 18 9.5639 98.3810 17279249\n 19 1.5746 99.9556 2844873\n 20 0.0416 99.9972 75125\n 21 0.0004 99.9976 757\n...\n\nIt sure looks like this is exact-to-the-nanosecond results,\nsince the modal values match the overall per-loop timing,\nand there are no zero measurements.\n\nThis is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.\nNot exactly top-of-the-line stuff.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Jul 2024 13:20:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hi Tom,\n\nOn various Intel CPUs I got either steps close to single nanosecond or\nsometimes a little more on older ones\n\nOne specific CPU moved in in 2 tick increments while the ration to ns\nwas 2,1/1 or 2100 ticks per microsecond.\n\nOn Zen4 AMD the step seems to be 10 ns, even though the tick-to-ns\nratio is 2.6 / 1 , so reading ticks directly gives 26, 54, ...\n\nAlso, reading directly in ticks on M1 gave \"loop time including\noverhead: 2.13 ns\" (attached code works on Clang, not sure about GCC)\n\n\nI'll also take a look at the docs and try to propose something\n\nDo we also need tests for this one ?\n\n----\nHannu\n\n\n\nOn Tue, Jul 2, 2024 at 7:20 PM Tom Lane <[email protected]> wrote:\n>\n> BTW, getting back to the original point of the thread: I duplicated\n> Hannu's result showing that on Apple M1 the clock tick seems to be\n> about 40ns. But look at what I got with the v2 patch on my main\n> workstation (full output attached):\n>\n> $ ./pg_test_timing\n> ...\n> Per loop time including overhead: 16.60 ns\n> ...\n> Timing durations less than 128 ns:\n> ns % of total running % count\n> 15 3.2738 3.2738 5914914\n> 16 49.0772 52.3510 88668783\n> 17 36.4662 88.8172 65884173\n> 18 9.5639 98.3810 17279249\n> 19 1.5746 99.9556 2844873\n> 20 0.0416 99.9972 75125\n> 21 0.0004 99.9976 757\n> ...\n>\n> It sure looks like this is exact-to-the-nanosecond results,\n> since the modal values match the overall per-loop timing,\n> and there are no zero measurements.\n>\n> This is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.\n> Not exactly top-of-the-line stuff.\n>\n> regards, tom lane\n>", "msg_date": "Tue, 2 Jul 2024 19:31:21 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" 
}, { "msg_contents": "Also the step on M1 is slightly above 40ns (41.7ns) , but exactly 40\nns on Ampere Altra.\n\n## M1 on MacBooc Air\n\nTesting timing overhead for 3 seconds.\nTotal 24000177 ticks in 1000000056 ns, 24000175.655990 ticks per ns\nThis CPU is running at 24000175 ticks / second, will run test for 72000525 ticks\nloop_count 1407639953Per loop time including overhead: 2.13 ns, min: 0\nticks (0.0 ns), same: 1335774969\nTotal ticks in: 72000525, in: 3000002260 nr\nLog2(x+1) histogram of timing durations:\n<= ticks ( <= ns) % of total running % count\n0 ( 41.7) 94.8946 94.8946 1335774969\n2 ( 83.3) 5.1051 99.9997 71861227\n6 ( 166.7) 0.0001 99.9998 757\n14 ( 333.3) 0.0000 99.9998 0\n30 ( 666.7) 0.0002 99.9999 2193\n62 ( 1333.3) 0.0000 100.0000 274\n126 ( 2666.6) 0.0000 100.0000 446\n254 ( 5333.3) 0.0000 100.0000 87\nFirst 64 ticks --\n0 ( 0.0) 94.8946 94.8946 1335774969\n1 ( 41.7) 5.1032 99.9997 71834980\n2 ( 83.3) 0.0019 99.9998 26247\n3 ( 125.0) 0.0001 99.9998 757\n15 ( 625.0) 0.0000 100.0000 1\n\n## Ampere Altra\n\nTesting timing overhead for 3 seconds.\nTotal 25000002 ticks in 1000000074 ns, 25000000.150000 ticks per ns\nThis CPU is running at 25000000 ticks / second, will run test for 75000000 ticks\nloop_count 291630863Per loop time including overhead: 10.29 ns, min: 0\nticks (0.0 ns), same: 217288944\nTotal ticks in: 75000000, in: 3000000542 nr\nLog2(x+1) histogram of timing durations:\n<= ticks ( <= ns) % of total running % count\n0 ( 40.0) 74.5082 74.5082 217288944\n2 ( 80.0) 25.4886 99.9968 74332703\n6 ( 160.0) 0.0000 99.9968 5\n14 ( 320.0) 0.0000 99.9968 0\n30 ( 640.0) 0.0000 99.9968 31\n62 ( 1280.0) 0.0011 99.9979 3123\n126 ( 2560.0) 0.0020 99.9999 5848\n254 ( 5120.0) 0.0001 100.0000 149\n510 ( 10240.0) 0.0000 100.0000 38\n1022 ( 20480.0) 0.0000 100.0000 21\n2046 ( 40960.0) 0.0000 100.0000 1\nFirst 64 ticks --\n0 ( 0.0) 74.5082 74.5082 217288944\n1 ( 40.0) 25.4886 99.9968 74332699\n2 ( 80.0) 0.0000 99.9968 4\n3 ( 120.0) 0.0000 99.9968 1\n4 ( 160.0) 0.0000 99.9968 3\n\nOn Tue, Jul 2, 2024 at 7:31 PM Hannu Krosing <[email protected]> wrote:\n>\n> Hi Tom,\n>\n> On various Intel CPUs I got either steps close to single nanosecond or\n> sometimes a little more on older ones\n>\n> One specific CPU moved in in 2 tick increments while the ration to ns\n> was 2,1/1 or 2100 ticks per microsecond.\n>\n> On Zen4 AMD the step seems to be 10 ns, even though the tick-to-ns\n> ratio is 2.6 / 1 , so reading ticks directly gives 26, 54, ...\n>\n> Also, reading directly in ticks on M1 gave \"loop time including\n> overhead: 2.13 ns\" (attached code works on Clang, not sure about GCC)\n>\n>\n> I'll also take a look at the docs and try to propose something\n>\n> Do we also need tests for this one ?\n>\n> ----\n> Hannu\n>\n>\n>\n> On Tue, Jul 2, 2024 at 7:20 PM Tom Lane <[email protected]> wrote:\n> >\n> > BTW, getting back to the original point of the thread: I duplicated\n> > Hannu's result showing that on Apple M1 the clock tick seems to be\n> > about 40ns. 
But look at what I got with the v2 patch on my main\n> > workstation (full output attached):\n> >\n> > $ ./pg_test_timing\n> > ...\n> > Per loop time including overhead: 16.60 ns\n> > ...\n> > Timing durations less than 128 ns:\n> > ns % of total running % count\n> > 15 3.2738 3.2738 5914914\n> > 16 49.0772 52.3510 88668783\n> > 17 36.4662 88.8172 65884173\n> > 18 9.5639 98.3810 17279249\n> > 19 1.5746 99.9556 2844873\n> > 20 0.0416 99.9972 75125\n> > 21 0.0004 99.9976 757\n> > ...\n> >\n> > It sure looks like this is exact-to-the-nanosecond results,\n> > since the modal values match the overall per-loop timing,\n> > and there are no zero measurements.\n> >\n> > This is a Dell tower from 2021, running RHEL8 on an Intel Xeon W-2245.\n> > Not exactly top-of-the-line stuff.\n> >\n> > regards, tom lane\n> >\n\n\n", "msg_date": "Tue, 2 Jul 2024 19:37:35 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Also, reading directly in ticks on M1 gave \"loop time including\n> overhead: 2.13 ns\" (attached code works on Clang, not sure about GCC)\n\nI don't think we should mess with that, given the portability\nproblems you mentioned upthread.\n\n> I'll also take a look at the docs and try to propose something\n\nOK.\n\n> Do we also need tests for this one ?\n\nYeah, it was annoying me that we are eating the overhead of a TAP test\nfor pg_test_timing and yet it covers barely a third of the code [1].\nWe obviously can't expect any specific numbers out of a test, but I\nwas contemplating running \"pg_test_timing -d 1\" and just checking for\n(a) zero exit code and (b) the expected header lines in the output.\n\n\t\t\tregards, tom lane\n\n[1] https://coverage.postgresql.org/src/bin/pg_test_timing/pg_test_timing.c.gcov.html\n\n\n", "msg_date": "Tue, 02 Jul 2024 13:50:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "On Tue, Jul 2, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n>\n\n> > Do we also need tests for this one ?\n>\n> Yeah, it was annoying me that we are eating the overhead of a TAP test\n> for pg_test_timing and yet it covers barely a third of the code [1].\n> We obviously can't expect any specific numbers out of a test, but I\n> was contemplating running \"pg_test_timing -d 1\" and just checking for\n> (a) zero exit code and (b) the expected header lines in the output.\n\nAt least \"does it run\" tests should be there -\n\nFor example with the current toolchain on MacOS I was able to compile\n__builtin_readcyclecounter(); but it crashed when the result was\nexecuted.\n\nThe same code compiled *and run* fine on same laptop with Ubuntu 24.04\n\nWe might also want to have some testing about available speedups from\npg_bitmanip.h being used, but that could be tricky to test in an\nuniversal way.\n\n--\nHannu\n\n\n", "msg_date": "Tue, 2 Jul 2024 20:15:59 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" 
}, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> At least \"does it run\" tests should be there -\n\n> For example with the current toolchain on MacOS I was able to compile\n> __builtin_readcyclecounter(); but it crashed when the result was\n> executed.\n\n> The same code compiled *and run* fine on same laptop with Ubuntu 24.04\n\n> We might also want to have some testing about available speedups from\n> pg_bitmanip.h being used, but that could be tricky to test in an\n> universal way.\n\nKeep in mind that pg_test_timing is not just some random exercise in a\nvacuum. The point of it IMV is to provide data about the performance\none can expect from the instr_time.h infrastructure, which bears on\nwhat kind of resolution EXPLAIN ANALYZE and other features have. So\nif we did want to depend on read_tsc() or __builtin_readcyclecounter()\nor what-have-you, the way to go about it would be to change\ninstr_time.h to compile code that uses that. I would consider that\nto be a separate patch from what we're doing to pg_test_timing here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 14:33:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "\n\n> On 2 Jul 2024, at 22:20, Tom Lane <[email protected]> wrote:\n> \n> It sure looks like this is exact-to-the-nanosecond results,\n> since the modal values match the overall per-loop timing,\n> and there are no zero measurements.\n\nThat’s a very interesting result, from the UUID POV!\nIf time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.\nThough time is advancing in a very small steps… RFC assumes that we use microseconds, I’m not sure it’s ok to use 10 more bits for nanoseconds…\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Jul 2024 10:43:19 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "\"Andrey M. Borodin\" <[email protected]> writes:\n> That’s a very interesting result, from the UUID POV!\n> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.\n> Though time is advancing in a very small steps… RFC assumes that we use microseconds, I’m not sure it’s ok to use 10 more bits for nanoseconds…\n\nKeep in mind also that instr_time.h does not pretend to provide\nreal time --- the clock origin is arbitrary. But these results\ndo give me additional confidence that gettimeofday() should be\ngood to the microsecond on any remotely-modern platform.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2024 04:03:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hi,\n\n> That’s a very interesting result, from the UUID POV!\n> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.\n> Though time is advancing in a very small steps… RFC assumes that we use microseconds, I’m not sure it’s ok to use 10 more bits for nanoseconds…\n\nA counter is mandatory since someone can for instance change the\nsystem's time while the process is generating UUIDs. 
You can't\ngenerally assume that local time of the system is monotonic.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 3 Jul 2024 11:48:44 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "On Wed, Jul 3, 2024 at 10:03 AM Tom Lane <[email protected]> wrote:\nKeep in mind also that instr_time.h does not pretend to provide\n> real time --- the clock origin is arbitrary. But these results\n> do give me additional confidence that gettimeofday() should be\n> good to the microsecond on any remotely-modern platform.\n\nThe only platform I have found where the resolution is only a\nmicrosecond is RISC-V ( https://www.sifive.com/boards/hifive-unmatched\n)\n\nEverywhere else it seems to be much more precise.\n\n--\nHannu\n\n\n", "msg_date": "Wed, 3 Jul 2024 12:31:27 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "\n\n> On 3 Jul 2024, at 13:48, Aleksander Alekseev <[email protected]> wrote:\n> \n> Hi,\n> \n>> That’s a very interesting result, from the UUID POV!\n>> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.\n>> Though time is advancing in a very small steps… RFC assumes that we use microseconds, I’m not sure it’s ok to use 10 more bits for nanoseconds…\n> \n> A counter is mandatory since someone can for instance change the\n> system's time while the process is generating UUIDs. You can't\n> generally assume that local time of the system is monotonic.\n\nAFAIR according to RFC when time jumps backwards, we just use time microseconds as a counter. Until time starts to advance again.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Jul 2024 15:38:14 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "We currently do something similar with OIDs where we just keep\ngenerating them and then testing for conflicts.\n\nI don't think this is the best way to do it but it mostly works when\nyou can actually test for uniqueness, like for example in TOAST or\nsystem tables.\n\nNot sure this works even reasonably well for UUIDv7.\n\n--\nHannu\n\nOn Wed, Jul 3, 2024 at 12:38 PM Andrey M. Borodin <[email protected]> wrote:\n>\n>\n>\n> > On 3 Jul 2024, at 13:48, Aleksander Alekseev <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> >> That’s a very interesting result, from the UUID POV!\n> >> If time is almost always advancing, using time readings instead of a counter is very reasonable: we have interprocess monotonicity almost for free.\n> >> Though time is advancing in a very small steps… RFC assumes that we use microseconds, I’m not sure it’s ok to use 10 more bits for nanoseconds…\n> >\n> > A counter is mandatory since someone can for instance change the\n> > system's time while the process is generating UUIDs. You can't\n> > generally assume that local time of the system is monotonic.\n>\n> AFAIR according to RFC when time jumps backwards, we just use time microseconds as a counter. 
Until time starts to advance again.\n>\n>\n> Best regards, Andrey Borodin.\n\n\n", "msg_date": "Wed, 3 Jul 2024 13:29:22 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "\n\n> On 3 Jul 2024, at 16:29, Hannu Krosing <[email protected]> wrote:\n> \n> We currently do something similar with OIDs where we just keep\n> generating them and then testing for conflicts.\n> \n> I don't think this is the best way to do it but it mostly works when\n> you can actually test for uniqueness, like for example in TOAST or\n> system tables.\n> \n> Not sure this works even reasonably well for UUIDv7.\n\nUniqueness is ensured with extra 60+ bits of randomness. Timestamp and counter\\microseconds are there to promote sortability (thus ensuring data locality).\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Jul 2024 16:46:30 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" }, { "msg_contents": "Hi,\n\n> We currently do something similar with OIDs where we just keep\n> generating them and then testing for conflicts.\n>\n> I don't think this is the best way to do it but it mostly works when\n> you can actually test for uniqueness, like for example in TOAST or\n> system tables.\n>\n> Not sure this works even reasonably well for UUIDv7.\n\nUUIDv7 is not guaranteed to be unique. It just does it best to reduce\nthe number of possible conflicts. So I don't think we should worry\nabout it.\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 3 Jul 2024 14:47:26 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is a typical precision of gettimeofday()?" } ]
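The thread above relies on the patched pg_test_timing to measure observable clock granularity. For readers who want a quick look at the same effect without building the patch, here is a minimal standalone sketch; the file name, the use of CLOCK_MONOTONIC, the loop count, and the log2 bucketing are illustrative choices of mine, not anything taken from the patches under discussion. The explicit casts to long long in the printf calls follow the convention Tom mentions for printing int64 values portably.

/*
 * clockprobe.c - standalone sketch, not the pg_test_timing patch.
 * Reads the clock in a tight loop and histograms the deltas between
 * consecutive readings, similar in spirit to pg_test_timing.
 * Build with: cc -O2 -o clockprobe clockprobe.c  (add -lrt on old glibc)
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define LOOPS 5000000

static inline int64_t
read_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
}

int
main(void)
{
	struct timespec res;
	int64_t		prev;
	long long	hist[64] = {0};

	/* Resolution the clock advertises ... */
	clock_getres(CLOCK_MONOTONIC, &res);
	printf("advertised resolution: %lld ns\n",
		   (long long) res.tv_sec * 1000000000 + (long long) res.tv_nsec);

	/* ... versus the steps actually observed between readings. */
	prev = read_ns();
	for (int i = 0; i < LOOPS; i++)
	{
		int64_t		cur = read_ns();
		int64_t		diff = cur - prev;
		int			bucket = 0;

		prev = cur;
		while (diff > 1 && bucket < 62)
		{
			diff >>= 1;
			bucket++;
		}
		hist[bucket]++;
	}

	/* Each label is the bucket's upper bound, as in pg_test_timing. */
	for (int b = 0; b < 63; b++)
		if (hist[b] > 0)
			printf("< %llu ns: %lld\n",
				   (unsigned long long) (1ULL << (b + 1)), hist[b]);
	return 0;
}

Swapping read_ns() for a gettimeofday() call (scaling tv_usec by 1000) mirrors the experiment Tom describes where pg_test_timing itself is pointed at gettimeofday(). Note also that clock_getres() only reports the advertised resolution, which need not match the step size the loop actually observes.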
[ { "msg_contents": "Hi,\n\nI do not understand why hot_updates value is not 0 for pg_database? Given\nthat reloptions is empty for this table that means it has a default value\nof 100%\n\nRegards,\n\nFabrice\n\n SELECT\n relname AS table_name,\n seq_scan AS sequential_scans,\n idx_scan AS index_scans,\n n_tup_ins AS inserts,\n n_tup_upd AS updates,\n n_tup_hot_upd AS hot_updates\nFROM\n pg_stat_all_tables\nORDER BY\n hot_updates DESC;\n\n table_name | sequential_scans | index_scans | inserts |\nupdates | hot_updates\n-------------------------+------------------+-------------+---------+---------+-------------\n pg_database | 5038104 | 187427486 | 3 |\n16 | * 16*\n\n\npostgres [1728332]=# select t.relname as table_name,\n t.reloptions\nfrom pg_class t\n join pg_namespace n on n.oid = t.relnamespace\nwhere t.relname in ('pg_database')\n and n.nspname = 'pg_catalog';\n\n table_name | reloptions\n-------------+------------\n pg_database |\n(1 row)\n\nHi,I do not understand why hot_updates value is not 0 for pg_database? Given that reloptions is empty for this table that means it has a default value of 100%Regards,Fabrice SELECT    relname AS table_name,    seq_scan AS sequential_scans,    idx_scan AS index_scans,    n_tup_ins AS inserts,    n_tup_upd AS updates,    n_tup_hot_upd AS hot_updatesFROM    pg_stat_all_tablesORDER BY    hot_updates DESC;       table_name        | sequential_scans | index_scans | inserts | updates | hot_updates-------------------------+------------------+-------------+---------+---------+------------- pg_database             |          5038104 |   187427486 |       3 |      16 |          16postgres [1728332]=# select t.relname as table_name,       t.reloptionsfrom pg_class t  join pg_namespace n on n.oid = t.relnamespacewhere t.relname in ('pg_database')  and n.nspname = 'pg_catalog'; table_name  | reloptions-------------+------------ pg_database |(1 row)", "msg_date": "Tue, 19 Mar 2024 09:58:20 +0100", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "hot updates and fillfactor" }, { "msg_contents": "Hi Fabrice,\n\n> I do not understand why hot_updates value is not 0 for pg_database? Given that reloptions is empty for this table that means it has a default value of 100%\n\nMaybe I didn't entirely understand your question, but why would you\nassume they are somehow related?\n\nAccording to the documentation [1][2]:\n\npg_class.reloptions:\n Access-method-specific options, as “keyword=value” strings\n\npg_stat_all_tables.n_tup_hot_upd:\n Number of rows HOT updated. These are updates where no successor\nversions are required in indexes.\n\nThe value of n_tup_hot_upd is not zero because there are tuples that\nwere HOT-updated. That's it. 
You can read more about HOT here [3].\n\n[1]: https://www.postgresql.org/docs/current/catalog-pg-class.html\n[2]: https://www.postgresql.org/docs/current/monitoring-stats.html\n[3]: https://www.postgresql.org/docs/current/storage-hot.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 19 Mar 2024 13:17:23 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hot updates and fillfactor" }, { "msg_contents": "Thanks for your explanation and for the links\n\n\nOn Tue, Mar 19, 2024 at 11:17 AM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi Fabrice,\n>\n> > I do not understand why hot_updates value is not 0 for pg_database?\n> Given that reloptions is empty for this table that means it has a default\n> value of 100%\n>\n> Maybe I didn't entirely understand your question, but why would you\n> assume they are somehow related?\n>\n> According to the documentation [1][2]:\n>\n> pg_class.reloptions:\n> Access-method-specific options, as “keyword=value” strings\n>\n> pg_stat_all_tables.n_tup_hot_upd:\n> Number of rows HOT updated. These are updates where no successor\n> versions are required in indexes.\n>\n> The value of n_tup_hot_upd is not zero because there are tuples that\n> were HOT-updated. That's it. You can read more about HOT here [3].\n>\n> [1]: https://www.postgresql.org/docs/current/catalog-pg-class.html\n> [2]: https://www.postgresql.org/docs/current/monitoring-stats.html\n> [3]: https://www.postgresql.org/docs/current/storage-hot.html\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\nThanks for your explanation and for the linksOn Tue, Mar 19, 2024 at 11:17 AM Aleksander Alekseev <[email protected]> wrote:Hi Fabrice,\n\n> I do not understand why hot_updates value is not 0 for pg_database? Given that reloptions is empty for this table that means it has a default value of 100%\n\nMaybe I didn't entirely understand your question, but why would you\nassume they are somehow related?\n\nAccording to the documentation [1][2]:\n\npg_class.reloptions:\n  Access-method-specific options, as “keyword=value” strings\n\npg_stat_all_tables.n_tup_hot_upd:\n  Number of rows HOT updated. These are updates where no successor\nversions are required in indexes.\n\nThe value of n_tup_hot_upd is not zero because there are tuples that\nwere HOT-updated. That's it. You can read more about HOT here [3].\n\n[1]: https://www.postgresql.org/docs/current/catalog-pg-class.html\n[2]: https://www.postgresql.org/docs/current/monitoring-stats.html\n[3]: https://www.postgresql.org/docs/current/storage-hot.html\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 22 Mar 2024 16:05:43 +0100", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hot updates and fillfactor" } ]
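A query along the following lines (my own sketch, not taken from the thread) shows the two pieces of information side by side, which makes the point above concrete: a table can accumulate HOT updates while its reloptions column is empty, because the default fillfactor of 100 does not disable HOT, it only means no free space is deliberately reserved for it.

-- HOT-update ratio next to any explicit storage options, per table.
SELECT c.relname,
       s.n_tup_upd                                                 AS updates,
       s.n_tup_hot_upd                                             AS hot_updates,
       round(100.0 * s.n_tup_hot_upd / NULLIF(s.n_tup_upd, 0), 1)  AS hot_pct,
       c.reloptions
FROM pg_stat_all_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.n_tup_upd > 0
ORDER BY hot_pct DESC NULLS LAST;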
[ { "msg_contents": "While reviewing the patch for SET ACCESS METHOD[1] I noticed that\npg_class.relam is not documented fully for partitioned tables, so I\nproposed the attached. Also, I remove a comment that merely repeats\nwhat was already said a few lines above.\n\nThis is intended for backpatch to 12.\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Tue, 19 Mar 2024 11:34:15 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "minor tweak to catalogs.sgml pg_class.reltablespace" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> While reviewing the patch for SET ACCESS METHOD[1] I noticed that\n> pg_class.relam is not documented fully for partitioned tables, so I\n> proposed the attached.\n\nThe bit about \"(Not meaningful if the relation has no on-disk file.)\"\nis not correct, and now it's adjacent to text that contradicts it.\nMaybe more like\n\n The tablespace in which this relation is stored.\n If zero, the database's default tablespace is implied.\n Not meaningful if the relation has no on-disk file,\n except for partitioned tables, where this is the tablespace\n in which partitions will be created when one is not\n specified in the creation command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 09:34:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor tweak to catalogs.sgml pg_class.reltablespace" }, { "msg_contents": "On 2024-Mar-19, Tom Lane wrote:\n\n> The bit about \"(Not meaningful if the relation has no on-disk file.)\"\n> is not correct, and now it's adjacent to text that contradicts it.\n> Maybe more like\n> \n> The tablespace in which this relation is stored.\n> If zero, the database's default tablespace is implied.\n> Not meaningful if the relation has no on-disk file,\n> except for partitioned tables, where this is the tablespace\n> in which partitions will be created when one is not\n> specified in the creation command.\n\nI like that wording, thanks, pushed like that.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:31:17 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: minor tweak to catalogs.sgml pg_class.reltablespace" } ]
[ { "msg_contents": "Hey,\n\nI'm trying to build a postgres export tool that reads data from table pages\nand exports it to an S3 bucket. I'd like to avoid manual commands like\npg_dump, I need access to the raw data.\n\nCan you please point me to the postgres source header / cc files that\nencapsulate this functionality?\n - List all pages for a table\n- Read a given page for a table\n\nAny pointers to the relevant source code would be appreciated.\n\nThanks,\nSushrut\n\nHey,I'm trying to build a postgres export tool that reads data from table pages and exports it to an S3 bucket. I'd like to avoid manual commands like pg_dump, I need access to the raw data.Can you please point me to the postgres source header / cc files that encapsulate this functionality? - List all pages for a table- Read a given page for a tableAny pointers to the relevant source code would be appreciated.Thanks,Sushrut", "msg_date": "Tue, 19 Mar 2024 19:52:44 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Read data from Postgres table pages" }, { "msg_contents": "Hi\n\nOn Tue, Mar 19, 2024 at 4:23 PM Sushrut Shivaswamy\n<[email protected]> wrote:\n> I'm trying to build a postgres export tool that reads data from table pages and exports it to an S3 bucket. I'd like to avoid manual commands like pg_dump, I need access to the raw data.\n>\n> Can you please point me to the postgres source header / cc files that encapsulate this functionality?\n> - List all pages for a table\n> - Read a given page for a table\n>\n> Any pointers to the relevant source code would be appreciated.\n\nWhy do you need to work on the source code level?\nPlease, check this about having a binary copy of the database on the\nfilesystem level.\nhttps://www.postgresql.org/docs/current/backup-file.html\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:28:06 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": "I'd like to read individual rows from the pages as they are updated and\nstream them to a server to create a copy of the data.\nThe data will be rewritten to columnar format for analytics queries.\n\nOn Tue, Mar 19, 2024 at 7:58 PM Alexander Korotkov <[email protected]>\nwrote:\n\n> Hi\n>\n> On Tue, Mar 19, 2024 at 4:23 PM Sushrut Shivaswamy\n> <[email protected]> wrote:\n> > I'm trying to build a postgres export tool that reads data from table\n> pages and exports it to an S3 bucket. 
I'd like to avoid manual commands\n> like pg_dump, I need access to the raw data.\n> >\n> > Can you please point me to the postgres source header / cc files that\n> encapsulate this functionality?\n> > - List all pages for a table\n> > - Read a given page for a table\n> >\n> > Any pointers to the relevant source code would be appreciated.\n>\n> Why do you need to work on the source code level?\n> Please, check this about having a binary copy of the database on the\n> filesystem level.\n> https://www.postgresql.org/docs/current/backup-file.html\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nI'd like to read individual rows from the pages as they are updated and stream them to a server to create a copy of the data.The data will be rewritten to columnar format for analytics queries.On Tue, Mar 19, 2024 at 7:58 PM Alexander Korotkov <[email protected]> wrote:Hi\n\nOn Tue, Mar 19, 2024 at 4:23 PM Sushrut Shivaswamy\n<[email protected]> wrote:\n> I'm trying to build a postgres export tool that reads data from table pages and exports it to an S3 bucket. I'd like to avoid manual commands like pg_dump, I need access to the raw data.\n>\n> Can you please point me to the postgres source header / cc files that encapsulate this functionality?\n>  - List all pages for a table\n> - Read a given page for a table\n>\n> Any pointers to the relevant source code would be appreciated.\n\nWhy do you need to work on the source code level?\nPlease, check this about having a binary  copy of the database on the\nfilesystem level.\nhttps://www.postgresql.org/docs/current/backup-file.html\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 19 Mar 2024 20:03:35 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": "The binary I\"m trying to create should automatically be able to read data\nfrom a postgres instance without users having to\nrun commands for backup / pg_dump etc.\nHaving access to the appropriate source headers would allow me to read the\ndata.\n\nOn Tue, Mar 19, 2024 at 8:03 PM Sushrut Shivaswamy <\[email protected]> wrote:\n\n> I'd like to read individual rows from the pages as they are updated and\n> stream them to a server to create a copy of the data.\n> The data will be rewritten to columnar format for analytics queries.\n>\n> On Tue, Mar 19, 2024 at 7:58 PM Alexander Korotkov <[email protected]>\n> wrote:\n>\n>> Hi\n>>\n>> On Tue, Mar 19, 2024 at 4:23 PM Sushrut Shivaswamy\n>> <[email protected]> wrote:\n>> > I'm trying to build a postgres export tool that reads data from table\n>> pages and exports it to an S3 bucket. 
I'd like to avoid manual commands\n>> like pg_dump, I need access to the raw data.\n>> >\n>> > Can you please point me to the postgres source header / cc files that\n>> encapsulate this functionality?\n>> > - List all pages for a table\n>> > - Read a given page for a table\n>> >\n>> > Any pointers to the relevant source code would be appreciated.\n>>\n>> Why do you need to work on the source code level?\n>> Please, check this about having a binary copy of the database on the\n>> filesystem level.\n>> https://www.postgresql.org/docs/current/backup-file.html\n>>\n>> ------\n>> Regards,\n>> Alexander Korotkov\n>>\n>\n\nThe binary I\"m trying to create should automatically be able to read data from a postgres instance without users having to run commands for backup / pg_dump etc.Having access to the appropriate source headers would allow me to read the data.On Tue, Mar 19, 2024 at 8:03 PM Sushrut Shivaswamy <[email protected]> wrote:I'd like to read individual rows from the pages as they are updated and stream them to a server to create a copy of the data.The data will be rewritten to columnar format for analytics queries.On Tue, Mar 19, 2024 at 7:58 PM Alexander Korotkov <[email protected]> wrote:Hi\n\nOn Tue, Mar 19, 2024 at 4:23 PM Sushrut Shivaswamy\n<[email protected]> wrote:\n> I'm trying to build a postgres export tool that reads data from table pages and exports it to an S3 bucket. I'd like to avoid manual commands like pg_dump, I need access to the raw data.\n>\n> Can you please point me to the postgres source header / cc files that encapsulate this functionality?\n>  - List all pages for a table\n> - Read a given page for a table\n>\n> Any pointers to the relevant source code would be appreciated.\n\nWhy do you need to work on the source code level?\nPlease, check this about having a binary  copy of the database on the\nfilesystem level.\nhttps://www.postgresql.org/docs/current/backup-file.html\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 19 Mar 2024 20:05:03 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": "On Tue, Mar 19, 2024 at 4:35 PM Sushrut Shivaswamy\n<[email protected]> wrote:\n> The binary I\"m trying to create should automatically be able to read data from a postgres instance without users having to\n> run commands for backup / pg_dump etc.\n> Having access to the appropriate source headers would allow me to read the data.\n\nPlease, avoid the top-posting.\nhttps://en.wikipedia.org/wiki/Posting_style#Top-posting\n\nIf you're looking to have a separate binary, why can't your binary\njust *connect* to the postgres database and query the data? This is\nwhat pg_dump does, you can just do the same directly. pg_dump doesn't\naccess the raw data.\n\nTrying to read raw postgres data from the separate binary looks flat\nwrong for your purposes. First, you would have to replicate pretty\nmuch postgres internals inside. 
Second, you can read the consistent\ndata only when postgres is stopped or didn't do any modifications\nsince the last checkpoint.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:42:30 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": "If we query the DB directly, is it possible to know which new rows have been added since the last query?\nIs there a change pump that can be latched onto?\n\nI’m assuming the page data structs are encapsulated in specific headers which can be used to list / read pages.\nWhy would Postgres need to be stopped to read the data? The read / query path in Postgres would also be reading these pages when the instance is running?\n\n", "msg_date": "Tue, 19 Mar 2024 20:18:42 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": "On Tue, Mar 19, 2024 at 4:48 PM Sushrut Shivaswamy\n<[email protected]> wrote:\n>\n> If we query the DB directly, is it possible to know which new rows have been added since the last query?\n> Is there a change pump that can be latched onto?\n\nPlease, check this.\nhttps://www.postgresql.org/docs/current/logicaldecoding.html\n\n> I’m assuming the page data structs are encapsulated in specific headers which can be used to list / read pages.\n> Why would Postgres need to be stopped to read the data? The read / query path in Postgres would also be reading these pages when the instance is running?\n\nI think this would be a good point to start studying.\nhttps://www.interdb.jp/\nThe information there should be more than enough to forget this idea forever :)\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 19 Mar 2024 17:15:23 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read data from Postgres table pages" }, { "msg_contents": ">\n>\n> lol, thanks for the inputs Alexander :)!\n\nlol, thanks for the inputs Alexander :)!", "msg_date": "Tue, 19 Mar 2024 21:51:03 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read data from Postgres table pages" } ]
[ { "msg_contents": "For the last few days, buildfarm member parula has been intermittently\nfailing the partition_prune regression test, due to unexpected plan\nchanges [1][2][3][4]. The symptoms can be reproduced exactly by\ninserting a \"vacuum\" of one or another of the partitions of table\n\"ab\", so we can presume that the underlying cause is an autovacuum run\nagainst one of those tables. But why is that happening? None of\nthose tables receive any insertions during the test, so I don't\nunderstand why autovacuum would trigger on them.\n\nI suppose we could attach \"autovacuum=off\" settings to these tables,\nbut it doesn't seem to me that that should be necessary. These test\ncases are several years old and haven't given trouble before.\nMoreover, if that's necessary then there are a lot of other regression\ntests that would presumably need the same treatment.\n\nI'm also baffled why this wasn't happening before. I scraped the\nbuildfarm logs for 3 months back and confirmed my impression that\nthis is a new failure mode. But one of these four runs is on\nREL_14_STABLE, eliminating the theory that the cause is a recent\nHEAD-only change.\n\nAny ideas?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-03-19%2016%3A09%3A02\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-03-18%2011%3A13%3A02\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-03-14%2011%3A40%3A02\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-03-14%2019%3A00%3A02\n\n\n", "msg_date": "Tue, 19 Mar 2024 15:58:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Why is parula failing?" }, { "msg_contents": "On Tue, 19 Mar 2024 at 20:58, Tom Lane <[email protected]> wrote:\n>\n> For the last few days, buildfarm member parula has been intermittently\n> failing the partition_prune regression test, due to unexpected plan\n> changes [1][2][3][4]. The symptoms can be reproduced exactly by\n> inserting a \"vacuum\" of one or another of the partitions of table\n> \"ab\", so we can presume that the underlying cause is an autovacuum run\n> against one of those tables. But why is that happening? None of\n> those tables receive any insertions during the test, so I don't\n> understand why autovacuum would trigger on them.\n>\n> I suppose we could attach \"autovacuum=off\" settings to these tables,\n> but it doesn't seem to me that that should be necessary. These test\n> cases are several years old and haven't given trouble before.\n> Moreover, if that's necessary then there are a lot of other regression\n> tests that would presumably need the same treatment.\n>\n> I'm also baffled why this wasn't happening before. I scraped the\n> buildfarm logs for 3 months back and confirmed my impression that\n> this is a new failure mode. But one of these four runs is on\n> REL_14_STABLE, eliminating the theory that the cause is a recent\n> HEAD-only change.\n>\n> Any ideas?\n\nThis may be purely coincidental, but yesterday I also did have a\nseemingly random failure in the recovery test suite locally, in\nt/027_stream_regress.pl, where it changed the join order of exactly\none of the queries (that uses the tenk table multiple times, iirc 3x\nor so). As the work I was doing wasn't related to join order-related\nproblems, this surprised me. 
After checking for test file changes\n(none), I re-ran the tests without recompilation and the failure went\naway, so I attributed this to an untimely autoanalize. However, as\nthis was also an unexpected plan change in the tests this could be\nrelated.\n\nSadly, I did not save the output of that test run, so this is just\nabout all the information I have. The commit I was testing on was\nbased on ca108be7, and system config is available if needed.\n\n-Matthias\n\n\n", "msg_date": "Wed, 20 Mar 2024 11:50:10 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Wed, 20 Mar 2024 at 11:50, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Tue, 19 Mar 2024 at 20:58, Tom Lane <[email protected]> wrote:\n> >\n> > For the last few days, buildfarm member parula has been intermittently\n> > failing the partition_prune regression test, due to unexpected plan\n> > changes [1][2][3][4]. The symptoms can be reproduced exactly by\n> > inserting a \"vacuum\" of one or another of the partitions of table\n> > \"ab\", so we can presume that the underlying cause is an autovacuum run\n> > against one of those tables. But why is that happening? None of\n> > those tables receive any insertions during the test, so I don't\n> > understand why autovacuum would trigger on them.\n\n> This may be purely coincidental, but yesterday I also did have a\n> seemingly random failure in the recovery test suite locally, in\n> t/027_stream_regress.pl, where it changed the join order of exactly\n> one of the queries (that uses the tenk table multiple times, iirc 3x\n> or so).\n[...]\n> Sadly, I did not save the output of that test run, so this is just\n> about all the information I have. The commit I was testing on was\n> based on ca108be7, and system config is available if needed.\n\nIt looks like Cirrus CI reproduced my issue with recent commit\na0390f6c [0] and previously also with 4665cebc [1], 875e46a0 [2],\n449e798c [3], and other older commits, on both the Windows Server 2019\nbuild and Debian for a39f1a36 (with slightly different plan changes on\nthis Debian run). That should rule out most of the environments.\n\nAfter analyzing the logs produced by the various primaries, I can't\nfind a good explanation why they would have this issue. The table is\nvacuum analyzed before the regression tests start, and in this run\nautovacuum/autoanalyze doesn't seem to kick in until (at least)\nseconds after this query was run.\n\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://cirrus-ci.com/task/6295909005262848\n[1] https://cirrus-ci.com/task/5229745516838912\n[2] https://cirrus-ci.com/task/5098544567156736\n[3] https://cirrus-ci.com/task/4783906419900416\n\n\n", "msg_date": "Wed, 20 Mar 2024 19:55:43 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Wed, 20 Mar 2024 at 08:58, Tom Lane <[email protected]> wrote:\n> I suppose we could attach \"autovacuum=off\" settings to these tables,\n> but it doesn't seem to me that that should be necessary. 
These test\n> cases are several years old and haven't given trouble before.\n> Moreover, if that's necessary then there are a lot of other regression\n> tests that would presumably need the same treatment.\n\nIs it worth running that animal with log_autovacuum_min_duration = 0\nso we can see what's going on in terms of auto-vacuum auto-analyze in\nthe log?\n\nDavid\n\n\n", "msg_date": "Thu, 21 Mar 2024 10:35:45 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Is it worth running that animal with log_autovacuum_min_duration = 0\n> so we can see what's going on in terms of auto-vacuum auto-analyze in\n> the log?\n\nMaybe, but I'm not sure. I thought that if parula were somehow\nhitting an ill-timed autovac/autoanalyze, it should be possible to\nmake that reproducible by inserting \"pg_sleep(60)\" or so in the test\nscript, to give the autovac daemon plenty of time to come around and\ndo the dirty deed. No luck though --- the results didn't change for\nme. So now I'm not sure what is going on.\n\nPerhaps though it is autovacuum, and there's some environment-specific\nenabling condition that parula has and my machine doesn't (which\ncould also help explain why no other animal is doing this).\nSo yeah, if we could have log_autovacuum_min_duration = 0 perhaps\nthat would yield a clue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Mar 2024 19:36:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Thu, 21 Mar 2024 at 12:36, Tom Lane <[email protected]> wrote:\n> So yeah, if we could have log_autovacuum_min_duration = 0 perhaps\n> that would yield a clue.\n\nFWIW, I agree with your earlier statement about it looking very much\nlike auto-vacuum has run on that table, but equally, if something like\nthe pg_index record was damaged we could get the same plan change.\n\nWe could also do something like the attached just in case we're\nbarking up the wrong tree.\n\nDavid", "msg_date": "Thu, 21 Mar 2024 13:53:31 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> We could also do something like the attached just in case we're\n> barking up the wrong tree.\n\nYeah, checking indisvalid isn't a bad idea. I'd put another\none further down, just before the DROP of table ab, so we\ncan see the state both before and after the unstable tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Mar 2024 21:19:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Thu, 21 Mar 2024 at 14:19, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > We could also do something like the attached just in case we're\n> > barking up the wrong tree.\n>\n> Yeah, checking indisvalid isn't a bad idea. 
I'd put another\n> one further down, just before the DROP of table ab, so we\n> can see the state both before and after the unstable tests.\n\nSo it's taken quite a while to finally fail again.\n\nEffectively, we're getting:\n\n relname | relpages | reltuples | indisvalid | autovacuum_count\n| autoanalyze_count\n----------------+----------+-----------+------------+------------------+-------------------\n- ab_a2_b2 | 0 | -1 | |\n0 | 0\n+ ab_a2_b2 | 0 | 48 | |\n0 | 0\n\nI see AddNewRelationTuple() does set reltuples to -1, so I can't quite\nfigure out why 48 is in there. Even if auto-analyze had somehow\nmistakenly run and the autoanalyze_count stats just were not\nup-to-date yet, the table has zero blocks, and I don't see how\nacquire_sample_rows() would set *totalrows to anything other than 0.0\nin this case. For the vacuum case, I see that reltuples is set from:\n\n/* now we can compute the new value for pg_class.reltuples */\nvacrel->new_live_tuples = vac_estimate_reltuples(vacrel->rel, rel_pages,\nvacrel->scanned_pages,\nvacrel->live_tuples);\n\nAgain, hard to see how that could come to anything other than zero\ngiven that rel_pages and scanned_pages should be 0.\n\nLooking at the binary representation of a float of -1 vs 48, they're\nnot nearly the same. 0xBF800000 vs 0x42400000, so it's not looking\nlike a flipped bit.\n\nIt would be good to have log_autovacuum_min_duration = 0 on this\nmachine for a while.\n\nDavid\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:33:16 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "Hi David / Tom,\r\n\r\n> David Rowley <[email protected]> writes:\r\n> It would be good to have log_autovacuum_min_duration = 0 on this machine for a while.\r\n\r\n- Have set log_autovacuum_min_duration=0 on parula and a test run came out okay.\r\n- Also added REL_16_STABLE to the branches being tested (in case it matters here).\r\n\r\nLet me know if I can help with any other changes here.\r\n-\r\nRobins | tharar@ | adelaide@australia\r\n", "msg_date": "Tue, 26 Mar 2024 08:03:36 +0000", "msg_from": "\"Tharakan, Robins\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Why is parula failing?" }, { "msg_contents": "On Tue, 26 Mar 2024 at 21:03, Tharakan, Robins <[email protected]> wrote:\n>\n> > David Rowley <[email protected]> writes:\n> > It would be good to have log_autovacuum_min_duration = 0 on this machine for a while.\n>\n> - Have set log_autovacuum_min_duration=0 on parula and a test run came out okay.\n> - Also added REL_16_STABLE to the branches being tested (in case it matters here).\n\nThanks for doing that.\n\nI see PG_16_STABLE has failed twice already with the same issue. I\ndon't see any autovacuum / autoanalyze in the log, so I guess that\nrules out auto vacuum activity causing this.\n\nUnfortunately, REL_16_STABLE does not have the additional debugging,\nso don't get to know what reltuples was set to.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Mar 2024 15:11:15 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Unfortunately, REL_16_STABLE does not have the additional debugging,\n> so don't get to know what reltuples was set to.\n\nLet's wait a bit to see if it fails in HEAD ... 
but if not, would\nit be reasonable to back-patch the additional debugging output?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Mar 2024 01:28:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Wed, 27 Mar 2024 at 18:28, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Unfortunately, REL_16_STABLE does not have the additional debugging,\n> > so don't get to know what reltuples was set to.\n>\n> Let's wait a bit to see if it fails in HEAD ... but if not, would\n> it be reasonable to back-patch the additional debugging output?\n\nI think REL_16_STABLE has told us that it's not an auto-vacuum issue.\nI'm uncertain what a few more failures in master will tell us aside\nfrom if reltuples == 48 is consistent or if that value is going to\nfluctuate.\n\nLet's give it a week and see if it fails a few more times.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Mar 2024 22:35:19 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 27 Mar 2024 at 18:28, Tom Lane <[email protected]> wrote:\n>> Let's wait a bit to see if it fails in HEAD ... but if not, would\n>> it be reasonable to back-patch the additional debugging output?\n\n> I think REL_16_STABLE has told us that it's not an auto-vacuum issue.\n> I'm uncertain what a few more failures in master will tell us aside\n> from if reltuples == 48 is consistent or if that value is going to\n> fluctuate.\n\n> Let's give it a week and see if it fails a few more times.\n\nWe have another failure today [1] with the same symptom:\n\n ab_a2 | 0 | -1 | | 0 | 0\n- ab_a2_b1 | 0 | -1 | | 0 | 0\n+ ab_a2_b1 | 0 | 48 | | 0 | 0\n ab_a2_b1_a_idx | 1 | 0 | t | | \n\nDifferent table, same \"48\" reltuples. But I have to confess that\nI'd not looked closely enough at the previous failure, because\nnow that I have, this is well out in WTFF territory: how can\nreltuples be greater than zero when relpages is zero? This can't\nbe a state that autovacuum would have left behind, unless it's\nreally seriously broken. I think we need to be looking for\nexplanations like \"memory stomp\" or \"compiler bug\".\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-03-29%2012%3A46%3A02\n\n\n", "msg_date": "Fri, 29 Mar 2024 15:45:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "I wrote:\n> I'd not looked closely enough at the previous failure, because\n> now that I have, this is well out in WTFF territory: how can\n> reltuples be greater than zero when relpages is zero? This can't\n> be a state that autovacuum would have left behind, unless it's\n> really seriously broken. I think we need to be looking for\n> explanations like \"memory stomp\" or \"compiler bug\".\n\n... in connection with which, I can't help noticing that parula\nis using a very old compiler:\n\nconfigure: using compiler=gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)\n\n From some quick checking around, that would have to be near the\nbeginning of aarch64 support in RHEL (Fedora hadn't promoted aarch64\nto a primary architecture until earlier that same year). 
It's not\nexactly hard to believe that there were some lingering compiler bugs.\nI wonder why parula is using that when its underlying system seems\nmarkedly newer (the kernel at least has a recent build date).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 16:17:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Sat, 30 Mar 2024 at 09:17, Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > I'd not looked closely enough at the previous failure, because\n> > now that I have, this is well out in WTFF territory: how can\n> > reltuples be greater than zero when relpages is zero? This can't\n> > be a state that autovacuum would have left behind, unless it's\n> > really seriously broken. I think we need to be looking for\n> > explanations like \"memory stomp\" or \"compiler bug\".\n>\n> ... in connection with which, I can't help noticing that parula\n> is using a very old compiler:\n>\n> configure: using compiler=gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)\n>\n> From some quick checking around, that would have to be near the\n> beginning of aarch64 support in RHEL (Fedora hadn't promoted aarch64\n> to a primary architecture until earlier that same year). It's not\n> exactly hard to believe that there were some lingering compiler bugs.\n> I wonder why parula is using that when its underlying system seems\n> markedly newer (the kernel at least has a recent build date).\n\nIt could be, but wouldn't the bug have to relate to the locking code\nand be caused by some other backend stomping on the memory?\nOtherwise, shouldn't it be failing consistently every run rather than\nsporadically?\n\nDavid\n\n\n", "msg_date": "Tue, 2 Apr 2024 12:28:40 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 30 Mar 2024 at 09:17, Tom Lane <[email protected]> wrote:\n>> ... in connection with which, I can't help noticing that parula\n>> is using a very old compiler:\n>> configure: using compiler=gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)\n>> I wonder why parula is using that when its underlying system seems\n>> markedly newer (the kernel at least has a recent build date).\n\n> It could be, but wouldn't the bug have to relate to the locking code\n> and be caused by some other backend stomping on the memory?\n> Otherwise, shouldn't it be failing consistently every run rather than\n> sporadically?\n\nYour guess is as good as mine ... but we still haven't seen this\nclass of failure on any other animal, so the idea that it's strictly\na chance timing issue is getting weaker all the time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 20:11:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "> ... in connection with which, I can't help noticing that parula is using a very old compiler:\n>\n> configure: using compiler=gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)\n>\n> From some quick checking around, that would have to be near the beginning of aarch64\n> support in RHEL (Fedora hadn't promoted aarch64 to a primary architecture until earlier\n> that same year). 
It's not exactly hard to believe that there were some lingering compiler bugs.\n\n> I wonder why parula is using that when its underlying system seems markedly newer (the kernel at least has a recent build date).\n\nParula used GCC v7.3.1 since that's what came by default.\nI've now switched to GCC v13.2 and triggered a run. Let's see if the tests stabilize now.\n-\nRobins\n\n\n", "msg_date": "Tue, 2 Apr 2024 02:06:14 +0000", "msg_from": "\"Tharakan, Robins\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Why is parula failing?" }, { "msg_contents": "> I've now switched to GCC v13.2 and triggered a run. Let's see if the tests stabilize now.\n\nSo although HEAD ran fine, but I saw multiple failures (v12, v13, v16) all of which passed on subsequent-tries,\nof which some were even\"signal 6: Aborted\".\n\nFWIW, I compiled gcc v13.2 (default options) from source which IIUC shouldn't be to blame. Two other possible\nreasons could be that the buildfarm doesn't have an aarch64 + gcc 13.2 combination (quick check I couldn't\nsee any), or, that this hardware is flaky.\n\nEither way, this instance is a throw-away so let me know if this isn't helping. I'll swap it out in case there's not\nmuch benefit to be had.\n\nAlso, it'd be great if someone could point me to a way to update the \"Compiler\" section in \"System Detail\" on\nthe buildfarm page (it wrongly shows GCC as v7.3.1).\n\n-\nthanks\nrobins\n\n\n", "msg_date": "Tue, 2 Apr 2024 04:27:48 +0000", "msg_from": "\"Tharakan, Robins\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Why is parula failing?" }, { "msg_contents": "\"Tharakan, Robins\" <[email protected]> writes:\n>> I've now switched to GCC v13.2 and triggered a run. Let's see if the tests stabilize now.\n\n> So although HEAD ran fine, but I saw multiple failures (v12, v13, v16) all of which passed on subsequent-tries,\n> of which some were even\"signal 6: Aborted\".\n\nUgh...\n\n> Also, it'd be great if someone could point me to a way to update the \"Compiler\" section in \"System Detail\" on\n> the buildfarm page (it wrongly shows GCC as v7.3.1).\n\nThe update_personality.pl script in the buildfarm client distro\nis what to use to adjust OS version or compiler version data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 00:31:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Tue, 2 Apr 2024 at 15:01, Tom Lane <[email protected]> wrote:\n> \"Tharakan, Robins\" <[email protected]> writes:\n> > So although HEAD ran fine, but I saw multiple failures (v12, v13, v16)\nall of which passed on subsequent-tries,\n> > of which some were even\"signal 6: Aborted\".\n>\n> Ugh...\n\n\nparula didn't send any reports to buildfarm for the past 44 hours. Logged in\nto see that postgres was stuck on pg_sleep(), which was quite odd! 
I\ncaptured\nthe backtrace and triggered another run on HEAD, which came out\nokay.\n\nI'll keep an eye on this instance more often for the next few days.\n(Let me know if I could capture more if a run gets stuck again)\n\n\n(gdb) bt\n#0 0x0000ffff952ae954 in epoll_pwait () from /lib64/libc.so.6\n#1 0x000000000083e9c8 in WaitEventSetWaitBlock (nevents=1,\noccurred_events=<optimized out>, cur_timeout=297992, set=0x2816dac0) at\nlatch.c:1570\n#2 WaitEventSetWait (set=0x2816dac0, timeout=timeout@entry=600000,\noccurred_events=occurred_events@entry=0xffffc395ed28, nevents=nevents@entry=1,\nwait_event_info=wait_event_info@entry=150994946) at latch.c:1516\n#3 0x000000000083ed84 in WaitLatch (latch=<optimized out>,\nwakeEvents=wakeEvents@entry=41, timeout=600000,\nwait_event_info=wait_event_info@entry=150994946) at latch.c:538\n#4 0x0000000000907404 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n#5 0x0000000000696b10 in ExecInterpExpr (state=0x28384040,\necontext=0x28383e38, isnull=<optimized out>) at execExprInterp.c:764\n#6 0x00000000006ceef8 in ExecEvalExprSwitchContext (isNull=0xffffc395ee9f,\necontext=0x28383e38, state=<optimized out>) at\n../../../src/include/executor/executor.h:356\n#7 ExecProject (projInfo=<optimized out>) at\n../../../src/include/executor/executor.h:390\n#8 ExecResult (pstate=<optimized out>) at nodeResult.c:135\n#9 0x00000000006b7aec in ExecProcNode (node=0x28383d28) at\n../../../src/include/executor/executor.h:274\n#10 gather_getnext (gatherstate=0x28383b38) at nodeGather.c:287\n#11 ExecGather (pstate=0x28383b38) at nodeGather.c:222\n#12 0x000000000069aa4c in ExecProcNode (node=0x28383b38) at\n../../../src/include/executor/executor.h:274\n#13 ExecutePlan (execute_once=<optimized out>, dest=0x2831ffb0,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_SELECT, use_parallel_mode=<optimized out>,\nplanstate=0x28383b38, estate=0x28383910) at execMain.c:1646\n#14 standard_ExecutorRun (queryDesc=0x283239c0, direction=<optimized out>,\ncount=0, execute_once=<optimized out>) at execMain.c:363\n#15 0x000000000086d454 in PortalRunSelect (portal=portal@entry=0x281f0fb0,\nforward=forward@entry=true, count=0, count@entry=9223372036854775807,\ndest=dest@entry=0x2831ffb0) at pquery.c:924\n#16 0x000000000086ec70 in PortalRun (portal=portal@entry=0x281f0fb0,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x2831ffb0,\naltdest=altdest@entry=0x2831ffb0, qc=qc@entry=0xffffc395f250) at\npquery.c:768\n#17 0x000000000086a944 in exec_simple_query\n(query_string=query_string@entry=0x28171c90\n\"SELECT pg_sleep(0.1);\") at postgres.c:1274\n#18 0x000000000086b480 in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4680\n#19 0x0000000000866a0c in BackendMain (startup_data=<optimized out>,\nstartup_data_len=<optimized out>) at backend_startup.c:101\n#20 0x00000000007c1738 in postmaster_child_launch\n(child_type=child_type@entry=B_BACKEND,\nstartup_data=startup_data@entry=0xffffc395f718\n\"\", startup_data_len=startup_data_len@entry=4,\nclient_sock=client_sock@entry=0xffffc395f720)\nat launch_backend.c:265\n#21 0x00000000007c5120 in BackendStartup (client_sock=0xffffc395f720) at\npostmaster.c:3593\n#22 ServerLoop () at postmaster.c:1674\n#23 0x00000000007c6dc8 in PostmasterMain (argc=argc@entry=8,\nargv=argv@entry=0x2816d320)\nat postmaster.c:1372\n#24 0x0000000000496bb8 in main (argc=8, argv=0x2816d320) at main.c:197\n\n\n>\n> The update_personality.pl 
script in the buildfarm client distro\n> is what to use to adjust OS version or compiler version data.\n>\nThanks. Fixed that.\n\n-\nrobins\n\nOn Tue, 2 Apr 2024 at 15:01, Tom Lane <[email protected]> wrote:> \"Tharakan, Robins\" <[email protected]> writes:> > So although HEAD ran fine, but I saw multiple failures (v12, v13, v16) all of which passed on subsequent-tries,> > of which some were even\"signal 6: Aborted\".>> Ugh...parula didn't send any reports to buildfarm for the past 44 hours. Logged into see that postgres was stuck on pg_sleep(), which was quite odd! I capturedthe backtrace and triggered another run on HEAD, which came outokay.I'll keep an eye on this instance more often for the next few days.(Let me know if I could capture more if a run gets stuck again)(gdb) bt#0  0x0000ffff952ae954 in epoll_pwait () from /lib64/libc.so.6#1  0x000000000083e9c8 in WaitEventSetWaitBlock (nevents=1, occurred_events=<optimized out>, cur_timeout=297992, set=0x2816dac0) at latch.c:1570#2  WaitEventSetWait (set=0x2816dac0, timeout=timeout@entry=600000, occurred_events=occurred_events@entry=0xffffc395ed28, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994946) at latch.c:1516#3  0x000000000083ed84 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=600000, wait_event_info=wait_event_info@entry=150994946) at latch.c:538#4  0x0000000000907404 in pg_sleep (fcinfo=<optimized out>) at misc.c:406#5  0x0000000000696b10 in ExecInterpExpr (state=0x28384040, econtext=0x28383e38, isnull=<optimized out>) at execExprInterp.c:764#6  0x00000000006ceef8 in ExecEvalExprSwitchContext (isNull=0xffffc395ee9f, econtext=0x28383e38, state=<optimized out>) at ../../../src/include/executor/executor.h:356#7  ExecProject (projInfo=<optimized out>) at ../../../src/include/executor/executor.h:390#8  ExecResult (pstate=<optimized out>) at nodeResult.c:135#9  0x00000000006b7aec in ExecProcNode (node=0x28383d28) at ../../../src/include/executor/executor.h:274#10 gather_getnext (gatherstate=0x28383b38) at nodeGather.c:287#11 ExecGather (pstate=0x28383b38) at nodeGather.c:222#12 0x000000000069aa4c in ExecProcNode (node=0x28383b38) at ../../../src/include/executor/executor.h:274#13 ExecutePlan (execute_once=<optimized out>, dest=0x2831ffb0, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x28383b38, estate=0x28383910) at execMain.c:1646#14 standard_ExecutorRun (queryDesc=0x283239c0, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:363#15 0x000000000086d454 in PortalRunSelect (portal=portal@entry=0x281f0fb0, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x2831ffb0) at pquery.c:924#16 0x000000000086ec70 in PortalRun (portal=portal@entry=0x281f0fb0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x2831ffb0, altdest=altdest@entry=0x2831ffb0, qc=qc@entry=0xffffc395f250) at pquery.c:768#17 0x000000000086a944 in exec_simple_query (query_string=query_string@entry=0x28171c90 \"SELECT pg_sleep(0.1);\") at postgres.c:1274#18 0x000000000086b480 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4680#19 0x0000000000866a0c in BackendMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at backend_startup.c:101#20 0x00000000007c1738 in postmaster_child_launch (child_type=child_type@entry=B_BACKEND, 
startup_data=startup_data@entry=0xffffc395f718 \"\", startup_data_len=startup_data_len@entry=4, client_sock=client_sock@entry=0xffffc395f720) at launch_backend.c:265#21 0x00000000007c5120 in BackendStartup (client_sock=0xffffc395f720) at postmaster.c:3593#22 ServerLoop () at postmaster.c:1674#23 0x00000000007c6dc8 in PostmasterMain (argc=argc@entry=8, argv=argv@entry=0x2816d320) at postmaster.c:1372#24 0x0000000000496bb8 in main (argc=8, argv=0x2816d320) at main.c:197  >> The update_personality.pl script in the buildfarm client distro> is what to use to adjust OS version or compiler version data.>Thanks. Fixed that.-robins", "msg_date": "Mon, 8 Apr 2024 21:25:51 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Mon, 8 Apr 2024 at 23:56, Robins Tharakan <[email protected]> wrote:\n> #3 0x000000000083ed84 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=600000, wait_event_info=wait_event_info@entry=150994946) at latch.c:538\n> #4 0x0000000000907404 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n\n> #17 0x000000000086a944 in exec_simple_query (query_string=query_string@entry=0x28171c90 \"SELECT pg_sleep(0.1);\") at postgres.c:1274\n\nI have no idea why WaitLatch has timeout=600000. That should be no\nhigher than timeout=100 for \"SELECT pg_sleep(0.1);\". I have no\ntheories aside from a failing RAM module, cosmic ray or a well-timed\nclock change between the first call to gettimeofday() in pg_sleep()\nand the next one.\n\nI know this animal is running debug_parallel_query = regress, so that\n0.1 Const did have to get serialized and copied to the worker, so\nthere's another opportunity for the sleep duration to be stomped on,\nbut that seems pretty unlikely.\n\nI can't think of a reason why the erroneous reltuples=48 would be\nconsistent over 2 failing runs if it were failing RAM or a cosmic ray.\n\nStill no partition_prune failures on master since the compiler version\nchange. There has been one [1] in REL_16_STABLE. I'm thinking it\nmight be worth backpatching the partition_prune debug to REL_16_STABLE\nto see if we can learn anything new from it.\n\nDavid\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-04-08%2002%3A12%3A02\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:48:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Tue, 9 Apr 2024 at 15:48, David Rowley <[email protected]> wrote:\n> Still no partition_prune failures on master since the compiler version\n> change. There has been one [1] in REL_16_STABLE. I'm thinking it\n> might be worth backpatching the partition_prune debug to REL_16_STABLE\n> to see if we can learn anything new from it.\n\nMaster failed today for the first time since the compiler upgrade.\nAgain reltuples == 48.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2024-04-10%2000%3A27%3A02\n\nDavid\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:54:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" 
}, { "msg_contents": "On Wed, 10 Apr 2024 at 10:24, David Rowley <[email protected]> wrote:\n> Master failed today for the first time since the compiler upgrade.\n> Again reltuples == 48.\n\n From the buildfarm members page, parula seems to be the only aarch64 + gcc\n13.2\ncombination today, and then I suspect whether this is about gcc v13.2\nmaturity on aarch64?\n\nI'll try to upgrade one of the other aarch64s I have (massasauga or\nsnakefly) and\nsee if this is more about gcc 13.2 maturity on this architecture.\n-\nrobins\n\nOn Wed, 10 Apr 2024 at 10:24, David Rowley <[email protected]> wrote:> Master failed today for the first time since the compiler upgrade.> Again reltuples == 48.From the buildfarm members page, parula seems to be the only aarch64 + gcc 13.2combination today, and then I suspect whether this is about gcc v13.2 maturity on aarch64?I'll try to upgrade one of the other aarch64s I have (massasauga or snakefly) andsee if this is more about gcc 13.2 maturity on this architecture.-robins", "msg_date": "Wed, 10 Apr 2024 10:43:23 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Wed, 10 Apr 2024 at 10:24, David Rowley <[email protected]> wrote:\n>\n> Master failed today for the first time since the compiler upgrade.\n> Again reltuples == 48.\n\nHere's what I can add over the past few days:\n- Almost all failures are either reltuples=48 or SIGABRTs\n- Almost all SIGABRTs are DDLs - CREATE INDEX / CREATE AGGREGATEs / CTAS\n - A little too coincidental? Recent crashes have stack-trace if\ninterested.\n\nBarring the initial failures (during move to gcc 13.2), in the past week:\n- v15 somehow hasn't had a failure yet\n- v14 / v16 have got only 1 failure each\n- but v12 / v13 are lit up - failed multiple times.\n\n-\nrobins\n\nOn Wed, 10 Apr 2024 at 10:24, David Rowley <[email protected]> wrote:>> Master failed today for the first time since the compiler upgrade.> Again reltuples == 48.Here's what I can add over the past few days:- Almost all failures are either reltuples=48 or SIGABRTs- Almost all SIGABRTs are DDLs - CREATE INDEX / CREATE AGGREGATEs / CTAS  - A little too coincidental? Recent crashes have stack-trace if interested.Barring the initial failures (during move to gcc 13.2), in the past week:- v15 somehow hasn't had a failure yet- v14 / v16 have got only 1 failure each- but v12 / v13 are lit up - failed multiple times.-robins", "msg_date": "Sat, 13 Apr 2024 22:32:52 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" 
}, { "msg_contents": "On Mon, 8 Apr 2024 at 21:25, Robins Tharakan <[email protected]> wrote:\n>\n>\n> I'll keep an eye on this instance more often for the next few days.\n> (Let me know if I could capture more if a run gets stuck again)\n\n\nHEAD is stuck again on pg_sleep(), no CPU for the past hour or so.\nStack trace seems to be similar to last time.\n\n\n$ pstack 24930\n#0 0x0000ffffb8280954 in epoll_pwait () from /lib64/libc.so.6\n#1 0x0000000000843408 in WaitEventSetWaitBlock (nevents=1,\noccurred_events=<optimized out>, cur_timeout=600000, set=0x3b38dac0) at\nlatch.c:1570\n#2 WaitEventSetWait (set=0x3b38dac0, timeout=timeout@entry=600000,\noccurred_events=occurred_events@entry=0xfffffd1d66c8, nevents=nevents@entry=1,\nwait_event_info=wait_event_info@entry=150994946) at latch.c:1516\n#3 0x00000000008437c4 in WaitLatch (latch=<optimized out>,\nwakeEvents=wakeEvents@entry=41, timeout=600000,\nwait_event_info=wait_event_info@entry=150994946) at latch.c:538\n#4 0x000000000090c384 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n#5 0x0000000000699350 in ExecInterpExpr (state=0x3b5a41a0,\necontext=0x3b5a3f98, isnull=<optimized out>) at execExprInterp.c:764\n#6 0x00000000006d1668 in ExecEvalExprSwitchContext (isNull=0xfffffd1d683f,\necontext=0x3b5a3f98, state=<optimized out>) at\n../../../src/include/executor/executor.h:356\n#7 ExecProject (projInfo=<optimized out>) at\n../../../src/include/executor/executor.h:390\n#8 ExecResult (pstate=<optimized out>) at nodeResult.c:135\n#9 0x00000000006ba26c in ExecProcNode (node=0x3b5a3e88) at\n../../../src/include/executor/executor.h:274\n#10 gather_getnext (gatherstate=0x3b5a3c98) at nodeGather.c:287\n#11 ExecGather (pstate=0x3b5a3c98) at nodeGather.c:222\n#12 0x000000000069d28c in ExecProcNode (node=0x3b5a3c98) at\n../../../src/include/executor/executor.h:274\n#13 ExecutePlan (execute_once=<optimized out>, dest=0x3b5ae8e0,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_SELECT, use_parallel_mode=<optimized out>,\nplanstate=0x3b5a3c98, estate=0x3b5a3a70) at execMain.c:1646\n#14 standard_ExecutorRun (queryDesc=0x3b59c250, direction=<optimized out>,\ncount=0, execute_once=<optimized out>) at execMain.c:363\n#15 0x00000000008720e4 in PortalRunSelect (portal=portal@entry=0x3b410fb0,\nforward=forward@entry=true, count=0, count@entry=9223372036854775807,\ndest=dest@entry=0x3b5ae8e0) at pquery.c:924\n#16 0x0000000000873900 in PortalRun (portal=portal@entry=0x3b410fb0,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x3b5ae8e0,\naltdest=altdest@entry=0x3b5ae8e0, qc=qc@entry=0xfffffd1d6bf0) at\npquery.c:768\n#17 0x000000000086f5d4 in exec_simple_query\n(query_string=query_string@entry=0x3b391c90\n\"SELECT pg_sleep(0.1);\") at postgres.c:1274\n#18 0x0000000000870110 in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4680\n#19 0x000000000086b6a0 in BackendMain (startup_data=<optimized out>,\nstartup_data_len=<optimized out>) at backend_startup.c:105\n#20 0x00000000007c6268 in postmaster_child_launch\n(child_type=child_type@entry=B_BACKEND,\nstartup_data=startup_data@entry=0xfffffd1d70b8\n\"\", startup_data_len=startup_data_len@entry=4,\nclient_sock=client_sock@entry=0xfffffd1d70c0)\nat launch_backend.c:265\n#21 0x00000000007c9c50 in BackendStartup (client_sock=0xfffffd1d70c0) at\npostmaster.c:3593\n#22 ServerLoop () at postmaster.c:1674\n#23 0x00000000007cb8f8 in PostmasterMain 
(argc=argc@entry=8,\nargv=argv@entry=0x3b38d320)\nat postmaster.c:1372\n#24 0x0000000000496e18 in main (argc=8, argv=0x3b38d320) at main.c:197\n\n\n\n CPU% MEM% TIME+ Command\n.\n.\n 0.0 0.0 0:00.00 │ └─ /bin/sh -c cd /opt/postgres/build-farm-14 &&\nPATH=/opt/gcc/home/ec2-user/proj/gcc/target/bin/\n 0.0 0.1 0:00.07 │ └─ /usr/bin/perl ./run_build.pl\n--config=build-farm.conf HEAD --verbose\n 0.0 0.0 0:00.00 │ └─ sh -c { cd pgsql.build/src/test/regress\n&& make NO_LOCALE=1 check; echo $? > /opt/postg\n 0.0 0.0 0:00.00 │ └─ make NO_LOCALE=1 check\n 0.0 0.0 0:00.00 │ └─ /bin/sh -c echo \"# +++ regress\ncheck in src/test/regress +++\" && PATH=\"/opt/postg\n 0.0 0.0 0:00.10 │ └─\n../../../src/test/regress/pg_regress --temp-instance=./tmp_check\n--inputdir=.\n 0.0 0.0 0:00.01 │ ├─ psql -X -a -q -d regression\n-v HIDE_TABLEAM=on -v HIDE_TOAST_COMPRESSION=on\n 0.0 0.1 0:02.64 │ └─ postgres -D\n/opt/postgres/build-farm-14/buildroot/HEAD/pgsql.build/src/test\n 0.0 0.2 0:00.05 │ ├─ postgres: postgres\nregression [local] SELECT\n 0.0 0.0 0:00.06 │ ├─ postgres: logical\nreplication launcher\n 0.0 0.1 0:00.36 │ ├─ postgres: autovacuum\nlauncher\n 0.0 0.1 0:00.34 │ ├─ postgres: walwriter\n 0.0 0.0 0:00.32 │ ├─ postgres: background\nwriter\n 0.0 0.3 0:00.05 │ └─ postgres: checkpointer\n\n-\nrobins\n\n>\n\nOn Mon, 8 Apr 2024 at 21:25, Robins Tharakan <[email protected]> wrote:>>> I'll keep an eye on this instance more often for the next few days.> (Let me know if I could capture more if a run gets stuck again) HEAD is stuck again on pg_sleep(), no CPU for the past hour or so.Stack trace seems to be similar to last time.$ pstack 24930#0  0x0000ffffb8280954 in epoll_pwait () from /lib64/libc.so.6#1  0x0000000000843408 in WaitEventSetWaitBlock (nevents=1, occurred_events=<optimized out>, cur_timeout=600000, set=0x3b38dac0) at latch.c:1570#2  WaitEventSetWait (set=0x3b38dac0, timeout=timeout@entry=600000, occurred_events=occurred_events@entry=0xfffffd1d66c8, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994946) at latch.c:1516#3  0x00000000008437c4 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=600000, wait_event_info=wait_event_info@entry=150994946) at latch.c:538#4  0x000000000090c384 in pg_sleep (fcinfo=<optimized out>) at misc.c:406#5  0x0000000000699350 in ExecInterpExpr (state=0x3b5a41a0, econtext=0x3b5a3f98, isnull=<optimized out>) at execExprInterp.c:764#6  0x00000000006d1668 in ExecEvalExprSwitchContext (isNull=0xfffffd1d683f, econtext=0x3b5a3f98, state=<optimized out>) at ../../../src/include/executor/executor.h:356#7  ExecProject (projInfo=<optimized out>) at ../../../src/include/executor/executor.h:390#8  ExecResult (pstate=<optimized out>) at nodeResult.c:135#9  0x00000000006ba26c in ExecProcNode (node=0x3b5a3e88) at ../../../src/include/executor/executor.h:274#10 gather_getnext (gatherstate=0x3b5a3c98) at nodeGather.c:287#11 ExecGather (pstate=0x3b5a3c98) at nodeGather.c:222#12 0x000000000069d28c in ExecProcNode (node=0x3b5a3c98) at ../../../src/include/executor/executor.h:274#13 ExecutePlan (execute_once=<optimized out>, dest=0x3b5ae8e0, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x3b5a3c98, estate=0x3b5a3a70) at execMain.c:1646#14 standard_ExecutorRun (queryDesc=0x3b59c250, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:363#15 0x00000000008720e4 in PortalRunSelect (portal=portal@entry=0x3b410fb0, 
forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x3b5ae8e0) at pquery.c:924#16 0x0000000000873900 in PortalRun (portal=portal@entry=0x3b410fb0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x3b5ae8e0, altdest=altdest@entry=0x3b5ae8e0, qc=qc@entry=0xfffffd1d6bf0) at pquery.c:768#17 0x000000000086f5d4 in exec_simple_query (query_string=query_string@entry=0x3b391c90 \"SELECT pg_sleep(0.1);\") at postgres.c:1274#18 0x0000000000870110 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4680#19 0x000000000086b6a0 in BackendMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at backend_startup.c:105#20 0x00000000007c6268 in postmaster_child_launch (child_type=child_type@entry=B_BACKEND, startup_data=startup_data@entry=0xfffffd1d70b8 \"\", startup_data_len=startup_data_len@entry=4, client_sock=client_sock@entry=0xfffffd1d70c0) at launch_backend.c:265#21 0x00000000007c9c50 in BackendStartup (client_sock=0xfffffd1d70c0) at postmaster.c:3593#22 ServerLoop () at postmaster.c:1674#23 0x00000000007cb8f8 in PostmasterMain (argc=argc@entry=8, argv=argv@entry=0x3b38d320) at postmaster.c:1372#24 0x0000000000496e18 in main (argc=8, argv=0x3b38d320) at main.c:197 CPU% MEM%   TIME+  Command..  0.0  0.0  0:00.00 │     └─ /bin/sh -c cd /opt/postgres/build-farm-14 && PATH=/opt/gcc/home/ec2-user/proj/gcc/target/bin/  0.0  0.1  0:00.07 │        └─ /usr/bin/perl ./run_build.pl --config=build-farm.conf HEAD --verbose  0.0  0.0  0:00.00 │           └─ sh -c { cd pgsql.build/src/test/regress && make NO_LOCALE=1 check; echo $? > /opt/postg  0.0  0.0  0:00.00 │              └─ make NO_LOCALE=1 check  0.0  0.0  0:00.00 │                 └─ /bin/sh -c echo \"# +++ regress check in src/test/regress +++\" && PATH=\"/opt/postg  0.0  0.0  0:00.10 │                    └─ ../../../src/test/regress/pg_regress --temp-instance=./tmp_check --inputdir=.  0.0  0.0  0:00.01 │                       ├─ psql -X -a -q -d regression -v HIDE_TABLEAM=on -v HIDE_TOAST_COMPRESSION=on  0.0  0.1  0:02.64 │                       └─ postgres -D /opt/postgres/build-farm-14/buildroot/HEAD/pgsql.build/src/test  0.0  0.2  0:00.05 │                          ├─ postgres: postgres regression [local] SELECT  0.0  0.0  0:00.06 │                          ├─ postgres: logical replication launcher  0.0  0.1  0:00.36 │                          ├─ postgres: autovacuum launcher  0.0  0.1  0:00.34 │                          ├─ postgres: walwriter  0.0  0.0  0:00.32 │                          ├─ postgres: background writer  0.0  0.3  0:00.05 │                          └─ postgres: checkpointer-robins", "msg_date": "Sat, 13 Apr 2024 23:01:49 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" 
}, { "msg_contents": "Robins Tharakan <[email protected]> writes:\n> HEAD is stuck again on pg_sleep(), no CPU for the past hour or so.\n> Stack trace seems to be similar to last time.\n\n> #3 0x00000000008437c4 in WaitLatch (latch=<optimized out>,\n> wakeEvents=wakeEvents@entry=41, timeout=600000,\n> wait_event_info=wait_event_info@entry=150994946) at latch.c:538\n> #4 0x000000000090c384 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n> ...\n> #17 0x000000000086f5d4 in exec_simple_query\n> (query_string=query_string@entry=0x3b391c90\n> \"SELECT pg_sleep(0.1);\") at postgres.c:1274\n\nIf we were only supposed to sleep 0.1 seconds, how is it waiting\nfor 600000 ms (and, presumably, repeating that)? The logic in\npg_sleep is pretty simple, and it's hard to think of anything except\nthe system clock jumping (far) backwards that would make this\nhappen. Any chance of extracting the local variables from the\npg_sleep stack frame?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2024 10:42:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "\n\nOn 4/9/24 05:48, David Rowley wrote:\n> On Mon, 8 Apr 2024 at 23:56, Robins Tharakan <[email protected]> wrote:\n>> #3 0x000000000083ed84 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=600000, wait_event_info=wait_event_info@entry=150994946) at latch.c:538\n>> #4 0x0000000000907404 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n> \n>> #17 0x000000000086a944 in exec_simple_query (query_string=query_string@entry=0x28171c90 \"SELECT pg_sleep(0.1);\") at postgres.c:1274\n> \n> I have no idea why WaitLatch has timeout=600000. That should be no\n> higher than timeout=100 for \"SELECT pg_sleep(0.1);\". I have no\n> theories aside from a failing RAM module, cosmic ray or a well-timed\n> clock change between the first call to gettimeofday() in pg_sleep()\n> and the next one.\n> \n> I know this animal is running debug_parallel_query = regress, so that\n> 0.1 Const did have to get serialized and copied to the worker, so\n> there's another opportunity for the sleep duration to be stomped on,\n> but that seems pretty unlikely.\n> \n\nAFAIK that GUC is set only for HEAD, so it would not explain the\nfailures on the other branches.\n\n> I can't think of a reason why the erroneous reltuples=48 would be\n> consistent over 2 failing runs if it were failing RAM or a cosmic ray.\n> \n\nYeah, that seems very unlikely.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 13 Apr 2024 20:05:36 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On 4/13/24 15:02, Robins Tharakan wrote:\n> On Wed, 10 Apr 2024 at 10:24, David Rowley <[email protected]> wrote:\n>>\n>> Master failed today for the first time since the compiler upgrade.\n>> Again reltuples == 48.\n> \n> Here's what I can add over the past few days:\n> - Almost all failures are either reltuples=48 or SIGABRTs\n> - Almost all SIGABRTs are DDLs - CREATE INDEX / CREATE AGGREGATEs / CTAS\n> - A little too coincidental? 
Recent crashes have stack-trace if\n> interested.\n> \n> Barring the initial failures (during move to gcc 13.2), in the past week:\n> - v15 somehow hasn't had a failure yet\n> - v14 / v16 have got only 1 failure each\n> - but v12 / v13 are lit up - failed multiple times.\n> \n\nI happened to have an unused rpi5, so I installed Armbian aarch64 with\ngcc 13.2.0, built with exactly the same configure options as parula, and\ndid ~300 loops of \"make check\" without a single failure.\n\nSo either parula has packages in a different way, or maybe it's a more\nof a timing issue and rpi5 is way slower than graviton3.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 13 Apr 2024 20:11:46 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Sun, 14 Apr 2024 at 00:12, Tom Lane <[email protected]> wrote:\n> If we were only supposed to sleep 0.1 seconds, how is it waiting\n> for 600000 ms (and, presumably, repeating that)? The logic in\n> pg_sleep is pretty simple, and it's hard to think of anything except\n> the system clock jumping (far) backwards that would make this\n> happen. Any chance of extracting the local variables from the\n> pg_sleep stack frame?\n\n- I now have 2 separate runs stuck on pg_sleep() - HEAD / REL_16_STABLE\n- I'll keep them (stuck) for this week, in case there's more we can get\nfrom them (and to see how long they take)\n- Attached are 'bt full' outputs for both (b.txt - HEAD / a.txt -\nREL_16_STABLE)\n\nA few things to add:\n- To reiterate, this instance has gcc v13.2 compiled without any\nflags (my first time ever TBH) IIRC 'make -k check' came out okay,\nso at this point I don't think I did something obviously wrong when\nbuilding gcc from git.\n- I installed gcc v14.0.1 experimental on massasauga (also an aarch64\nand built from git) and despite multiple runs, it seems to be doing okay\n[1].\n- Next week (if I'm still scratching my head - and unless someone advises\notherwise), I'll upgrade parula to gcc 14 experimental to see if this is\nabout\ngcc maturity on graviton (for some reason). I don't expect much to come\nout of it though (given Tomas testing on rpi5, but doesn't hurt)\n\nRef:\n1.\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=massasauga&br=REL_12_STABLE\n\n-\nrobins", "msg_date": "Mon, 15 Apr 2024 13:39:55 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Mon, 15 Apr 2024 at 16:10, Robins Tharakan <[email protected]> wrote:\n> - I now have 2 separate runs stuck on pg_sleep() - HEAD / REL_16_STABLE\n> - I'll keep them (stuck) for this week, in case there's more we can get\n> from them (and to see how long they take)\n> - Attached are 'bt full' outputs for both (b.txt - HEAD / a.txt - REL_16_STABLE)\n\nThanks for getting those.\n\n#4 0x000000000090b7b4 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n delay = <optimized out>\n delay_ms = <optimized out>\n endtime = 0\n\nThis endtime looks like a problem. It seems unlikely to be caused by\ngettimeofday's timeval fields being zeroed given that the number of\nseconds should have been added to that.\n\nI can't quite make sense of how we end up sleeping at all with a zero\nendtime. 
Assuming the subsequent GetNowFloats() worked, \"delay =\nendtime - GetNowFloat();\" would result in a negative sleep duration\nand we'd break out of the sleep loop.\n\nIf GetNowFloat() somehow was returning a negative number then we could\nend up with a large delay. But if gettimeofday() was so badly broken\nthen wouldn't there be some evidence of this in the log timestamps on\nfailing runs?\n\nI'm not that familiar with the buildfarm config, but I do see some\nValgrind related setting in there. Is PostgreSQL running under\nValgrind on these runs?\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 17:24:56 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Mon, 15 Apr 2024 at 14:55, David Rowley <[email protected]> wrote:\n> If GetNowFloat() somehow was returning a negative number then we could\n> end up with a large delay. But if gettimeofday() was so badly broken\n> then wouldn't there be some evidence of this in the log timestamps on\n> failing runs?\n\n3 things stand out for me here, unsure if they're related somehow:\n\n1. Issue where reltuples=48 (in essence runs complete, but few tests fail)\n2. SIGABRT - most of which are DDLs (runs complete, but engine crashes +\nmany tests fail)\n3. pg_sleep() stuck - (runs never complete, IIUC never gets reported to\nbuildfarm)\n\nFor #3, one thing I had done earlier (and then reverted) was to set the\n'wait_timeout' from current undef to 2 hours. I'll set it again to 2hrs\nin hopes that #3 starts getting reported to buildfarm too.\n\n> I'm not that familiar with the buildfarm config, but I do see some\n> Valgrind related setting in there. Is PostgreSQL running under\n> Valgrind on these runs?\n\nNot yet. I was tempted, but valgrind has not yet been enabled on\nthis member. IIUC by default they're disabled.\n\n 'use_valgrind' => undef,\n\n-\nrobins\n\nOn Mon, 15 Apr 2024 at 14:55, David Rowley <[email protected]> wrote:> If GetNowFloat() somehow was returning a negative number then we could> end up with a large delay.  But if gettimeofday() was so badly broken> then wouldn't there be some evidence of this in the log timestamps on> failing runs?3 things stand out for me here, unsure if they're related somehow:1. Issue where reltuples=48 (in essence runs complete, but few tests fail)2. SIGABRT - most of which are DDLs (runs complete, but engine crashes + many tests fail)3. pg_sleep() stuck - (runs never complete, IIUC never gets reported to buildfarm)For #3, one thing I had done earlier (and then reverted) was to set the'wait_timeout' from current undef to 2 hours. I'll set it again to 2hrsin hopes that #3 starts getting reported to buildfarm too.> I'm not that familiar with the buildfarm config, but I do see some> Valgrind related setting in there. Is PostgreSQL running under> Valgrind on these runs?Not yet. I was tempted, but valgrind has not yet been enabled onthis member. IIUC by default they're disabled. 'use_valgrind' => undef,\n-robins", "msg_date": "Mon, 15 Apr 2024 15:12:32 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> #4 0x000000000090b7b4 in pg_sleep (fcinfo=<optimized out>) at misc.c:406\n> delay = <optimized out>\n> delay_ms = <optimized out>\n> endtime = 0\n\n> This endtime looks like a problem. 
It seems unlikely to be caused by\n> gettimeofday's timeval fields being zeroed given that the number of\n> seconds should have been added to that.\n\nYes, that is very odd.\n\n> I can't quite make sense of how we end up sleeping at all with a zero\n> endtime. Assuming the subsequent GetNowFloats() worked, \"delay =\n> endtime - GetNowFloat();\" would result in a negative sleep duration\n> and we'd break out of the sleep loop.\n\nIf GetCurrentTimestamp() were returning assorted random values, it\nwouldn't be hard to imagine this loop sleeping for a long time.\nBut it's very hard to see how that theory leads to an \"endtime\"\nof exactly zero rather than some other number, and even harder\nto credit two different runs getting \"endtime\" of exactly zero.\n\n> If GetNowFloat() somehow was returning a negative number then we could\n> end up with a large delay. But if gettimeofday() was so badly broken\n> then wouldn't there be some evidence of this in the log timestamps on\n> failing runs?\n\nAnd indeed that too. I'm finding the \"compiler bug\" theory\npalatable. Robins mentioned having built the compiler from\nsource, which theoretically should work, but maybe something\nwent wrong? Or it's missing some important bug fix?\n\nIt might be interesting to back the animal's CFLAGS down\nto -O0 and see if things get more stable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 02:31:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Mon, 15 Apr 2024 at 16:02, Tom Lane <[email protected]> wrote:\n> David Rowley <[email protected]> writes:\n> > If GetNowFloat() somehow was returning a negative number then we could\n> > end up with a large delay. But if gettimeofday() was so badly broken\n> > then wouldn't there be some evidence of this in the log timestamps on\n> > failing runs?\n>\n> And indeed that too. I'm finding the \"compiler bug\" theory\n> palatable. Robins mentioned having built the compiler from\n> source, which theoretically should work, but maybe something\n> went wrong? Or it's missing some important bug fix?\n>\n> It might be interesting to back the animal's CFLAGS down\n> to -O0 and see if things get more stable.\n\nThe last 25 consecutive runs have passed [1] after switching\nREL_12_STABLE to -O0 ! So I am wondering whether that confirms that\nthe compiler version is to blame, and while we're still here,\nis there anything else I could try?\n\nIf not, by Sunday, I am considering switching parula to gcc v12 (or even\nv14 experimental - given that massasauga [2] has been pretty stable since\nits upgrade a few days back).\n\nReference:\n1.\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=parula&br=REL_12_STABLE\n2.\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=massasauga&br=REL_12_STABLE\n-\nrobins\n\nOn Mon, 15 Apr 2024 at 16:02, Tom Lane <[email protected]> wrote:> David Rowley <[email protected]> writes:> > If GetNowFloat() somehow was returning a negative number then we could> > end up with a large delay.  But if gettimeofday() was so badly broken> > then wouldn't there be some evidence of this in the log timestamps on> > failing runs?>> And indeed that too.  I'm finding the \"compiler bug\" theory> palatable.  Robins mentioned having built the compiler from> source, which theoretically should work, but maybe something> went wrong?  
Or it's missing some important bug fix?>> It might be interesting to back the animal's CFLAGS down> to -O0 and see if things get more stable.The last 25 consecutive runs have passed [1] after switchingREL_12_STABLE to -O0 ! So I am wondering whether that confirms thatthe compiler version is to blame, and while we're still here,is there anything else I could try?If not, by Sunday, I am considering switching parula to gcc v12 (or evenv14 experimental - given that massasauga [2] has been pretty stable sinceits upgrade a few days back).Reference:1. https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=parula&br=REL_12_STABLE2. https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=massasauga&br=REL_12_STABLE-robins", "msg_date": "Tue, 16 Apr 2024 16:27:41 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Tue, 16 Apr 2024 at 18:58, Robins Tharakan <[email protected]> wrote:\n> The last 25 consecutive runs have passed [1] after switching\n> REL_12_STABLE to -O0 ! So I am wondering whether that confirms that\n> the compiler version is to blame, and while we're still here,\n> is there anything else I could try?\n\nI don't think it's conclusive proof that it's a compiler bug. -O0\ncould equally just have changed the timing sufficiently to not trigger\nthe issue.\n\nIt might be a long shot, but I wonder if it might be worth running a\nworkload such as:\n\npsql -c \"create table a (a int primary key, b text not null, c int not\nnull); insert into a values(1,'',0);\" postgres\necho \"update a set b = repeat('a', random(1,10)), c=c+1 where a = 1;\"\n> bench.sql\npgbench -n -t 12500 -c 8 -j 8 -f bench.sql postgres\npsql -c \"table a;\" postgres\n\nOn a build with the original CFLAGS. I expect the value of \"c\" to be\n100k after running that. If it's not then something bad has happened.\n\nThat would exercise the locking code heavily and would show us if any\nupdates were missed due to locks not being correctly respected or seen\nby other backends.\n\nDavid\n\n\n", "msg_date": "Wed, 17 Apr 2024 13:34:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Thu, 21 Mar 2024 at 13:53, David Rowley <[email protected]> wrote:\n>\n> On Thu, 21 Mar 2024 at 12:36, Tom Lane <[email protected]> wrote:\n> > So yeah, if we could have log_autovacuum_min_duration = 0 perhaps\n> > that would yield a clue.\n>\n> FWIW, I agree with your earlier statement about it looking very much\n> like auto-vacuum has run on that table, but equally, if something like\n> the pg_index record was damaged we could get the same plan change.\n>\n> We could also do something like the attached just in case we're\n> barking up the wrong tree.\n\nI've not seen any recent failures from Parula that relate to this\nissue. The last one seems to have been about 4 weeks ago.\n\nI'm now wondering if it's time to revert the debugging code added in\n1db689715. Does anyone think differently?\n\nDavid\n\n\n", "msg_date": "Tue, 14 May 2024 11:24:53 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I've not seen any recent failures from Parula that relate to this\n> issue. The last one seems to have been about 4 weeks ago.\n\n> I'm now wondering if it's time to revert the debugging code added in\n> 1db689715. 
Does anyone think differently?\n\n+1. It seems like we wrote off the other issue as a compiler bug,\nso I don't have much trouble assuming that this one was too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 19:29:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is parula failing?" }, { "msg_contents": "On Tue, 14 May 2024 at 08:55, David Rowley <[email protected]> wrote:\n\n> I've not seen any recent failures from Parula that relate to this\n> issue. The last one seems to have been about 4 weeks ago.\n>\n> I'm now wondering if it's time to revert the debugging code added in\n> 1db689715. Does anyone think differently?\n>\n\nThanks for keeping an eye. Sadly the older machine was decommissioned\nand thus parula hasn't been sending results to buildfarm the past few days.\n\nI'll try to build a similar machine (but newer gcc etc.) and reopen this\nthread in case I hit\nsomething similar.\n-\nrobins", "msg_date": "Thu, 16 May 2024 20:18:43 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is parula failing?" } ]
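A note on reproducing the diagnosis in the thread above: the stuck state was only visible by attaching a debugger to the sleeping backend and dumping its locals, which is how the 'bt full' attachments were produced. A minimal sketch of capturing that from a shell, without cancelling the backend, could look like the lines below. The wait_event filter, the LIMIT, and the output file name are assumptions of mine rather than anything taken from the thread; pg_sleep reports the PgSleep wait event in pg_stat_activity while it is inside its sleep loop, which is exactly the window of interest here.

# Sketch only -- assumes gdb is installed and psql can reach the stalled cluster.
PID=$(psql -At -c "SELECT pid FROM pg_stat_activity WHERE wait_event = 'PgSleep' LIMIT 1" postgres)
gdb --batch -ex 'bt full' -p "$PID" > pg_sleep_backtrace.txt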
[ { "msg_contents": "Hello hackers,\n\nAmong many recoveryCheck (more concretely, 027_stream_regress) failures\noccurred on a number of buildfarm animals after switching to meson, which\ncan be explained by timeouts, I saw a different failure on adder:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-03-18%2023%3A43%3A00\n[23:48:52.521](9.831s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established\n[23:55:13.749](381.228s) # poll_query_until timed out executing this query:\n#\n# SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n#\n# expecting this output:\n# waiting\n# last actual query output:\n#\n# with stderr:\n[23:55:13.763](0.013s) not ok 14 - startup deadlock: lock acquisition is waiting\n\nand I suspect that it might be caused by autovacuum.\n\nI've managed to reproduced it locally (running 10 tests in parallel on a\n2-core VM with disk bandwidth limited to 80MB/sec I get failures on\niterations 10, 1, 3) and observed the following (with wal_debug = on):\n031_recovery_conflict_standby.log:\n2024-03-20 04:12:06.519 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR \nFOR SELECT a FROM test_recovery_conflict_table1;\n2024-03-20 04:12:06.520 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: FETCH FORWARD FROM \ntest_recovery_conflict_cursor;\n2024-03-20 04:12:06.520 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: SELECT * FROM test_recovery_conflict_table2;\n...\n2024-03-20 04:12:07.073 UTC|||65fa620d.111ec8|LOG:  REDO @ 0/3438360; LSN 0/3438460: prev 0/3438338; xid 0; len 9; \nblkref #0: rel 1663/16385/16392, blk 0 - Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 0, \nisCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, \n17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, \n47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, \n77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]\n2024-03-20 04:12:07.084 UTC|||65fa620d.111ec8|LOG:  recovery still waiting after 11.241 ms: recovery conflict on buffer pin\n2024-03-20 04:12:07.084 UTC|||65fa620d.111ec8|CONTEXT:  WAL redo at 0/3438360 for Heap2/PRUNE: snapshotConflictHorizon: \n0, nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, \n10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, \n40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, \n70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, \n100, 101]; blkref #0: rel 1663/16385/16392, blk 0\n2024-03-20 04:12:07.095 UTC|vagrant|test_db|65fa6214.111ede|ERROR: canceling statement due to conflict with recovery at \ncharacter 15\n2024-03-20 04:12:07.095 UTC|vagrant|test_db|65fa6214.111ede|DETAIL: User transaction caused buffer deadlock with recovery.\n...\n2024-03-20 04:12:08.093 UTC|vagrant|postgres|65fa6216.111f1a|LOG: statement: SELECT 'waiting' FROM pg_locks WHERE \nlocktype = 'relation' AND NOT granted;\n\n031_recovery_conflict_primary.log:\n2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|DEBUG:  Autovacuum 
VacuumUpdateCosts(db=16385, rel=16392, dobalance=yes, \ncost_limit=200, cost_delay=2 active=yes failsafe=no)\n2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|DEBUG:  Autovacuum VacuumUpdateCosts(db=16385, rel=16392, dobalance=yes, \ncost_limit=200, cost_delay=2 active=yes failsafe=no)\n2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|LOG:  INSERT @ 0/3438460:  - Heap2/PRUNE: snapshotConflictHorizon: 0, \nnredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, 10, \n11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, \n41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, \n71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, \n101]\n2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|CONTEXT:  while scanning block 0 of relation \n\"public.test_recovery_conflict_table1\"\n...\n2024-03-20 04:12:05.981 UTC|||65fa6215.111f02|LOG:  automatic vacuum of table \n\"test_db.public.test_recovery_conflict_table1\": index scans: 0\n\nThe corresponding fragment of 031_recovery_conflict.pl:\n$res = $psql_standby->query_until(\n     qr/^1$/m, qq[\n     BEGIN;\n     -- hold pin\n     DECLARE $cursor1 CURSOR FOR SELECT a FROM $table1;\n     FETCH FORWARD FROM $cursor1;\n     -- wait for lock held by prepared transaction\n     SELECT * FROM $table2;\n     ]);\nok(1,\n     \"$sect: cursor holding conflicting pin, also waiting for lock, established\"\n);\n\n# just to make sure we're waiting for lock already\nok( $node_standby->poll_query_until(\n         'postgres', qq[\nSELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n], 'waiting'),\n     \"$sect: lock acquisition is waiting\");\n\n# VACUUM FREEZE will prune away rows, causing a buffer pin conflict, while\n# standby psql is waiting on lock\n$node_primary->safe_psql($test_db, qq[VACUUM FREEZE $table1;]);\n\nSo if autovacuum happens to process \"$table1\" before SELECT ... 
FROM\npg_locks, a buffer pin conflict occurs before the manual VACUUM FREEZE\nand poll_query_until() fails.\n\nWith autovacuum = off in TEMP_CONFIG 50 iterations passed for me in\nthe same environment.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 20 Mar 2024 16:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Test 031_recovery_conflict.pl is not immune to autovacuum" }, { "msg_contents": "On Wed, Mar 20, 2024 at 9:00 AM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> Among many recoveryCheck (more concretely, 027_stream_regress) failures\n> occurred on a number of buildfarm animals after switching to meson, which\n> can be explained by timeouts, I saw a different failure on adder:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=adder&dt=2024-03-18%2023%3A43%3A00\n> [23:48:52.521](9.831s) ok 13 - startup deadlock: cursor holding conflicting pin, also waiting for lock, established\n> [23:55:13.749](381.228s) # poll_query_until timed out executing this query:\n> #\n> # SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n> #\n> # expecting this output:\n> # waiting\n> # last actual query output:\n> #\n> # with stderr:\n> [23:55:13.763](0.013s) not ok 14 - startup deadlock: lock acquisition is waiting\n>\n> and I suspect that it might be caused by autovacuum.\n>\n> I've managed to reproduced it locally (running 10 tests in parallel on a\n> 2-core VM with disk bandwidth limited to 80MB/sec I get failures on\n> iterations 10, 1, 3) and observed the following (with wal_debug = on):\n> 031_recovery_conflict_standby.log:\n> 2024-03-20 04:12:06.519 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR\n> FOR SELECT a FROM test_recovery_conflict_table1;\n> 2024-03-20 04:12:06.520 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: FETCH FORWARD FROM\n> test_recovery_conflict_cursor;\n> 2024-03-20 04:12:06.520 UTC|vagrant|test_db|65fa6214.111ede|LOG: statement: SELECT * FROM test_recovery_conflict_table2;\n> ...\n> 2024-03-20 04:12:07.073 UTC|||65fa620d.111ec8|LOG: REDO @ 0/3438360; LSN 0/3438460: prev 0/3438338; xid 0; len 9;\n> blkref #0: rel 1663/16385/16392, blk 0 - Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 0,\n> isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,\n> 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46,\n> 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76,\n> 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]\n> 2024-03-20 04:12:07.084 UTC|||65fa620d.111ec8|LOG: recovery still waiting after 11.241 ms: recovery conflict on buffer pin\n> 2024-03-20 04:12:07.084 UTC|||65fa620d.111ec8|CONTEXT: WAL redo at 0/3438360 for Heap2/PRUNE: snapshotConflictHorizon:\n> 0, nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9,\n> 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,\n> 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,\n> 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,\n> 100, 101]; 
blkref #0: rel 1663/16385/16392, blk 0\n> 2024-03-20 04:12:07.095 UTC|vagrant|test_db|65fa6214.111ede|ERROR: canceling statement due to conflict with recovery at\n> character 15\n> 2024-03-20 04:12:07.095 UTC|vagrant|test_db|65fa6214.111ede|DETAIL: User transaction caused buffer deadlock with recovery.\n> ...\n> 2024-03-20 04:12:08.093 UTC|vagrant|postgres|65fa6216.111f1a|LOG: statement: SELECT 'waiting' FROM pg_locks WHERE\n> locktype = 'relation' AND NOT granted;\n>\n> 031_recovery_conflict_primary.log:\n> 2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|DEBUG: Autovacuum VacuumUpdateCosts(db=16385, rel=16392, dobalance=yes,\n> cost_limit=200, cost_delay=2 active=yes failsafe=no)\n> 2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|DEBUG: Autovacuum VacuumUpdateCosts(db=16385, rel=16392, dobalance=yes,\n> cost_limit=200, cost_delay=2 active=yes failsafe=no)\n> 2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|LOG: INSERT @ 0/3438460: - Heap2/PRUNE: snapshotConflictHorizon: 0,\n> nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, 10,\n> 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n> 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,\n> 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n> 101]\n> 2024-03-20 04:12:05.980 UTC|||65fa6215.111f02|CONTEXT: while scanning block 0 of relation\n> \"public.test_recovery_conflict_table1\"\n> ...\n> 2024-03-20 04:12:05.981 UTC|||65fa6215.111f02|LOG: automatic vacuum of table\n> \"test_db.public.test_recovery_conflict_table1\": index scans: 0\n>\n> The corresponding fragment of 031_recovery_conflict.pl:\n> $res = $psql_standby->query_until(\n> qr/^1$/m, qq[\n> BEGIN;\n> -- hold pin\n> DECLARE $cursor1 CURSOR FOR SELECT a FROM $table1;\n> FETCH FORWARD FROM $cursor1;\n> -- wait for lock held by prepared transaction\n> SELECT * FROM $table2;\n> ]);\n> ok(1,\n> \"$sect: cursor holding conflicting pin, also waiting for lock, established\"\n> );\n>\n> # just to make sure we're waiting for lock already\n> ok( $node_standby->poll_query_until(\n> 'postgres', qq[\n> SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n> ], 'waiting'),\n> \"$sect: lock acquisition is waiting\");\n>\n> # VACUUM FREEZE will prune away rows, causing a buffer pin conflict, while\n> # standby psql is waiting on lock\n> $node_primary->safe_psql($test_db, qq[VACUUM FREEZE $table1;]);\n>\n> So if autovacuum happens to process \"$table1\" before SELECT ... FROM\n> pg_locks, a buffer pin conflict occurs before the manual VACUUM FREEZE\n> and poll_query_until() fails.\n>\n> With autovacuum = off in TEMP_CONFIG 50 iterations passed for me in\n> the same environment.\n\nHmmm. Thanks for finding this and taking the time to reproduce it. 
I\ndon't know why I didn't think of this.\n\nSeems like we could just add autovacuum_enabled=false to the table like this:\n\ndiff --git a/src/test/recovery/t/031_recovery_conflict.pl\nb/src/test/recovery/t/031_recovery_conflict.pl\nindex d87efa823fd..65bc858c02d 100644\n--- a/src/test/recovery/t/031_recovery_conflict.pl\n+++ b/src/test/recovery/t/031_recovery_conflict.pl\n@@ -59,7 +59,7 @@ my $table1 = \"test_recovery_conflict_table1\";\n my $table2 = \"test_recovery_conflict_table2\";\n $node_primary->safe_psql(\n $test_db, qq[\n-CREATE TABLE ${table1}(a int, b int);\n+CREATE TABLE ${table1}(a int, b int) WITH (autovacuum_enabled = false);\n INSERT INTO $table1 SELECT i % 3, 0 FROM generate_series(1,20) i;\n CREATE TABLE ${table2}(a int, b int);\n ]);\n\n- Melanie\n\n\n", "msg_date": "Wed, 20 Mar 2024 09:15:07 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 031_recovery_conflict.pl is not immune to autovacuum" }, { "msg_contents": "Hello Melanie,\n\n20.03.2024 16:15, Melanie Plageman wrote:\n> Seems like we could just add autovacuum_enabled=false to the table like this:\n> diff --git a/src/test/recovery/t/031_recovery_conflict.pl\n> b/src/test/recovery/t/031_recovery_conflict.pl\n> index d87efa823fd..65bc858c02d 100644\n> --- a/src/test/recovery/t/031_recovery_conflict.pl\n> +++ b/src/test/recovery/t/031_recovery_conflict.pl\n> @@ -59,7 +59,7 @@ my $table1 = \"test_recovery_conflict_table1\";\n> my $table2 = \"test_recovery_conflict_table2\";\n> $node_primary->safe_psql(\n> $test_db, qq[\n> -CREATE TABLE ${table1}(a int, b int);\n> +CREATE TABLE ${table1}(a int, b int) WITH (autovacuum_enabled = false);\n> INSERT INTO $table1 SELECT i % 3, 0 FROM generate_series(1,20) i;\n> CREATE TABLE ${table2}(a int, b int);\n> ]);\n\nThanks for paying attention to it!\n\nWith such modification applied I've got another failure (on iteration 2):\n[13:27:39.034](2.317s) ok 14 - startup deadlock: lock acquisition is waiting\nWaiting for replication conn standby's replay_lsn to pass 0/343E6D0 on primary\ndone\ntimed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl \nline 318.\n# Postmaster PID for node \"primary\" is 1523036\n### Stopping node \"primary\" using mode immediate\n\n031_recovery_conflict_standby.log really doesn't contain the expected\nmessage. 
I can share log files from a successful and failed test runs, if\nthey can be helpful, or I'll investigate this case today/tomorrow.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 20 Mar 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 031_recovery_conflict.pl is not immune to autovacuum" }, { "msg_contents": "On Wed, Mar 20, 2024 at 10:00 AM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Melanie,\n>\n> 20.03.2024 16:15, Melanie Plageman wrote:\n> > Seems like we could just add autovacuum_enabled=false to the table like this:\n> > diff --git a/src/test/recovery/t/031_recovery_conflict.pl\n> > b/src/test/recovery/t/031_recovery_conflict.pl\n> > index d87efa823fd..65bc858c02d 100644\n> > --- a/src/test/recovery/t/031_recovery_conflict.pl\n> > +++ b/src/test/recovery/t/031_recovery_conflict.pl\n> > @@ -59,7 +59,7 @@ my $table1 = \"test_recovery_conflict_table1\";\n> > my $table2 = \"test_recovery_conflict_table2\";\n> > $node_primary->safe_psql(\n> > $test_db, qq[\n> > -CREATE TABLE ${table1}(a int, b int);\n> > +CREATE TABLE ${table1}(a int, b int) WITH (autovacuum_enabled = false);\n> > INSERT INTO $table1 SELECT i % 3, 0 FROM generate_series(1,20) i;\n> > CREATE TABLE ${table2}(a int, b int);\n> > ]);\n>\n> Thanks for paying attention to it!\n>\n> With such modification applied I've got another failure (on iteration 2):\n\nThanks for trying it out!\n\n> [13:27:39.034](2.317s) ok 14 - startup deadlock: lock acquisition is waiting\n> Waiting for replication conn standby's replay_lsn to pass 0/343E6D0 on primary\n> done\n> timed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl\n> line 318.\n> # Postmaster PID for node \"primary\" is 1523036\n> ### Stopping node \"primary\" using mode immediate\n>\n> 031_recovery_conflict_standby.log really doesn't contain the expected\n> message. I can share log files from a successful and failed test runs, if\n> they can be helpful, or I'll investigate this case today/tomorrow.\n\nHmm. The log file from the failed test run with\n(autovacuum_enabled=false) would be helpful. I can't tell without the\nlog if it hit a different type of conflict. Unfortunately it was very\ndifficult to trigger the specific type of recovery conflict we were\ntrying to test and not hit another of the recovery conflict types\nfirst. It'll take me some time to swap this back in my head, though.\n\n- Melanie\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:24:01 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 031_recovery_conflict.pl is not immune to autovacuum" }, { "msg_contents": "20.03.2024 22:24, Melanie Plageman wrote:\n>> [13:27:39.034](2.317s) ok 14 - startup deadlock: lock acquisition is waiting\n>> Waiting for replication conn standby's replay_lsn to pass 0/343E6D0 on primary\n>> done\n>> timed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl\n>> line 318.\n>> # Postmaster PID for node \"primary\" is 1523036\n>> ### Stopping node \"primary\" using mode immediate\n>>\n>> 031_recovery_conflict_standby.log really doesn't contain the expected\n>> message. I can share log files from a successful and failed test runs, if\n>> they can be helpful, or I'll investigate this case today/tomorrow.\n> Hmm. The log file from the failed test run with\n> (autovacuum_enabled=false) would be helpful. 
I can't tell without the\n> log if it hit a different type of conflict. Unfortunately it was very\n> difficult to trigger the specific type of recovery conflict we were\n> trying to test and not hit another of the recovery conflict types\n> first. It'll take me some time to swap this back in my head, though.\n\nPlease look at the attached logs. For a successful run, I see in 031_recovery_conflict_standby.log:\n2024-03-20 13:28:08.084 UTC|||65fae466.1744d7|CONTEXT:  WAL redo at 0/347F9B8 for Heap2/PRUNE: snapshotConflictHorizon: \n0, nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, \n10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, \n40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, \n70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, \n100, 101]; blkref #0: rel 1663/16385/16396, blk 0\n2024-03-20 13:28:08.084 UTC|vagrant|test_db|65fae467.174503|ERROR: canceling statement due to conflict with recovery at \ncharacter 15\n2024-03-20 13:28:08.084 UTC|vagrant|test_db|65fae467.174503|DETAIL: User transaction caused buffer deadlock with recovery.\n2024-03-20 13:28:08.084 UTC|||65fae466.1744d7|LOG:  recovery finished waiting after 10.432 ms: recovery conflict on \nbuffer pin\n2024-03-20 13:28:08.084 UTC|||65fae466.1744d7|CONTEXT:  WAL redo at 0/347F9B8 for Heap2/PRUNE: snapshotConflictHorizon: \n0, nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: [2, 3, 4, 5, 6, 7, 8, 9, \n10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, \n40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, \n70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, \n100, 101]; blkref #0: rel 1663/16385/16396, blk 0\n2024-03-20 13:28:08.084 UTC|||65fae466.1744d7|LOG:  REDO @ 0/347FAB8; LSN 0/347FB00: prev 0/347F9B8; xid 0; len 7; \nblkref #0: rel 1663/16385/16396, blk 0 - Heap2/FREEZE_PAGE: snapshotConflictHorizon: 762, nplans: 1, isCatalogRel: F, \nplans: [{ xmax: 0, infomask: 2817, infomask2: 2, ntuples: 1, offsets: [1] }]\n\nAnd the corresponding fragment from 031_recovery_conflict_primary.log:\n2024-03-20 13:28:07.846 UTC|vagrant|test_db|65fae467.174549|LOG: xlog flush request 0/347DCB0; write 0/0; flush 0/0\n2024-03-20 13:28:07.846 UTC|vagrant|test_db|65fae467.174549|CONTEXT:  writing block 16 of relation base/16385/1249\n2024-03-20 13:28:07.847 UTC|vagrant|test_db|65fae467.174549|LOG: statement: VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/347FAB8:  - Heap2/PRUNE: \nsnapshotConflictHorizon: 0, nredirected: 0, ndead: 0, isCatalogRel: F, nunused: 100, redirected: [], dead: [], unused: \n[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, \n34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, \n64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, \n94, 95, 96, 97, 98, 99, 100, 101]\n2024-03-20 13:28:07.848 
UTC|vagrant|test_db|65fae467.174549|CONTEXT:  while scanning block 0 of relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/347FB00:  - Heap2/FREEZE_PAGE: \nsnapshotConflictHorizon: 762, nplans: 1, isCatalogRel: F, plans: [{ xmax: 0, infomask: 2817, infomask2: 2, ntuples: 1, \noffsets: [1] }]\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|CONTEXT:  while scanning block 0 of relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/3481B50:  - XLOG/FPI_FOR_HINT:\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|CONTEXT:  while scanning block 0 of relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/3483BA0:  - XLOG/FPI_FOR_HINT:\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|CONTEXT:  while scanning relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/3485BF0:  - XLOG/FPI_FOR_HINT:\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|CONTEXT:  while scanning relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/3485CB0:  - Heap/INPLACE: off: 4\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|LOG: INSERT @ 0/3485D10:  - Standby/INVALIDATIONS: ; inval \nmsgs: catcache 55 catcache 54 relcache 16396\n2024-03-20 13:28:07.848 UTC|vagrant|test_db|65fae467.174549|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n\nWhilst 031_recovery_conflict_primary.log for the failed run contains:\n2024-03-20 13:27:39.042 UTC|vagrant|test_db|65fae44b.17419a|LOG: statement: VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|LOG: INSERT @ 0/343E718:  - Heap2/FREEZE_PAGE: \nsnapshotConflictHorizon: 751, nplans: 1, isCatalogRel: F, plans: [{ xmax: 0, infomask: 2817, infomask2: 2, ntuples: 1, \noffsets: [1] }]\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|CONTEXT:  while scanning block 0 of relation \n\"public.test_recovery_conflict_table1\"\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|LOG: INSERT @ 0/343E7D8:  - Heap/INPLACE: off: 4\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|STATEMENT:  VACUUM FREEZE test_recovery_conflict_table1;\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|LOG: INSERT @ 0/343E838:  - Standby/INVALIDATIONS: ; inval \nmsgs: catcache 55 catcache 54 relcache 16392\n2024-03-20 13:27:39.043 UTC|vagrant|test_db|65fae44b.17419a|STATEMENT:  
VACUUM FREEZE test_recovery_conflict_table1;\n\n(there is no Heap2/PRUNE record)\n\nI've modified the test as follows:\n--- a/src/test/recovery/t/031_recovery_conflict.pl\n+++ b/src/test/recovery/t/031_recovery_conflict.pl\n@@ -59,7 +59,7 @@ my $table1 = \"test_recovery_conflict_table1\";\n  my $table2 = \"test_recovery_conflict_table2\";\n  $node_primary->safe_psql(\n         $test_db, qq[\n-CREATE TABLE ${table1}(a int, b int);\n+CREATE TABLE ${table1}(a int, b int) WITH (autovacuum_enabled = false);\n  INSERT INTO $table1 SELECT i % 3, 0 FROM generate_series(1,20) i;\n  CREATE TABLE ${table2}(a int, b int);\n  ]);\n\n\nBest regards,\nAlexander", "msg_date": "Thu, 21 Mar 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 031_recovery_conflict.pl is not immune to autovacuum" } ]
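For anyone wanting to repeat the check above that the failure disappears once autovacuum is out of the picture, here is a rough sketch of an "autovacuum = off" run via TEMP_CONFIG; the temporary file location and the choice to run only this one test script are my own assumptions, not details from the thread.

# Sketch only: disable autovacuum for the whole test cluster and rerun just this script.
echo 'autovacuum = off' > /tmp/recovery_conflict_extra.conf
TEMP_CONFIG=/tmp/recovery_conflict_extra.conf \
    make -C src/test/recovery check PROVE_TESTS='t/031_recovery_conflict.pl'

The narrower alternative discussed above is the per-table reloption from the proposed diff (WITH (autovacuum_enabled = false) on table1), which keeps autovacuum away from that table while leaving it enabled elsewhere -- though, as the follow-up in the thread shows, that alone still left another failure mode to chase down.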
[ { "msg_contents": "David Wheeler complained over in [1] that genbki.pl fails to produce a\nuseful error message if there's anything wrong in a catalog data file.\nHe's right about that, but the band-aid patch he proposed doesn't\nimprove the situation much. The actual problem is that key error\nmessages in genbki.pl expect $bki_values->{line_number} to be set,\nwhich it is not because we're invoking Catalog::ParseData with\n$preserve_formatting = 0, and that runs a fast path that doesn't do\nline-by-line processing and hence doesn't/can't fill that field.\n\nI'm quite sure that those error cases worked as-intended when first\ncoded. I did not check the git history, but I suppose that somebody\nadded the non-preserve_formatting fast path later without any\nconsideration for the damage it was doing to error reporting ability.\nI'm of the opinion that this change was penny-wise and pound-foolish.\nOn my machine, the time to re-make the bki files with the fast path\nenabled is 0.597s, and with it disabled (measured with the attached\nquick-hack patch) it's 0.612s. Is that savings worth future hackers\nhaving to guess what they broke and where in a large .dat file?\n\nAs you can see from the quick-hack patch, it's kind of a mess to use\nthe $preserve_formatting = 1 case, because there are a lot of loops\nthat have to be taught to ignore comment lines, which we don't really\ncare about except in reformat_dat_file.pl. What I suggest doing, but\nhave not yet coded up, is to nuke the fast path in Catalog::ParseData\nand reinterpret the $preserve_formatting argument as controlling\nwhether comments and whitespace are entered in the output data\nstructure, but not whether we parse it line-by-line. That should fix\nthis problem with zero change in the callers, and also buy back a\nlittle bit of the cost compared to this quick hack.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/60EF4E11-BC1C-4034-B37B-695662D28AD2%40justatheory.com", "msg_date": "Wed, 20 Mar 2024 12:44:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Broken error detection in genbki.pl" }, { "msg_contents": "\nOn 2024-03-20 We 12:44, Tom Lane wrote:\n> David Wheeler complained over in [1] that genbki.pl fails to produce a\n> useful error message if there's anything wrong in a catalog data file.\n> He's right about that, but the band-aid patch he proposed doesn't\n> improve the situation much. The actual problem is that key error\n> messages in genbki.pl expect $bki_values->{line_number} to be set,\n> which it is not because we're invoking Catalog::ParseData with\n> $preserve_formatting = 0, and that runs a fast path that doesn't do\n> line-by-line processing and hence doesn't/can't fill that field.\n>\n> I'm quite sure that those error cases worked as-intended when first\n> coded. I did not check the git history, but I suppose that somebody\n> added the non-preserve_formatting fast path later without any\n> consideration for the damage it was doing to error reporting ability.\n> I'm of the opinion that this change was penny-wise and pound-foolish.\n> On my machine, the time to re-make the bki files with the fast path\n> enabled is 0.597s, and with it disabled (measured with the attached\n> quick-hack patch) it's 0.612s. 
Is that savings worth future hackers\n> having to guess what they broke and where in a large .dat file?\n>\n> As you can see from the quick-hack patch, it's kind of a mess to use\n> the $preserve_formatting = 1 case, because there are a lot of loops\n> that have to be taught to ignore comment lines, which we don't really\n> care about except in reformat_dat_file.pl. What I suggest doing, but\n> have not yet coded up, is to nuke the fast path in Catalog::ParseData\n> and reinterpret the $preserve_formatting argument as controlling\n> whether comments and whitespace are entered in the output data\n> structure, but not whether we parse it line-by-line. That should fix\n> this problem with zero change in the callers, and also buy back a\n> little bit of the cost compared to this quick hack.\n>\n> Thoughts?\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/60EF4E11-BC1C-4034-B37B-695662D28AD2%40justatheory.com\n>\n\nMakes sense\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Apr 2024 15:37:19 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Broken error detection in genbki.pl" } ]
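One practical tie-in: reformat_dat_file.pl, the one caller noted above as actually caring about comments and whitespace, is also the tool people normally run after hand-editing a .dat file -- which is precisely the situation in which a malformed entry and the unhelpful genbki.pl error would show up. If memory of the catalogs documentation serves, that pass is exposed as a make target roughly as below; the target name and directory are recalled from the docs rather than verified here, so check src/include/catalog/Makefile in your tree.

# From memory of the catalogs documentation; confirm the target before relying on it.
cd src/include/catalog
make reformat-dat-files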
[ { "msg_contents": "Greetings,\n\nI am getting the following error\n\nmeson.build:1479:17: ERROR: Can not run test applications in this cross\nenvironment.\n\nHave configured for amd64_x86\n\nRunning `meson setup --wipe build --prefix=c:\\postgres86`\n\nThe docs say it is possible to build postgres for x86. Are there specific\ninstructions ?\n\n\nDave Cramer\n\nGreetings,I am getting the following error meson.build:1479:17: ERROR: Can not run test applications in this cross environment.Have configured for amd64_x86Running `meson setup --wipe build --prefix=c:\\postgres86`The docs say it is possible to build postgres for x86. Are there specific instructions ?Dave Cramer", "msg_date": "Wed, 20 Mar 2024 16:14:23 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Trying to build x86 version on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n> I am getting the following error\n> \n> meson.build:1479:17: ERROR: Can not run test applications in this cross\n> environment.\n> \n> Have configured for amd64_x86\n> \n> Running `meson setup --wipe build --prefix=c:\\postgres86`\n\nThis is not enough information to debug anything. At the very least we need\nthe exact steps performed to set up the build and meson-logs/meson-log.txt\n\n\n> The docs say it is possible to build postgres for x86. Are there specific\n> instructions ?\n\nIt should work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:11:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On Wed, 20 Mar 2024 at 17:11, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n> > I am getting the following error\n> >\n> > meson.build:1479:17: ERROR: Can not run test applications in this cross\n> > environment.\n> >\n> > Have configured for amd64_x86\n> >\n> > Running `meson setup --wipe build --prefix=c:\\postgres86`\n>\n> This is not enough information to debug anything. At the very least we need\n> the exact steps performed to set up the build and meson-logs/meson-log.txt\n>\n> First off this is on an ARM64 machine\n\nThe last error from meson-log.txt is\n\n...\nChecking if \"c99\" compiles: YES\n\nmeson.build:1479:17: ERROR: Can not run test applications in this cross\nenvironment.\n...\n\n\n>\n> > The docs say it is possible to build postgres for x86. Are there specific\n> > instructions ?\n>\n> It should work.\n>\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Wed, 20 Mar 2024 17:49:14 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2024-03-20 17:49:14 -0400, Dave Cramer wrote:\n> On Wed, 20 Mar 2024 at 17:11, Andres Freund <[email protected]> wrote:\n> > On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n> > > I am getting the following error\n> > >\n> > > meson.build:1479:17: ERROR: Can not run test applications in this cross\n> > > environment.\n> > >\n> > > Have configured for amd64_x86\n> > >\n> > > Running `meson setup --wipe build --prefix=c:\\postgres86`\n> >\n> > This is not enough information to debug anything. 
At the very least we need\n> > the exact steps performed to set up the build and meson-logs/meson-log.txt\n> >\n> First off this is on an ARM64 machine\n\nUh, that's a fairly crucial bit - you're actually trying to cross compile\nthen. I don't know much about cross compiling on windows, so it's certainly\npossible there's still some gaps there.\n\n\n> \n> The last error from meson-log.txt is\n> \n> ...\n> Checking if \"c99\" compiles: YES\n> \n> meson.build:1479:17: ERROR: Can not run test applications in this cross\n> environment.\n> ...\n\nThat's not the meson-log.txt that you attached though?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:00:08 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On Thu, 21 Mar 2024 at 11:00, Andres Freund <[email protected]> wrote:\n>\n> On 2024-03-20 17:49:14 -0400, Dave Cramer wrote:\n> > First off this is on an ARM64 machine\n>\n> Uh, that's a fairly crucial bit - you're actually trying to cross compile\n> then. I don't know much about cross compiling on windows, so it's certainly\n> possible there's still some gaps there.\n\nHow would initdb.exe / pg_regress.exe even run on the x86 build\nmachine if it's compiled for ARM?\n\nDavid\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:02:27 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2024-03-21 11:02:27 +1300, David Rowley wrote:\n> On Thu, 21 Mar 2024 at 11:00, Andres Freund <[email protected]> wrote:\n> >\n> > On 2024-03-20 17:49:14 -0400, Dave Cramer wrote:\n> > > First off this is on an ARM64 machine\n> >\n> > Uh, that's a fairly crucial bit - you're actually trying to cross compile\n> > then. I don't know much about cross compiling on windows, so it's certainly\n> > possible there's still some gaps there.\n> \n> How would initdb.exe / pg_regress.exe even run on the x86 build\n> machine if it's compiled for ARM?\n\nI think this is building on an ARM64 host, targeting 32bit x86.\n\nObviously tests can't run in that environment, but building should be\npossible. I can e.g. build postgres for x86-64 windows on my linux machine,\nbut can't run the tests (in theory they could be run with wine, but wine isn't\ncomplete enough to run postgres).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Mar 2024 15:20:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On Wed, Mar 20, 2024 at 6:21 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-03-21 11:02:27 +1300, David Rowley wrote:\n> > On Thu, 21 Mar 2024 at 11:00, Andres Freund <[email protected]> wrote:\n> > >\n> > > On 2024-03-20 17:49:14 -0400, Dave Cramer wrote:\n> > > > First off this is on an ARM64 machine\n> > >\n> > > Uh, that's a fairly crucial bit - you're actually trying to cross\n> compile\n> > > then. I don't know much about cross compiling on windows, so it's\n> certainly\n> > > possible there's still some gaps there.\n> >\n> > How would initdb.exe / pg_regress.exe even run on the x86 build\n> > machine if it's compiled for ARM?\n>\n> I think this is building on an ARM64 host, targeting 32bit x86.\n>\n> Obviously tests can't run in that environment, but building should be\n> possible. I can e.g. 
build postgres for x86-64 windows on my linux machine,\n> but can't run the tests (in theory they could be run with wine, but wine\n> isn't\n> complete enough to run postgres).\n>\n\n\nWindows apparently has some magic built in for this:\n\nhttps://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x86-emulation\n\ncheers\n\nandrew\n\nOn Wed, Mar 20, 2024 at 6:21 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-03-21 11:02:27 +1300, David Rowley wrote:\n> On Thu, 21 Mar 2024 at 11:00, Andres Freund <[email protected]> wrote:\n> >\n> > On 2024-03-20 17:49:14 -0400, Dave Cramer wrote:\n> > > First off this is on an ARM64 machine\n> >\n> > Uh, that's a fairly crucial bit - you're actually trying to cross compile\n> > then.  I don't know much about cross compiling on windows, so it's certainly\n> > possible there's still some gaps there.\n> \n> How would initdb.exe / pg_regress.exe even run on the x86 build\n> machine if it's compiled for ARM?\n\nI think this is building on an ARM64 host, targeting 32bit x86.\n\nObviously tests can't run in that environment, but building should be\npossible. I can e.g. build postgres for x86-64 windows on my linux machine,\nbut can't run the tests (in theory they could be run with wine, but wine isn't\ncomplete enough to run postgres).Windows apparently has some magic built in for this:https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x86-emulationcheersandrew", "msg_date": "Thu, 21 Mar 2024 01:48:46 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On 20.03.24 22:49, Dave Cramer wrote:\n> \n> \n> \n> On Wed, 20 Mar 2024 at 17:11, Andres Freund <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Hi,\n> \n> On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n> > I am getting the following error\n> >\n> > meson.build:1479:17: ERROR: Can not run test applications in this\n> cross\n> > environment.\n> >\n> > Have configured for amd64_x86\n> >\n> > Running `meson setup --wipe build --prefix=c:\\postgres86`\n> \n> This is not enough information to debug anything. At the very least\n> we need\n> the exact steps performed to set up the build and\n> meson-logs/meson-log.txt\n> \n> First off this is on an ARM64 machine\n> \n> The last error from meson-log.txt is\n> \n> ...\n> Checking if \"c99\" compiles: YES\n> \n> meson.build:1479:17: ERROR: Can not run test applications in this cross \n> environment.\n> ...\n\nI have never tried this, but there are instructions for cross-compiling \nwith meson: https://mesonbuild.com/Cross-compilation.html\n\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 08:56:53 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On Thu, 21 Mar 2024 at 03:56, Peter Eisentraut <[email protected]> wrote:\n\n> On 20.03.24 22:49, Dave Cramer wrote:\n> >\n> >\n> >\n> > On Wed, 20 Mar 2024 at 17:11, Andres Freund <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Hi,\n> >\n> > On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n> > > I am getting the following error\n> > >\n> > > meson.build:1479:17: ERROR: Can not run test applications in this\n> > cross\n> > > environment.\n> > >\n> > > Have configured for amd64_x86\n> > >\n> > > Running `meson setup --wipe build --prefix=c:\\postgres86`\n> >\n> > This is not enough information to debug anything. 
At the very least\n> > we need\n> > the exact steps performed to set up the build and\n> > meson-logs/meson-log.txt\n> >\n> > First off this is on an ARM64 machine\n> >\n> > The last error from meson-log.txt is\n> >\n> > ...\n> > Checking if \"c99\" compiles: YES\n> >\n> > meson.build:1479:17: ERROR: Can not run test applications in this cross\n> > environment.\n> > ...\n>\n> I have never tried this, but there are instructions for cross-compiling\n> with meson: https://mesonbuild.com/Cross-compilation.html\n\n\nIt seems that attempting to cross-compile on an ARM machine might be asking\ntoo much as the use cases are pretty limited.\n\nSo the impetus for this is that folks require 32bit versions of psqlODBC.\nUnfortunately EDB is no longer distributing a 32 bit windows version.\n\nAll I really need is a 32bit libpq. This seems like a much smaller lift.\nSuggestions ?\n\nDave\n\nOn Thu, 21 Mar 2024 at 03:56, Peter Eisentraut <[email protected]> wrote:On 20.03.24 22:49, Dave Cramer wrote:\n> \n> \n> \n> On Wed, 20 Mar 2024 at 17:11, Andres Freund <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n>     Hi,\n> \n>     On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:\n>      > I am getting the following error\n>      >\n>      > meson.build:1479:17: ERROR: Can not run test applications in this\n>     cross\n>      > environment.\n>      >\n>      > Have configured for amd64_x86\n>      >\n>      > Running `meson setup --wipe build --prefix=c:\\postgres86`\n> \n>     This is not enough information to debug anything. At the very least\n>     we need\n>     the exact steps performed to set up the build and\n>     meson-logs/meson-log.txt\n> \n> First off this is on an ARM64 machine\n> \n> The last error from meson-log.txt is\n> \n> ...\n> Checking if \"c99\" compiles: YES\n> \n> meson.build:1479:17: ERROR: Can not run test applications in this cross \n> environment.\n> ...\n\nI have never tried this, but there are instructions for cross-compiling \nwith meson: https://mesonbuild.com/Cross-compilation.htmlIt seems that attempting to cross-compile on an ARM machine might be asking too much as the use cases are pretty limited.So the impetus for this is that folks require 32bit versions of psqlODBC. Unfortunately EDB is no longer distributing a 32 bit windows version.All I really need is a 32bit libpq. This seems like a much smaller lift. Suggestions ?Dave", "msg_date": "Thu, 21 Mar 2024 07:11:23 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2024-03-21 07:11:23 -0400, Dave Cramer wrote:\n> It seems that attempting to cross-compile on an ARM machine might be asking\n> too much as the use cases are pretty limited.\n\nIt for sure is if you don't even provide the precise commands and logs of a\nfailed run...\n\n\n> So the impetus for this is that folks require 32bit versions of psqlODBC.\n> Unfortunately EDB is no longer distributing a 32 bit windows version.\n>\n> All I really need is a 32bit libpq. This seems like a much smaller lift.\n> Suggestions ?\n\nFWIW, I can cross compile postgres from linux to 32bit windows without an\nissue. 
If you really just need a 32bit libpq, that might actually be easier.\n\ncd /tmp/ && rm -rf /tmp/meson-w32 && m setup --buildtype debug -Dcassert=true -Db_pch=true --cross-file ~/src/meson/cross/linux-mingw-w64-32bit.txt /tmp/meson-w32 ~/src/postgresql && cd /tmp/meson-w32 && ninja\n\nfile src/interfaces/libpq/libpq.dll\nsrc/interfaces/libpq/libpq.dll: PE32 executable (DLL) (console) Intel 80386, for MS Windows, 19 sections\n\nYou'd need a windows openssl to actually have a useful libpq, but that should\nbe fairly simple.\n\n\nThere are two warnings that I think point to us doing something wrong, but they're not affecting libpq:\n\n[1585/1945 42 81%] Linking target src/bin/pgevent/pgevent.dll\n/usr/bin/i686-w64-mingw32-ld: warning: resolving _DllRegisterServer by linking to _DllRegisterServer@0\nUse --enable-stdcall-fixup to disable these warnings\nUse --disable-stdcall-fixup to disable these fixups\n/usr/bin/i686-w64-mingw32-ld: warning: resolving _DllUnregisterServer by linking to _DllUnregisterServer@0\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:51:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "Andres,\n\n\nOn Thu, 21 Mar 2024 at 12:51, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-03-21 07:11:23 -0400, Dave Cramer wrote:\n> > It seems that attempting to cross-compile on an ARM machine might be\n> asking\n> > too much as the use cases are pretty limited.\n>\n> It for sure is if you don't even provide the precise commands and logs of a\n> failed run...\n>\n>\n> > So the impetus for this is that folks require 32bit versions of psqlODBC.\n> > Unfortunately EDB is no longer distributing a 32 bit windows version.\n> >\n> > All I really need is a 32bit libpq. This seems like a much smaller lift.\n> > Suggestions ?\n>\n> FWIW, I can cross compile postgres from linux to 32bit windows without an\n> issue. If you really just need a 32bit libpq, that might actually be\n> easier.\n>\n> cd /tmp/ && rm -rf /tmp/meson-w32 && m setup --buildtype debug\n> -Dcassert=true -Db_pch=true --cross-file\n> ~/src/meson/cross/linux-mingw-w64-32bit.txt /tmp/meson-w32 ~/src/postgresql\n> && cd /tmp/meson-w32 && ninja\n>\n> file src/interfaces/libpq/libpq.dll\n> src/interfaces/libpq/libpq.dll: PE32 executable (DLL) (console) Intel\n> 80386, for MS Windows, 19 sections\n>\n> You'd need a windows openssl to actually have a useful libpq, but that\n> should\n> be fairly simple.\n>\n>\n> There are two warnings that I think point to us doing something wrong, but\n> they're not affecting libpq:\n>\n> [1585/1945 42 81%] Linking target src/bin/pgevent/pgevent.dll\n> /usr/bin/i686-w64-mingw32-ld: warning: resolving _DllRegisterServer by\n> linking to _DllRegisterServer@0\n> Use --enable-stdcall-fixup to disable these warnings\n> Use --disable-stdcall-fixup to disable these fixups\n> /usr/bin/i686-w64-mingw32-ld: warning: resolving _DllUnregisterServer by\n> linking to _DllUnregisterServer@0\n>\n>\n>\nAttached correct log file\n\nDave", "msg_date": "Thu, 21 Mar 2024 13:17:44 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2024-03-21 13:17:44 -0400, Dave Cramer wrote:\n> Attached correct log file\n\nHm. 
So there's something a bit odd:\n\n\n> Build started at 2024-03-21T13:07:08.707715\n> Main binary: C:\\Program Files\\Meson\\meson.exe\n> Build Options: '-Dextra_include_dirs=c:\\Program Files\\OpenSSL-Win64\\include' -Derrorlogs=True '-Dextra_lib_dirs=c:\\Program Files\\OpenSSL-win64' '-Dprefix=c:\\postgres86'\n> Python system: Windows\n> The Meson build system\n> Version: 1.3.1\n> Source dir: C:\\Users\\davec\\projects\\postgresql\n> Build dir: C:\\Users\\davec\\projects\\postgresql\\build\n> Build type: native build\n\nSo meson thinks this is a native build, not a cross build. But then later\nrealizes that generated binaries and the current platform aren't the same. And\nthus errors out.\n\nThe line numbers don't match my tree, but I think what's failing is the\nsizeof() check. Which has support for cross builds, but it only uses that\n(slower) path if it knows that a cross build is being used.\n\n\nI suggest actually telling meson to cross compile. I don't quite know what\nproperties you're going to need, but something like the following (put it in a\nfile, point meson to it wity --cross-file) might give you a start:\n\n\n[properties]\nneeds_exe_wrapper = false\n\n[binaries]\nc = 'cl'\ncpp = 'cl'\nar = 'lib'\nwindres = 'rc'\n\n[host_machine]\nsystem = 'windows'\ncpu_family = 'x86_64'\ncpu = 'x86_64'\nendian = 'little'\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Mar 2024 17:03:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:21 AM Andres Freund <[email protected]> wrote:\n> Obviously tests can't run in that environment, but building should be\n> possible. I can e.g. build postgres for x86-64 windows on my linux machine,\n> but can't run the tests (in theory they could be run with wine, but wine isn't\n> complete enough to run postgres).\n\nFor anyone interested in that, the most apparent reason why WINE\ncurrently can't run a cross-build of psql.exe or postgres.exe on a\nLinux/FreeBSD/macOS host is now on their bug list[1] and just needs\nsomeone to write an easy patch. I dimly recall there were more subtle\nthings that broke here and there before the relevant change of ours\nwent in, and it would be pretty cool if we could make a list and\nreport 'em...\n\n[1] https://bugs.winehq.org/show_bug.cgi?id=56951\n\n\n", "msg_date": "Mon, 22 Jul 2024 00:20:58 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to build x86 version on windows using meson" } ]
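To make the cross-file suggestion above concrete: once the [properties]/[binaries]/[host_machine] file is saved somewhere, the setup step just needs to reference it from the top of the PostgreSQL source tree. The file and build-directory names below are placeholders of mine, not something taken from the thread, and the '#' comments are only for readability (the meson and ninja invocations themselves are the same from cmd or PowerShell). For a 32-bit x86 target the cpu_family/cpu lines in the cross file would presumably be 'x86' rather than 'x86_64'.

# Sketch only: configure a separate build tree against the cross file, then build.
meson setup --cross-file x86-cross.ini --prefix=c:\postgres86 build-x86
ninja -C build-x86

As in the earlier output in this thread, the resulting DLL should land under src/interfaces/libpq/ inside the build directory.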
[ { "msg_contents": "Hi Aadhav Vignesh,\n\nInterestingly, Alexander asked for ideas for GSoC projects at Supabase\n(I'm on the Auth team), and I proposed the RLS templates idea. As you\nalready pointed out, the idea comes out of us seeing how RLS policies\nare used in real-life and the pain points associated with managing a\nnon-trivial number of them.\n\nI am not a code contributor to Postgres, so please excuse my lack of\nknowledge on how internals work or what's best to implement.\n\n From your original message:\nhttps://www.postgresql.org/message-id/CAMuaUMJ10_4CDxtHOTHbp%2BY%2Bh2uR2wxcVtJPbBvp9A9Njs5kUA%40mail.gmail.com\n\nIn my opinion, as a user of the policies, I prefer the direction of\noption (2) with `CREATE POLICY TEMPLATE`. It’s much closer to the\nexisting `CREATE POLICY` statements, a lot of the internals could\nprobably be reused, which would make the project move along faster.\n\nI do think there should be the option to attach it immediately to\ntables, something like `CREATE POLICY TEMPLATE <name> ATTACH TO\n<tables> AS ...` but it’s not a deal-breaker for me.\n\nI don’t really know enough about how this actually works internally,\nbut on this point:\n\n> The major challenge here is the construction of the qualifiers for the\n> policy, as the entire process [1] relies on a table ID, however, we don’t\n> have access to any table names in this statement.\n>\n> I also find the aspect of constructing qualifiers directly from the\n> plain-text state less ideal, and I honestly have no clue if this is\n> possible.\n\nI would say that because templates are not active until they’re\nattached, the logic for “validating” the query part should come when\nthe template is being attached on the table. So a non-qualifying\n`USING` clause would error out on the `ATTACH` command. Similarly,\naltering the policy template would force validation on the clause over\nthe already attached tables to the policy.\n\nI would also suggest looking into the ability to attach a template to\na schema. There are some obvious benefits of this — creating a table\nin the schema automatically gets the correct RLS behavior without\nhaving to explicitly attach a template or policies, making it\nsecure-by-default. There is some interesting behavior of this as well\nwhich _may_ be beneficial. For example, if you create a schema-wide\ntemplate, say `user_id = current_user`, and you create a table that\ndoesn’t have a `user_id` column — the creation would fail. This is in\nmany practical situations beneficial, as it lessens the likelihood of\ncreating a table that can’t be properly secured in that application\ncontext. On the flip side, it’s probably going to be challenging to\nimplement this and there may be other implications too (like adding it\nin `ALTER TABLE`, etc).\n\nSome fringe ideas that I also think are worth exploring are:\n\n1. Make the `ON table_name` part of the `CREATE POLICY` statement\noptional, which would create the “template.” This would require\naltering the internals of the policy-table relationship to support 0,\n1, 2, … tables instead of the current 1. Again I have no idea how this\nis implemented internally, but it could be a fairly simple change\nwithout having to introduce new concepts, objects, and commands.\n2. Have templates only as the object that enables the one-to-many\nrelationship between a policy and table. 
For example, you could create\na policy like `CREATE POLICY owned_by_user ON table ...`, and then you\ncould do something like `CREATE POLICY TEMPLATE owned_by_user AS\nPOLICY schema.table.owned_by_user ATTACH TO tables...`. So essentially\nthe “template object” just references an already existing policy\nattached to a table, and it allows you to attach it to other tables\ntoo.\n\nLet me know what you think!\n\n(Apology if this email is not threaded in your email client, I'm\nwriting it after subscribing to the list, which means the Reply option\nis not available.)\n\n\n", "msg_date": "Thu, 21 Mar 2024 12:37:24 +0700", "msg_from": "Stojan Dimitrovski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Introduce row-level security templates" }, { "msg_contents": "Hi Stojan,\n\n\n> I do think there should be the option to attach it immediately to\n> tables, something like `CREATE POLICY TEMPLATE <name> ATTACH TO\n> <tables> AS ...` but it’s not a deal-breaker for me.\n\n\nI would say that because templates are not active until they’re\n> attached, the logic for “validating” the query part should come when\n> the template is being attached on the table. So a non-qualifying\n> `USING` clause would error out on the `ATTACH` command. Similarly,\n> altering the policy template would force validation on the clause over\n> the already attached tables to the policy.\n\n\nThat's an interesting idea. I believe that it could be achieved with some\nmodification, but I'm thinking about the preferred behavior on attaching\ntemplates to tables: do we want to error out instantly if we encounter a\ncase where the qualifier isn't applicable to a particular table, or do we\nlet the template get attached to other tables silently?\n\nI would also suggest looking into the ability to attach a template to\n> a schema. There are some obvious benefits of this — creating a table\n> in the schema automatically gets the correct RLS behavior without\n> having to explicitly attach a template or policies, making it\n> secure-by-default. There is some interesting behavior of this as well\n> which _may_ be beneficial. For example, if you create a schema-wide\n> template, say `user_id = current_user`, and you create a table that\n> doesn’t have a `user_id` column — the creation would fail. This is in\n> many practical situations beneficial, as it lessens the likelihood of\n> creating a table that can’t be properly secured in that application\n> context.\n\n\nI like this idea, as a schema-level template would be beneficial in some\ncases, but this change would introduce more rigidity or disruptions. For\nexample, if a table needs to be created in a schema without any\nrestrictions on access, it would fail as the schema now enforces RLS checks\non table creation. I do feel that this proposal has its benefits, but this\nalso introduces a binary/dichotomous decision: either you enable RLS on\neach table in the schema, or you don't.\n\nOne way to solve this is to manually modify each table that doesn't need\nRLS checks by disabling it: `ALTER TABLE <table_name> DISABLE ROW LEVEL\nSECURITY;`, but I'm not sure if this is ideal, as this introduces more\noperational/administration complexity.\n\n1. Make the `ON table_name` part of the `CREATE POLICY` statement\n> optional, which would create the “template.” This would require\n> altering the internals of the policy-table relationship to support 0,\n> 1, 2, … tables instead of the current 1. 
Again I have no idea how this\n> is implemented internally, but it could be a fairly simple change\n> without having to introduce new concepts, objects, and commands.\n\n\nInteresting, what does `0` entail in this case? Current behavior is to\nenforce a policy on a table, if that's made optional, would that mean if no\ntables are specified in `CREATE POLICY`, would it be considered as a\nschema-level policy?\n\nWouldn't it be better if we had a way to explicitly specify when a\nschema-level policy is required to be created? With the proposed behavior,\nthere might be cases where users might accidentally trigger/enforce a\nschema-level policy if they failed to specify any table names.\n\n2. Have templates only as the object that enables the one-to-many\n> relationship between a policy and table. For example, you could create\n> a policy like `CREATE POLICY owned_by_user ON table ...`, and then you\n> could do something like `CREATE POLICY TEMPLATE owned_by_user AS\n> POLICY schema.table.owned_by_user ATTACH TO tables...`. So essentially\n> the “template object” just references an already existing policy\n> attached to a table, and it allows you to attach it to other tables\n> too.\n\n\nI believe that's possible by utilizing the system catalogs, and finding\nreferences to the policy as you mentioned, but it's highly sensitive to\ncases where the original policy is deleted, as now you can't refer to the\noriginal policy. There can be modifications made to `DROP POLICY` to also\nremove the top-level/parent template when the original policy is deleted,\nbut I'm not sure if that behavior is preferred.\n\nThanks,\nAadhav\n\nHi Stojan, I do think there should be the option to attach it immediately totables, something like `CREATE POLICY TEMPLATE <name> ATTACH TO<tables> AS ...` but it’s not a deal-breaker for me.I would say that because templates are not active until they’reattached, the logic for “validating” the query part should come whenthe template is being attached on the table. So a non-qualifying`USING` clause would error out on the `ATTACH` command. Similarly,altering the policy template would force validation on the clause overthe already attached tables to the policy.That's an interesting idea. I believe that it could be achieved with some modification, but I'm thinking about the preferred behavior on attaching templates to tables: do we want to error out instantly if we encounter a case where the qualifier isn't applicable to a particular table, or do we let the template get attached to other tables silently?I would also suggest looking into the ability to attach a template toa schema. There are some obvious benefits of this — creating a tablein the schema automatically gets the correct RLS behavior withouthaving to explicitly attach a template or policies, making itsecure-by-default. There is some interesting behavior of this as wellwhich _may_ be beneficial. For example, if you create a schema-widetemplate, say `user_id = current_user`, and you create a table thatdoesn’t have a `user_id` column — the creation would fail. This is inmany practical situations beneficial, as it lessens the likelihood ofcreating a table that can’t be properly secured in that applicationcontext.I like this idea, as a schema-level template would be beneficial in some cases, but this change would introduce more rigidity or disruptions. For example, if a table needs to be created in a schema without any restrictions on access, it would fail as the schema now enforces RLS checks on table creation. 
I do feel that this proposal has its benefits, but this also introduces a binary/dichotomous decision: either you enable RLS on each table in the schema, or you don't.One way to solve this is to manually modify each table that doesn't need RLS checks by disabling it: `ALTER TABLE <table_name> DISABLE ROW LEVEL SECURITY;`, but I'm not sure if this is ideal, as this introduces more operational/administration complexity.1. Make the `ON table_name` part of the `CREATE POLICY` statementoptional, which would create the “template.” This would requirealtering the internals of the policy-table relationship to support 0,1, 2, … tables instead of the current 1. Again I have no idea how thisis implemented internally, but it could be a fairly simple changewithout having to introduce new concepts, objects, and commands.Interesting, what does `0` entail in this case? Current behavior is to enforce a policy on a table, if that's made optional, would that mean if no tables are specified in `CREATE POLICY`, would it be considered as a schema-level policy?Wouldn't it be better if we had a way to explicitly specify when a schema-level policy is required to be created? With the proposed behavior, there might be cases where users might accidentally trigger/enforce a schema-level policy if they failed to specify any table names.2. Have templates only as the object that enables the one-to-manyrelationship between a policy and table. For example, you could createa policy like `CREATE POLICY owned_by_user ON table ...`, and then youcould do something like `CREATE POLICY TEMPLATE owned_by_user ASPOLICY schema.table.owned_by_user ATTACH TO tables...`. So essentiallythe “template object” just references an already existing policyattached to a table, and it allows you to attach it to other tablestoo.I believe that's possible by utilizing the system catalogs, and finding references to the policy as you mentioned, but it's highly sensitive to cases where the original policy is deleted, as now you can't refer to the original policy. There can be modifications made to `DROP POLICY` to also remove the top-level/parent template when the original policy is deleted, but I'm not sure if that behavior is preferred.Thanks,Aadhav", "msg_date": "Thu, 28 Mar 2024 10:23:19 +0530", "msg_from": "Aadhav Vignesh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Introduce row-level security templates" } ]
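A rough SQL sketch of how the syntax discussed above could read. None of this exists in PostgreSQL today; the keywords (CREATE POLICY TEMPLATE, ATTACH TO) and the example table names are purely illustrative, with validation of the USING clause assumed to happen at attach time as proposed:

-- define a template; no table is referenced yet, so the qualifier is
-- not validated at this point
CREATE POLICY TEMPLATE owned_by_user
    FOR ALL
    USING (user_id = current_user);

-- attach it to concrete tables; each table is checked for a user_id
-- column here, and a non-qualifying table raises an error
ALTER POLICY TEMPLATE owned_by_user ATTACH TO tenant_orders, tenant_invoices;

-- the schema-level variant discussed above would apply the template to
-- every existing and future table in the schema
ALTER POLICY TEMPLATE owned_by_user ATTACH TO SCHEMA tenant_data;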
[ { "msg_contents": "Hello,\n\nwe run multiple versions of PostgreSQL instances on production. Some time ago\nwe add new physical servers and decided to go with latest GA from pgdg APT\nrepository, that is PostgreSQL 16.\n\nWe encounter slow `GRANT ROLES` only on PostgreSQL 16 instances up to 42 seconds\nin production, the client process at PostgresSQL would use 100% of the CPU.\nWhich is a surprise compared to other instances running older PostgreSQL\nreleases. On production we have a *LOT* of ROLEs, which unfortunately a case\nthat we did not test before switching the new servers into production mode.\n\n\nThe Application & ROLEs\n-----------------------\nOur application make use of ROLEs. We create group ROLEs for each tenant of our\napplication, these ROLEs are named with `d_` and `a_` prefix.\n\nA special ROLE, called `acc`, it will be a member to each of these `d_` and\n`a_` ROLEs.\n\nThe application have a concept of \"session\", which it would mantain and I think\noutside the scope of this e-mail. In relation to PostgreSQL, the application\nwould create a PostgreSQL ROLE that corresponds to its own (application)\nsession. It would name these ROLEs with `s_` prefix, which CREATEd and\nGRANTed its permission on every application's \"session\".\n\nWhen an application \"session\" started, user with `acc` ROLE would grant\nmembersip of `d_` ROLE to `s_` ROLE (ie. GRANT ROLE `d_xxxx` TO `s_xxxx`;)\n\nTo make this clear, for example, we have (say) role `d_202402` already existing\nand application would create a new ROLE `s_0000001` which corresponds to\napplication's \"session\". Application that connects with special ROLE `acc`\nwould GRANT ROLE `d_202402` to the ROLE `s_0000001`, like so:\n\nGRANT d_202402 TO s_0000001;\n\n\nIn production we have up to 13 thousands of these ROLEs, each:\n\n$ sudo -u postgres psql -p 5531\npsql (16.2 (Debian 16.2-1.pgdg120+2))\nType \"help\" for help.\n\npostgres=# select count(*) s_roles_count from pg_catalog.pg_authid\nwhere rolname like 's_%';\ns_roles_count\n---------------\n13299\n(1 row)\n\npostgres=# select count(*) a_roles_count from pg_catalog.pg_authid\nwhere rolname like 'a_%';\na_roles_count\n---------------\n12776\n(1 row)\n\npostgres=# select count(*) d_roles_count from pg_catalog.pg_authid\nwhere rolname like 'd_%';\nd_roles_count\n---------------\n13984\n(1 row)\n\n\nThe Setup\n---------\n\nInvestigating this slow `GRANT ROLE` we start a VM running Debian 11,\nand create a lot of roles.\n\ncreate special `acc` role and write to some file:\n$ echo -e \"CREATE ROLE acc WITH LOGIN NOSUPERUSER INHERIT CREATEDB\nCREATEROLE NOREPLICATION;\\n\\n\" > create_acc.sql\n\ncreate a lot of `a_` roles and make sure `acc` is member of each one of them:\n$ for idx1 in $(seq -w 1 100); do for idx2 in $(seq -w 1 12); do for\nidx3 in $(seq -w 1 10); do echo \"CREATE ROLE a_${idx1}${idx2}${idx3}\nWITH NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;\"; echo \"GRANT\na_${idx1}${idx2}${idx3} TO acc WITH ADMIN OPTION;\"; done; done; done >\ncreate_a.sql\n\ncreate a lot of `d_` roles and make sure `acc` is member of each one of them:\n$ for idx1 in $(seq -w 1 100); do for idx2 in $(seq -w 1 12); do for\nidx3 in $(seq -w 1 10); do echo \"CREATE ROLE d_${idx1}${idx2}${idx3}\nWITH NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;\"; echo \"GRANT\nd_${idx1}${idx2}${idx3} TO acc WITH ADMIN OPTION;\"; done; done; done >\ncreate_d.sql\n\ncreate a lot of `s_` roles:\n$ for idx1 in $(seq -w 1 100); do for idx2 in $(seq -w 1 12); do for\nidx3 in $(seq -w 1 10); do echo 
\"CREATE ROLE s_${idx1}${idx2}${idx3}\nWITH NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;\"; done; done;\ndone > create_s.sql\n\nmerge ROLE creation into one file:\n$ cat create_acc.sql create_a.sql create_d.sql create_s.sql >\n/tmp/create-roles.sql\n\n\nPostgreSQL 16\n-------------\n\nInstall PostgreSQL 16:\n--\n$ sudo sh -c 'echo \"deb https://apt.postgresql.org/pub/repos/apt\n$(lsb_release -cs)-pgdg main\" > /etc/apt/sources.list.d/pgdg.list'\n$ sudo apt install gnupg2\n$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc\n| sudo apt-key add -\n$ sudo apt-get update\n$ sudo apt-get -y install postgresql-16 postgresql-client-16\n\n\nCreate PostgreSQL 16 instance:\n--\n$ sudo pg_dropcluster --stop 16 main # drop default Debian cluster\n$ sudo pg_createcluster 16 pg16\n$ echo \"local all acc trust\" | sudo tee\n/etc/postgresql/16/pg16/pg_hba.conf\n$ echo \"local all postgres peer\" | sudo tee -a\n/etc/postgresql/16/pg16/pg_hba.conf\n$ sudo systemctl start [email protected]\n\n\nImport lots of roles:\n--\n$ sudo -u postgres /usr/lib/postgresql/16/bin/psql -f\n/tmp/create-roles.sql -p 5432 -d postgres\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE:\n--\n$ time sudo -u postgres /usr/lib/postgresql/16/bin/psql -U acc\npostgres -c 'GRANT d_0010109 TO s_0010109;'\nGRANT ROLE\n\nreal 0m7.579s\nuser 0m0.054s\nsys 0m0.020s\n\n\nThis is the surprising behavior for PostgreSQL 16. It seems there's a new logic\nin PostgreSQL that checks against each role, and it took 100% of CPU.\n\nAt this point we know `acc` is just another ROLE that happens to have ADMIN\nprivilege that is a member of `d_0010109` group ROLE.\n\nBut what happens when `acc` is a SUPERUSER?\n\n\nAlter role `acc` as SUPERUSER:\n--\n$ sudo -u postgres /usr/lib/postgresql/16/bin/psql -c 'ALTER ROLE acc\nWITH SUPERUSER'\nALTER ROLE\n\nThis is a workaround to make GRANT ROLE bearable.\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE, again:\n--\n$ time sudo -u postgres /usr/lib/postgresql/16/bin/psql -U acc\npostgres -c 'GRANT d_0010108 TO s_0010108;'\nGRANT ROLE\n\nreal 0m0.079s\nuser 0m0.054s\nsys 0m0.019s\n\nOK this is fast.\n\nBut what hapens when `acc` is back being not a SUPERUSER?\n\n\nAlter role `acc` to stop being SUPERUSER:\n--\n$ sudo -u postgres psql -c 'ALTER ROLE acc WITH NOSUPERUSER'\nALTER ROLE\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE with `acc` not a SUPERUSER:\n--\n$ time sudo -u postgres /usr/lib/postgresql/16/bin/psql -U acc\npostgres -c 'GRANT d_0010107 TO s_0010107;'\nGRANT ROLE\n\nreal 0m7.741s\nuser 0m0.055s\nsys 0m0.021s\n\nAs expected, slow `GRANT ROLE` again.\n\n\n\nAt this point, we try with PostgreSQL 15 just to make sure that this\nis new to PostgreSQL 16.\n\n$ sudo systemctl stop postgresql@16-pg16\n\n\nPostgreSQL 15\n-------------\n\nInstall PostgreSQL 15:\n--\n$ sudo apt-get update\n$ sudo apt-get -y install postgresql-15 postgresql-client-15\n\n$ sudo pg_dropcluster --stop 15 main # drop default Debian cluster\n$ sudo pg_createcluster 15 pg15\n$ echo \"local all acc trust\" | sudo tee\n/etc/postgresql/15/pg15/pg_hba.conf\n$ echo \"local all postgres peer\" | sudo tee -a\n/etc/postgresql/15/pg15/pg_hba.conf\n$ sudo systemctl start [email protected]\n\n\nImport lots of roles:\n--\n$ sudo -u postgres /usr/lib/postgresql/15/bin/psql -f\n/tmp/create-roles.sql -p 5433 -d postgres\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE:\n--\n$ time sudo -u postgres /usr/lib/postgresql/15/bin/psql -U acc -p 5433\npostgres -c 'GRANT d_0010109 TO 
s_0010109;'\nGRANT ROLE\n\nreal 0m0.077s\nuser 0m0.054s\nsys 0m0.017s\n\nSeems OK with the same amount of ROLEs. The `acc` ROLE is not a SUPERUSER here.\n\n\nAlter role `acc` as SUPERUSER:\n--\n$ sudo -u postgres /usr/lib/postgresql/15/bin/psql -p 5433 -c 'ALTER\nROLE acc WITH SUPERUSER'\nALTER ROLE\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE, again:\n--\n$ time sudo -u postgres /usr/lib/postgresql/15/bin/psql -p 5433 -U acc\npostgres -c 'GRANT d_0010108 TO s_0010108;'\nGRANT ROLE\n\nreal 0m0.084s\nuser 0m0.057s\nsys 0m0.021s\n\nDoesn't matter, GRANT ROLE works still as fast.\n\n\nAlter role `acc` to stop being a SUPERUSER:\n--\n$ sudo -u postgres /usr/lib/postgresql/15/bin/psql -p 5433 -c 'ALTER\nROLE acc WITH NOSUPERUSER'\nALTER ROLE\n\n\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE with `acc` not a SUPERUSER:\n--\n$ time sudo -u postgres /usr/lib/postgresql/15/bin/psql -p 5433 -U acc\npostgres -c 'GRANT d_0010107 TO s_0010107;'\nGRANT ROLE\n\nreal 0m0.077s\nuser 0m0.054s\nsys 0m0.017s\n\nAgain, doesn't matter, GRANT ROLE works still as fast.\n\n\nLooking At The Source Code\n--------------------------\n\nLooking at git diff of `REL_15_6` against `REL_16_0`, it seems the\n`roles_is_member_of` function called by the new in PostgreSQL 16\n`check_role_membership_authorization`\nis expensive for our use case.\n\n\n REL_16_0: src/backend/commands/user.c:1562\n ---8<------\n(errcode(ERRCODE_INVALID_GRANT_OPERATION),\nerrmsg(\"column names cannot be included in GRANT/REVOKE ROLE\")));\n\nroleid = get_role_oid(rolename, false);\ncheck_role_membership_authorization(currentUserId,\nroleid, stmt->is_grant);\nif (stmt->is_grant)\n --->8------\n\nWhile I can see the value in improvements on how ROLEs are being handled\nPostgreSQL 16 onward, I'm curious what would help for setups that has thousands\nof ROLEs like us outside of patching the source code?\n\n\n", "msg_date": "Thu, 21 Mar 2024 14:10:06 +0700", "msg_from": "alex work <[email protected]>", "msg_from_op": true, "msg_subject": "Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 8:10 AM alex work <[email protected]> wrote:\n\n> We encounter slow `GRANT ROLES` only on PostgreSQL 16 instances up to 42\n> seconds\n> in production, the client process at PostgresSQL would use 100% of the\n> CPU. [...]\n>\nUsing ROLE `acc`, grant `d_` ROLE to a session ROLE:\n> real 0m7.579s [...]\n\nPostgreSQL 15\n> Using ROLE `acc`, grant `d_` ROLE to a session ROLE:\n> real 0m0.077s\n>\n\nOuch, that's a ~ 100x regression. Thanks for the write-up, that's worrying.\nWe don't have as many ROLEs, but we do have plenty, so this is worrying.\n\nOn top of the v16 ROLE changes breaking on ROLE logic, which was fine prior\n(v12-v15).\nWe've paused for now our planned v16 upgrade, until we have more time to\nadapt.\n\nLike you, I welcome the changes. But it turns out more expensive to adapt\nto them.\nAnd your report certainly makes me wonder whether we should hold off until\nthat perf regression is addressed.\n\nThanks, --DD\n\nOn Thu, Mar 21, 2024 at 8:10 AM alex work <[email protected]> wrote:We encounter slow `GRANT ROLES` only on PostgreSQL 16 instances up to 42 seconds\nin production, the client process at PostgresSQL would use 100% of the CPU. [...]Using ROLE `acc`, grant `d_` ROLE to a session ROLE:real    0m7.579s [...] PostgreSQL 15Using ROLE `acc`, grant `d_` ROLE to a session ROLE:real    0m0.077sOuch, that's a ~ 100x regression. 
Thanks for the write-up, that's worrying.We don't have as many ROLEs, but we do have plenty, so this is worrying.On top of the v16 ROLE changes breaking on ROLE logic, which was fine prior (v12-v15).We've paused for now our planned v16 upgrade, until we have more time to adapt.Like you, I welcome the changes. But it turns out more expensive to adapt to them.And your report certainly makes me wonder whether we should hold off until that perf regression is addressed.Thanks, --DD", "msg_date": "Thu, 21 Mar 2024 08:59:29 +0100", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "[ redirecting to -hackers ]\n\nalex work <[email protected]> writes:\n> We encounter slow `GRANT ROLES` only on PostgreSQL 16 instances up to 42 seconds\n> in production, the client process at PostgresSQL would use 100% of the CPU.\n> Which is a surprise compared to other instances running older PostgreSQL\n> releases. On production we have a *LOT* of ROLEs, which unfortunately a case\n> that we did not test before switching the new servers into production mode.\n\nI poked into this a bit. It seems the problem is that as of v16, we\ntry to search for the \"best\" role membership path from the current\nuser to the target role, and that's done in a very brute-force way,\nas a side effect of computing the set of *all* role memberships the\ncurrent role has. In the given case, we could have skipped all that\nif we simply tested whether the current role is directly a member\nof the target: it is, so there can't be any shorter path. But in\nany case roles_is_member_of has horrid performance when the current\nrole is a member of a lot of roles.\n\nIt looks like part of the blame might be ascribable to catcache.c,\nas if you look at the problem microscopically you find that\nroles_is_member_of is causing catcache to make a ton of AUTHMEMMEMROLE\ncatcache lists, and SearchSysCacheList is just iterating linearly\nthrough the cache's list-of-lists, so that search is where the O(N^2)\ntime is actually getting taken. Up to now that code has assumed that\nany one catcache would not have very many catcache lists. Maybe it's\ntime to make that smarter; but since we've gotten away with this\nimplementation for decades, I can't help feeling that the real issue\nis with roles_is_member_of's usage pattern.\n\nFor self-containedness, attached is a directly usable shell script\nto reproduce the problem. The complaint is that the last GRANT\ntakes multiple seconds (about 5s on my machine), rather than\nmilliseconds.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 21 Mar 2024 12:07:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "I wrote:\n> I poked into this a bit. It seems the problem is that as of v16, we\n> try to search for the \"best\" role membership path from the current\n> user to the target role, and that's done in a very brute-force way,\n> as a side effect of computing the set of *all* role memberships the\n> current role has.\n\nActually, roles_is_member_of sucks before v16 too; the new thing\nis only that it's being invoked during GRANT ROLE. 
Using the\nroles created by the given test case, I see in v15:\n\n$ psql\npsql (15.6)\nType \"help\" for help.\n\nregression=# drop table at;\nDROP TABLE\nregression=# set role a_0010308;\nSET\nregression=> create table at(f1 int);\nCREATE TABLE\nregression=> \\timing\nTiming is on.\nregression=> set role acc;\nSET\nTime: 0.493 ms\nregression=> insert into at values(1);\nINSERT 0 1\nTime: 3565.029 ms (00:03.565)\nregression=> insert into at values(1);\nINSERT 0 1\nTime: 2.308 ms\n\nSo it takes ~3.5s to populate the roles_is_member_of cache for \"acc\"\ngiven this membership set. This is actually considerably worse than\nin v16 or HEAD, where the same test takes about 1.6s for me.\n\nApparently the OP has designed their use-case so that they dodge these\nimplementation problems in v15 and earlier, but that's a far cry from\nsaying that there were no problems with lots-o-roles before v16.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 13:00:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Tom Lane:\n> Actually, roles_is_member_of sucks before v16 too; the new thing\n> is only that it's being invoked during GRANT ROLE. Using the\n> roles created by the given test case, I see in v15:\n> \n> [...]\n> So it takes ~3.5s to populate the roles_is_member_of cache for \"acc\"\n> given this membership set. This is actually considerably worse than\n> in v16 or HEAD, where the same test takes about 1.6s for me.\n\nAh, this reminds me that I hit the same problem about a year ago, but \nhaven't had the time to put together a test-case, yet. In my case, it's \nlike this:\n- I have one role \"authenticator\" with which the application (PostgREST) \nconnects to the database.\n- This role has been granted all of the actual user roles and will then \ndo a SET ROLE for each authenticated request it handles.\n- In my case that's currently about 120k roles granted to \n\"authenticator\", back then it was probably around 60k.\n- The first request (SET ROLE) for each session took between 5 and 10 \n*minutes* to succeed - subsequent requests were instant.\n- When the \"authenticator\" role is made SUPERUSER, the first request is \ninstant, too.\n\nI guess this matches exactly what you are observing.\n\nThere is one more thing that is actually even worse, though: When you \ntry to cancel the query or terminate the backend while the SET ROLE is \nstill running, this will not work. It will not only not cancel the \nquery, but somehow leave the process for that backend in some kind of \nzombie state that is impossible to recover from.\n\nAll of this was v15.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:47:17 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "I wrote:\n> It looks like part of the blame might be ascribable to catcache.c,\n> as if you look at the problem microscopically you find that\n> roles_is_member_of is causing catcache to make a ton of AUTHMEMMEMROLE\n> catcache lists, and SearchSysCacheList is just iterating linearly\n> through the cache's list-of-lists, so that search is where the O(N^2)\n> time is actually getting taken. Up to now that code has assumed that\n> any one catcache would not have very many catcache lists. 
Maybe it's\n> time to make that smarter; but since we've gotten away with this\n> implementation for decades, I can't help feeling that the real issue\n> is with roles_is_member_of's usage pattern.\n\nI wrote a quick finger exercise to make catcache.c use a hash table\ninstead of a single list for CatCLists, modeling it closely on the\nexisting hash logic for simple catcache entries. This helps a good\ndeal, but I still see the problematic GRANT taking ~250ms, compared\nto 5ms in v15. roles_is_member_of is clearly on the hook for that.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 21 Mar 2024 15:42:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "I wrote:\n> ... I still see the problematic GRANT taking ~250ms, compared\n> to 5ms in v15. roles_is_member_of is clearly on the hook for that.\n\nAh: looks like that is mainly the fault of the list_append_unique_oid\ncalls in roles_is_member_of. That's also an O(N^2) cost of course,\nthough with a much smaller constant factor.\n\nI don't think we have any really cheap way to de-duplicate the role\nOIDs, especially seeing that it has to be done on-the-fly within the\ncollection loop, and the order of roles_list is at least potentially\ninteresting. Not sure how to make further progress without a lot of\nwork.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 16:31:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n> I wrote:\n>> ... I still see the problematic GRANT taking ~250ms, compared\n>> to 5ms in v15. roles_is_member_of is clearly on the hook for that.\n> \n> Ah: looks like that is mainly the fault of the list_append_unique_oid\n> calls in roles_is_member_of. That's also an O(N^2) cost of course,\n> though with a much smaller constant factor.\n> \n> I don't think we have any really cheap way to de-duplicate the role\n> OIDs, especially seeing that it has to be done on-the-fly within the\n> collection loop, and the order of roles_list is at least potentially\n> interesting. Not sure how to make further progress without a lot of\n> work.\n\nAssuming these are larger lists, this might benefit from optimizations\ninvolving SIMD intrinsics. I looked into that a while ago [0], but the\neffort was abandoned because we didn't have any concrete use-cases at the\ntime. (I'm looking into some additional optimizations in a separate thread\n[1] that would likely apply here, too.)\n\n[0] https://postgr.es/m/20230308002502.GA3378449%40nathanxps13\n[1] https://postgr.es/m/20240321183823.GA1800896%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 15:40:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 03:40:12PM -0500, Nathan Bossart wrote:\n> On Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n>> I don't think we have any really cheap way to de-duplicate the role\n>> OIDs, especially seeing that it has to be done on-the-fly within the\n>> collection loop, and the order of roles_list is at least potentially\n>> interesting. 
Not sure how to make further progress without a lot of\n>> work.\n> \n> Assuming these are larger lists, this might benefit from optimizations\n> involving SIMD intrinsics. I looked into that a while ago [0], but the\n> effort was abandoned because we didn't have any concrete use-cases at the\n> time. (I'm looking into some additional optimizations in a separate thread\n> [1] that would likely apply here, too.)\n\nNever mind. With the reproduction script, I'm only seeing a ~2%\nimprovement with my patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:51:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Thu, Mar 21, 2024 at 03:40:12PM -0500, Nathan Bossart wrote:\n>> On Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n>>> I don't think we have any really cheap way to de-duplicate the role\n>>> OIDs, especially seeing that it has to be done on-the-fly within the\n>>> collection loop, and the order of roles_list is at least potentially\n>>> interesting. Not sure how to make further progress without a lot of\n>>> work.\n\n>> Assuming these are larger lists, this might benefit from optimizations\n>> involving SIMD intrinsics.\n\n> Never mind. With the reproduction script, I'm only seeing a ~2%\n> improvement with my patches.\n\nYeah, you cannot beat an O(N^2) problem by throwing SIMD at it.\n\nHowever ... I just remembered that we have a Bloom filter implementation\nin core now (src/backend/lib/bloomfilter.c). How about using that\nto quickly reject (hopefully) most role OIDs, and only do the\nlist_member_oid check if the filter passes?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 20:59:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 08:59:54PM -0400, Tom Lane wrote:\n> However ... I just remembered that we have a Bloom filter implementation\n> in core now (src/backend/lib/bloomfilter.c). How about using that\n> to quickly reject (hopefully) most role OIDs, and only do the\n> list_member_oid check if the filter passes?\n\nSeems worth a try. I've been looking for an excuse to use that...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 20:03:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 08:03:32PM -0500, Nathan Bossart wrote:\n> On Thu, Mar 21, 2024 at 08:59:54PM -0400, Tom Lane wrote:\n>> However ... I just remembered that we have a Bloom filter implementation\n>> in core now (src/backend/lib/bloomfilter.c). How about using that\n>> to quickly reject (hopefully) most role OIDs, and only do the\n>> list_member_oid check if the filter passes?\n> \n> Seems worth a try. I've been looking for an excuse to use that...\n\nThe Bloom filter appears to help a bit, although it regresses the\ncreate-roles.sql portion of the test. 
I'm assuming that's thanks to all\nthe extra pallocs and pfrees, which are probably avoidable if we store the\nfilter in a long-lived context and just clear it at the beginning of each\ncall to roles_is_member_of().\n\n HEAD hash hash+bloom\ncreate 2.02 2.06 2.92\ngrant 4.63 0.27 0.08\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Mar 2024 20:52:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "First of all thank you for looking into this.\n\nAt the moment we workaround the problem by altering `acc` ROLE into a SUPERUSER\nin PostgreSQL 16 instances. It sidestep the problem and having the lowest cost\nto implement for us. While at first we think this feels like opening a security\nhole, it does not introduce side effects for **our use case** by the way our\napplication make use of this `acc` ROLE.\n\nOf course we cannot recommend the workaround we took to others having similar\nsituation.\n\nOn Fri, Mar 22, 2024 at 7:59 AM Tom Lane <[email protected]> wrote:\n>\n> Nathan Bossart <[email protected]> writes:\n> > On Thu, Mar 21, 2024 at 03:40:12PM -0500, Nathan Bossart wrote:\n> >> On Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n> >>> I don't think we have any really cheap way to de-duplicate the role\n> >>> OIDs, especially seeing that it has to be done on-the-fly within the\n> >>> collection loop, and the order of roles_list is at least potentially\n> >>> interesting. Not sure how to make further progress without a lot of\n> >>> work.\n>\n> >> Assuming these are larger lists, this might benefit from optimizations\n> >> involving SIMD intrinsics.\n>\n> > Never mind. With the reproduction script, I'm only seeing a ~2%\n> > improvement with my patches.\n>\n> Yeah, you cannot beat an O(N^2) problem by throwing SIMD at it.\n>\n> However ... I just remembered that we have a Bloom filter implementation\n> in core now (src/backend/lib/bloomfilter.c). How about using that\n> to quickly reject (hopefully) most role OIDs, and only do the\n> list_member_oid check if the filter passes?\n>\n> regards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:02:57 +0700", "msg_from": "alex work <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> The Bloom filter appears to help a bit, although it regresses the\n> create-roles.sql portion of the test. 
I'm assuming that's thanks to all\n> the extra pallocs and pfrees, which are probably avoidable if we store the\n> filter in a long-lived context and just clear it at the beginning of each\n> call to roles_is_member_of().\n\nThe zero-fill to reinitialize the filter probably costs a good deal\nall by itself, considering we're talking about at least a megabyte.\nMaybe a better idea is to not enable the filter till we're dealing\nwith at least 1000 or so entries in roles_list, though obviously that\nwill complicate the logic.\n\n+ if (bloom_lacks_element(bf, (unsigned char *) &otherid, sizeof(Oid)))\n+ roles_list = lappend_oid(roles_list, otherid);\n+ else\n+ roles_list = list_append_unique_oid(roles_list, otherid);\n+ bloom_add_element(bf, (unsigned char *) &otherid, sizeof(Oid));\n\nHmm, I was imagining something more like\n\n if (bloom_lacks_element(bf, (unsigned char *) &otherid, sizeof(Oid)) ||\n !list_member_oid(roles_list, otherid))\n {\n roles_list = lappend_oid(roles_list, otherid);\n bloom_add_element(bf, (unsigned char *) &otherid, sizeof(Oid));\n }\n\nto avoid duplicate bloom_add_element calls. Probably wouldn't move\nthe needle much in this specific test case, but this formulation\nwould simplify letting the filter kick in later. Very roughly,\n\n if ((bf && bloom_lacks_element(bf, (unsigned char *) &otherid, sizeof(Oid))) ||\n !list_member_oid(roles_list, otherid))\n {\n if (bf == NULL && list_length(roles_list) > 1000)\n {\n ... create bf and populate with existing list entries\n }\n roles_list = lappend_oid(roles_list, otherid);\n if (bf)\n bloom_add_element(bf, (unsigned char *) &otherid, sizeof(Oid));\n }\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 22:07:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Thu, Mar 21, 2024 at 08:59:54PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Thu, Mar 21, 2024 at 03:40:12PM -0500, Nathan Bossart wrote:\n>>> On Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n>>>> I don't think we have any really cheap way to de-duplicate the role\n>>>> OIDs, especially seeing that it has to be done on-the-fly within the\n>>>> collection loop, and the order of roles_list is at least potentially\n>>>> interesting. Not sure how to make further progress without a lot of\n>>>> work.\n> \n>>> Assuming these are larger lists, this might benefit from optimizations\n>>> involving SIMD intrinsics.\n> \n>> Never mind. With the reproduction script, I'm only seeing a ~2%\n>> improvement with my patches.\n> \n> Yeah, you cannot beat an O(N^2) problem by throwing SIMD at it.\n\nI apparently had some sort of major brain fade when I did this because I\ndidn't apply your hashing patch when I ran this SIMD test. 
With it\napplied, I see a speedup of ~39%, which makes a whole lot more sense to me.\nIf I add the Bloom patch (updated with your suggestions), I get another\n~73% improvement from there, and a much smaller regression in the role\ncreation portion.\n\n hash hash+simd hash+simd+bloom\n create 1.27 1.27 1.28\n grant 0.18 0.11 0.03\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 22 Mar 2024 09:47:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Fri, Mar 22, 2024 at 09:47:39AM -0500, Nathan Bossart wrote:\n> hash hash+simd hash+simd+bloom\n> create 1.27 1.27 1.28\n> grant 0.18 0.11 0.03\n\nFor just hash+bloom, I'm seeing 1.29 and 0.04.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:55:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 09:47:39AM -0500, Nathan Bossart wrote:\n>> hash hash+simd hash+simd+bloom\n>> create 1.27 1.27 1.28\n>> grant 0.18 0.11 0.03\n\n> For just hash+bloom, I'm seeing 1.29 and 0.04.\n\nYeah, that's about what I'd expect: hash+bloom ought to remove\nmost (not quite all) of the opportunity for simd to shine, because\nthe bloom filter should eliminate most of the list_member_oid calls.\n\nPossibly we could fix that small regression in the create phase\nwith more careful tuning of the magic constants in the bloom\nlogic? Although I'd kind of expect that the create step doesn't\never invoke the bloom filter, else it would have been showing a\nperformance problem before; so this might not be a good test case\nfor helping us tune those.\n\nI think remaining questions are:\n\n* Is there any case where the new hash catcache logic could lose\nmeasurably? I kind of doubt it, because we were already computing\nthe hash value for list searches; so basically the only overhead\nis one more palloc per cache during the first list search. (If\nyou accumulate enough lists to cause a rehash, you're almost\nsurely well into winning territory.)\n\n* Same question for the bloom logic, but here I think it's mostly\na matter of tuning those constants.\n\n* Do we want to risk back-patching any of this, to fix the performance\nregression in v16? I think that the OP's situation is a pretty\nnarrow one, but maybe he's not the only person who managed to dodge\nroles_is_member_of's performance issues in most other cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:27:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:27:46AM -0400, Tom Lane wrote:\n> Yeah, that's about what I'd expect: hash+bloom ought to remove\n> most (not quite all) of the opportunity for simd to shine, because\n> the bloom filter should eliminate most of the list_member_oid calls.\n\nRight. IMHO the SIMD work is still worth considering because there are\nprobably even more extreme cases where it'll make a decent amount of\ndifference. Plus, that stuff is pretty low overhead for what you get in\nreturn. 
That being said, the hash table and Bloom filter should definitely\nbe the higher priority.\n\n> * Same question for the bloom logic, but here I think it's mostly\n> a matter of tuning those constants.\n\nI suspect there might be some regressions just after the point where we\nconstruct the filter, but I'd still expect that to be a reasonable\ntrade-off. We could probably pretty easily construct some benchmarks to\nunderstand the impact with a given number of roles. (I'm not sure I'll be\nable to get to that today.)\n\n> * Do we want to risk back-patching any of this, to fix the performance\n> regression in v16? I think that the OP's situation is a pretty\n> narrow one, but maybe he's not the only person who managed to dodge\n> roles_is_member_of's performance issues in most other cases.\n\nI've heard complaints about performance with many roles before, so I\ncertainly think this area is worth optimizing. As far as back-patching\ngoes, my current feeling is that the hash table is probably pretty safe and\nprovides the majority of the benefit, but anything fancier should probably\nbe reserved for v17 or v18.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:39:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 11:27:46AM -0400, Tom Lane wrote:\n>> * Do we want to risk back-patching any of this, to fix the performance\n>> regression in v16? I think that the OP's situation is a pretty\n>> narrow one, but maybe he's not the only person who managed to dodge\n>> roles_is_member_of's performance issues in most other cases.\n\n> I've heard complaints about performance with many roles before, so I\n> certainly think this area is worth optimizing. As far as back-patching\n> goes, my current feeling is that the hash table is probably pretty safe and\n> provides the majority of the benefit, but anything fancier should probably\n> be reserved for v17 or v18.\n\nYeah. Although both the catcache and list_append_unique_oid bits\nare O(N^2), the catcache seems to have a much bigger constant\nfactor --- when I did a \"perf\" check on the unpatched code,\nI saw catcache eating over 90% of the runtime and list_member_oid\nabout 2%. 
So let's fix that part in v16 and call it a day.\nIt should be safe to back-patch the catcache changes as long as\nwe put the new fields at the end of the struct and leave cc_lists\npresent but empty.\n\nWould you like to review the catcache patch further, or do you\nthink it's good to go?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:53:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Fri, Mar 22, 2024 at 12:53:15PM -0400, Tom Lane wrote:\n> Would you like to review the catcache patch further, or do you\n> think it's good to go?\n\nI'll take another look this afternoon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:54:48 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:54:48AM -0500, Nathan Bossart wrote:\n> On Fri, Mar 22, 2024 at 12:53:15PM -0400, Tom Lane wrote:\n>> Would you like to review the catcache patch further, or do you\n>> think it's good to go?\n> \n> I'll take another look this afternoon.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:35:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 11:54:48AM -0500, Nathan Bossart wrote:\n>> On Fri, Mar 22, 2024 at 12:53:15PM -0400, Tom Lane wrote:\n>>> Would you like to review the catcache patch further, or do you\n>>> think it's good to go?\n\n>> I'll take another look this afternoon.\n\n> LGTM\n\nThanks for looking, I'll push that shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 16:41:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Fri, Mar 22, 2024 at 04:41:49PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> LGTM\n> \n> Thanks for looking, I'll push that shortly.\n\nAre there any changes you'd like to see for the Bloom patch [0]? I'd like\nto see about getting that committed for v17. One thing that crossed my\nmind is creating a combined list/filter that transparently created a filter\nwhen necessary (for reuse elsewhere), but I'm not sure that's v17 material.\n\n[0] https://postgr.es/m/attachment/158079/bloom_v2.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:47:43 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Are there any changes you'd like to see for the Bloom patch [0]? I'd like\n> to see about getting that committed for v17. 
One thing that crossed my\n> mind is creating a combined list/filter that transparently created a filter\n> when necessary (for reuse elsewhere), but I'm not sure that's v17 material.\n\nYeah, that thought occurred to me too, but I think we ought to have a\nfew more use-cases in view before trying to write an API.\n\nAs for the patch, I agree it could go into v17, but I think there is\nstill a little bit of work to do:\n\n* The magic constants (crossover list length and bloom filter size)\nneed some testing to see if there are better values. They should\nprobably be made into named #defines, too. I suspect, with little\nproof, that the bloom filter size isn't particularly critical --- but\nI know we pulled the crossover of 1000 out of thin air, and I have\nno certainty that it's even within an order of magnitude of being a\ngood choice.\n\n* Code needs more than zero comments.\n\n* Is it worth trying to make a subroutine, or at least a macro,\nso as not to have 2 copies of the code?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:08:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:08:39AM -0400, Tom Lane wrote:\n> * The magic constants (crossover list length and bloom filter size)\n> need some testing to see if there are better values. They should\n> probably be made into named #defines, too. I suspect, with little\n> proof, that the bloom filter size isn't particularly critical --- but\n> I know we pulled the crossover of 1000 out of thin air, and I have\n> no certainty that it's even within an order of magnitude of being a\n> good choice.\n\nI'll try to construct a couple of tests to see if we can determine a proper\norder of magnitude.\n\n> * Code needs more than zero comments.\n\nYup.\n\n> * Is it worth trying to make a subroutine, or at least a macro,\n> so as not to have 2 copies of the code?\n\nI think so. I'll try that in the next version.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:16:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Here is a new version of the patch that I feel is in decent shape.\n\nOn Mon, Mar 25, 2024 at 10:16:47AM -0500, Nathan Bossart wrote:\n> On Mon, Mar 25, 2024 at 11:08:39AM -0400, Tom Lane wrote:\n>> * The magic constants (crossover list length and bloom filter size)\n>> need some testing to see if there are better values. They should\n>> probably be made into named #defines, too. I suspect, with little\n>> proof, that the bloom filter size isn't particularly critical --- but\n>> I know we pulled the crossover of 1000 out of thin air, and I have\n>> no certainty that it's even within an order of magnitude of being a\n>> good choice.\n> \n> I'll try to construct a couple of tests to see if we can determine a proper\n> order of magnitude.\n\nI spent some time trying to get some ballpark figures but have thus far\nbeen unsuccessful. Even if I was able to get good numbers, I'm not sure\nhow much they'd help us, as we'll still need to decide how much overhead we\nare willing to take in comparison to the linear search. 
I don't think\n~1000 is an unreasonable starting point, as it seems generally more likely\nthat you will have many more roles to process at that point than if the\nthreshold was, say, 100. And if the threshold is too high (e.g., 10,000),\nthis optimization will only kick in for the most extreme cases, so we'd\nlikely be leaving a lot on the table. But, I will be the first to admit\nthat my reasoning here is pretty unscientific, and I'm open to suggestions\nfor how to make it less so.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 11:59:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I spent some time trying to get some ballpark figures but have thus far\n> been unsuccessful. Even if I was able to get good numbers, I'm not sure\n> how much they'd help us, as we'll still need to decide how much overhead we\n> are willing to take in comparison to the linear search. I don't think\n> ~1000 is an unreasonable starting point, as it seems generally more likely\n> that you will have many more roles to process at that point than if the\n> threshold was, say, 100. And if the threshold is too high (e.g., 10,000),\n> this optimization will only kick in for the most extreme cases, so we'd\n> likely be leaving a lot on the table. But, I will be the first to admit\n> that my reasoning here is pretty unscientific, and I'm open to suggestions\n> for how to make it less so.\n\nI did a little experimentation using the attached quick-hack C\nfunction, and came to the conclusion that setting up the bloom filter\ncosts more or less as much as inserting 1000 or so OIDs the dumb way.\nSo we definitely want a threshold that's not much less than that.\nFor example, with ROLES_LIST_BLOOM_THRESHOLD = 100 I saw:\n\nregression=# select drive_bloom(100, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 319.931 ms\nregression=# select drive_bloom(101, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 319.385 ms\nregression=# select drive_bloom(102, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 9904.786 ms (00:09.905)\n\nThat's a pretty big jump in context. With the threshold set to 1024,\n\nregression=# select drive_bloom(1024, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 14597.510 ms (00:14.598)\nregression=# select drive_bloom(1025, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 14589.197 ms (00:14.589)\nregression=# select drive_bloom(1026, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 25947.000 ms (00:25.947)\nregression=# select drive_bloom(1027, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 25399.718 ms (00:25.400)\nregression=# select drive_bloom(2048, 10, 100000);\n drive_bloom \n-------------\n \n(1 row)\n\nTime: 33809.536 ms (00:33.810)\n\nSo I'm now content with choosing a threshold of 1000 or 1024 or so.\n\nAs for the bloom filter size, I see that bloom_create does\n\n\tbitset_bytes = Min(bloom_work_mem * UINT64CONST(1024), total_elems * 2);\n\tbitset_bytes = Max(1024 * 1024, bitset_bytes);\n\nwhich means that any total_elems input less than 512K is disregarded\naltogether. So I'm not sold on your \"ROLES_LIST_BLOOM_THRESHOLD * 10\"\nvalue. 
Maybe it doesn't matter though.\n\nI do not like, even a little bit, your use of a static variable to\nhold the bloom filter pointer. That code will misbehave horribly\nif we throw an error partway through the role-accumulation loop;\nthe next call will try to carry on using the old filter, which would\nbe wrong even if it still existed which it likely won't. It's not\nthat much worse notationally to keep it as a local variable, as I\ndid in the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 26 Mar 2024 14:16:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Tue, Mar 26, 2024 at 02:16:03PM -0400, Tom Lane wrote:\n> I did a little experimentation using the attached quick-hack C\n> function, and came to the conclusion that setting up the bloom filter\n> costs more or less as much as inserting 1000 or so OIDs the dumb way.\n> So we definitely want a threshold that's not much less than that.\n\nThanks for doing this.\n\n> So I'm now content with choosing a threshold of 1000 or 1024 or so.\n\nCool.\n\n> As for the bloom filter size, I see that bloom_create does\n> \n> \tbitset_bytes = Min(bloom_work_mem * UINT64CONST(1024), total_elems * 2);\n> \tbitset_bytes = Max(1024 * 1024, bitset_bytes);\n> \n> which means that any total_elems input less than 512K is disregarded\n> altogether. So I'm not sold on your \"ROLES_LIST_BLOOM_THRESHOLD * 10\"\n> value. Maybe it doesn't matter though.\n\nYeah, I wasn't sure how much to worry about this. I figured that we might\nas well set it to a reasonable estimate based on the description of the\nparameter. This description claims that the filter should work well if\nthis is off by a factor of 5 or more, and 50x the threshold sounded like it\nought to be good enough for anyone, so that's how I landed on 10x. But as\nyou point out, this value will be disregarded altogether, and it will\ncontinue to be ignored unless the filter implementation changes, which\nseems unlikely. If you have a different value in mind that you would\nrather use, I'm fine with changing it.\n\n> I do not like, even a little bit, your use of a static variable to\n> hold the bloom filter pointer. That code will misbehave horribly\n> if we throw an error partway through the role-accumulation loop;\n> the next call will try to carry on using the old filter, which would\n> be wrong even if it still existed which it likely won't. It's not\n> that much worse notationally to keep it as a local variable, as I\n> did in the attached.\n\nAh, yes, that's no good. I fixed this in the new version.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 13:48:19 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, Mar 26, 2024 at 02:16:03PM -0400, Tom Lane wrote:\n>> ... I'm not sold on your \"ROLES_LIST_BLOOM_THRESHOLD * 10\"\n>> value. Maybe it doesn't matter though.\n\n> Yeah, I wasn't sure how much to worry about this. I figured that we might\n> as well set it to a reasonable estimate based on the description of the\n> parameter. This description claims that the filter should work well if\n> this is off by a factor of 5 or more, and 50x the threshold sounded like it\n> ought to be good enough for anyone, so that's how I landed on 10x. 
But as\n> you point out, this value will be disregarded altogether, and it will\n> continue to be ignored unless the filter implementation changes, which\n> seems unlikely. If you have a different value in mind that you would\n> rather use, I'm fine with changing it.\n\nNo, I have no better idea. As you say, we should try to provide some\nsemi-reasonable number in case bloom_create ever starts paying\nattention, and this one seems fine.\n\n>> I do not like, even a little bit, your use of a static variable to\n>> hold the bloom filter pointer.\n\n> Ah, yes, that's no good. I fixed this in the new version.\n\nMy one remaining suggestion is that this comment isn't very precise\nabout what's happening:\n\n * If there is a previously-created Bloom filter, use it to determine\n * whether the role is missing from the list. Otherwise, do an ordinary\n * linear search through the existing role list.\n\nMaybe more like\n\n * If there is a previously-created Bloom filter, use it to try to\n * determine whether the role is missing from the list. If it\n * says yes, that's a hard fact and we can go ahead and add the\n * role. If it says no, that's only probabilistic and we'd better\n * search the list. Without a filter, we must always do an ordinary\n * linear search through the existing list.\n\nLGTM other than that nit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 15:08:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Tue, Mar 26, 2024 at 03:08:00PM -0400, Tom Lane wrote:\n> My one remaining suggestion is that this comment isn't very precise\n> about what's happening:\n> \n> * If there is a previously-created Bloom filter, use it to determine\n> * whether the role is missing from the list. Otherwise, do an ordinary\n> * linear search through the existing role list.\n> \n> Maybe more like\n> \n> * If there is a previously-created Bloom filter, use it to try to\n> * determine whether the role is missing from the list. If it\n> * says yes, that's a hard fact and we can go ahead and add the\n> * role. If it says no, that's only probabilistic and we'd better\n> * search the list. Without a filter, we must always do an ordinary\n> * linear search through the existing list.\n> \n> LGTM other than that nit.\n\nCommitted with that change. Thanks for the guidance on this one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 14:49:25 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" } ]
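The approach the thread converges on -- keep an ordinary list up to ROLES_LIST_BLOOM_THRESHOLD entries, then layer a Bloom filter over it so duplicate checks stop being linear scans -- can be sketched roughly as follows. This is an illustrative sketch only, not the committed patch: the helper name roles_list_append_sketch and its exact signature are invented here, while bloom_create(), bloom_add_element() and bloom_lacks_element() are the existing lib/bloomfilter.h primitives the messages above refer to, and the caller is assumed to keep the filter pointer in a local variable as the review asks.

#include "postgres.h"
#include "lib/bloomfilter.h"
#include "miscadmin.h"
#include "nodes/pg_list.h"

#define ROLES_LIST_BLOOM_THRESHOLD 1024

/*
 * Append "role" to *roles_list unless it is already a member.  Once the
 * list grows past ROLES_LIST_BLOOM_THRESHOLD entries, build a Bloom filter
 * over the existing members.  The filter can answer "definitely absent"
 * (safe to append immediately); "maybe present" falls back to the exact
 * list_member_oid() check.  Returns the (possibly newly created) filter so
 * the caller can keep it in a local variable.
 */
static bloom_filter *
roles_list_append_sketch(List **roles_list, bloom_filter *bf, Oid role)
{
    unsigned char *roleptr = (unsigned char *) &role;

    /* Lazily build the filter once the list is long enough. */
    if (bf == NULL && list_length(*roles_list) > ROLES_LIST_BLOOM_THRESHOLD)
    {
        ListCell   *lc;

        bf = bloom_create(ROLES_LIST_BLOOM_THRESHOLD * 10, work_mem, 0);
        foreach(lc, *roles_list)
        {
            Oid         cur = lfirst_oid(lc);

            bloom_add_element(bf, (unsigned char *) &cur, sizeof(Oid));
        }
    }

    if (bf != NULL)
    {
        /* "lacks" is definitive; "maybe present" needs the exact check. */
        if (bloom_lacks_element(bf, roleptr, sizeof(Oid)) ||
            !list_member_oid(*roles_list, role))
        {
            *roles_list = lappend_oid(*roles_list, role);
            bloom_add_element(bf, roleptr, sizeof(Oid));
        }
    }
    else if (!list_member_oid(*roles_list, role))
        *roles_list = lappend_oid(*roles_list, role);

    return bf;
}

On the sizing question discussed above: with the bloom_create() clamps Tom quotes, the Max() floor of 1 MB takes over whenever total_elems * 2 is below 1 MB, i.e. for any estimate under about 512K elements -- which is why the ROLES_LIST_BLOOM_THRESHOLD * 10 estimate is currently moot and only documents intent.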
[ { "msg_contents": "Hi,\n\nThere is a failure in 040_standby_failover_slots_sync on tamandua:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-03-21%2008%3A28%3A58\n\nThe reason for the timeout is that the table sync worker for the new table is not started\nafter executing ALTER SUBSCRIPTION REFRESH PUBLICATION.\n\nWe have seen similar failures in other tests as well[1]. AFAICS, the reason is the same\nin each case: a race condition in the logicalrep apply worker. The\nanalysis has been posted on another thread[2], and a fix is already being\nreviewed.\n\n[1] https://www.postgresql.org/message-id/OSZPR01MB6310D6F48372F52F1D85E1C5FD609%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/flat/CALDaNm1XeB3bF%2BVEJZi%3DBT31PZAL_UVys-26%2BYSv_AxCq0G2eg%40mail.gmail.com#87b153fd7676652746406a6f114eb67b\n\nBest Regards,\nHou Zhijie\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:47:12 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Buildfarm failure on tamandua - \"timed out waiting for subscriber to\n synchronize data\"" } ]
[ { "msg_contents": "Good day-\n   I'll start by admitting that I am a typical American who only speaks\none language.\n   I maintain the PL./Haskell extension (\nhttps://github.com/ed-o-saurus/PLHaskell). I recently received a bug report\nfrom a user who is unable to load the extension when they have set lc_messages\nto something other than 'C' or 'en_US.utf8' (\nhttps://github.com/ed-o-saurus/PLHaskell/issues/9). I'm not sure how to\naddress this issue. I know it has something to do with NLS, but I'm\nconfused about what I need to do to get an extension to work. The\ndocumentation in chapter 57 of the user's manual seems to address\nstand-alone programs. At least that is my understanding.\n   I am using my own makefile, not the one from pg_config.\n   Any guidance would be appreciated.\n                    -Ed", "msg_date": "Thu, 21 Mar 2024 07:48:57 -0400", "msg_from": "Ed Behn <[email protected]>", "msg_from_op": true, "msg_subject": "NLS for extension" }, { "msg_contents": "On Thu, Mar 21, 2024 at 7:49 AM Ed Behn <[email protected]> wrote:\n>     I'll start by admitting that I am a typical American who only speaks one language.\n>     I maintain the PL./Haskell extension (https://github.com/ed-o-saurus/PLHaskell). I recently received a bug report from a user who is unable to load the extension when they have set lc_messages to something other than 'C' or 'en_US.utf8' (https://github.com/ed-o-saurus/PLHaskell/issues/9). I'm not sure how to address this issue. I know it has something to do with NLS, but I'm confused about what I need to do to get an extension to work. The documentation in chapter 57 of the user's manual seems to address stand-alone programs. At least that is my understanding.\n>     I am using my own makefile, not the one from pg_config.\n\nThat seems really weird. You wouldn't think that an NLS problem could\ncause an invalid ELF header.\n\nIf I were trying to troubleshoot this, I'd set up the failing scenario\n(i.e. lc_messages=pt_BR.utf8), set a breakpoint in the server on\nerrstart, and then perform the failing action. Then I'd use gdb to get\na backtrace from the point where the error was thrown, and hope that\nthe contents of that backtrace made it more clear what was actually\nhappening.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 08:49:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NLS for extension" }, { "msg_contents": "Robert-\n    Thank you for your guidance. It turns out the call was coming from\ninside the house. The error isn't caused by Postgres. It's the library that\nI'm using that reported the error. 
My extension passes any errors it\ngenerates as Postgres ERRORs.\n           -Ed\n\nOn Thu, Mar 21, 2024 at 8:49 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Mar 21, 2024 at 7:49 AM Ed Behn <[email protected]> wrote:\n> >     I'll start by admitting that I am a typical American who only speaks\n> one language.\n> >     I maintain the PL./Haskell extension (\n> https://github.com/ed-o-saurus/PLHaskell). I recently received a bug\n> report from a user who is unable to load the extension when they have set\n> lc_messages to something other than 'C' or 'en_US.utf8' (\n> https://github.com/ed-o-saurus/PLHaskell/issues/9). I'm not sure how to\n> address this issue. I know it has something to do with NLS, but I'm\n> confused about what I need to do to get an extension to work. The\n> documentation in chapter 57 of the user's manual seems to address\n> stand-alone programs. At least that is my understanding.\n> >     I am using my own makefile, not the one from pg_config.\n>\n> That seems really weird. You wouldn't think that an NLS problem could\n> cause an invalid ELF header.\n>\n> If I were trying to troubleshoot this, I'd set up the failing scenario\n> (i.e. lc_messages=pt_BR.utf8), set a breakpoint in the server on\n> errstart, and then perform the failing action. Then I'd use gdb to get\n> a backtrace from the point where the error was thrown, and hope that\n> the contents of that backtrace made it more clear what was actually\n> happening.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Fri, 22 Mar 2024 09:26:41 -0400", "msg_from": "Ed Behn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NLS for extension" }, { "msg_contents": "On Fri, Mar 22, 2024 at 9:27 AM Ed Behn <[email protected]> wrote:\n> Thank you for your guidance. It turns out the call was coming from inside the house. The error isn't caused by Postgres. 
It's the library that I'm using that reported the error. My extension passes any errors it generates as Postgres ERRORs.\n\nYeah, I was kind of wondering if it was something like that. But I\nfigured it was a capital mistake to theorize in advance of the data.\n\nGlad you figured it out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:11:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NLS for extension" } ]
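For readers wondering what "passes any errors it generates as Postgres ERRORs" looks like in practice, the usual pattern in a C-language extension is to re-raise the library's message through ereport(). A minimal, hypothetical sketch -- the function name and message wording are invented here for illustration and are not taken from the PL/Haskell sources:

#include "postgres.h"

/*
 * Hypothetical helper: surface an error string produced by an embedded
 * library/runtime as a regular PostgreSQL ERROR.  ereport() copies its
 * arguments into the error data, and raising ERROR longjmps back to the
 * backend's error handler, so control does not return to the caller.
 */
static void
rethrow_library_error(const char *libmsg)
{
    ereport(ERROR,
            (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
             errmsg("error reported by the PL/Haskell runtime"),
             errdetail("%s", libmsg)));
}

Because the detail text originates in the library and is merely relayed by the extension, it is not affected by the server's own message translation; that is consistent with the thread's resolution that the reported error was apparently produced by the underlying library rather than by PostgreSQL's NLS machinery.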
[ { "msg_contents": "Hello Team,\n\nWe are working on AIX systems and noticed that the thread on removing AIX support in Postgres going forward.\n\nhttps://github.com/postgres/postgres/commit/0b16bb8776bb834eb1ef8204ca95dd7667ab948b\n\nWe would be glad to understand any outstanding issues hindering the support on AIX.\nIt is important for us to have Postgres to be supported on AIX. As we are using Postgres extensively on AIX.\nAlso we would like to provide any feasible support from our end for enabling the support on AIX.\n\nWe would request the community to extend the support on AIX ..\n\n\nThanks & regards,\nSriram.\n", "msg_date": "Thu, 21 Mar 2024 12:55:31 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "AIX support" }, { "msg_contents": "On 2024-Mar-21, Sriram RK wrote:\n\n> Hello Team,\n> \n> We are working on AIX systems and noticed that the thread on removing AIX support in Postgres going forward.\n> \n> https://github.com/postgres/postgres/commit/0b16bb8776bb834eb1ef8204ca95dd7667ab948b\n> \n> We would be glad to understand any outstanding issues hindering the support on AIX.\n\nThere's a Discussion link at the bottom of that commit message. I\nsuggest you read that discussion complete, and consider how much effort\nyou or your company are willing to spend on doing the maintenance of the\nport yourselves for the community. Maybe ponder this question: would it\nbe less onerous to migrate your Postgres servers to Linux, like Phil\nFlorent described on the currently-last message of that thread?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Para tener más hay que desear menos\"\n\n\n", "msg_date": "Thu, 21 Mar 2024 14:57:24 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Sriram RK <[email protected]> writes:\n> We are working on AIX systems and noticed that the thread on removing AIX support in Postgres going forward.\n> https://github.com/postgres/postgres/commit/0b16bb8776bb834eb1ef8204ca95dd7667ab948b\n> We would be glad to understand any outstanding issues hindering the\n> support on AIX.\n\nDid you read the linked thread? Starting say about here:\n\nhttps://www.postgresql.org/message-id/flat/20240224172345.32%40rfd.leadboat.com#8b41e50c2190c82c861d91644eed9c30\n\n> Also we would like to provide any feasible support from our end for enabling the support on AIX.\n\nWho is \"we\", and how much resources are you prepared to put into this?\n\n> We would request the community to extend the support on AIX ..\n\nThe community, in the sense of the existing people doing significant\nwork on Postgres, are absolutely not going to do that. 
If you can\nbring a bunch of work to fix all the problems noted in the discussion\nthread, and commit to providing ongoing development manpower and\nhardware to keep it working, maybe something could happen. But I\nsuspect you will find it cheaper to start thinking about migrating\noff AIX.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:57:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Thanks, Tom and Alvaro, for the info.\r\nWe shall look into to details and get back.\r\n\r\nFrom: Tom Lane <[email protected]>\r\nDate: Thursday, 21 March 2024 at 7:27 PM\r\nTo: Sriram RK <[email protected]>\r\nCc: [email protected] <[email protected]>\r\nSubject: Re: AIX support\r\nSriram RK <[email protected]> writes:\r\n> We are working on AIX systems and noticed that the thread on removing AIX support in Postgres going forward.\r\n> https://github.com/postgres/postgres/commit/0b16bb8776bb834eb1ef8204ca95dd7667ab948b\r\n> We would be glad to understand any outstanding issues hindering the\r\n> support on AIX.\r\n\r\nDid you read the linked thread? Starting say about here:\r\n\r\nhttps://www.postgresql.org/message-id/flat/20240224172345.32%40rfd.leadboat.com#8b41e50c2190c82c861d91644eed9c30\r\n\r\n> Also we would like to provide any feasible support from our end for enabling the support on AIX.\r\n\r\nWho is \"we\", and how much resources are you prepared to put into this?\r\n\r\n> We would request the community to extend the support on AIX ..\r\n\r\nThe community, in the sense of the existing people doing significant\r\nwork on Postgres, are absolutely not going to do that. If you can\r\nbring a bunch of work to fix all the problems noted in the discussion\r\nthread, and commit to providing ongoing development manpower and\r\nhardware to keep it working, maybe something could happen. But I\r\nsuspect you will find it cheaper to start thinking about migrating\r\noff AIX.\r\n\r\n regards, tom lane\r\n\n\n\n\n\n\n\n\n\nThanks, Tom and Alvaro, for the info.\nWe shall look into to details and get back.\n \n\n\n\nFrom:\r\nTom Lane <[email protected]>\nDate: Thursday, 21 March 2024 at 7:27 PM\nTo: Sriram RK <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: AIX support\n\n\nSriram RK <[email protected]> writes:\r\n> We are working on AIX systems and noticed that the thread on removing AIX support in Postgres going forward.\r\n> \r\nhttps://github.com/postgres/postgres/commit/0b16bb8776bb834eb1ef8204ca95dd7667ab948b\r\n> We would be glad to understand any outstanding issues hindering the\r\n> support on AIX.\n\r\nDid you read the linked thread?  Starting say about here:\n\nhttps://www.postgresql.org/message-id/flat/20240224172345.32%40rfd.leadboat.com#8b41e50c2190c82c861d91644eed9c30\n\r\n> Also we would like to provide any feasible support from our end for enabling the support on AIX.\n\r\nWho is \"we\", and how much resources are you prepared to put into this?\n\r\n> We would request the community to extend the support on AIX ..\n\r\nThe community, in the sense of the existing people doing significant\r\nwork on Postgres, are absolutely not going to do that.  If you can\r\nbring a bunch of work to fix all the problems noted in the discussion\r\nthread, and commit to providing ongoing development manpower and\r\nhardware to keep it working, maybe something could happen.  
But I\r\nsuspect you will find it cheaper to start thinking about migrating\r\noff AIX.\n\r\n                        regards, tom lane", "msg_date": "Thu, 21 Mar 2024 16:35:26 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team,\r\n\r\n\r\n\r\nWe are setting up the build environment and trying to build the source and also trying to analyze the assert from the Aix point of view.\r\n\r\nAlso, would like to know if we can access the buildfarm(power machines) to run any of the specific tests to hit this assert.\r\n\r\nThanks & regards,\r\nSriram.\r\n\r\n > From: Sriram RK <[email protected]>\r\n > Date: Thursday, 21 March 2024 at 10:05 PM\r\n > To: Tom Lane [email protected]<mailto:[email protected]>, Alvaro Herrera <[email protected]>\r\n > Cc: [email protected]<mailto:[email protected]> <[email protected]>\r\n > Subject: Re: AIX support\r\n > Thanks, Tom and Alvaro, for the info.\r\n > We shall look into to details and get back.\r\n\r\n\n\n\n\n\n\n\n\n\nHi Team,\n \nWe are setting up the build environment and trying to build the source and also trying to analyze the assert from the Aix point of view.\nAlso, would like to know if we can access the buildfarm(power machines) to run any of the specific tests to hit this assert.\n \nThanks & regards,\nSriram.\n \n\n\n\n  >\r\nFrom: Sriram RK <[email protected]>\n  > \r\nDate: Thursday, 21 March 2024 at 10:05 PM\n  > \r\nTo: Tom Lane\r\[email protected], Alvaro Herrera <[email protected]>\n  > \r\nCc: [email protected] <[email protected]>\n  > \r\nSubject: Re: AIX support\n\n\n  >\r\nThanks, Tom and Alvaro, for the info.\n  >\r\nWe shall look into to details and get back.", "msg_date": "Thu, 28 Mar 2024 11:09:43 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Thu, Mar 28, 2024 at 11:09:43AM +0000, Sriram RK wrote:\n> We are setting up the build environment and trying to build the source and also trying to analyze the assert from the Aix point of view.\n\nThe thread Alvaro and Tom cited contains an analysis. It's a compiler bug.\nYou can get past the compiler bug by upgrading your compiler; both ibm-clang\n17.1.1.2 and gcc 13.2.0 are free from the bug.\n\n> Also, would like to know if we can access the buildfarm(power machines) to run any of the specific tests to hit this assert.\n\nhttps://portal.cfarm.net/users/new/ is the form to request access. It lists\nthe eligibility criteria.\n\n\n", "msg_date": "Thu, 28 Mar 2024 19:48:32 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Fri, Mar 29, 2024 at 3:48 PM Noah Misch <[email protected]> wrote:\n> On Thu, Mar 28, 2024 at 11:09:43AM +0000, Sriram RK wrote:\n> > We are setting up the build environment and trying to build the source and also trying to analyze the assert from the Aix point of view.\n>\n> The thread Alvaro and Tom cited contains an analysis. 
It's a compiler bug.\n> You can get past the compiler bug by upgrading your compiler; both ibm-clang\n> 17.1.1.2 and gcc 13.2.0 are free from the bug.\n\nFor the specific issue that triggered that, I strongly suspect that it\nwould go away if we just used smgrzeroextend() instead of smgrextend()\nusing that variable with the alignment requirement, since, as far as I\ncan tell from build farm clues, the otherwise similar function-local\nstatic variable used by the former (ie one that the linker must still\ncontrol the location of AFAIK?) seems to work fine.\n\nBut we didn't remove AIX because of that, it was just the straw that\nbroke the camel's back.\n\n\n", "msg_date": "Fri, 29 Mar 2024 16:00:22 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Thu, Mar 28, 2024 at 11:09:43AM +0000, Sriram RK wrote:\n>> Also, would like to know if we can access the buildfarm(power machines) to run any of the specific tests to hit this assert.\n\n> https://portal.cfarm.net/users/new/ is the form to request access. It lists\n> the eligibility criteria.\n\nThere might be some confusion here about what system we are talking\nabout. The Postgres buildfarm is described at\nhttps://buildfarm.postgresql.org/index.html\nbut it consists of a large number of individual machines run by\nindividual owners. There would not be a lot of point in adding a\nnew AIX machine to the Postgres buildfarm right now, because it\nwould surely fail to build HEAD. What Noah is referencing is\nthe GCC compile farm, which happens to include some AIX machines.\nThe existing AIX entries in the Postgres buildfarm are run (by Noah)\non those GCC compile farm machines, which really the GCC crowd have\nbeen *very* forgiving about letting us abuse like that. If you have\nyour own AIX hardware there's not a lot of reason that you should\nneed to access the GCC farm.\n\nWhat you do need to do to reproduce the described problems is\ncheck out the Postgres git tree and rewind to just before\ncommit 0b16bb877, where we deleted AIX support. Any attempt\nto restore AIX support would have to start with reverting that\ncommit (and perhaps the followup f0827b443).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 23:07:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Fri, Mar 29, 2024 at 4:00 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Mar 29, 2024 at 3:48 PM Noah Misch <[email protected]> wrote:\n> > The thread Alvaro and Tom cited contains an analysis. It's a compiler bug.\n> > You can get past the compiler bug by upgrading your compiler; both ibm-clang\n> > 17.1.1.2 and gcc 13.2.0 are free from the bug.\n>\n> For the specific issue that triggered that, I strongly suspect that it\n> would go away if we just used smgrzeroextend() instead of smgrextend()\n> using that variable with the alignment requirement, since, as far as I\n> can tell from build farm clues, the otherwise similar function-local\n> static variable used by the former (ie one that the linker must still\n> control the location of AFAIK?) seems to work fine.\n\nOh, sorry, I had missed the part where newer compilers fix the issue\ntoo. 
Old out-of-support versions of AIX running old compilers, what\nfun.\n\n\n", "msg_date": "Fri, 29 Mar 2024 16:14:37 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Oh, sorry, I had missed the part where newer compilers fix the issue\n> too. Old out-of-support versions of AIX running old compilers, what\n> fun.\n\nIndeed. One of the topics that needs investigation if you want to\npursue this is which AIX system and compiler versions still deserve\nsupport, and which of the AIX hacks we had been carrying still need\nto be there based on that analysis. For context, we've been pruning\nsupport for extinct-in-the-wild OS versions pretty aggressively\nover the past couple of years, and I'd expect to apply the same\nstandard to AIX.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 23:33:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "> What you do need to do to reproduce the described problems is\r\n> check out the Postgres git tree and rewind to just before\r\n> commit 0b16bb877, where we deleted AIX support. Any attempt\r\n> to restore AIX support would have to start with reverting that\r\n> commit (and perhaps the followup f0827b443).\r\n\r\n> regards, tom lane\r\n\r\nHi Team, thank you for all the info.\r\n\r\nWe progressed to build the source on our nodes and the build was successful with the below configuration.\r\n\r\nPostgres - github-bcdfa5f2e2f\r\nAIX - 71c\r\nXlc - 13.1.0\r\nBison - 3.0.5\r\n\r\nGoing ahead, we want to build the changes that were removed as part of “0b16bb8776bb8”, with latest Xlc and gcc version.\r\n\r\nWe were building the source from the postgres ftp server(https://ftp.postgresql.org/pub/source/), would like to understand if there are any source level changes between the ftp server and the source on github?\r\n\r\n\r\nRegards,\r\nSriram.\r\n\r\n\r\nFrom: Tom Lane <[email protected]>\r\nDate: Friday, 29 March 2024 at 9:03 AM\r\nTo: Thomas Munro <[email protected]>\r\nCc: Noah Misch <[email protected]>, Sriram RK <[email protected]>, Alvaro Herrera <[email protected]>, [email protected] <[email protected]>\r\nSubject: Re: AIX support\r\nThomas Munro <[email protected]> writes:\r\n> Oh, sorry, I had missed the part where newer compilers fix the issue\r\n> too. Old out-of-support versions of AIX running old compilers, what\r\n> fun.\r\n\r\nIndeed. One of the topics that needs investigation if you want to\r\npursue this is which AIX system and compiler versions still deserve\r\nsupport, and which of the AIX hacks we had been carrying still need\r\nto be there based on that analysis. For context, we've been pruning\r\nsupport for extinct-in-the-wild OS versions pretty aggressively\r\nover the past couple of years, and I'd expect to apply the same\r\nstandard to AIX.\r\n\r\n regards, tom lane\r\n\n\n\n\n\n\n\n\n\n\r\n> What you do need to do to reproduce the described problems is\r\n> check out the Postgres git tree and rewind to just before\r\n> commit 0b16bb877, where we deleted AIX support.  
Any attempt\r\n> to restore AIX support would have to start with reverting that\r\n> commit (and perhaps the followup f0827b443).\n\r\n>                         regards, tom lane\n \nHi Team, thank you for all the info.\n \nWe progressed to build the source on our nodes and the build was successful with the below configuration.\n \nPostgres              - github-bcdfa5f2e2f\nAIX                         - 71c\r\n\nXlc                         - 13.1.0\r\n\nBison                    - 3.0.5\n \nGoing ahead, we want to build the changes that were removed as part of “0b16bb8776bb8”, with latest Xlc and gcc version.\n \nWe were building the source from the postgres ftp server(https://ftp.postgresql.org/pub/source/), would like to understand if there are any\r\n source level changes between the ftp server and the source on github?\n \n \nRegards, \nSriram.\n \n \n\n\n\nFrom:\r\nTom Lane <[email protected]>\nDate: Friday, 29 March 2024 at 9:03 AM\nTo: Thomas Munro <[email protected]>\nCc: Noah Misch <[email protected]>, Sriram RK <[email protected]>, Alvaro Herrera <[email protected]>, [email protected] <[email protected]>\nSubject: Re: AIX support\n\n\nThomas Munro <[email protected]> writes:\r\n> Oh, sorry, I had missed the part where newer compilers fix the issue\r\n> too.  Old out-of-support versions of AIX running old compilers, what\r\n> fun.\n\r\nIndeed.  One of the topics that needs investigation if you want to\r\npursue this is which AIX system and compiler versions still deserve\r\nsupport, and which of the AIX hacks we had been carrying still need\r\nto be there based on that analysis.  For context, we've been pruning\r\nsupport for extinct-in-the-wild OS versions pretty aggressively\r\nover the past couple of years, and I'd expect to apply the same\r\nstandard to AIX.\n\r\n                        regards, tom lane", "msg_date": "Fri, 5 Apr 2024 16:12:06 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Fri, Apr 05, 2024 at 04:12:06PM +0000, Sriram RK wrote:\n> \n> > What you do need to do to reproduce the described problems is\n> > check out the Postgres git tree and rewind to just before\n> > commit 0b16bb877, where we deleted AIX support. Any attempt\n> > to restore AIX support would have to start with reverting that\n> > commit (and perhaps the followup f0827b443).\n\n> Going ahead, we want to build the changes that were removed as part of “0b16bb8776bb8”, with latest Xlc and gcc version.\n> \n> We were building the source from the postgres ftp server(https://ftp.postgresql.org/pub/source/), would like to understand if there are any source level changes between the ftp server and the source on github?\n\nTo succeed in this endeavor, you'll need to develop fluency in the tools to\nanswer questions like that, or bring in someone who is fluent. In this case,\nGNU diff is the standard tool for answering your question. These resources\ncover other topics you would need to learn:\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\n\n", "msg_date": "Fri, 5 Apr 2024 10:26:49 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Thanks Noah and Team,\n\nWe (IBM-AIX team) looked into this issue\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nThis is related to the compiler issue. The compilers xlc(13.1) and gcc(8.0) have issues. 
But we verified that this issue is resolved with the newer compiler versions openXL(xlc17.1) and gcc(12.0) onwards.\n\nWe reported this issue to the xlc team and they have noted this issue. A fix might be possible in May for this issue in xlc v16. We would like to understand if the community can start using the latest compilers to build the source.\n\nAlso as part of the support, we will help in fixing all the issues related to AIX and continue to support AIX for Postgres. If we need another CI environment we can work to make one available. But for time being can we start reverting the patch that has removed AIX support.\n\nWe want to make a note that postgres is used extensively in our IBM product and is being exploited by multiple customers.\n\nPlease let us know if there are any specific details you'd like to discuss further.\n\nRegards,\nSriram.\n\n\n\n\n\n\n\n\n\nThanks Noah and Team,\nWe (IBM-AIX team) looked into this issue\nhttps://www.postgresql.org/message-id/[email protected]\nThis is related to the compiler issue. The compilers xlc(13.1) and gcc(8.0) have issues. But we verified that this issue is resolved with the newer compiler versions openXL(xlc17.1) and gcc(12.0) onwards.\nWe reported this issue to the xlc team and they have noted this issue. A fix might be possible in May for this issue in xlc v16.  We would like to understand if the community can start using the latest compilers to build the source.\nAlso as part of the support, we will help in fixing all the issues related to AIX and continue to support AIX for Postgres. If we need another CI environment we can work to make one available. But for time being can we start\n reverting the patch that has removed AIX support.\nWe want to make a note that postgres is used extensively in our IBM product and is being exploited by multiple customers.\nPlease let us know if there are any specific details you'd like to discuss further.\n \nRegards,\nSriram.", "msg_date": "Thu, 18 Apr 2024 11:15:43 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 18 April 2024 14:15:43 GMT+03:00, Sriram RK <[email protected]> wrote:\n>Thanks Noah and Team,\n>\n>We (IBM-AIX team) looked into this issue\n>\n>https://www.postgresql.org/message-id/[email protected]\n>\n>This is related to the compiler issue. The compilers xlc(13.1) and gcc(8.0) have issues. But we verified that this issue is resolved with the newer compiler versions openXL(xlc17.1) and gcc(12.0) onwards.\n>\n>We reported this issue to the xlc team and they have noted this issue. A fix might be possible in May for this issue in xlc v16. We would like to understand if the community can start using the latest compilers to build the source.\n>\n>Also as part of the support, we will help in fixing all the issues related to AIX and continue to support AIX for Postgres. If we need another CI environment we can work to make one available. But for time being can we start reverting the patch that has removed AIX support.\n\nLet's start by setting up a new AIX buildfarm member. Regardless of what we do with v17, we continue to support AIX on the stable branches, and we really need a buildfarm member to keep the stable branches working anyway.\n\n>We want to make a note that postgres is used extensively in our IBM product and is being exploited by multiple customers.\n\nNoted. I'm glad to hear you are interested to put in some effort for this. 
The situation from the current maintainers is that none of us have much interest, or resources or knowledge to keep the AIX build working, so we'll definitely need the help.\n\nNo promises on v17, but let's at least make sure the stable branches keep working. And with the patches and buildfarm support from you, maybe v17 is feasible too.\n\n\n- Heikki\n\n\n", "msg_date": "Thu, 18 Apr 2024 14:25:23 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "> Let's start by setting up a new AIX buildfarm member. Regardless of what we do with v17, we continue to support AIX on the stable branches, and we really need a buildfarm member to keep the stable branches working anyway.\n\nThanks Heikki. We had already build the source code(v17+ bcdfa5f2e2f) on our local nodes. We will try to setup the buildfarm and let you know.\nIs there any specific configuration we are looking for?\n\nRegards,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n> Let's start by setting up a new AIX buildfarm member. Regardless of what we do with v17, we continue to support AIX on the stable branches, and we really need a buildfarm member to keep the stable branches working anyway.\n \nThanks Heikki. We had already build the source code(v17+ bcdfa5f2e2f) on our local nodes. We will try to setup the buildfarm and let you know.\nIs there any specific configuration we are looking for?\n \nRegards,\nSriram.", "msg_date": "Thu, 18 Apr 2024 11:57:48 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 11:15:43 +0000, Sriram RK wrote:\n> We (IBM-AIX team) looked into this issue\n>\n> https://www.postgresql.org/message-id/[email protected]\n>\n> This is related to the compiler issue. The compilers xlc(13.1) and gcc(8.0)\n> have issues. But we verified that this issue is resolved with the newer\n> compiler versions openXL(xlc17.1) and gcc(12.0) onwards.\n\nThe reason we used these compilers was that those were the only ones we had\nkinda somewhat reasonable access to, via the gcc projects'\n\"compile farm\" https://portal.cfarm.net/\nWe have to rely on whatever the aix machines there provide. They're not\nparticularly plentiful resource-wise either.\n\n\nThis is generally one of the big issues with AIX support. There are other\nniche-y OSs that don't have a lot of users, e.g. openbsd, but in contrast to\nAIX I can just start an openbsd VM within a few minutes and iron out whatever\nportability issue I'm dealing with.\n\nNot being AIX customers we also can't raise bugs about compiler bugs, so we're\nstuck doing bad workarounds.\n\n\n> Also as part of the support, we will help in fixing all the issues related\n> to AIX and continue to support AIX for Postgres. If we need another CI\n> environment we can work to make one available. But for time being can we\n> start reverting the patch that has removed AIX support.\n\nThe state when was removed was not in a state that I am OK with adding back.\n\n\n> We want to make a note that postgres is used extensively in our IBM product\n> and is being exploited by multiple customers.\n\nTo be blunt: Then it'd have been nice to see some investment in that before\nnow. Both on the code level and the infrastructure level (i.e. 
access to\nmachines etc).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:01:28 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Fri, Apr 19, 2024 at 6:01 AM Andres Freund <[email protected]> wrote:\n> On 2024-04-18 11:15:43 +0000, Sriram RK wrote:\n> > We (IBM-AIX team) looked into this issue\n> >\n> > https://www.postgresql.org/message-id/[email protected]\n> >\n> > This is related to the compiler issue. The compilers xlc(13.1) and gcc(8.0)\n> > have issues. But we verified that this issue is resolved with the newer\n> > compiler versions openXL(xlc17.1) and gcc(12.0) onwards.\n>\n> The reason we used these compilers was that those were the only ones we had\n> kinda somewhat reasonable access to, via the gcc projects'\n> \"compile farm\" https://portal.cfarm.net/\n> We have to rely on whatever the aix machines there provide. They're not\n> particularly plentiful resource-wise either.\n\nTo be fair, those OSUOSL machines are donated by IBM:\n\nhttps://osuosl.org/services/powerdev/\n\nIt's just that they seem to be mostly focused on supporting Linux on\nPOWER, with only a couple of AIX hosts (partitions/virtual machines?)\nmade available via portal.cfarm.net, and they only very recently added\na modern AIX 7.3 host. That's cfarm119, upgraded in September-ish,\nlong after many threads on this list that between-the-lines threatened\nto drop support.\n\n> This is generally one of the big issues with AIX support. There are other\n> niche-y OSs that don't have a lot of users, e.g. openbsd, but in contrast to\n> AIX I can just start an openbsd VM within a few minutes and iron out whatever\n> portability issue I'm dealing with.\n\nYeah. It is a known secret that you can run AIX inside Qemu/kvm (it\nappears that IBM has made changes to it to make that possible, because\nearlier AIX versions didn't like Qemu's POWER emulation or\nvirtualisation, there are blog posts about it), but IBM doesn't\nactually make the images available to non-POWER-hardware owners (you\nneed a serial number). If I were an OS vendor and wanted developers\nto target my OS for free, at the very top of my TODO list I would\nhave: provide an easy to use image for developers to be able to spin\nsomething up in minutes and possibly even use in CI systems. That's\nthe reason I can fix any minor portability issue on Linux, illumos,\n*BSD quickly and Windows with only moderate extra pain. Even Oracle\nknows this, see Solaris CBE.\n\n> > We want to make a note that postgres is used extensively in our IBM product\n> > and is being exploited by multiple customers.\n>\n> To be blunt: Then it'd have been nice to see some investment in that before\n> now. Both on the code level and the infrastructure level (i.e. access to\n> machines etc).\n\nIn the AIX space generally, there were even clues that funding had\nbeen completely cut even for packaging PostgreSQL. I was aware of two\npackaging projects (not sure how they were related):\n\n1. The ATOS packaging group, who used to show up on our mailing lists\nand discuss code changes, which announced it was shutting down:\n\nhttps://github.com/power-devops/bullfreeware\n\n2. 
And last time I looked a few months back, the IBM AIX Toolbox\npackaging project only had PostgreSQL 10 or 11 packages, already out\nof support by us, meaning that their maintainer had given up, too:\n\nhttps://www.ibm.com/support/pages/aix-toolbox-open-source-software-downloads-alpha\n\nHowever I see that recently (last month?) someone has added PostgreSQL\n15, so something has only just reawoken there?\n\nThere are quite a lot of threads about AIX problems, but they are\nalmost all just us non-AIX-users trying to unbreak stupid stuff on the\nbuild farm, which at some points began to seem distinctly quixotic:\nchivalric hackers valiantly trying to keep the entire Unix family tree\nworking even though we don't remember why and th versions involved are\nout of support even by the vendor. Of the three old giant commercial\nUnixes, HP-UX was dropped without another mention (it really was a\nwindmill after all), Solaris is somehow easier to deal with (I could\nguess it's because it influenced Linux and BSD so much, ELF and linker\ndetails spring to mind), while AIX fails on every dimension:\nunrepresented by users, lacking in build farm, unavailable to\nnon-customers, and unusual among Unixen.\n\n\n", "msg_date": "Fri, 19 Apr 2024 12:40:12 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "For any complier/hardware related issue we should able to provide support.\nWe are in the process of identifying the AIX systems that can be added to the CI/buildfarm environment.\n\nRegards,\nSriram.\n\n\n\n\n\n\n\n\n\nFor any complier/hardware related issue we should able to provide support.\nWe are in the process of identifying the AIX systems that can be added to the CI/buildfarm environment.\n\n\n\n \nRegards,\nSriram.", "msg_date": "Fri, 19 Apr 2024 11:04:07 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 19.04.24 13:04, Sriram RK wrote:\n> For any complier/hardware related issue we should able to provide support.\n> \n> We are in the process of identifying the AIX systems that can be added \n> to the CI/buildfarm environment.\n\nI think we should manage expectations here, if there is any hope of \ngetting AIX support back into PG17.\n\nI have some sympathy for this. The discussion about removing AIX \nsupport had a very short turnaround and happened in an unrelated thread, \nwithout any sort of public announcement or consultation. So this report \nof \"hey, we were still using that\" is timely and fair.\n\nBut the underlying issue that led to the removal (something to do with \ndirect I/O support and alignment) would still need to be addressed. And \nthis probably wouldn't just need some infrastructure support; it would \nrequire contributions from someone who actively knows how to develop on \nthis platform. Now, direct I/O is currently sort of an experimental \nfeature, so disabling it on AIX, as was initially suggested in that \ndiscussion, might be okay for now, but the issue will come up again.\n\nEven if this new buildfarm support is forthcoming, there has to be some \nsort of deadline in any resurrection attempts for PG17. The first beta \ndate has been set for 23 May. If we are making the reinstatement of AIX \nsupport contingent on new buildfarm support, those machines need to be \navailable, at least initially, at least for backbranches, like in a \nweek. Which seems tight.\n\nI can see several ways going forward:\n\n1. 
We revert the removal of AIX support and carry on with the status quo \nante. (The removal of AIX is a regression; it is timely and in scope \nnow to revert the change.)\n\n2. Like (1), but we consider that notice has been given, and we will \nremove it early in PG18 (like August) unless the situation improves.\n\n3. We leave it out of PG17 and consider a new AIX port for PG18 on its \nown merits.\n\nNote that such a \"new\" port would probably require quite a bit of \ndevelopment and research work, to clean up all the cruft that had \naccumulated over the years in the old port. Another looming issue is \nthat the meson build system only supported AIX with gcc before the \nremoval. I don't know what it would take to expand that to support \nxclang, but if it requires meson upstream work, you have that to do, too.\n\n\n", "msg_date": "Sat, 20 Apr 2024 17:42:03 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I have some sympathy for this. The discussion about removing AIX \n> support had a very short turnaround and happened in an unrelated thread, \n> without any sort of public announcement or consultation. So this report \n> of \"hey, we were still using that\" is timely and fair.\n\nYup, that's a totally fair complaint. Still ...\n\n> I can see several ways going forward:\n> 1. We revert the removal of AIX support and carry on with the status quo \n> ante. (The removal of AIX is a regression; it is timely and in scope \n> now to revert the change.)\n> 2. Like (1), but we consider that notice has been given, and we will \n> remove it early in PG18 (like August) unless the situation improves.\n> 3. We leave it out of PG17 and consider a new AIX port for PG18 on its \n> own merits.\n\nAndres has ably summarized the reasons why the status quo ante was\ngetting untenable. The direct-I/O problem could have been tolerable\non its own, but in reality it was the straw that broke the camel's\nback so far as our willingness to maintain AIX support went. There\nwere just too many hacks and workarounds for too many problems,\nwith too few people interested in looking for better answers.\n\nSo I'm totally not in favor of #1, at least not without some hard\ncommitments and follow-through on really cleaning up the mess\n(which maybe looks more like your #2). What's needed here, as\nyou said, is for someone with a decent amount of expertise in\nmodern AIX to review all the issues. Maybe framing that as a\n\"new port\" per #3 would be a good way to think about it. But\nI don't want to just revert the AIX-ectomy and continue drifting.\n\nOn the whole, it wouldn't be the worst thing in the world if PG 17\nlacks AIX support but that comes back in PG 18. That approach would\nsolve the schedule-crunch aspect and give time for considered review\nof how many of the hacks removed in 0b16bb877 really need to be put\nback, versus being obsolete or amenable to a nicer solution in\nlate-model AIX. If we take a \"new port\" mindset then it would be\ntotally reasonable to say that it only supports very recent AIX\nreleases, so I'd hope at least some of the cruft could be removed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 12:25:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team,\n\n\n\n> I have some sympathy for this. 
The discussion about removing AIX\n\n> support had a very short turnaround and happened in an unrelated thread,\n\n> without any sort of public announcement or consultation. So this report\n\n> of \"hey, we were still using that\" is timely and fair.\n\n We would really like to thank you & the team for considering our request,\n\n and really appreciate for providing all the possible options to support AIX.\n\n\n\n> But the underlying issue that led to the removal (something to do with\n\n> direct I/O support and alignment) would still need to be addressed.\n\n As we already validated that these alignment specific issues are resolved\n\n with the latest versions of the compilers (gcc/ibm-clang). We would request\n\n you to use the latest versions for the build.\n\n\n\n> If we are making the reinstatement of AIX\n\n> support contingent on new buildfarm support, those machines need to be\n\n> available, at least initially, at least for back branches, like in a\n\n> week. Which seems tight.\n\n We are already working with the internal team in procuring the nodes\n\n for the buildfarm, which can be accessible by the community.\n\n\n\n> I can see several ways going forward:\n\n> 1. We revert the removal of AIX support and carry on with the status quo\n\n> ante. (The removal of AIX is a regression; it is timely and in scope\n\n> now to revert the change.)\n\n> 2. Like (1), but we consider that notice has been given, and we will\n\n> remove it early in PG18 (like August) unless the situation improves.\n\n We would really appreciate you for providing the possible options\n\n and we are very much inclined to these above approaches.\n\n\n\n\n\nRegards,\n\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Team,\n \n> I have some sympathy for this. The discussion about removing AIX\n\n> support had a very short turnaround and happened in an unrelated thread,\n\n> without any sort of public announcement or consultation. So this report\n\n> of \"hey, we were still using that\" is timely and fair.\n    We would really like to thank you & the team for considering our request,\n\n    and really appreciate for providing all the possible options to support AIX.\n \n> But the underlying issue that led to the removal (something to do with\n\n> direct I/O support and alignment) would still need to be addressed.\n    As we already validated that these alignment specific issues are resolved\n\n    with the latest versions of the compilers (gcc/ibm-clang). We would request\n\n    you to use the latest versions for the build.\n \n> If we are making the reinstatement of AIX\n\n> support contingent on new buildfarm support, those machines need to be\n\n> available, at least initially, at least for back branches, like in a\n\n> week. Which seems tight.\n    We are already working with the internal team in procuring the nodes\n\n    for the buildfarm, which can be accessible by the community.\n\n \n> I can see several ways going forward:\n> 1. We revert the removal of AIX support and carry on with the status quo\n\n> ante. (The removal of AIX is a regression; it is timely and in scope\n\n> now to revert the change.)\n> 2. 
Like (1), but we consider that notice has been given, and we will\n\n> remove it early in PG18 (like August) unless the situation improves.\n    We would really appreciate you for providing the possible options\n\n    and we are very much inclined to these above approaches.\n \n \nRegards,\nSriram.", "msg_date": "Mon, 22 Apr 2024 10:12:23 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Sat Apr 20, 2024 at 10:42 AM CDT, Peter Eisentraut wrote:\n>\n> 3. We leave it out of PG17 and consider a new AIX port for PG18 on its \n> own merits.\n>\n> Note that such a \"new\" port would probably require quite a bit of \n> development and research work, to clean up all the cruft that had \n> accumulated over the years in the old port. Another looming issue is \n> that the meson build system only supported AIX with gcc before the \n> removal. I don't know what it would take to expand that to support \n> xclang, but if it requires meson upstream work, you have that to do, too.\n\nHappy to help advocate for any PRs from AIX folks on the Meson side. You \ncan find me as @tristan957 on github.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 22 Apr 2024 09:54:50 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Sat, Apr 20, 2024 at 12:25:47PM -0400, Tom Lane wrote:\n> > I can see several ways going forward:\n> > 1. We revert the removal of AIX support and carry on with the status quo \n> > ante. (The removal of AIX is a regression; it is timely and in scope \n> > now to revert the change.)\n> > 2. Like (1), but we consider that notice has been given, and we will \n> > remove it early in PG18 (like August) unless the situation improves.\n> > 3. We leave it out of PG17 and consider a new AIX port for PG18 on its \n> > own merits.\n> \n> Andres has ably summarized the reasons why the status quo ante was\n> getting untenable. The direct-I/O problem could have been tolerable\n> on its own, but in reality it was the straw that broke the camel's\n> back so far as our willingness to maintain AIX support went. There\n> were just too many hacks and workarounds for too many problems,\n> with too few people interested in looking for better answers.\n> \n> So I'm totally not in favor of #1, at least not without some hard\n> commitments and follow-through on really cleaning up the mess\n> (which maybe looks more like your #2). What's needed here, as\n> you said, is for someone with a decent amount of expertise in\n> modern AIX to review all the issues. Maybe framing that as a\n> \"new port\" per #3 would be a good way to think about it. But\n> I don't want to just revert the AIX-ectomy and continue drifting.\n> \n> On the whole, it wouldn't be the worst thing in the world if PG 17\n> lacks AIX support but that comes back in PG 18. That approach would\n> solve the schedule-crunch aspect and give time for considered review\n> of how many of the hacks removed in 0b16bb877 really need to be put\n> back, versus being obsolete or amenable to a nicer solution in\n> late-model AIX. If we take a \"new port\" mindset then it would be\n> totally reasonable to say that it only supports very recent AIX\n> releases, so I'd hope at least some of the cruft could be removed.\n\nI agree that targeting PG 18 for a new-er AIX port is the reasonable\napproach. 
If there is huge demand, someone can create an AIX fork for\nPG 17 using the reverted patches --- yeah, lots of pain there, but we\nhave carried the AIX pain for too long with too little support.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 Apr 2024 23:39:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Wed, Apr 24, 2024 at 11:39:37PM -0400, Bruce Momjian wrote:\n> On Sat, Apr 20, 2024 at 12:25:47PM -0400, Tom Lane wrote:\n>> So I'm totally not in favor of #1, at least not without some hard\n>> commitments and follow-through on really cleaning up the mess\n>> (which maybe looks more like your #2). What's needed here, as\n>> you said, is for someone with a decent amount of expertise in\n>> modern AIX to review all the issues. Maybe framing that as a\n>> \"new port\" per #3 would be a good way to think about it. But\n>> I don't want to just revert the AIX-ectomy and continue drifting.\n>> \n>> On the whole, it wouldn't be the worst thing in the world if PG 17\n>> lacks AIX support but that comes back in PG 18. That approach would\n>> solve the schedule-crunch aspect and give time for considered review\n>> of how many of the hacks removed in 0b16bb877 really need to be put\n>> back, versus being obsolete or amenable to a nicer solution in\n>> late-model AIX. If we take a \"new port\" mindset then it would be\n>> totally reasonable to say that it only supports very recent AIX\n>> releases, so I'd hope at least some of the cruft could be removed.\n> \n> I agree that targeting PG 18 for a new-er AIX port is the reasonable\n> approach. If there is huge demand, someone can create an AIX fork for\n> PG 17 using the reverted patches --- yeah, lots of pain there, but we\n> have carried the AIX pain for too long with too little support.\n\nSome of the portability changes removed in 0b16bb877 feel indeed\nobsolete, so it may not hurt to start an analysis from scratch to see\nthe minimum amount of work that would be really required with the\nlatest versions of xlc, using the newest compilers as a supported\nbase. I'd like to think backporting these to stable branches should\nbe OK at some point, once the new port is proving baked enough.\n\nAnyway, getting an access to such compilers to be able to debug issues\non hosts that take less than 12h to just compile the code would\ncertainly help its adoption. So seeing commitment in the form of\npatches and access to environments would help a lot. 
Overall,\nstudying that afresh with v18 looks like a good idea, assuming that\nanybody who commits such patches has access to hosts to evaluate them,\nwith buildfarm members running on top, of course.\n\nMy 2c.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 13:06:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Some of the portability changes removed in 0b16bb877 feel indeed\n> obsolete, so it may not hurt to start an analysis from scratch to see\n> the minimum amount of work that would be really required with the\n> latest versions of xlc, using the newest compilers as a supported\n> base.\n\nSomething I've been mulling over is whether to suggest that the\nproposed \"new port\" should only target building with gcc.\n\nOn the one hand, that would (I think) remove a number of annoying\nissues, and the average end user is unlikely to care which compiler\ntheir database server was built with. On the other hand, I'm a strong\nproponent of avoiding software monocultures, and xlc is one of the few\nC compilers still standing that aren't gcc or clang.\n\nIt would definitely make sense for a new port to start by getting\nthings going with gcc only, and then look at resurrecting xlc\nsupport.\n\n> I'd like to think backporting these to stable branches should\n> be OK at some point, once the new port is proving baked enough.\n\nIf things go as I expect, the \"new port\" would effectively drop\nsupport for older AIX and/or older compiler versions. So back-\nporting seems like an unlikely decision.\n\n> Anyway, getting an access to such compilers to be able to debug issues\n> on hosts that take less than 12h to just compile the code would\n> certainly help its adoption.\n\n+many\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 00:20:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Thu, Apr 25, 2024 at 12:20:05AM -0400, Tom Lane wrote:\n> It would definitely make sense for a new port to start by getting\n> things going with gcc only, and then look at resurrecting xlc\n> support.\n\nSriram mentioned upthread that he was looking at both of them. I'd be\nready to assume that most of the interest is in xlc, not gcc. But I\nmay be wrong.\n\nSaying that, dividing the effort into more successive steps is\nsensible here (didn't consider that previously, you have a good\npoint).\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 13:45:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 25.04.24 06:20, Tom Lane wrote:\n> Something I've been mulling over is whether to suggest that the\n> proposed \"new port\" should only target building with gcc.\n> \n> On the one hand, that would (I think) remove a number of annoying\n> issues, and the average end user is unlikely to care which compiler\n> their database server was built with. On the other hand, I'm a strong\n> proponent of avoiding software monocultures, and xlc is one of the few\n> C compilers still standing that aren't gcc or clang.\n\nMy understanding is that the old xlc is dead and has been replaced by \n\"xlclang\", which is presumably an xlc-compatible frontend on top of \nclang/llvm. 
Hopefully, that will have fewer issues.\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:03:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 2024-Apr-24, Bruce Momjian wrote:\n\n> I agree that targeting PG 18 for a new-er AIX port is the reasonable\n> approach. If there is huge demand, someone can create an AIX fork for\n> PG 17 using the reverted patches --- yeah, lots of pain there, but we\n> have carried the AIX pain for too long with too little support.\n\nI'm not sure how large the demand would be for an AIX port specifically\nof 17, though. I mean, people using older versions can continue to use\n16 until 18 is released. Upgrading past one major version is hardly\nunheard of.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you want to have good ideas, you must have many ideas. Most of them\nwill be wrong, and what you have to learn is which ones to throw away.\"\n (Linus Pauling)\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:16:34 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 25.04.24 06:20, Tom Lane wrote:\n>> Something I've been mulling over is whether to suggest that the\n>> proposed \"new port\" should only target building with gcc.\n\n> My understanding is that the old xlc is dead and has been replaced by \n> \"xlclang\", which is presumably an xlc-compatible frontend on top of \n> clang/llvm. Hopefully, that will have fewer issues.\n\n[ googles... ] Actually it seems to be the other way around:\nper [1] xlclang is a clang-based front end to IBM's existing\ncodegen+optimization logic, and the xlc front end is still there too.\nIt's not at all clear that they have any intention of killing off xlc.\n\nNot sure where that leaves us in terms of either one being an\ninteresting target to support. xlclang is presumably an easier lift\nto get working, but that also makes it much less interesting from\nthe preserve-our-portability standpoint.\n\n\t\t\tregards, tom lane\n\n[1] https://www.ibm.com/docs/en/xl-c-and-cpp-aix/16.1?topic=new-clang-based-front-end\n\n\n", "msg_date": "Thu, 25 Apr 2024 09:54:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi,\n\nOn 2024-04-25 00:20:05 -0400, Tom Lane wrote:\n> Something I've been mulling over is whether to suggest that the\n> proposed \"new port\" should only target building with gcc.\n\nYes. I also wonder if such a port should only support building with sysv\nstyle shared library support, rather than the AIX (and windows) style. That'd\nmake it considerably less impactful on the buildsystem level. I don't know\nwhat the performance difference is these days.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Apr 2024 08:05:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Thu, Apr 25, 2024 at 10:16:34AM +0200, Álvaro Herrera wrote:\n> On 2024-Apr-24, Bruce Momjian wrote:\n> \n> > I agree that targeting PG 18 for a new-er AIX port is the reasonable\n> > approach. 
If there is huge demand, someone can create an AIX fork for\n> > PG 17 using the reverted patches --- yeah, lots of pain there, but we\n> > have carried the AIX pain for too long with too little support.\n> \n> I'm not sure how large the demand would be for an AIX port specifically\n> of 17, though. I mean, people using older versions can continue to use\n> 16 until 18 is released. Upgrading past one major version is hardly\n> unheard of.\n\nAgreed. They seem to have packages for 11/12, and only 15 recently. I\ndon't see how PG 17 would be missed, unless there are many people\ncompiling from source.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 26 Apr 2024 13:04:13 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Thu, Apr 25, 2024 at 01:06:24PM +0900, Michael Paquier wrote:\n> Anyway, getting an access to such compilers to be able to debug issues\n> on hosts that take less than 12h to just compile the code would\n> certainly help its adoption. So seeing commitment in the form of\n> patches and access to environments would help a lot. Overall,\n> studying that afresh with v18 looks like a good idea, assuming that\n> anybody who commits such patches has access to hosts to evaluate them,\n> with buildfarm members running on top, of course.\n\nAgreed. They can't even have buildfarm member for PG 17 since it\ndoesn't compile anymore, so someone has to go over the reverted patch,\nfigure out which ones are still valid, and report back. Trying to add a\nport, with possible breakage, during beta seems too risky compared to\nthe value.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 26 Apr 2024 13:06:12 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "> > It would definitely make sense for a new port to start by getting\n> > things going with gcc only, and then look at resurrecting xlc\n> > support.\n\n> Sriram mentioned upthread that he was looking at both of them. I'd be\n> ready to assume that most of the interest is in xlc, not gcc. But I\n> may be wrong.\n\nJust a heads-up, we got a node in the OSU lab for the buildfarm. 
Will let you know once we have the buildfarm setup on that node.\n\nAlso, we are making progress on setting up the buildfarm on a local node as well.\nBut currently there are some tests failing, seems some issue with plperl.\n\n aix01::build-farm-17#\n./run_build.pl --keepall --nosend --nostatus --verbose=5 --force REL_16_STABLE\n\nFri Apr 26 00:53:50 2024: buildfarm run for AIXnode01:REL_16_STABLE starting\nAIXnode01:REL_16_STABLE [00:53:50] checking out source ...\nAIXnode01:REL_16_STABLE [00:53:56] checking if build run needed ...\nAIXnode01:REL_16_STABLE [00:53:56] copying source to pgsql.build ...\nAIXnode01:REL_16_STABLE [00:54:08] running configure ...\nAIXnode01:REL_16_STABLE [00:55:01] running build ...\nAIXnode01:REL_16_STABLE [01:08:09] running basic regression tests ...\nAIXnode01:REL_16_STABLE [01:09:51] running make contrib ...\nAIXnode01:REL_16_STABLE [01:11:08] running make testmodules ...\nAIXnode01:REL_16_STABLE [01:11:19] running install ...\nAIXnode01:REL_16_STABLE [01:11:48] running make contrib install ...\nAIXnode01:REL_16_STABLE [01:12:01] running testmodules install ...\nAIXnode01:REL_16_STABLE [01:12:06] checking test-decoding\ngmake: gcc: A file or directory in the path name does not exist.\nAIXnode01:REL_16_STABLE [01:12:28] running make check miscellaneous modules ...\ngmake: gcc: A file or directory in the path name does not exist.\nAIXnode01:REL_16_STABLE [01:13:50] setting up db cluster (C)...\nAIXnode01:REL_16_STABLE [01:13:53] starting db (C)...\nAIXnode01:REL_16_STABLE [01:13:53] running installcheck (C)...\nAIXnode01:REL_16_STABLE [01:15:05] restarting db (C)...\nAIXnode01:REL_16_STABLE [01:15:07] running make isolation check ...\nAIXnode01:REL_16_STABLE [01:15:51] restarting db (C)...\nAIXnode01:REL_16_STABLE [01:15:56] running make PL installcheck (C)...\nBranch: REL_16_STABLE\nStage PLCheck-C failed with status 2\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n> > It would definitely make sense for a new port to start by getting\n\n\n> > things going with gcc only, and then look at resurrecting xlc\n\n\n> > support.\n\n\n \n\n\n> Sriram mentioned upthread that he was looking at both of them.  I'd be\n\n\n> ready to assume that most of the interest is in xlc, not gcc.  But I\n\n\n> may be wrong.\n\n\n \nJust a heads-up, we got a node in the OSU lab for the buildfarm. 
", "msg_date": "Fri, 26 Apr 2024 17:30:00 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team,\n\nThere are a couple of updates. First, we got an AIX node in the OSU lab.\nPlease feel free to reach me, so that we can provide access to the node.\nWe have started working on setting up the buildfarm on that node.\n\nSecondly, as part of the buildfarm setup on our local nodes, we are hitting\nan issue with the plperl extension. 
In the logs we noticed that, when the\nplperl extension is being created, it fails to load the Perl library.\n\n    - CREATE EXTENSION plperlu;\n    + server closed the connection unexpectedly\n    +       This probably means the server terminated abnormally\n    +       before or while processing the request.\n    + connection to server was lost\n\nIn the logfile we could see the following:\n\n    2024-05-04 05:05:17.537 CDT [3997786:17] pg_regress/plperl_setup LOG:  statement: CREATE EXTENSION plperl;\n    Util.c: loadable library and perl binaries are mismatched (got first handshake key 9b80080, needed 9a80080)\n\nWe tried some of the suggestions mentioned in the thread below, but that did not resolve the issue.\n\nhttps://www.postgresql.org/message-id/CALDaNm15qaRFb3WiPFAdFqoB9pj1E5SCPPUGB%2BnJ4iF4gXO6Rw%40mail.gmail.com\n\nAny inputs here would be greatly appreciated.\n\nRegards,\nSriram.", "msg_date": "Sat, 4 May 2024 16:01:06 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team, on further investigation we were able to resolve the perl issue by setting the right PERL env location. 
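For anyone hitting the same handshake-key mismatch, a quick sanity check is to confirm that the Perl picked up at configure time and the Perl on PATH at run time are the same 64-bit build — a rough sketch, the exact paths are installation specific:\n\n    which perl\n    perl -MConfig -le 'print \"$Config{archname} ptrsize=$Config{ptrsize}\"'\n    grep -E '^(PERL|perl_)' src/Makefile.global\n\nA 64-bit server needs plperl built against a 64-bit Perl (ptrsize 8); the handshake key differs whenever the interpreter and the loadable module were built against different Perl configurations.\n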
Earlier it was pointing to the 32bit perl, as a result the perl lib mismatch seems to be happening.\nNow we have successfully built release 15 and 16 stable branches on the OSU lab node.\n\np9aix (OSU)\nOS: AIX 72Z\n RELEASE 16\n p9aix:REL_16_STABLE [08:31:26] OK\n ======== log passed to send_result ===========\n Branch: REL_16_STABLE\n All stages succeeded\n\n RELEASE 15\n p9aix:REL_15_STABLE [08:55:37] OK\n ======== log passed to send_result ===========\n Branch: REL_15_STABLE\n All stages succeeded\n\n\nAlso, we had successfully built release 16 branch on our local nodes as well\nOS: AIX 71C\n pgsql-aix71C:REL_16_STABLE [02:25:32] OK\n ======== log passed to send_result ===========\n Branch: REL_16_STABLE\n All stages succeeded\n\nOS: AIX72Z\n pgsql-aix72Z:REL_16_STABLE [02:35:03] OK\n ======== log passed to send_result ===========\n Branch: REL_16_STABLE\n All stages succeeded\n\nOS: AIX73D\n pgsql-aix73D:REL_16_STABLE [05:32:29] OK\n ======== log passed to send_result ===========\n Branch: REL_16_STABLE\n All stages succeeded\n\n\nRegards,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Team, on further investigation we were able to resolve the perl issue by setting the right PERL env location. Earlier it was pointing to the 32bit perl, as a result\n the perl lib mismatch seems to be happening.\nNow we have successfully built release 15 and 16 stable branches on the OSU lab node.\n \np9aix (OSU)\n\nOS: AIX 72Z\n\n    RELEASE 16\n\n    \np9aix:REL_16_STABLE [08:31:26] OK\n\n    \n======== log passed to send_result ===========\n\n    \nBranch: REL_16_STABLE\n\n    \nAll stages succeeded\n\n \n\n    RELEASE 15\n   \np9aix:REL_15_STABLE [08:55:37] OK \n\n    \n======== log passed to send_result =========== \n\n    \nBranch: REL_15_STABLE \n\n    \nAll stages succeeded\n\n \n \nAlso, we had successfully built release 16 branch on our local nodes as well\nOS: AIX 71C\n\n    pgsql-aix71C:REL_16_STABLE [02:25:32] OK \n\n    \n======== log passed to send_result =========== \n\n    \nBranch: REL_16_STABLE \n\n    \nAll stages succeeded\n\n \n\nOS: AIX72Z\n\n    \npgsql-aix72Z:REL_16_STABLE [02:35:03] OK \n\n    \n======== log passed to send_result =========== \n\n    \nBranch: REL_16_STABLE \n\n    \nAll stages succeeded\n \nOS: AIX73D\n\n    \npgsql-aix73D:REL_16_STABLE [05:32:29] OK\n   \n======== log passed to send_result ===========\n   \nBranch: REL_16_STABLE\n   \nAll stages succeeded\n \n \nRegards,\nSriram.", "msg_date": "Mon, 6 May 2024 14:12:22 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team, We have the AIX node ready in OSU lab, and the branches 15 and 16 got build on the node. We had raised a request to register this node as buildfarm member. Yet to receive the approval.\n\nWe would like to understand your inputs/plans on reverting the changes for AIX.\n\nThanks,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Team, We have the AIX node ready in OSU lab, and the branches 15 and 16 got build on the node. We had raised a request to register this node as buildfarm member. Yet to receive\n the approval. 
\n \nWe would like to understand your inputs/plans on reverting the changes for AIX.\n \nThanks,\n\nSriram.", "msg_date": "Wed, 8 May 2024 11:39:17 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 08.05.24 13:39, Sriram RK wrote:\n> We would like to understand your inputs/plans on reverting the changes \n> for AIX.\n\nI think the ship has sailed for PG17. The way forward would be that you \nsubmit a patch for new, modernized AIX support for PG18.\n\n\n\n", "msg_date": "Wed, 8 May 2024 15:44:12 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Wed, May 8, 2024 at 03:44:12PM +0200, Peter Eisentraut wrote:\n> On 08.05.24 13:39, Sriram RK wrote:\n> > We would like to understand your inputs/plans on reverting the changes\n> > for AIX.\n> \n> I think the ship has sailed for PG17. The way forward would be that you\n> submit a patch for new, modernized AIX support for PG18.\n\nYes, I think we were clear that someone needs to review the reverted\npatch and figure out which parts are still needed, and why. We have no\n\"plans\" to restore support.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 8 May 2024 10:06:53 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team, we have any updated from the XLC team, the issue specific to the alignment is fixed\nand XLC had released it as part of 16.1.0.18. The PTF is available at the below location,\n\nYou can also find a link here:\nhttps://www.ibm.com/support/pages/fix-list-xl-cc-aix.\n\n>>/opt/IBM/xlC/16.1.0/bin/xlC align.c -o align.xl\n\n>>./align.xl\nal4096 4096 @ 0x20008000 (mod 0)\nal4096_initialized 4096 @ 0x20004000 (mod 0)\nal4096_const 4096 @ 0x2000b000 (mod 0)\nal4096_const_initialized 4096 @ 0x10008000 (mod 0)\nal4096_static 4096 @ 0x2000e000 (mod 0)\nal4096_static_initialized 4096 @ 0x20001000 (mod 0)\nal4096_static_const 4096 @ 0x20011000 (mod 0)\nal4096_static_const_initialized 4096 @ 0x10001000 (mod 0)\n\n\nAlso would like to know some info related to the request raised for buildfarm access, to register the node in OSU lab. Where can I get the status of the request? Whom can I contact to get the request approved? So that we can add the node to the buildfarm.\n\nRegards,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Team, we have any updated from the XLC team, the issue specific to the alignment is fixed\n\nand XLC had released it as part of 16.1.0.18. 
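A minimal check along these lines reproduces the kind of output shown further below (a sketch only, assuming a C11/C++11 compiler; the exact test program used for the verification may differ):\n\n    #include <stdio.h>\n    #include <stdint.h>\n    #include <stdalign.h>\n\n    #define ALIGN 4096\n\n    static alignas(ALIGN) char al4096_static[ALIGN];\n    static alignas(ALIGN) char al4096_static_initialized[ALIGN] = {1};\n    alignas(ALIGN) char al4096[ALIGN];\n    alignas(ALIGN) char al4096_initialized[ALIGN] = {1};\n\n    /* print name, size, address, and address modulo the requested alignment */\n    static void report(const char *name, void *addr, size_t size)\n    {\n        printf(\"%-32s %zu @ %p (mod %lu)\\n\", name, size, addr, (unsigned long) ((uintptr_t) addr % ALIGN));\n    }\n\n    int main(void)\n    {\n        report(\"al4096\", al4096, sizeof(al4096));\n        report(\"al4096_initialized\", al4096_initialized, sizeof(al4096_initialized));\n        report(\"al4096_static\", al4096_static, sizeof(al4096_static));\n        report(\"al4096_static_initialized\", al4096_static_initialized, sizeof(al4096_static_initialized));\n        return 0;\n    }\n\nWith the fixed compiler, every variable should report (mod 0), as in the output below.\n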
The PTF is available at the below location,\n\n \n\nYou can also find a link here:  \n\n\nhttps://www.ibm.com/support/pages/fix-list-xl-cc-aix.\n\n \n\n>>/opt/IBM/xlC/16.1.0/bin/xlC align.c -o align.xl\n\n \n\n>>./align.xl\n\nal4096                           4096 @ 0x20008000 (mod 0)\n\nal4096_initialized               4096 @ 0x20004000 (mod 0)\n\nal4096_const                     4096 @ 0x2000b000 (mod 0)\n\nal4096_const_initialized         4096 @ 0x10008000 (mod 0)\n\nal4096_static                    4096 @ 0x2000e000 (mod 0)\n\nal4096_static_initialized        4096 @ 0x20001000 (mod 0)\n\nal4096_static_const              4096 @ 0x20011000 (mod 0)\n\nal4096_static_const_initialized  4096 @ 0x10001000 (mod 0)\n\n \n\n \n\nAlso would like to know some info related to the request raised for buildfarm access, to register the node in OSU lab. Where can I get the status of the request? Whom can I contact to get the request approved?\n So that we can add the node to the buildfarm.\n\n \n\nRegards,\n\nSriram.", "msg_date": "Wed, 15 May 2024 15:33:25 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Wed, May 15, 2024 at 03:33:25PM +0000, Sriram RK wrote:\n> Hi Team, we have any updated from the XLC team, the issue specific to the alignment is fixed\n> and XLC had released it as part of 16.1.0.18. The PTF is available at the below location,\n> \n> You can also find a link here:\n> https://www.ibm.com/support/pages/fix-list-xl-cc-aix.\n> \n> >>/opt/IBM/xlC/16.1.0/bin/xlC align.c -o align.xl\n> \n> >>./align.xl\n> al4096 4096 @ 0x20008000 (mod 0)\n> al4096_initialized 4096 @ 0x20004000 (mod 0)\n> al4096_const 4096 @ 0x2000b000 (mod 0)\n> al4096_const_initialized 4096 @ 0x10008000 (mod 0)\n> al4096_static 4096 @ 0x2000e000 (mod 0)\n> al4096_static_initialized 4096 @ 0x20001000 (mod 0)\n> al4096_static_const 4096 @ 0x20011000 (mod 0)\n> al4096_static_const_initialized 4096 @ 0x10001000 (mod 0)\n\nThat is good news. PGIOAlignedBlock is now in the IBM publication,\nhttps://www.ibm.com/support/pages/apar/IJ51032\n\n> Also would like to know some info related to the request raised for buildfarm access, to register the node in OSU lab. Where can I get the status of the request? Whom can I contact to get the request approved? So that we can add the node to the buildfarm.\n\nI assume you filled out the form at\nhttps://buildfarm.postgresql.org/cgi-bin/register-form.pl? It can take a few\nweeks, so I wouldn't worry yet.\n\n\n", "msg_date": "Wed, 15 May 2024 08:52:41 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "> > Also would like to know some info related to the request raised for buildfarm access, to register the\n> > node in OSU lab. Where can I get the status of the request? Whom can I contact to get the request\n> > approved? So that we can add the node to the buildfarm.\n\n> I assume you filled out the form at\n> https://buildfarm.postgresql.org/cgi-bin/register-form.pl? It can take a few\n> weeks, so I wouldn't worry yet.\n\nThanks Noha, I had already submitted a form a week back, hope it might take another couple of weeks to get it approved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n> > Also would like to know some info related to the request raised for buildfarm access, to register the\n> > node in OSU lab. Where can I get the status of the request? Whom can I contact to get the request\n> >\napproved? 
So that we can add the node to the buildfarm.\n\n> I assume you filled out the form at\n> https://buildfarm.postgresql.org/cgi-bin/register-form.pl?  It can take a few\n> weeks, so I wouldn't worry yet.\n \nThanks Noha, I had already submitted a form a week back, hope it might take another couple of weeks to get it approved.", "msg_date": "Wed, 15 May 2024 16:22:10 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Team,\n\nWe have an update wrt to the PG17 AIX port.\n\nWe have reverted the changes specific to AIX (that were removed in 0b16bb8776bb8) to the latest PG17 (head).\n\nThe Buildfarm succeeded for these changes. All the tests passed.\n\n\n\n System config\n\n OS level : AIX-73D\n\n Compiler : gcc-12 & xlc(16.1.0.18)\n\n\n\n Wed May 15 21:26:00 2024: buildfarm run for AIXnode01:HEAD starting\n\n AIXnode01:HEAD [21:26:00] running configure ...\n\n AIXnode01:HEAD [21:27:03] running build ...\n\n AIXnode01:HEAD [21:27:27] running basic regression tests ...\n\n AIXnode01:HEAD [21:34:41] running make contrib ...\n\n AIXnode01:HEAD [21:34:43] running make testmodules ...\n\n AIXnode01:HEAD [21:34:44] running install ...\n\n AIXnode01:HEAD [21:34:58] running make contrib install ...\n\n AIXnode01:HEAD [21:35:05] running testmodules install ...\n\n AIXnode01:HEAD [21:35:08] checking pg_upgrade\n\n AIXnode01:HEAD [21:35:08] checking test-decoding\n\n AIXnode01:HEAD [21:35:29] running make check miscellaneous modules ...\n\n AIXnode01:HEAD [21:36:16] setting up db cluster (C)...\n\n AIXnode01:HEAD [21:36:19] starting db (C)...\n\n AIXnode01:HEAD [21:36:19] running installcheck (C)...\n\n AIXnode01:HEAD [21:46:27] restarting db (C)...\n\n AIXnode01:HEAD [21:46:29] running make isolation check ...\n\n AIXnode01:HEAD [21:49:57] restarting db (C)...\n\n AIXnode01:HEAD [21:50:02] running make PL installcheck (C)...\n\n AIXnode01:HEAD [21:50:09] restarting db (C)...\n\n AIXnode01:HEAD [21:50:12] running make contrib installcheck (C)...\n\n AIXnode01:HEAD [21:53:53] restarting db (C)...\n\n AIXnode01:HEAD [21:53:56] running make test-modules installcheck (C)...\n\n AIXnode01:HEAD [21:54:28] stopping db (C)...\n\n AIXnode01:HEAD [21:54:29] running make ecpg check ...\n\n AIXnode01:HEAD [21:54:45] OK\n\n Branch: HEAD\n\n All stages succeeded\n\n\n\n\n\n\n\nThe below changes are applied on this commit level\n\ncommit 54b69f1bd730a228a666441592a12d2a0cbe2a06 (HEAD -> pgAIX, origin/master, origin/HEAD, master)\n\n\n\n On branch pgAIX\n\n Changes to be committed:\n\n (use \"git restore --staged <file>...\" to unstage)\n\n new file: src/backend/port/aix/mkldexport.sh\n\n new file: src/include/port/aix.h\n\n new file: src/makefiles/Makefile.aix\n\n new file: src/template/aix\n\n\n\n Changes not staged for commit:\n\n (use \"git add <file>...\" to update what will be committed)\n\n (use \"git restore <file>...\" to discard changes in working directory)\n\n modified: Makefile\n\n modified: config/c-compiler.m4\n\n modified: configure\n\n modified: configure.ac\n\n modified: doc/src/sgml/dfunc.sgml\n\n modified: doc/src/sgml/installation.sgml\n\n modified: doc/src/sgml/runtime.sgml\n\n modified: meson.build\n\n modified: src/Makefile.shlib\n\n modified: src/backend/Makefile\n\n modified: src/backend/meson.build\n\n modified: src/backend/storage/buffer/bufmgr.c\n\n modified: src/backend/utils/error/elog.c\n\n modified: src/backend/utils/misc/ps_status.c\n\n modified: src/bin/pg_basebackup/t/010_pg_basebackup.pl\n\n modified: 
src/bin/pg_verifybackup/t/008_untar.pl\n\n modified: src/bin/pg_verifybackup/t/010_client_untar.pl\n\n modified: src/include/c.h\n\n modified: src/include/port/atomics.h\n\n modified: src/include/storage/s_lock.h\n\n modified: src/interfaces/libpq/Makefile\n\n modified: src/interfaces/libpq/meson.build\n\n modified: src/port/README\n\n modified: src/port/strerror.c\n\n modified: src/test/regress/Makefile\n\n modified: src/test/regress/expected/sanity_check.out\n\n modified: src/test/regress/expected/test_setup.out\n\n modified: src/test/regress/regress.c\n\n modified: src/test/regress/sql/sanity_check.sql\n\n modified: src/test/regress/sql/test_setup.sql\n\n modified: src/tools/gen_export.pl\n\n modified: src/tools/pginclude/headerscheck\n\n\n\nCan you please let us know, the process to post the changes for review?\n\n\n\nRegards,\nSriram.", "msg_date": "Thu, 16 May 2024 14:17:38 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 2024-May-16, Sriram RK wrote:\n\n> Hi Team,\n> \n> We have an update wrt to the PG17 AIX port.\n> \n> We have reverted the changes specific to AIX (that were removed in 0b16bb8776bb8) to the latest PG17 (head).\n> \n> The Buildfarm succeeded for these changes. 
All the tests passed.\n\nExcellent.\n\n> Can you please let us know, the process to post the changes for review?\n\nHere's some very good advice\nhttps://postgr.es/m/[email protected]\n\nRegards\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Thu, 16 May 2024 16:30:22 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Thanks Alvaro, for the info…\n\nHi Team,\nWe referred to the below links to build this patch …\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\nhttps://peter.eisentraut.org/blog/2023/05/09/how-to-submit-a-patch-by-email-2023-edition\n\nPlease find the attached patch.\n\nApart from the AIX specific changes, there is a minor change in this file wrt to XLC, below is the error for which we removed inline.\nLater, the build and tests passed for both XLC(16.1.0.18) and gcc(12) as well.\n\n src/backend/storage/buffer/bufmgr.c\n\n \"bufmgr.c\", line 811.39: 1506-780 (S) Reference to \"RelationGetSmgr\" with internal linkage is not allowed within inline definition of \"ReadBufferExtended\".\n \"bufmgr.c\", line 811.15: 1506-780 (S) Reference to \"ReadBuffer_common\" with internal linkage is not allowed within inline definition of \"ReadBufferExtended\".\n gmake[4]: *** [<builtin>: bufmgr.o] Error 1\n\n\nPlease let us know your feedback.\n\nThanks,\nSriram.", "msg_date": "Wed, 22 May 2024 16:15:35 +0000", "msg_from": "Sriram RK <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AIX support" }, { "msg_contents": "On 22.05.24 18:15, Sriram RK wrote:\n> Please find the attached patch.\n> \n> Apart from the AIX specific changes, there is a minor change in this \n> file wrt to XLC, below is the error for which we removed inline.\n> \n> Later, the build and tests passed for both XLC(16.1.0.18) and gcc(12) as \n> well.\n\nI think what you should do next is aggressively trim anything that does \nnot apply to current versions of AIX or the current compiler.\n\nFor example,\n\n+ # Old xlc versions (<13.1) don't have support for -qvisibility. Use \nexpfull to force\n\n+ <para>\n+ <productname>AIX</productname> versions before 7.1 are no longer\n+ tested nor supported by the <productname>PostgreSQL</productname>\n+ community.\n+ </para>\n\n(Probably most of that section needs to be retested and rewritten.)\n\n+ # Native memset() is faster, tested on:\n+ # - AIX 5.1 and 5.2, XLC 6.0 (IBM's cc)\n+ # - AIX 5.3 ML3, gcc 4.0.1\n+ memset_loop_limit = 0\n\n+ # for the base executable (AIX 4.2 and up)\n\n+ * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\n\n\nOne of the reasons that the AIX port ultimately became unmaintainable \nwas that so many hacks and caveats were accumulated over the years. 
A \nnew port should set a more recent baseline and trim all those hacks.\n\n\n\n", "msg_date": "Thu, 23 May 2024 07:59:39 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Peter, thanks for your feedback.\n\nWe are eager to extend our support in resolving the issues specific to AIX or the corresponding\ncompilers (XLC and clang).\n\nBut as there are no issues with the patch after reverting the changes (with the latest compilers,\ngcc 12 and xlc 16.1.0.18), we were wondering whether this patch could be merged into the current release 17.\n\nHaving said that, we are committed to resolving all the hacks and caveats that have\naccumulated for AIX over the years, picking them up and resolving them one after the other\nrather than waiting for all of them to be fixed.\n\n > One of the reasons that the AIX port ultimately became unmaintainable\n > was that so many hacks and caveats were accumulated over the years. A\n > new port should set a more recent baseline and trim all those hacks.\nPlease help us understand this: with respect to the AIX-specific hacks, is it enough to find\nall the locations where the _AIX macro is involved, or can we just look at the patch changes only, since all\nthe changes that were made were specific to AIX? 
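As a rough first pass on our side — just a sketch, the patterns are not meant to be exhaustive — a search over a tree with the old port applied should enumerate most of the AIX-specific spots:\n\n    git grep -n '_AIX' -- src configure.ac\n    git grep -ni 'aix' -- src/template src/makefiles src/backend/port src/tools\n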
If not, is there any other location where\nwe could find all the hacks to be resolved?\nCan you provide some more details on the expectations here?\n\nWarm regards,\nSriram.", "msg_date": "Thu, 23 May 2024 15:36:24 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 23/05/2024 18:36, Srirama Kucherlapati wrote:\n> Hi Peter, thanks for your feedback.\n> \n> We are eager to extend our support in resolving the issues specific\n> to AIX or the corresponding compilers (XLC and clang).\n> \n> But as there are no issues with the patch after reverting the\n> changes (with the latest compilers, gcc 12 and xlc 16.1.0.18), we were\n> wondering whether this patch could be merged into the current release 17.\n> \n> Having said that, we are committed to resolving all the hacks and\n> caveats that have accumulated for AIX over the years, picking them up\n> and resolving them one after the other rather than waiting for all of\n> them to be fixed.\n\nI'm not eager to put back those hacks just to have them be removed\nagain. So I'd like to see a minimal patch, with the *minimal* changes\nrequired for AIX support. And perhaps split that into two patches: first\nadd back AIX support with GCC, and a second patch to add XLC support. I'd\nlike to see how much of the changes are because of the different\ncompiler and how much from the OS.\n\nNo promises for v17, but if the patch is small and non-intrusive, I\nwould consider it at least. But let's see what it looks like first. It's\nthe same work that needs to be done whether it goes into v17 or v18 anyway.\n\n>> One of the reasons that the AIX port ultimately became\n>> unmaintainable was that so many hacks and caveats were accumulated\n>> over the years. A new port should set a more recent baseline and\n>> trim all those hacks.\n> \n> Please help us understand this: with respect to the AIX-specific\n> hacks, is it enough to find all the locations where the _AIX macro is\n> involved, or can we just look at the patch changes only, since all the\n> changes that were made were specific to AIX? 
We have\nremoved changes wrt to XLC on AIX.\n\nWe are also continuing to work on the XLC and IBM-clang(openXLC) specific patch as well.\nOnce we get an approval for the above patch we can submit a subsequent patch to support\nXLC/IBM-clang changes.\n\nKindly let us know your inputs/feedback.\n\nWarm regards,\nSriram.", "msg_date": "Wed, 19 Jun 2024 14:55:47 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 19/06/2024 17:55, Srirama Kucherlapati wrote:\n> +/* Commenting for XLC\n> + * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\n> + * expansions of ginCompareItemPointers() \"long long\" arithmetic. To take\n> + * advantage of inlining, build a 64-bit PostgreSQL.\n> +#if defined(__ILP32__) && defined(__IBMC__)\n> +#define PG_FORCE_DISABLE_INLINE\n> +#endif\n> + */\n\nThis seems irrelevant.\n\n> + * Ordinarily, we'd code the branches here using GNU-style local symbols, that\n> + * is \"1f\" referencing \"1:\" and so on. But some people run gcc on AIX with\n> + * IBM's assembler as backend, and IBM's assembler doesn't do local symbols.\n> + * So hand-code the branch offsets; fortunately, all PPC instructions are\n> + * exactly 4 bytes each, so it's not too hard to count.\n\nCould you use GCC assembler to avoid this?\n\n> @@ -662,6 +666,21 @@ tas(volatile slock_t *lock)\n> \n> #if !defined(HAS_TEST_AND_SET)\t/* We didn't trigger above, let's try here */\n> \n> +#if defined(_AIX)\t/* AIX */\n> +/*\n> + * AIX (POWER)\n> + */\n> +#define HAS_TEST_AND_SET\n> +\n> +#include <sys/atomic_op.h>\n> +\n> +typedef int slock_t;\n> +\n> +#define TAS(lock)\t\t\t_check_lock((slock_t *) (lock), 0, 1)\n> +#define S_UNLOCK(lock)\t\t_clear_lock((slock_t *) (lock), 0)\n> +#endif\t /* _AIX */\n> +\n> +\n> /* These are in sunstudio_(sparc|x86).s */\n> \n> #if defined(__SUNPRO_C) && (defined(__i386) || defined(__x86_64__) || defined(__sparc__) || defined(__sparc))\n\nWhat CPI/compiler/OS configuration is this for, exactly? Could we rely \non GCC-provided __sync_lock_test_and_set() builtin function instead?\n\n> +# Allow platforms with buggy compilers to force restrict to not be\n> +# used by setting $FORCE_DISABLE_RESTRICT=yes in the relevant\n> +# template.\n\nSurely we don't need that anymore? Or is the compiler still buggy?\n\nDo you still care about 32-bit binaries on AIX? If not, let's make that \nthe default in configure or a check for it, and remove the instructions \non building 32-bit binaries from the docs.\n\nPlease try hard to remove any changes from the diff that are not \nabsolutely necessary.\n\n- Heikki\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 18:15:14 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Thanks Hikki, for going through the changes.\r\n\r\n\r\n> +/* Commenting for XLC\r\n> + * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\r\n> + * expansions of ginCompareItemPointers() \"long long\" arithmetic. 
To take\r\n> + * advantage of inlining, build a 64-bit PostgreSQL.\r\n> +#if defined(__ILP32__) && defined(__IBMC__)\r\n> +#define PG_FORCE_DISABLE_INLINE\r\n> +#endif\r\n> + */\r\nI can remove these unwanted comments.\r\n\r\nI have to analyze the changes for the rest of your comment and will get back to you.\r\n\r\nWarm regards,\r\nSriram.\r\n\r\n\r\n\r\nFrom: Heikki Linnakangas <[email protected]>\r\nDate: Wednesday, 19 June 2024 at 8:45 PM\r\nTo: Srirama Kucherlapati <[email protected]>, Laurenz Albe <[email protected]>, Bruce Momjian <[email protected]>, Heikki Linnakangas <[email protected]>\r\nCc: Peter Eisentraut <[email protected]>, Alvaro Herrera <[email protected]>, [email protected] <[email protected]>, Noah Misch <[email protected]>, Michael Paquier <[email protected]>, Andres Freund <[email protected]>, Tom Lane <[email protected]>, Thomas Munro <[email protected]>, [email protected] <[email protected]>, [email protected] <[email protected]>\r\nSubject: [EXTERNAL] Re: AIX support\r\nOn 19/06/2024 17:55, Srirama Kucherlapati wrote:\r\n> +/* Commenting for XLC\r\n> + * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\r\n> + * expansions of ginCompareItemPointers() \"long long\" arithmetic. To take\r\n> + * advantage of inlining, build a 64-bit PostgreSQL.\r\n> +#if defined(__ILP32__) && defined(__IBMC__)\r\n> +#define PG_FORCE_DISABLE_INLINE\r\n> +#endif\r\n> + */\r\n\r\nThis seems irrelevant.\r\n\r\n> + * Ordinarily, we'd code the branches here using GNU-style local symbols, that\r\n> + * is \"1f\" referencing \"1:\" and so on. But some people run gcc on AIX with\r\n> + * IBM's assembler as backend, and IBM's assembler doesn't do local symbols.\r\n> + * So hand-code the branch offsets; fortunately, all PPC instructions are\r\n> + * exactly 4 bytes each, so it's not too hard to count.\r\n\r\nCould you use GCC assembler to avoid this?\r\n\r\n> @@ -662,6 +666,21 @@ tas(volatile slock_t *lock)\r\n>\r\n> #if !defined(HAS_TEST_AND_SET) /* We didn't trigger above, let's try here */\r\n>\r\n> +#if defined(_AIX) /* AIX */\r\n> +/*\r\n> + * AIX (POWER)\r\n> + */\r\n> +#define HAS_TEST_AND_SET\r\n> +\r\n> +#include <sys/atomic_op.h>\r\n> +\r\n> +typedef int slock_t;\r\n> +\r\n> +#define TAS(lock) _check_lock((slock_t *) (lock), 0, 1)\r\n> +#define S_UNLOCK(lock) _clear_lock((slock_t *) (lock), 0)\r\n> +#endif /* _AIX */\r\n> +\r\n> +\r\n> /* These are in sunstudio_(sparc|x86).s */\r\n>\r\n> #if defined(__SUNPRO_C) && (defined(__i386) || defined(__x86_64__) || defined(__sparc__) || defined(__sparc))\r\n\r\nWhat CPI/compiler/OS configuration is this for, exactly? Could we rely\r\non GCC-provided __sync_lock_test_and_set() builtin function instead?\r\n\r\n> +# Allow platforms with buggy compilers to force restrict to not be\r\n> +# used by setting $FORCE_DISABLE_RESTRICT=yes in the relevant\r\n> +# template.\r\n\r\nSurely we don't need that anymore? Or is the compiler still buggy?\r\n\r\nDo you still care about 32-bit binaries on AIX? If not, let's make that\r\nthe default in configure or a check for it, and remove the instructions\r\non building 32-bit binaries from the docs.\r\n\r\nPlease try hard to remove any changes from the diff that are not\r\nabsolutely necessary.\r\n\r\n- Heikki\r\n\n\n\n\n\n\n\n\n\nThanks Hikki, for going through the changes.\n \n \n> +/* Commenting for XLC\r\n> + * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\r\n> + * expansions of ginCompareItemPointers() \"long long\" arithmetic.  
To take\r\n> + * advantage of inlining, build a 64-bit PostgreSQL.\r\n> +#if defined(__ILP32__) && defined(__IBMC__)\r\n> +#define PG_FORCE_DISABLE_INLINE\r\n> +#endif\r\n> + */\nI can remove these unwanted comments.\n \nI have to analyze the changes for the rest of your comment and will get back to you.  \n \n\nWarm regards,\nSriram.\n \n\n \n \n\n\n\nFrom:\r\nHeikki Linnakangas <[email protected]>\nDate: Wednesday, 19 June 2024 at 8:45 PM\nTo: Srirama Kucherlapati <[email protected]>, Laurenz Albe <[email protected]>, Bruce Momjian <[email protected]>, Heikki Linnakangas <[email protected]>\nCc: Peter Eisentraut <[email protected]>, Alvaro Herrera <[email protected]>, [email protected] <[email protected]>, Noah Misch <[email protected]>, Michael Paquier <[email protected]>, Andres Freund <[email protected]>,\r\n Tom Lane <[email protected]>, Thomas Munro <[email protected]>, [email protected] <[email protected]>, [email protected] <[email protected]>\nSubject: [EXTERNAL] Re: AIX support\n\n\nOn 19/06/2024 17:55, Srirama Kucherlapati wrote:\r\n> +/* Commenting for XLC\r\n> + * \"IBM XL C/C++ for AIX, V12.1\" miscompiles, for 32-bit, some inline\r\n> + * expansions of ginCompareItemPointers() \"long long\" arithmetic.  To take\r\n> + * advantage of inlining, build a 64-bit PostgreSQL.\r\n> +#if defined(__ILP32__) && defined(__IBMC__)\r\n> +#define PG_FORCE_DISABLE_INLINE\r\n> +#endif\r\n> + */\n\r\nThis seems irrelevant.\n\r\n> + * Ordinarily, we'd code the branches here using GNU-style local symbols, that\r\n> + * is \"1f\" referencing \"1:\" and so on.  But some people run gcc on AIX with\r\n> + * IBM's assembler as backend, and IBM's assembler doesn't do local symbols.\r\n> + * So hand-code the branch offsets; fortunately, all PPC instructions are\r\n> + * exactly 4 bytes each, so it's not too hard to count.\n\r\nCould you use GCC assembler to avoid this?\n\r\n> @@ -662,6 +666,21 @@ tas(volatile slock_t *lock)\r\n>  \r\n>  #if !defined(HAS_TEST_AND_SET)       /* We didn't trigger above, let's try here */\r\n>  \r\n> +#if defined(_AIX)    /* AIX */\r\n> +/*\r\n> + * AIX (POWER)\r\n> + */\r\n> +#define HAS_TEST_AND_SET\r\n> +\r\n> +#include <sys/atomic_op.h>\r\n> +\r\n> +typedef int slock_t;\r\n> +\r\n> +#define TAS(lock)                    _check_lock((slock_t *) (lock), 0, 1)\r\n> +#define S_UNLOCK(lock)               _clear_lock((slock_t *) (lock), 0)\r\n> +#endif        /* _AIX */\r\n> +\r\n> +\r\n>  /* These are in sunstudio_(sparc|x86).s */\r\n>  \r\n>  #if defined(__SUNPRO_C) && (defined(__i386) || defined(__x86_64__) || defined(__sparc__) || defined(__sparc))\n\r\nWhat CPI/compiler/OS configuration is this for, exactly? Could we rely \r\non GCC-provided __sync_lock_test_and_set() builtin function instead?\n\r\n> +# Allow platforms with buggy compilers to force restrict to not be\r\n> +# used by setting $FORCE_DISABLE_RESTRICT=yes in the relevant\r\n> +# template.\n\r\nSurely we don't need that anymore? Or is the compiler still buggy?\n\r\nDo you still care about 32-bit binaries on AIX? 
{ "msg_contents": "Hi Heikki & Team,\n\n\n\nI tried to look at the assembly code changes with our team, in the below file.\n\n\n\ndiff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h\n\nindex 29ac6cdcd9..69582f4ae7 100644\n\n--- a/src/include/storage/s_lock.h\n\n+++ b/src/include/storage/s_lock.h\n\nstatic __inline__ int\n\ntas(volatile slock_t *lock)\n\n@@ -424,17 +430,15 @@ tas(volatile slock_t *lock)\n\n__asm__ __volatile__(\n\n\" lwarx %0,0,%3,1 \\n\"\n\n\" cmpwi %0,0 \\n\"\n\n\" bne $+16 \\n\" /* branch to li %1,1 */\n\n\" addi %0,%0,1 \\n\"\n\n\" stwcx. %0,0,%3 \\n\"\n\n\" beq $+12 \\n\" /* branch to lwsync */\n\n\" li %1,1 \\n\"\n\n\" b $+12 \\n\" /* branch to end of asm sequence */\n\n\" lwsync \\n\"\n\n\" li %1,0 \\n\"\n\n\n\n: \"=&b\"(_t), \"=r\"(_res), \"+m\"(*lock)\n\n: \"r\"(lock)\n\n: \"memory\", \"cc\");\n\n\n\nFor the changes in the above file, this code is very specific to power architecture we need to use the IBM Power specific asm code only, rather than using the GNU assembler. Also, all these asm specific code is under the macro __ppc__, which should not impact any other platforms. I see there is a GCC specific implementation (under this macro #if defined(HAVE_GCC__SYNC_INT32_TAS)) in the same file as well.\n\n\n\n+#define TAS(lock) _check_lock((slock_t *) (lock), 0, 1)\n\n+#define S_UNLOCK(lock) _clear_lock((slock_t *) (lock), 0)\n\n\n\nThe above changes are specific to AIX kernel and it operates on fixed kernel memory. This is more like a compare_and_swap functionality with sync capability. 
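\n\nTo illustrate the contract (a sketch only, based on the usual s_lock.h convention that TAS() returns zero when the lock was acquired; this is not part of the attached patch):\n\n    #include <sys/atomic_op.h>\n\n    typedef int slock_t;\n\n    /* _check_lock() atomically swaps 0 -> 1 and returns false (0) when it wins */\n    #define TAS(lock)       _check_lock((slock_t *) (lock), 0, 1)\n    /* _clear_lock() stores 0 with the required sync, releasing the lock */\n    #define S_UNLOCK(lock)  _clear_lock((slock_t *) (lock), 0)\n\n    static void\n    spin_acquire(volatile slock_t *lock)\n    {\n        while (TAS(lock))\n            ;               /* a real caller would use the s_lock() delay loop */\n    }\n\n    static void\n    spin_release(volatile slock_t *lock)\n    {\n        S_UNLOCK(lock);\n    }\n\n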
For all the assemble code I think it would be better to use the IBM Power specific asm code to gain additional performance.\n\nI was trying to understand here wrt to both the assemble changes if you are looking for anything specific to the architecture.\n\nAttached is the patch for the previous comments, kindly please let me know your comments.\n\nWarm regards,\nSriram.", "msg_date": "Wed, 14 Aug 2024 03:31:21 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 14/08/2024 06:31, Srirama Kucherlapati wrote:\n> Hi Heikki & Team,\n> \n> I tried to look at the assembly code changes with our team, in the below \n> file.\n> \n> diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h\n> index 29ac6cdcd9..69582f4ae7 100644\n> --- a/src/include/storage/s_lock.h\n> +++ b/src/include/storage/s_lock.h\n> static __inline__ int\n> tas(volatile slock_t *lock)\n> @@ -424,17 +430,15 @@ tas(volatile slock_t *lock)\n> __asm__ __volatile__(\n> \"        lwarx   %0,0,%3,1        \\n\"\n> \"        cmpwi   %0,0                \\n\"\n> \"        bne     $+16                \\n\"                /* branch to li \n> %1,1 */\n> \"        addi    %0,%0,1                \\n\"\n> \"        stwcx.  %0,0,%3                \\n\"\n> \"        beq     $+12                \\n\"                /* branch to \n> lwsync */\n> \"        li      %1,1                \\n\"\n> \"        b       $+12                \\n\"                /* branch to end \n> of asm sequence */\n> \"        lwsync                                \\n\"\n> \"        li      %1,0                \\n\"\n> :        \"=&b\"(_t), \"=r\"(_res), \"+m\"(*lock)\n> :        \"r\"(lock)\n> :        \"memory\", \"cc\");\n> \n> For the changes in the above file,  this code is very specific to power \n> architecture we need to use the IBM Power specific asm code only, rather \n> than using the GNU assembler. Also, all these asm specific code is under \n> the macro __ppc__, which should not impact any other platforms. I see \n> there is a GCC specific implementation (under this macro #if \n> defined(HAVE_GCC__SYNC_INT32_TAS)) in the same file as well.\n\nI'm sorry, I don't understand what you're saying here. Do you mean that \nwe don't need to do anything here, and the code we have in s_lock.h in \n'master' now will work fine on AIX? Or do we need to (re-)do some \nchanges to support AIX again? If we only support GCC, can we use the \n__sync_lock_test_and_set() builtin instead?\n\nIf any changes are required, please include them in the patch. That'll \nmake it clear what exactly you're proposing.\n\n> +#define TAS(lock)                      _check_lock((slock_t *) (lock), \n> 0, 1)\n> \n> +#define S_UNLOCK(lock)         _clear_lock((slock_t *) (lock), 0)\n> \n> The above changes are specific to AIX kernel and it operates on fixed \n> kernel memory. This is more like a compare_and_swap functionality with \n> sync capability. For all the assemble code I think it would be better to \n> use the IBM Power specific asm code to gain additional performance.\n\nYou mean we don't need the above? Ok, good.\n\n> I was trying to understand here wrt to both the assemble changes if you \n> are looking for anything specific to the architecture.\n\nI don't know. 
You tell me what makes most sense on AIX / powerpc.\n\n> Attached is the patch for the previous comments, kindly please let me \n> know your comments.\n\nIs this all that's needed to resurrect AIX support?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 10:46:38 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Heikki,\nI have attached the merged patch with all the changes, the earlier patch was\njust only the changes specific to older review comments.\n\n\n > I'm sorry, I don't understand what you're saying here. Do you mean that\n > we don't need to do anything here, and the code we have in s_lock.h in\n > 'master' now will work fine on AIX? Or do we need to (re-)do some\n > changes to support AIX again? If we only support GCC, can we use the\n > __sync_lock_test_and_set() builtin instead?\n\nHere we need these changes for ppc. These changes are not for enabling\nthe AIX support, but this is implementing “Enhanced PowerPC Architecture”.\nThis routine is more of compare_and_increment, which is different from\nGCC __sync_lock_test_and_set(). Also I tried to write a sample function to\ncheck the assemble generated by __sync_lock_test_and_set(), which turned out to\nbe different set of assemble code.\n\n\n > > +#define TAS(lock) _check_lock((slock_t *) (lock),\n > > 0, 1)\n > >\n >> +#define S_UNLOCK(lock) _clear_lock((slock_t *) (lock), 0)\n >>\n > > The above changes are specific to AIX kernel and it operates on fixed\n > > kernel memory. This is more like a compare_and_swap functionality with\n > > sync capability. For all the assemble code I think it would be better to\n > > use the IBM Power specific asm code to gain additional performance.\n\n > You mean we don't need the above? Ok, good.\n\nI mean this part of the code is needed as this is specific to AIX kernel memory\noperation which is different from __sync_lock_test_and_set().\n\nI would like to mention that the changes made in src/include/storage/s_lock.h\nare pretty much required and need to be operated in assemble specific to IBM\nPower architecture.\n\nWarm regards,\nSriram.", "msg_date": "Wed, 14 Aug 2024 15:22:18 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 14/08/2024 18:22, Srirama Kucherlapati wrote:\n> Hi Heikki,\n> \n> I have attached the merged patch with all the changes, the earlier patch was\n> \n> just only the changes specific to older review comments.\n> \n>     > I'm sorry, I don't understand what you're saying here. Do you \n> mean that\n>     > we don't need to do anything here, and the code we have in \n> s_lock.h in\n>     > 'master' now will work fine on AIX? Or do we need to (re-)do some\n>     > changes to support AIX again? If we only support GCC, can we use the\n>     > __sync_lock_test_and_set() builtin instead?\n> \n> Here we need these changes for ppc. These changes are not for\n> enabling the AIX support, but this is implementing “Enhanced PowerPC\n> Architecture”. This routine is more of compare_and_increment, which\n> is different from GCC __sync_lock_test_and_set(). Also I tried to\n> write a sample function to check the assemble generated by\n> __sync_lock_test_and_set(), which turned out to be different set of\n> assemble code.\n\nI still don't understand. We have Linux powerpc systems running happily \nin the buildfarm. 
They are happy with the current spinlock \nimplementation. Why is this needed? What happens without it?\n\nHow is this different from __sync_lock_test_and_set()? Is the \n__sync_lock_test_and_set() on that platform broken, less efficient, or \njust different but equally good?\n\n>     > > +#define TAS(lock)                      _check_lock((slock_t *) \n> (lock),\n>     > > 0, 1)\n>     > >\n>     >>  +#define S_UNLOCK(lock)         _clear_lock((slock_t *) (lock), 0)\n>     >>\n>     > > The above changes are specific to AIX kernel and it operates on \n> fixed\n>     > > kernel memory. This is more like a compare_and_swap \n> functionality with\n>     > > sync capability. For all the assemble code I think it would be \n> better to\n>     > > use the IBM Power specific asm code to gain additional performance.\n> \n>     > You mean we don't need the above? Ok, good.\n> \n> I mean this part of the code is needed as this is specific to AIX\n> kernel memory operation which is different from\n> __sync_lock_test_and_set().\n\nHow is it different from __sync_lock_test_and_set()? Why is it needed? \nWhat is AIX kernel memory operation?\n\n> I would like to mention that the changes made in \n> src/include/storage/s_lock.h\n> \n> are pretty much required and need to be operated in assemble specific to IBM\n> Power architecture.\n\nNote that your patch both modifies the existing powerpc implementation, \nand introduces a new AIX-specific one. They cannot *both* be required, \nbecause only one of them will ever be compiled on a given platform. \nWhich is it? Or are you trying to make this work on multiple different \nCPUs on AIX, so that different implementation gets chosen on different CPUs?\n\nIs the mkldexport stuff still needed on modern AIX? Or was it specific \nto XLC and never needed on GCC? How do other products do that?\n\nOn a general note: it's your responsibility to explain all the changes \nin a way that others will understand and can verify. It is especially \nimportant for something critical and platform-specific like spinlocks, \nbecause others don't have easy access to the hardware to test these \nthings independently. I also want to remind you that from the Postgres \ncommunity point of view, you are introducing support for a new platform, \nAIX, not merely resurrecting the old stuff. Every change needs to be \njustified anew.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 23:09:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Heikki and Team,\n\nThanks for your comments.\n\nHere are some more details\n\n > I still don't understand. We have Linux powerpc systems\n > running happily in the buildfarm. They are happy with the\n > current spinlock implementation. Why is this needed?\n > What happens without it?\nNot sure, by the time the below commits were made if there was a consideration\nto use the gcc routines. My guess is that by using this PPC assembler code\nwould make the code independent of the compilers. Even the Linux ppc would use the\nsame asm. Since gcc is available on AIX, I have replaced the asm changes with\nthe gcc routine __sync_lock_test_and_set() to set the lock.\n\nWe have the gcc(package) build on the AIX platform and as part of the testsuite\nthere are no issues wrt this routine. We tried to test with the sample test\nprogram extracted from gcc testsuite. 
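\n\nFor reference, what I am testing here is essentially the same shape as the generic HAVE_GCC__SYNC_INT32_TAS fallback that s_lock.h already carries for platforms without a hand-written TAS -- roughly the following (a sketch for illustration, not the exact hunk in the attached patch):\n\n    typedef int slock_t;\n\n    #define TAS(lock) tas(lock)\n\n    static __inline__ int\n    tas(volatile slock_t *lock)\n    {\n        /* the builtin returns the previous value, so 0 means we got the lock */\n        return __sync_lock_test_and_set(lock, 1);\n    }\n\n    #define S_UNLOCK(lock)  __sync_lock_release(lock)\n\n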
Also we discussed the same with our\ncompiler teams internally and they see no issues using this routine.\n\n <attached sample test> gcc-12.4.0/libgomp/testsuite/libgomp.c/sections-1.c\n\n--------\n > > +#define TAS(lock) _check_lock((slock_t *) (lock), 0, 1)\n > >\n > > +#define S_UNLOCK(lock) _clear_lock((slock_t *) (lock), 0)\n > >\n\n > How is it different from __sync_lock_test_and_set()? Why is it needed?\n > What is AIX kernel memory operation?\n\n More Info: _check_lock routine description\n https://www.ibm.com/docs/en/aix/7.1?topic=c-check-lock-subroutine\n The _check_lock subroutine performs an atomic (uninterruptible) sequence of\n operations. The compare_and_swap subroutine is similar, but does not issue\n synchronization instructions and therefore is inappropriate for updating lock\n words.\n\nThis change need not be replaced with gcc routine as, these changes will be\ncompiled for the non-gcc platforms only. This piece of code would never be\ncompiled, as we are using only gcc to build.\n\nI tried to replace the AIX asm (under__ppc__ macro) with the gcc routine\n__sync_lock_test_and_set(), and all the buildfarm tests passed. Attached patch\nand the buildfarm output. Please let me know your feedback.\n\n\n\n----\n\n > On a general note: it's your responsibility to explain all the changes\n > in a way that others will understand and can verify. It is especially\n > important for something critical and platform-specific like spinlocks,\n > because others don't have easy access to the hardware to test these\n > things independently. I also want to remind you that from the Postgres\n > community point of view, you are introducing support for a new platform,\n > AIX, not merely resurrecting the old stuff. Every change needs to be\n > justified anew.\n\nI do agree with you. To have a better understand on the previous changes,\nI was going through the history of the file(s_lock.h) and see that multiple\nchanges that were made wrt the tas()routine specific to ppc/AIX. Below are\nthe commit levels.\nI would kindly request, Tom Lane, to provide some insights on these changes.\n\n\n commit e3b06a871b63b90d4a08560ce184bb33324410b8\n commit 50938576d482cd36e52a60b5bb1b56026e63962a << added tas() for AIX\n commit 7233aae50bea503369b0a4ef9a3b6a3864c96812\n commit ceb4f5ea9c2c6c2bd44d4799ff4a62c40a038894 << added tas() for PPC\n commit f9ba0a7fe56398e89fe349476d9e437c3197ea28\n commit eb5e4c58d137c9258eff5e41b09cb5fe4ed6d64c\n commit cd35d601b859d3a56632696b8d5293cbe547764b\n commit 109867748259d286dd01fce17d5f895ce59c68d5\n commit 5cfa8dd3007d7e953c6a03b0fa2215d97c581b0c\n commit 631beeac3598a73dee2c2afa38fa2e734148031b\n commit bc2a050d40976441cdb963ad829316c23e8df0aa\n commit c41a1215f04912108068b909569551f42059db29\n commit 50938576d482cd36e52a60b5bb1b56026e63962a\n\n\nPlease let me know if would like to try on the hardware, we have recently\nsetup a node in the OSU lab to try out.\n\nThanks,\nSriram.", "msg_date": "Wed, 11 Sep 2024 12:38:33 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 11/09/2024 15:38, Srirama Kucherlapati wrote:\n>> I still don't understand. We have Linux powerpc systems running\n>> happily in the buildfarm. They are happy with the current spinlock\n>> implementation. Why is this needed? 
What happens without it?\n> \n> Not sure, by the time the below commits were made if there was a \n> consideration to use the gcc routines.\n\nThe PPC asm code was originally written in 2002, and the first use of \n__sync_lock_test_and_set(), for ARM, appeared in 2012. The commit that \nmade __sync_lock_test_and_set() be chosen automatically for platforms \nthat don't specify anything else was added in 2022.\n\n> I tried to replace the AIX asm (under__ppc__ macro) with the gcc\n> routine __sync_lock_test_and_set(), and all the buildfarm tests\n> passed. Attached patch and the buildfarm output. Please let me know\n> your feedback.\nOk, if we don't need the assembler code at all, that's good. A patch to \nintroduce AIX support should not change it for non-AIX powerpc systems \nthough. That might be a good change, but would need to be justified \nseparately, e.g. by some performance testing, and should be a separate \npatch.\n\nIf you make no changes to s_lock.h at all, will it work? Why not?\n\nYou said earlier:\n\n> I mean this part of the code is needed as this is specific to AIX kernel memory\n> operation which is different from __sync_lock_test_and_set().\n> \n> I would like to mention that the changes made in src/include/storage/s_lock.h\n> are pretty much required and need to be operated in assemble specific to IBM\n> Power architecture.\n\nWas that earlier statement incorrect? Is the manual wrong or outdated or \nnot applicable to us?\n\n\nMoving on..\n\nDo you still need mkldexport.sh? Surely there's a better way to do that \nin year 2024. Some quick googling says there's a '-bexpall' option to \n'ld', which kind of sounds like what we want. Will that work? How do \nother programs do this?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 00:57:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Thu, Sep 12, 2024 at 9:57 AM Heikki Linnakangas <[email protected]> wrote:\n> If you make no changes to s_lock.h at all, will it work? Why not?\n\nIt's good to keep the work independent and I don't want to hold up\nanything happening in this thread, but just for information: I have\nbeen poking around at the idea of entirely removing the old spinlock\ncode and pointing spin.h's function-like-macros to the atomics code.\nWe couldn't do that before, because atomics were sometimes implemented\nwith spinlocks, but now that pg_atomic_flag is never implemented with\nspinlocks we can flip that around, and then have only one place where\nwe know how to do this stuff. What is needed for that to progress is,\nI guess, to determine though assembler analysis or experimentation\nacross a bunch of targets that it works out at least as good...\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJ%2BoA%2B62iUZ-EQb5R2cAOW3Y942ZoOtzOD%3D1sQ05iNg6Q%40mail.gmail.com#23598cafac3dd08ca94fa9e2228a4764\n\n\n", "msg_date": "Thu, 12 Sep 2024 10:06:22 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "> The PPC asm code was originally written in 2002, and the first use of\n > _ sync_lock_test_and_set(), for ARM, appeared in 2012. The commit that\n > made __sync_lock_test_and_set() be chosen automatically for platforms\n > that don't specify anything else was added in 2022.\nThanks for the info.\n\n\n\n------------------\n > Ok, if we don't need the assembler code at all, that's good. 
A patch to\n > introduce AIX support should not change it for non-AIX powerpc systems\n > though. That might be a good change, but would need to be justified\n > separately, e.g. by some performance testing, and should be a separate\n > patch.\n\n > If you make no changes to s_lock.h at all, will it work? Why not?\nWith the existing asm code I see there are some syntax errors, being hit.\nBut after reverting the old changes the issues resolved. Below are diffs.\n\n static __inline__ int\n tas(volatile slock_t *lock)\n {\n @@ -424,17 +413,15 @@ tas(volatile slock_t *lock)\n __asm__ __volatile__(\n \" lwarx %0,0,%3,1 \\n\"\n \" cmpwi %0,0 \\n\"\n -\" bne 1f \\n\"\n +\" bne $+16 \\n\" /* branch to li %1,1 */\n \" addi %0,%0,1 \\n\"\n \" stwcx. %0,0,%3 \\n\"\n -\" beq 2f \\n\"\n -\"1: \\n\"\n +\" beq $+12 \\n\" /* branch to lwsync */\n \" li %1,1 \\n\"\n -\" b 3f \\n\"\n -\"2: \\n\"\n +\" b $+12 \\n\" /* branch to end of asm sequence */\n \" lwsync \\n\"\n \" li %1,0 \\n\"\n -\"3: \\n\"\n +\n : \"=&b\"(_t), \"=r\"(_res), \"+m\"(*lock)\n : \"r\"(lock)\n : \"memory\", \"cc\");\n\nLet me know if I need to run any perf tools to check the performance of\nthe __sync_lock_test_and_set change.\n\n\n\n---------------\n > > I mean this part of the code is needed as this is specific to AIX kernel memory\n > > operation which is different from __sync_lock_test_and_set().\n > >\n > > I would like to mention that the changes made in src/include/storage/s_lock.h\n > > are pretty much required and need to be operated in assemble specific to IBM\n > > Power architecture.\n\n > Was that earlier statement incorrect? Is the manual wrong or outdated or\n > not applicable to us?\n\nHere this change is specific to AIX, but since we are compiling with gcc, this\nis not applicable. But I will try with __sync* routines and check.\n\n\n---------------\n > Do you still need mkldexport.sh? Surely there's a better way to do that\n > in year 2024. Some quick googling says there's a '-bexpall' option to\n > 'ld', which kind of sounds like what we want. Will that work? How do\n > other programs do this?\n\nThanks for looking into this, I’m working on this, I will let you know.\n\n\nThanks,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n    > The PPC asm code was originally written in 2002, and the first use of \n    > _ sync_lock_test_and_set(), for ARM, appeared in 2012. The commit that \n    > made __sync_lock_test_and_set() be chosen automatically for platforms \n    > that don't specify anything else was added in 2022.\nThanks for the info.\n\n \n \n \n------------------\n    > Ok, if we don't need the assembler code at all, that's good. A patch to \n    > introduce AIX support should not change it for non-AIX powerpc systems \n    > though. That might be a good change, but would need to be justified \n    > separately, e.g.  by some performance testing, and should be a separate \n    > patch.\n\n    > If you make no changes to s_lock.h at all, will it work? Why not?\nWith the existing asm code I see there are some syntax errors, being hit.\nBut after reverting the old changes the issues resolved. 
Below are diffs.\n \n     static __inline__ int\n     tas(volatile slock_t *lock)\n     {\n    @@ -424,17 +413,15 @@ tas(volatile slock_t *lock)\n            __asm__ __volatile__(\n     \"      lwarx   %0,0,%3,1       \\n\"\n     \"      cmpwi   %0,0            \\n\"\n    -\"      bne     1f                      \\n\"\n    +\"      bne     $+16            \\n\"             /* branch to li %1,1 */\n     \"      addi    %0,%0,1         \\n\"\n     \"      stwcx.  %0,0,%3         \\n\"\n    -\"      beq     2f                      \\n\"\n    -\"1: \\n\"\n    +\"      beq     $+12            \\n\"             /* branch to lwsync */\n     \"      li      %1,1            \\n\"\n    -\"      b       3f                      \\n\"\n    -\"2: \\n\"\n    +\"      b       $+12            \\n\"             /* branch to end of asm sequence */\n     \"      lwsync                          \\n\"\n     \"      li      %1,0            \\n\"\n    -\"3: \\n\"\n    +\n     :      \"=&b\"(_t), \"=r\"(_res), \"+m\"(*lock)\n     :      \"r\"(lock)\n     :      \"memory\", \"cc\");\n\n\nLet me know if I need to run any perf tools to check the performance of\n\nthe __sync_lock_test_and_set change.\n \n \n \n---------------\n    > > I mean this part of the code is needed as this is specific to AIX kernel memory\n    > > operation which is different from __sync_lock_test_and_set().\n    > > \n    > > I would like to mention that the changes made in src/include/storage/s_lock.h\n    > > are pretty much required and need to be operated in assemble specific to IBM\n    > > Power architecture.\n\n    > Was that earlier statement incorrect? Is the manual wrong or outdated or \n    > not applicable to us?\n\n\nHere this change is specific to AIX, but since we are compiling with gcc, this\n\nis not applicable. But I will try with __sync* routines and check.\n \n \n---------------\n    > Do you still need mkldexport.sh? Surely there's a better way to do that\n    > in year 2024. Some quick googling says there's a '-bexpall' option to\n    > 'ld', which kind of sounds like what we want. Will that work? How do\n    > other programs do this?\n \nThanks for looking into this, I’m working on this, I will let you know.\n \n \nThanks,\nSriram.", "msg_date": "Fri, 13 Sep 2024 11:49:24 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "> Do you still need mkldexport.sh? Surely there's a better way to do that\n > in year 2024. Some quick googling says there's a '-bexpall' option to\n > 'ld', which kind of sounds like what we want. Will that work? How do\n > other programs do this?\n\nWe have noticed couple of caveats with these flags -bexpall/-bexpfull in other\nopensource tools on AIX. This option would export too many symbols causing\nproblems because a shared library may re-export symbols from another library\ncausing confused dependencies, duplicate symbols.\n\n\n\n\nWe have similar discussion wrt to these flag in Cmake\n\n https://gitlab.kitware.com/cmake/cmake/-/issues/19163\n\n\n\n\n\nAlso, I tried some sample program to verify the same as below\n\n\n\n >> cat foo.c\n\n #include <stdio.h>\n\n #include <string.h>\n\n int func1()\n\n {\n\n char str1[] = \"Hello \", str2[] = \"world! 
\";\n\n strcat(str1,str2);\n\n puts(str1);\n\n return 0;\n\n }\n\n\n\n >> gcc -c foo.c -o foo.o\n\n >> gcc -shared -Wl,-bexpall -o foo.so foo.o\n\n >> dump -Tov foo.so\n\n\n\n foo.so:\n\n\n\n ***Object Module Header***\n\n # Sections Symbol Ptr # Symbols Opt Hdr Len Flags\n\n 4 0x00000d88 120 72 0x3002\n\n Flags=( EXEC DYNLOAD SHROBJ DEP_SYSTEM )\n\n Timestamp = \"Sep 17 10:17:35 2024\"\n\n Magic = 0x1df (32-bit XCOFF)\n\n\n\n ***Optional Header***\n\n Tsize Dsize Bsize Tstart Dstart\n\n 0x00000548 0x0000010c 0x00000004 0x10000128 0x20000670\n\n\n\n SNloader SNentry SNtext SNtoc SNdata\n\n 0x0004 0x0000 0x0001 0x0002 0x0002\n\n\n\n TXTalign DATAalign TOC vstamp entry\n\n 0x0005 0x0004 0x20000750 0x0001 0xffffffff\n\n\n\n maxSTACK maxDATA SNbss magic modtype\n\n 0x00000000 0x00000000 0x0003 0x010b RE\n\n\n\n ***Loader Section***\n\n\n\n ***Loader Symbol Table Information***\n\n [Index] Value Scn IMEX Sclass Type IMPid Name\n\n\n\n [0] 0x2000068c .data RW SECdef [noIMid] __rtinit\n\n [1] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __cxa_finalize\n\n [2] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) _GLOBAL__AIXI_shr_o\n\n [3] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) _GLOBAL__AIXD_shr_o\n\n [4] 0x00000000 undef IMP DS EXTref libc.a(shr.o) __strtollmax\n\n [5] 0x00000000 undef IMP DS EXTref libc.a(shr.o) puts\n\n [6] 0x200006f4 .data EXP DS Ldef [noIMid] __init_aix_libgcc_cxa_atexit\n\n [7] 0x20000724 .data EXP DS Ldef [noIMid] _GLOBAL__AIXI_foo_so\n\n [8] 0x20000730 .data EXP DS Ldef [noIMid] _GLOBAL__AIXD_foo_so\n\n>> [9] 0x2000073c .data EXP DS SECdef [noIMid] strcat\n\n [10] 0x20000744 .data EXP DS Ldef [noIMid] func1\n\n\n\nThe code makes use of strcat from libc but re-exports the symbol (because of -bexpall).\n\n\n\n\n\nAs of now due to the limitation with these flags (-bexpall / -bexpfull ? ), the\n\nsolution here is to continue to extract the symbols from the object files and\n\nuse that export file as part of building the shared library. (Continue to use\n\nthe mkldexport.sh script to generate the export symbols).\n\n\n\nThanks,\nSriram.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n    > Do you still need mkldexport.sh? Surely there's a better way to do that\n    > in year 2024. Some quick googling says there's a '-bexpall' option to\n    > 'ld', which kind of sounds like what we want. Will that work? How do\n    > other programs do this?\n \nWe have noticed couple of caveats with these flags -bexpall/-bexpfull in other\nopensource tools on AIX.  This option would export too many symbols causing\nproblems because a shared library may re-export symbols from another library\ncausing confused dependencies, duplicate symbols.\n \n \nWe have similar discussion wrt to these flag in Cmake\n\n    https://gitlab.kitware.com/cmake/cmake/-/issues/19163\n \n \nAlso, I tried some sample program to verify the same as below\n \n    >> cat foo.c\n        #include <stdio.h>\n        #include <string.h>\n        int func1()\n        {\n            char str1[] = \"Hello \", str2[] = \"world! 
\";\n            strcat(str1,str2);\n            puts(str1);\n            return 0;\n        }\n \n    >> gcc -c foo.c -o foo.o\n    >> gcc -shared -Wl,-bexpall -o foo.so foo.o\n    >> dump -Tov foo.so\n \n    foo.so:\n \n                            ***Object Module Header***\n    # Sections      Symbol Ptr      # Symbols       Opt Hdr Len     Flags\n             4      0x00000d88            120                72     0x3002\n    Flags=( EXEC DYNLOAD SHROBJ DEP_SYSTEM )\n    Timestamp = \"Sep 17 10:17:35 2024\"\n    Magic = 0x1df  (32-bit XCOFF)\n \n                            ***Optional Header***\n    Tsize       Dsize       Bsize       Tstart      Dstart\n    0x00000548  0x0000010c  0x00000004  0x10000128  0x20000670\n \n    SNloader    SNentry     SNtext      SNtoc       SNdata\n    0x0004      0x0000      0x0001      0x0002      0x0002\n \n    TXTalign    DATAalign   TOC         vstamp      entry\n    0x0005      0x0004      0x20000750  0x0001      0xffffffff\n \n    maxSTACK    maxDATA     SNbss       magic       modtype\n    0x00000000  0x00000000  0x0003      0x010b        RE\n \n                            ***Loader Section***\n \n                            ***Loader Symbol Table Information***\n    [Index]      Value      Scn     IMEX Sclass   Type           IMPid Name\n \n    [0]     0x2000068c    .data              RW SECdef        [noIMid] __rtinit\n    [1]     0x00000000    undef      IMP     DS EXTref libgcc_s.a(shr.o) __cxa_finalize\n    [2]     0x00000000    undef      IMP     DS EXTref libgcc_s.a(shr.o) _GLOBAL__AIXI_shr_o\n    [3]     0x00000000    undef      IMP     DS EXTref libgcc_s.a(shr.o) _GLOBAL__AIXD_shr_o\n    [4]     0x00000000    undef      IMP     DS EXTref   libc.a(shr.o) __strtollmax\n    [5]     0x00000000    undef      IMP     DS EXTref   libc.a(shr.o) puts\n    [6]     0x200006f4    .data      EXP     DS   Ldef        [noIMid] __init_aix_libgcc_cxa_atexit\n    [7]     0x20000724    .data      EXP     DS   Ldef        [noIMid] _GLOBAL__AIXI_foo_so\n    [8]     0x20000730    .data      EXP     DS   Ldef        [noIMid] _GLOBAL__AIXD_foo_so\n>>    [9]     0x2000073c    .data      EXP     DS SECdef        [noIMid] strcat\n\n    [10]    0x20000744    .data      EXP     DS   Ldef        [noIMid] func1\n \nThe code makes use of strcat from libc but re-exports the symbol (because of -bexpall). \n \n \nAs of now due to the limitation with these flags (-bexpall / -bexpfull ? ), the\nsolution here is to continue to extract the symbols from the object files and\nuse that export file as part of building the shared library. 
(Continue to use\nthe mkldexport.sh script to generate the export symbols).\n \n \n \nThanks,\nSriram.", "msg_date": "Tue, 17 Sep 2024 16:29:01 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "Hi Heikki & team,\n\nCould you please let me know your comments on the previous details?\n\nAttached are the individual patches for AIX and gcc(__sync) routines.\n\nThanks,\nSriram.", "msg_date": "Tue, 24 Sep 2024 11:25:35 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" }, { "msg_contents": "On 24/09/2024 14:25, Srirama Kucherlapati wrote:\n> Hi Heikki & team,\n> \n> Could you please let me know your comments on the previous details?\n> \n> Attached are the individual patches for AIX and gcc(__sync) routines.\n\nRepeating what I said earlier:\n\n> Ok, if we don't need the assembler code at all, that's good. A patch\n> to introduce AIX support should not change it for non-AIX powerpc\n> systems though. That might be a good change, but would need to be\n> justified separately, e.g. by some performance testing, and should\n> be a separate patch\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:09:39 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "On Tue, Sep 24, 2024 at 04:09:39PM +0300, Heikki Linnakangas wrote:\n> On 24/09/2024 14:25, Srirama Kucherlapati wrote:\n> > Hi Heikki & team,\n> > \n> > Could you please let me know your comments on the previous details?\n> > \n> > Attached are the individual patches for AIX and gcc(__sync) routines.\n> \n> Repeating what I said earlier:\n> \n> > Ok, if we don't need the assembler code at all, that's good. A patch\n> > to introduce AIX support should not change it for non-AIX powerpc\n> > systems though. That might be a good change, but would need to be\n> > justified separately, e.g. by some performance testing, and should\n> > be a separate patch\n\nAgreed. Srirama Kucherlapati, you seem to be doing the minimum amount\nof work and then repeatedly asking the same questions. I suggest you\nstart to take this task more seriously. I would go back and read the\nentire thread, and take the things we have told you more seriously.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:34:15 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AIX support" }, { "msg_contents": "Hi Heikki,\n\nAs requested earlier, I need some assistance from the Postgres side to identify any tool or testcase to calibrate the sync routine performance in Postgres.\nI see the below tools for benchmarking.\n\n * Pgbench https://www.postgresql.org/docs/current/pgbench.html\n * Pg_test_fsync https://www.postgresql.org/docs/current/pgtestfsync.html\n * pg_test_timing https://www.postgresql.org/docs/current/pgtesttiming.html\n\nPlease let me know, if these tools are fine or else ca you suggest us with any additional tools to run the benchmarking.\n\n\n > > Ok, if we don't need the assembler code at all, that's good. A patch to\n > > introduce AIX support should not change it for non-AIX powerpc systems\n > > though. 
That might be a good change, but would need to be justified\n > > separately, e.g. by some performance testing, and should be a separate\n > > patch.\n\n > > If you make no changes to s_lock.h at all, will it work? Why not?\n\n > With the existing asm code I see there are some syntax errors, being hit.\n > But after reverting the old changes the issues resolved. Below are diffs.\n\n > Let me know if I need to run any perf tools to check the performance of\n > the __sync_lock_test_and_set change.\n\nThanks,\nSriram.\n\n\n\n\n\n\n\n\n\n \nHi Heikki,\n \nAs requested earlier, I need some assistance from the Postgres side to identify any tool or testcase to calibrate the sync routine performance in Postgres.\nI see the below tools for benchmarking.\n\n\nPgbench \nhttps://www.postgresql.org/docs/current/pgbench.html\nPg_test_fsync \nhttps://www.postgresql.org/docs/current/pgtestfsync.html\npg_test_timing \nhttps://www.postgresql.org/docs/current/pgtesttiming.html\n \nPlease let me know, if these tools are fine or else ca you suggest us with any additional tools to run the benchmarking.\n \n \n  > > Ok, if we don't need the assembler code at all, that's good. A patch to\n  > > introduce AIX support should not change it for non-AIX powerpc systems\n  > > though. That might be a good change, but would need to be justified\n  > > separately, e.g.  by some performance testing, and should be a separate\n  > > patch.\n \n  > > If you make no changes to s_lock.h at all, will it work? Why not?\n \n  > With the existing asm code I see there are some syntax errors, being hit.\n  > But after reverting the old changes the issues resolved. Below are diffs.\n \n  > Let me know if I need to run any perf tools to check the performance of\n  > the __sync_lock_test_and_set change.\n \n\nThanks,\nSriram.", "msg_date": "Wed, 25 Sep 2024 06:04:45 +0000", "msg_from": "Srirama Kucherlapati <[email protected]>", "msg_from_op": false, "msg_subject": "RE: AIX support" } ]
[ { "msg_contents": "Hi!\n\nIn commit 7b5275eec more tests and test coverage were added into\npg_resetwal/t/001_basic.pl.\nAll the added stuff are pretty useful in my view. Unfortunately, there\nwere some magic constants\nbeen used. In overall, this is not a problem. But while working on 64 bit\nXIDs I've noticed these\nchanges and spent some time to figure it out what this magic values are\nstands fore.\n\nAnd it turns out that I’m not the only one.\n\nSo, by Svetlana Derevyanko's suggestion, I made this patch. I add\nconstants, just like we did\nin verify_heapam tests.\n\nSidenote here: in defines in multixact.c TransactionId type used, but I'm\nsure this is not correct,\nsince we're dealing here with MultiXactId and MultiXactOffset. For now,\nthis is obviously not a\nproblem, since sizes of this particular types are equal. But this will\nmanifest itself when we switch\nto the 64 bits types for MultiXactOffset or MultiXactId.\n\nAs always, reviews and opinions are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 21 Mar 2024 19:58:28 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "On 21.03.24 17:58, Maxim Orlov wrote:\n> In commit 7b5275eec more tests and test coverage were added into \n> pg_resetwal/t/001_basic.pl <http://001_basic.pl>.\n> All the added stuff are pretty useful in my view.  Unfortunately, there \n> were some magic constants\n> been used.  In overall, this is not a problem.  But while working on 64 \n> bit XIDs I've noticed these\n> changes and spent some time to figure it out what this magic values are \n> stands fore.\n> \n> And it turns out that I’m not the only one.\n> \n> So, by Svetlana Derevyanko's suggestion, I made this patch.  I add \n> constants, just like we did\n> in verify_heapam tests.\n\nOk, this sounds like a reasonable idea.\n\n> \n> Sidenote here: in defines in multixact.c TransactionId type used, but \n> I'm sure this is not correct,\n> since we're dealing here with MultiXactId and MultiXactOffset.  For now, \n> this is obviously not a\n> problem, since sizes of this particular types are equal.  But this will \n> manifest itself when we switch\n> to the 64 bits types for MultiXactOffset or MultiXactId.\n\nPlease send a separate patch for this if you want to propose any changes.\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 23:08:10 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "On Fri, 22 Mar 2024 at 01:08, Peter Eisentraut <[email protected]> wrote:\n\n> Please send a separate patch for this if you want to propose any changes.\n>\n>\nThank you for your reply. Here is the second one. I've change types and\nargument\nnames for the macro functions, so that they better reflect the reality.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 22 Mar 2024 13:27:23 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "On 21.03.24 17:58, Maxim Orlov wrote:\n> In commit 7b5275eec more tests and test coverage were added into \n> pg_resetwal/t/001_basic.pl <http://001_basic.pl>.\n> All the added stuff are pretty useful in my view.  Unfortunately, there \n> were some magic constants\n> been used.  In overall, this is not a problem.  
But while working on 64 \n> bit XIDs I've noticed these\n> changes and spent some time to figure it out what this magic values are \n> stands fore.\n> \n> And it turns out that I’m not the only one.\n> \n> So, by Svetlana Derevyanko's suggestion, I made this patch.  I add \n> constants, just like we did\n> in verify_heapam tests.\n\nConsider this change:\n\n-$mult = 32 * $blcksz / 4;\n+$mult = SLRU_PAGES_PER_SEGMENT * $blcksz / MXOFF_SIZE;\n\nwith\n\n+use constant SLRU_PAGES_PER_SEGMENT => 32;\n+use constant MXOFF_SIZE => 4;\n\nSLRU_PAGES_PER_SEGMENT is a constant that also exists in the source \ncode, so good.\n\nBut MXOFF_SIZE doesn't exist anywhere else. The actual formula uses\nsizeof(MultiXactOffset), which isn't obvious from your patch. So this \njust moves the magic constants around by one level.\n\nThe TAP test says\n\n# these use the guidance from the documentation\n\nand the documentation in this case says\n\nSLRU_PAGES_PER_SEGMENT * BLCKSZ / sizeof(MultiXactOffset)\n\nI think if we're going to add more symbols, then it has to be done \nconsistently in the source code, the documentation, and the tests, not \njust one of them.\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:10:47 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "Peter Eisentraut писал(а) 2024-03-25 17:10:\n\n> But MXOFF_SIZE doesn't exist anywhere else. The actual formula uses\n> sizeof(MultiXactOffset), which isn't obvious from your patch. So this \n> just moves the magic constants around by one level.\n> \n> I think if we're going to add more symbols, then it has to be done \n> consistently in the source code, the documentation, and the tests, not \n> just one of them.\n> \n\nHello!\nThank you for your reply.\n\nAttached is the updated version of patch for pg_resetwal test. I added \ndefinitions for MXOFF_SIZE and MXID_SIZE constants in multixact.c (and \nreplaced use of sizeof(MultiXactId) and sizeof(MultiXactOffset) \naccordingly). Also changed multipliers for pg_xact/members/offset on \nCLOG_XACTS_PER_PAGE/MULTIXACT_MEMBERS_PER_PAGE/MULTIXACT_OFFSETS_PER_PAGE \nboth in src/bin/pg_resetwal/t/001_basic.pl and docs, since it seems to \nme that this makes things more clear.\n\nWhat do you think?\n\n\nBest regards,\nSvetlana Derevyanko.", "msg_date": "Tue, 26 Mar 2024 14:53:35 +0300", "msg_from": "Svetlana Derevyanko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "On Tue, Mar 26, 2024 at 02:53:35PM +0300, Svetlana Derevyanko wrote:\n> What do you think?\n>\n> +use constant SLRU_PAGES_PER_SEGMENT => 32;\n\nWell, I disagree with what you are doing here, adding a hardcoded\ndependency between the test code and the backend code. I would\nsuggest to use a more dynamic approach and retrieve such values\ndirectly from the headers. See scan_server_header() in\n039_end_of_wal.pl as one example. 7b5275eec3a5 is newer than\nbae868caf222, so the original commit could have used that, as well.\n--\nMichael", "msg_date": "Thu, 4 Apr 2024 10:58:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" }, { "msg_contents": "On 26.03.24 12:53, Svetlana Derevyanko wrote:\n> Peter Eisentraut писал(а) 2024-03-25 17:10:\n> \n>> But MXOFF_SIZE doesn't exist anywhere else.  
The actual formula uses\n>> sizeof(MultiXactOffset), which isn't obvious from your patch.  So this \n>> just moves the magic constants around by one level.\n>>\n>> I think if we're going to add more symbols, then it has to be done \n>> consistently in the source code, the documentation, and the tests, not \n>> just one of them.\n>>\n> \n> Hello!\n> Thank you for your reply.\n> \n> Attached is the updated version of patch for pg_resetwal test. I added \n> definitions for MXOFF_SIZE and MXID_SIZE constants in multixact.c (and \n> replaced use of sizeof(MultiXactId) and sizeof(MultiXactOffset) \n> accordingly). Also changed multipliers for pg_xact/members/offset on \n> CLOG_XACTS_PER_PAGE/MULTIXACT_MEMBERS_PER_PAGE/MULTIXACT_OFFSETS_PER_PAGE both in src/bin/pg_resetwal/t/001_basic.pl and docs, since it seems to me that this makes things more clear.\n> \n> What do you think?\n\nI don't know. This patch does not fill me with joy. These additional \ndefines ultimately make the code itself harder to comprehend.\n\nMaybe the original request could be satisfied by adding more comments to \nthe test files, like\n\n @files = get_slru_files('pg_xact');\n+# SLRU_PAGES_PER_SEGMENT * BLCKSZ * CLOG_XACTS_PER_BYTE\n $mult = 32 * $blcksz * 4;\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 11:10:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring of pg_resetwal/t/001_basic.pl" } ]
[ { "msg_contents": "Hello, \n\nMy team and I have been working on adding Large block size(LBS)\nsupport to XFS in Linux[1]. Once this feature lands upstream, we will be\nable to create XFS with FS block size > page size of the system on Linux.\nWe also gave a talk about it in Linux Plumbers conference recently[2]\nfor more context. The initial support is only for XFS but more FSs will\nfollow later.\n\nOn an x86_64 system, fs block size was limited to 4k, but traditionally\nPostgres uses 8k as its default internal page size. With LBS support,\nfs block size can be set to 8K, thereby matching the Postgres page size.\n\nIf the file system block size == DB page size, then Postgres can have\nguarantees that a single DB page will be written as a single unit during\nkernel write back and not split.\n\nMy knowledge of Postgres internals is limited, so I'm wondering if there\nare any optimizations or potential optimizations that Postgres could\nleverage once we have LBS support on Linux?\n\n\n[1] https://lore.kernel.org/linux-xfs/[email protected]/\n[2] https://www.youtube.com/watch?v=ar72r5Xf7x4\n-- \nPankaj Raghav\n\n\n", "msg_date": "Thu, 21 Mar 2024 18:46:19 +0100", "msg_from": "\"Pankaj Raghav (Samsung)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large block sizes support in Linux" }, { "msg_contents": "On Thu, Mar 21, 2024 at 06:46:19PM +0100, Pankaj Raghav (Samsung) wrote:\n> Hello, \n> \n> My team and I have been working on adding Large block size(LBS)\n> support to XFS in Linux[1]. Once this feature lands upstream, we will be\n> able to create XFS with FS block size > page size of the system on Linux.\n> We also gave a talk about it in Linux Plumbers conference recently[2]\n> for more context. The initial support is only for XFS but more FSs will\n> follow later.\n> \n> On an x86_64 system, fs block size was limited to 4k, but traditionally\n> Postgres uses 8k as its default internal page size. With LBS support,\n> fs block size can be set to 8K, thereby matching the Postgres page size.\n> \n> If the file system block size == DB page size, then Postgres can have\n> guarantees that a single DB page will be written as a single unit during\n> kernel write back and not split.\n> \n> My knowledge of Postgres internals is limited, so I'm wondering if there\n> are any optimizations or potential optimizations that Postgres could\n> leverage once we have LBS support on Linux?\n\nWe have discussed this in the past, and in fact in the early years we\nthought we didn't need fsync since the BSD file system was 8k at the\ntime.\n\nWhat we later realized is that we have no guarantee that the file system\nwill write to the device in the specified block size, and even it it\ndoes, the I/O layers between the OS and the device might not, since many\ndevices use 512 byte blocks or other sizes.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:46:05 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "\n\nOn 3/22/24 19:46, Bruce Momjian wrote:\n> On Thu, Mar 21, 2024 at 06:46:19PM +0100, Pankaj Raghav (Samsung) wrote:\n>> Hello, \n>>\n>> My team and I have been working on adding Large block size(LBS)\n>> support to XFS in Linux[1]. 
Once this feature lands upstream, we will be\n>> able to create XFS with FS block size > page size of the system on Linux.\n>> We also gave a talk about it in Linux Plumbers conference recently[2]\n>> for more context. The initial support is only for XFS but more FSs will\n>> follow later.\n>>\n>> On an x86_64 system, fs block size was limited to 4k, but traditionally\n>> Postgres uses 8k as its default internal page size. With LBS support,\n>> fs block size can be set to 8K, thereby matching the Postgres page size.\n>>\n>> If the file system block size == DB page size, then Postgres can have\n>> guarantees that a single DB page will be written as a single unit during\n>> kernel write back and not split.\n>>\n>> My knowledge of Postgres internals is limited, so I'm wondering if there\n>> are any optimizations or potential optimizations that Postgres could\n>> leverage once we have LBS support on Linux?\n> \n> We have discussed this in the past, and in fact in the early years we\n> thought we didn't need fsync since the BSD file system was 8k at the\n> time.\n> \n> What we later realized is that we have no guarantee that the file system\n> will write to the device in the specified block size, and even it it\n> does, the I/O layers between the OS and the device might not, since many\n> devices use 512 byte blocks or other sizes.\n> \n\nRight, but things change over time - current storage devices support\nmuch larger sectors (LBA format), usually 4K. And if you do I/O with\nthis size, it's usually atomic.\n\nAFAIK if you built Postgres with 4K pages, on a device with 4K LBA\nformat, that would not need full-page writes - we always do I/O in 4k\npages, and block layer does I/O (during writeback from page cache) with\nminimum guaranteed size = logical block size. 4K are great for OLTP\nsystems in general, it'd be even better if we didn't need to worry about\ntorn pages (but the tricky part is to be confident it's safe to disable\nthem on a particular system).\n\nI did watch the talk linked by Pankaj, and IIUC the promise of the LBS\npatches is that this benefit would extend would apply even to larger\npage sizes (= fs page size). Which right now you can't even mount, but\nthe patches allow that. So for example it would be possible to create an\nXFS filesystem with 8kB pages, and then we'd read/write 8kB pages as\nusual, and we'd know that the page cache always writes out either the\nwhole page or none of it. Which right now is not guaranteed to happen,\nit's possible to e.g. write the page as two 4K requests, even if all\nother things are set properly (drive has 4K logical/physical sectors).\n\nAt least that's my understanding ...\n\nPankaj, could you clarify what the guarantees provided by LBS are going\nto be? the talk uses wording like \"should be\" and \"hint\" in a couple\nplaces, and there's also stuff I'm not 100% familiar with.\n\nIf we create a filesystem with 8K blocks, and we only ever do writes\n(and reads) in 8K chunks (our default page size), what guarantees that\ngives us? What if the underlying device has LBA format with only 4K (or\nperhaps even just 512B), how would that affect the guarantees?\n\nThe other thing is - is there a reliable way to say when the guarantees\nactually apply? I mean, how would the administrator *know* it's safe to\nset full_page_writes=off, or even better how could we verify this when\nthe database starts (and complain if it's not safe to disable FPW)?\n\nIt's easy to e.g. 
take a backup on one filesystem and restore it on\nanother one, and forget those may have different block sizes etc. I'm\nnot sure it's possible in a 100% reliable way (tablespaces?).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 22 Mar 2024 22:31:11 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "On Fri, Mar 22, 2024 at 10:31:11PM +0100, Tomas Vondra wrote:\n> Right, but things change over time - current storage devices support\n> much larger sectors (LBA format), usually 4K. And if you do I/O with\n> this size, it's usually atomic.\n> \n> AFAIK if you built Postgres with 4K pages, on a device with 4K LBA\n> format, that would not need full-page writes - we always do I/O in 4k\n> pages, and block layer does I/O (during writeback from page cache) with\n> minimum guaranteed size = logical block size. 4K are great for OLTP\n> systems in general, it'd be even better if we didn't need to worry about\n> torn pages (but the tricky part is to be confident it's safe to disable\n> them on a particular system).\n\nYes, even if the file system is 8k, and the storage is 8k, we only know\nthat torn pages are impossible if the file system never overwrites\nexisting 8k pages, but writes new ones and then makes it active. I\nthink ZFS does that to handle snapshots.\n\n> The other thing is - is there a reliable way to say when the guarantees\n> actually apply? I mean, how would the administrator *know* it's safe to\n> set full_page_writes=off, or even better how could we verify this when\n> the database starts (and complain if it's not safe to disable FPW)?\n\nYes, this is quite hard to know. Our docs have:\n\n\thttps://www.postgresql.org/docs/current/wal-reliability.html\n\t\n\tAnother risk of data loss is posed by the disk platter write operations\n\tthemselves. Disk platters are divided into sectors, commonly 512 bytes\n\teach. Every physical read or write operation processes a whole sector.\n\tWhen a write request arrives at the drive, it might be for some multiple\n\tof 512 bytes (PostgreSQL typically writes 8192 bytes, or 16 sectors, at\n\ta time), and the process of writing could fail due to power loss at any\n\ttime, meaning some of the 512-byte sectors were written while others\n\twere not. To guard against such failures, PostgreSQL periodically writes\n\tfull page images to permanent WAL storage before modifying the actual\n\tpage on disk. By doing this, during crash recovery PostgreSQL can\n-->\trestore partially-written pages from WAL. If you have file-system\n-->\tsoftware that prevents partial page writes (e.g., ZFS), you can turn off\n-->\tthis page imaging by turning off the full_page_writes parameter.\n-->\tBattery-Backed Unit (BBU) disk controllers do not prevent partial page\n-->\twrites unless they guarantee that data is written to the BBU as full\n-->\t(8kB) pages.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 22 Mar 2024 22:41:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "On Fri, Mar 22, 2024 at 10:56 PM Pankaj Raghav (Samsung)\n<[email protected]> wrote:\n> My team and I have been working on adding Large block size(LBS)\n> support to XFS in Linux[1]. 
Once this feature lands upstream, we will be\n> able to create XFS with FS block size > page size of the system on Linux.\n> We also gave a talk about it in Linux Plumbers conference recently[2]\n> for more context. The initial support is only for XFS but more FSs will\n> follow later.\n\nVery cool!\n\n(I used XFS on IRIX in the 90s, and it had large blocks then, a\nfeature lost in the port to Linux AFAIK.)\n\n> On an x86_64 system, fs block size was limited to 4k, but traditionally\n> Postgres uses 8k as its default internal page size. With LBS support,\n> fs block size can be set to 8K, thereby matching the Postgres page size.\n>\n> If the file system block size == DB page size, then Postgres can have\n> guarantees that a single DB page will be written as a single unit during\n> kernel write back and not split.\n>\n> My knowledge of Postgres internals is limited, so I'm wondering if there\n> are any optimizations or potential optimizations that Postgres could\n> leverage once we have LBS support on Linux?\n\nFWIW here are a couple of things I wrote about our storage atomicity\nproblem, for non-PostgreSQL hackers who may not understand our project\njargon:\n\nhttps://wiki.postgresql.org/wiki/Full_page_writes\nhttps://freebsdfoundation.org/wp-content/uploads/2023/02/munro_ZFS.pdf\n\nThe short version is that we (and MySQL, via a different scheme with\ndifferent tradeoffs) could avoid writing all our stuff out twice if we\ncould count on atomic writes of a suitable size on power failure, so\nthe benefits are very large. As far as I know, there are two things\nwe need from the kernel and storage to do that on \"overwrite\"\nfilesystems like XFS:\n\n1. The disk must promise that its atomicity-on-power-failure is a\nmultiple of our block size -- something like NVMe AWUPF, right? My\ndevices seem to say 0 :-( Or I guess the filesystem has to\ncompensate, but then it's not exactly an overwrite filesystem\nanymore...\n\n2. 
The kernel must promise that there is no code path in either\nbuffered I/O or direct I/O that will arbitrarily chop up our 8KB (or\nother configured block size) writes on some smaller boundary, most\nlikely sector I guess, on their way to the device, as you were saying.\nNot just in happy cases, but even under memory pressure, if\ninterrupted, etc etc.\n\nSounds like you're working on problem #2 which is great news.\n\nI've been wondering for a while how a Unixoid kernel should report\nthese properties to userspace where it knows them, especially on\nnon-overwrite filesystems like ZFS where this sort of thing works\nalready, without stuff like AWUPF working the way one might hope.\nHere was one throw-away idea on the back of a napkin about that, for\nwhat little it's worth:\n\nhttps://wiki.postgresql.org/wiki/FreeBSD/AtomicIO\n\n\n", "msg_date": "Sat, 23 Mar 2024 17:53:20 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "Hi Tomas and Bruce,\n\n>>> My knowledge of Postgres internals is limited, so I'm wondering if there\n>>> are any optimizations or potential optimizations that Postgres could\n>>> leverage once we have LBS support on Linux?\n>>\n>> We have discussed this in the past, and in fact in the early years we\n>> thought we didn't need fsync since the BSD file system was 8k at the\n>> time.\n>>\n>> What we later realized is that we have no guarantee that the file system\n>> will write to the device in the specified block size, and even it it\n>> does, the I/O layers between the OS and the device might not, since many\n>> devices use 512 byte blocks or other sizes.\n>>\n> \n> Right, but things change over time - current storage devices support\n> much larger sectors (LBA format), usually 4K. And if you do I/O with\n> this size, it's usually atomic.\n> \n> AFAIK if you built Postgres with 4K pages, on a device with 4K LBA\n> format, that would not need full-page writes - we always do I/O in 4k\n> pages, and block layer does I/O (during writeback from page cache) with\n> minimum guaranteed size = logical block size. 4K are great for OLTP\n> systems in general, it'd be even better if we didn't need to worry about\n> torn pages (but the tricky part is to be confident it's safe to disable\n> them on a particular system).\n> \n> I did watch the talk linked by Pankaj, and IIUC the promise of the LBS\n> patches is that this benefit would extend would apply even to larger\n> page sizes (= fs page size). Which right now you can't even mount, but\n> the patches allow that. So for example it would be possible to create an\n> XFS filesystem with 8kB pages, and then we'd read/write 8kB pages as\n> usual, and we'd know that the page cache always writes out either the\n> whole page or none of it. Which right now is not guaranteed to happen,\n> it's possible to e.g. write the page as two 4K requests, even if all\n> other things are set properly (drive has 4K logical/physical sectors).\n> \n> At least that's my understanding ...\n>> Pankaj, could you clarify what the guarantees provided by LBS are going\n> to be? the talk uses wording like \"should be\" and \"hint\" in a couple\n> places, and there's also stuff I'm not 100% familiar with.\n> \n> If we create a filesystem with 8K blocks, and we only ever do writes\n> (and reads) in 8K chunks (our default page size), what guarantees that\n> gives us? 
What if the underlying device has LBA format with only 4K (or\n> perhaps even just 512B), how would that affect the guarantees?\n> \n\nYes, the whole FS block is managed as one unit (also on a physically contiguous\npage), so we send the whole fs block while performing writeback. This is not guaranteed\nwhen FS block size = 4k and the DB page size is 8k as it might be sent as two\ndifferent requests as you have indicated.\n\nThe LBA format will not affect the guarantee of sending the whole FS block without\nsplitting as long as the FS block size is less than the maximum IO transfer size*.\n\nBut another issue now is even though the host has done its job, the device might\nhave a smaller atomic guarantee, thereby making it not powerfail safe.\n\n> The other thing is - is there a reliable way to say when the guarantees\n> actually apply? I mean, how would the administrator *know* it's safe to\n> set full_page_writes=off, or even better how could we verify this when\n> the database starts (and complain if it's not safe to disable FPW)?\n> \n\nThis is an excellent question that needs a bit of community discussion to\nexpose a device agnostic value that userspace can trust.\n\nThere might be a talk this year at LSFMM about untorn writes[1] in buffered IO\npath. I will make sure to bring this question up.\n\nAt the moment, Linux exposes the physical blocksize by taking also atomic guarantees\ninto the picture, especially for NVMe it uses the NAWUPF and AWUPF while setting\nphysical blocksize (/sys/block/<dev>/queue/physical_block_size).\n\nA system admin could use value exposed by phy_bs as a hint to disable full_page_write=off.\nOf course this requires also the device to give atomic guarantees.\n\nThe most optimal would be DB page size == FS block size == Device atomic size.\n\n> It's easy to e.g. take a backup on one filesystem and restore it on\n> another one, and forget those may have different block sizes etc. I'm\n> not sure it's possible in a 100% reliable way (tablespaces?).\n> \n> \n> regards\n> \n\n[1] https://lore.kernel.org/linux-fsdevel/[email protected]/\n\n* A small caveat, I am most familiar with NVMe, so my answers might be based on\nmy experience in NVMe.\n\n\n", "msg_date": "Mon, 25 Mar 2024 14:53:56 +0100", "msg_from": "Pankaj Raghav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "Hi Thomas,\n\nOn 23/03/2024 05:53, Thomas Munro wrote:\n> On Fri, Mar 22, 2024 at 10:56 PM Pankaj Raghav (Samsung)\n> <[email protected]> wrote:\n>> My team and I have been working on adding Large block size(LBS)\n>> support to XFS in Linux[1]. Once this feature lands upstream, we will be\n>> able to create XFS with FS block size > page size of the system on Linux.\n>> We also gave a talk about it in Linux Plumbers conference recently[2]\n>> for more context. The initial support is only for XFS but more FSs will\n>> follow later.\n> \n> Very cool!\n> \n> (I used XFS on IRIX in the 90s, and it had large blocks then, a\n> feature lost in the port to Linux AFAIK.)\n> \n\nYes, I heard this also from the Maintainer of XFS that they had to drop\nthis functionality when they did the port. :)\n\n>> On an x86_64 system, fs block size was limited to 4k, but traditionally\n>> Postgres uses 8k as its default internal page size. 
With LBS support,\n>> fs block size can be set to 8K, thereby matching the Postgres page size.\n>>\n>> If the file system block size == DB page size, then Postgres can have\n>> guarantees that a single DB page will be written as a single unit during\n>> kernel write back and not split.\n>>\n>> My knowledge of Postgres internals is limited, so I'm wondering if there\n>> are any optimizations or potential optimizations that Postgres could\n>> leverage once we have LBS support on Linux?\n> \n> FWIW here are a couple of things I wrote about our storage atomicity\n> problem, for non-PostgreSQL hackers who may not understand our project\n> jargon:\n> \n> https://wiki.postgresql.org/wiki/Full_page_writes\n> https://freebsdfoundation.org/wp-content/uploads/2023/02/munro_ZFS.pdf\n> \nThis is very useful, thanks a lot.\n\n> The short version is that we (and MySQL, via a different scheme with\n> different tradeoffs) could avoid writing all our stuff out twice if we\n> could count on atomic writes of a suitable size on power failure, so\n> the benefits are very large. As far as I know, there are two things\n> we need from the kernel and storage to do that on \"overwrite\"\n> filesystems like XFS:\n> \n> 1. The disk must promise that its atomicity-on-power-failure is a\n> multiple of our block size -- something like NVMe AWUPF, right? My\n> devices seem to say 0 :-( Or I guess the filesystem has to\n> compensate, but then it's not exactly an overwrite filesystem\n> anymore...\n> \n\n0 means 1 logical block, which might be 4k in your case. Typically device\nvendors have to put extra hardware to guarantee bigger atomic block sizes.\n\n> 2. The kernel must promise that there is no code path in either\n> buffered I/O or direct I/O that will arbitrarily chop up our 8KB (or\n> other configured block size) writes on some smaller boundary, most\n> likely sector I guess, on their way to the device, as you were saying.\n> Not just in happy cases, but even under memory pressure, if\n> interrupted, etc etc.\n> \n> Sounds like you're working on problem #2 which is great news.\n> \n\nYes, you are spot on. :)\n\n> I've been wondering for a while how a Unixoid kernel should report\n> these properties to userspace where it knows them, especially on\n> non-overwrite filesystems like ZFS where this sort of thing works\n\nSo it looks like ZFS (or any other CoW filesystem that supports larger\nblock sizes) is doing what postgres will do anyway with FPW=on, making\nit safe to turn off FPW.\n\nOne question: Does ZFS do something like FUA request to force the device\nto clear the cache before it can update the node to point to the new page?\n\nIf it doesn't do it, there is no guarantee from device to update the data\natomically unless it has bigger atomic guarantees?\n\n> already, without stuff like AWUPF working the way one might hope.\n> Here was one throw-away idea on the back of a napkin about that, for\n> what little it's worth:\n> > https://wiki.postgresql.org/wiki/FreeBSD/AtomicIO\n\nAs I replied in the previous mail to Tomas, we might be having a talk\nabout Untorn writes[1] in LSFMM this year. I hope to bring up some of the\ndiscussions from here. 
Thanks!\n\n[1] https://lore.kernel.org/linux-fsdevel/[email protected]/\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:34:07 +0100", "msg_from": "Pankaj Raghav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "On 23/03/2024 03:41, Bruce Momjian wrote:\n> On Fri, Mar 22, 2024 at 10:31:11PM +0100, Tomas Vondra wrote:\n>> Right, but things change over time - current storage devices support\n>> much larger sectors (LBA format), usually 4K. And if you do I/O with\n>> this size, it's usually atomic.\n>>\n>> AFAIK if you built Postgres with 4K pages, on a device with 4K LBA\n>> format, that would not need full-page writes - we always do I/O in 4k\n>> pages, and block layer does I/O (during writeback from page cache) with\n>> minimum guaranteed size = logical block size. 4K are great for OLTP\n>> systems in general, it'd be even better if we didn't need to worry about\n>> torn pages (but the tricky part is to be confident it's safe to disable\n>> them on a particular system).\n> \n> Yes, even if the file system is 8k, and the storage is 8k, we only know\n> that torn pages are impossible if the file system never overwrites\n> existing 8k pages, but writes new ones and then makes it active. I\n> think ZFS does that to handle snapshots.\n> \n\nI think we can also avoid torn writes:\n- if filesystem's data path always writes in multiples of 8k (with alignment)\n- device supports 8k atomic writes.\n\nThen we might be able to push the responsibility to the device without having the overhead\nof a CoW FS or FPW=on. Of course, the performance here depends on the vendor specific\nimplementation of atomics.\n\nWe are trying to enable the former by adding LBS support to XFS in Linux.\n\n--\nPankaj\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:06:04 +0100", "msg_from": "Pankaj Raghav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "On Mon, Mar 25, 2024 at 02:53:56PM +0100, Pankaj Raghav wrote:\n> This is an excellent question that needs a bit of community discussion to\n> expose a device agnostic value that userspace can trust.\n> \n> There might be a talk this year at LSFMM about untorn writes[1] in buffered IO\n> path. I will make sure to bring this question up.\n> \n> At the moment, Linux exposes the physical blocksize by taking also atomic guarantees\n> into the picture, especially for NVMe it uses the NAWUPF and AWUPF while setting\n> physical blocksize (/sys/block/<dev>/queue/physical_block_size).\n> \n> A system admin could use value exposed by phy_bs as a hint to disable full_page_write=off.\n> Of course this requires also the device to give atomic guarantees.\n> \n> The most optimal would be DB page size == FS block size == Device atomic size.\n\nOne other thing I remember is that some people modified the ZFS file\nsystem parameters enough that they made Postgres non-durable and\ncorrupted their database. 
This is a very hard thing to get right\nbecause the user has very little feedback when they break things.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:19:00 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "On Tue, Mar 26, 2024 at 3:34 AM Pankaj Raghav <[email protected]> wrote:\n> One question: Does ZFS do something like FUA request to force the device\n> to clear the cache before it can update the node to point to the new page?\n>\n> If it doesn't do it, there is no guarantee from device to update the data\n> atomically unless it has bigger atomic guarantees?\n\nIt flushes the whole disk write cache (unless you turn that off).\nAFAIK it can't use use FUA instead yet (it knows some things about it,\nthere are mentions under the Linux-specific parts of the tree but that\nmay be more to do with understanding and implementing it when\nexporting a virtual block device, or something like that (?), but I\ndon't believe it knows how to use it for its own underlying log or\nordering). FUA would clearly be better, no waiting for random extra\ndata to be flushed.\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:31:50 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" }, { "msg_contents": "Greetings,\n\n* Pankaj Raghav ([email protected]) wrote:\n> On 23/03/2024 05:53, Thomas Munro wrote:\n> > On Fri, Mar 22, 2024 at 10:56 PM Pankaj Raghav (Samsung)\n> > <[email protected]> wrote:\n> >> My team and I have been working on adding Large block size(LBS)\n> >> support to XFS in Linux[1]. Once this feature lands upstream, we will be\n> >> able to create XFS with FS block size > page size of the system on Linux.\n> >> We also gave a talk about it in Linux Plumbers conference recently[2]\n> >> for more context. The initial support is only for XFS but more FSs will\n> >> follow later.\n> > \n> > Very cool!\n\nYes, this is very cool sounding and could be a real difference for PG.\n\n> > (I used XFS on IRIX in the 90s, and it had large blocks then, a\n> > feature lost in the port to Linux AFAIK.)\n> \n> Yes, I heard this also from the Maintainer of XFS that they had to drop\n> this functionality when they did the port. :)\n\nI also recall the days of XFS on IRIX... Many moons ago.\n\n> > The short version is that we (and MySQL, via a different scheme with\n> > different tradeoffs) could avoid writing all our stuff out twice if we\n> > could count on atomic writes of a suitable size on power failure, so\n> > the benefits are very large. As far as I know, there are two things\n> > we need from the kernel and storage to do that on \"overwrite\"\n> > filesystems like XFS:\n> > \n> > 1. The disk must promise that its atomicity-on-power-failure is a\n> > multiple of our block size -- something like NVMe AWUPF, right? My\n> > devices seem to say 0 :-( Or I guess the filesystem has to\n> > compensate, but then it's not exactly an overwrite filesystem\n> > anymore...\n> \n> 0 means 1 logical block, which might be 4k in your case. 
Typically device\n> vendors have to put extra hardware to guarantee bigger atomic block sizes.\n\nIf I'm following correctly, this would mean that PG with FPW=off\n(assuming everything else works) would be safe on more systems if PG\nsupported a 4K block size than if PG only supports 8K blocks, right?\n\nThere's been discussion and even some patches posted around the idea of\nhaving run-time support in PG for different block sizes. Currently,\nit's a compile-time option with the default being 8K, meaning that's the\nonly option on a huge number of the deployed PG environments out there.\nMoving it to run-time has some challenges and there's concerns about the\nperformance ... but if it meant we could run safely with FPW=off, that's\na pretty big deal. On the other hand, if the expectation is that\nbasically everything will support atomic 8K, then we might be able to\nsimply keep that and not deal with supporting different page sizes at\nrun-time (of course, this is only one of the considerations in play, but\nit could be particularly key, if I'm following correctly).\n\nAppreciate any insights you can share on this.\n\nThanks!\n\nStephen", "msg_date": "Wed, 27 Mar 2024 18:13:00 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large block sizes support in Linux" } ]
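The thread above keeps circling around three numbers that would all have to line up before anyone could responsibly set full_page_writes=off: the device's logical block size, the device's physical/atomic block size hint (for NVMe derived from NAWUPF/AWUPF, as noted above), and the filesystem block size, with the PostgreSQL page size matching them. As a purely illustrative aid -- not part of PostgreSQL, and not a power-failure guarantee, since the kernel-reported physical size is only the hint discussed above -- here is a minimal Linux-only C sketch that prints those values for a block device and a path on the filesystem of interest:

```c
/*
 * Illustrative sketch only: print the block sizes discussed above for a
 * block device and a mounted filesystem.  Linux-specific; requires read
 * permission on the device node.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/statvfs.h>
#include <linux/fs.h>			/* BLKSSZGET, BLKPBSZGET */

int
main(int argc, char **argv)
{
	int			fd;
	int			logical = 0;
	unsigned int physical = 0;
	struct statvfs svfs;

	if (argc != 3)
	{
		fprintf(stderr, "usage: %s /dev/DEVICE /path/on/filesystem\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0 ||
		ioctl(fd, BLKSSZGET, &logical) != 0 ||
		ioctl(fd, BLKPBSZGET, &physical) != 0)
	{
		perror("querying block device");
		return 1;
	}
	close(fd);

	if (statvfs(argv[2], &svfs) != 0)
	{
		perror("statvfs");
		return 1;
	}

	printf("device logical block size:  %d\n", logical);
	printf("device physical block size: %u\n", physical);
	printf("filesystem block size:      %lu\n", (unsigned long) svfs.f_frsize);
	return 0;
}
```

If the filesystem block size comes out as 8192 and the device sizes are at least that large, you are in the regime the LBS patches aim at; whether the device actually honours that size atomically across a power failure is exactly the open question in the thread.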
[ { "msg_contents": "Hello postgres hackers:\nI recently notice these sql can lead to a assertion error in pg14 and older version. Here is an example:\npostgres=> CREATE TABLE t1 (a int); CREATE TABLE postgres=> INSERT INTO t1 VALUES (1); INSERT 0 1 postgres=> SELECT EXISTS ( SELECT * FROM t1 GROUP BY GROUPING SETS ((a), generate_series (1, 262144)) ) AS result; server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.Here is breaktrace (release v14.11)\n#0 0x00007fdc4d58d387 in raise () from /lib64/libc.so.6 #1 0x00007fdc4d58ea78 in abort () from /lib64/libc.so.6 #2 0x00000000009479aa in ExceptionalCondition (conditionName=conditionName@entry=0xb27bf0 \"!lt->writing || lt->buffer == NULL\", errorType=errorType@entry=0x99a00b \"FailedAssertion\", fileName=fileName@entry=0xb27989 \"logtape.c\", lineNumber=lineNumber@en try=1279) at assert.c:69 #3 0x000000000097715b in LogicalTapeSetBlocks (lts=<optimized out>) at logtape.c:1279 #4 0x00000000009793ab in tuplesort_free (state=0x1ee30e0) at tuplesort.c:1402 #5 0x000000000097f2b9 in tuplesort_end (state=0x1ee30e0) at tuplesort.c:1468 #6 0x00000000006cd3a1 in ExecEndAgg (node=0x1ecf2c8) at nodeAgg.c:4401 #7 0x00000000006be3b1 in ExecEndNode (node=<optimized out>) at execProcnode.c:733 #8 0x00000000006b8514 in ExecEndPlan (estate=0x1ecf050, planstate=<optimized out>) at execMain.c:1416 #9 standard_ExecutorEnd (queryDesc=0x1e18af0) at execMain.c:493 #10 0x0000000000673779 in PortalCleanup (portal=<optimized out>) at portalcmds.c:299 #11 0x0000000000972b34 in PortalDrop (portal=0x1e586c0, isTopCommit=<optimized out>) at portalmem.c:502 #12 0x0000000000834140 in exec_simple_query (query_string=0x1df5020 \"SELECT\\nEXISTS (\\nSELECT\\n* FROM t1\\nGROUP BY\\nGROUPING SETS ((a), generate_series (1, 26214400))\\n) AS result;\") at postgres.c:1223 #13 0x0000000000835e37 in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7fff746c4470, dbname=<optimized out>, username=<optimized out>) at postgres.c:4513 #14 0x00000000007acdcb in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4540 #15 BackendStartup (port=<optimized out>) at postmaster.c:4262 #16 ServerLoop () at postmaster.c:1748 #17 0x00000000007adc45 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1defb50) at postmaster.c:1420 #18 0x000000000050a544 in main (argc=3, argv=0x1defb50) at main.c:209\nThe reason could be that, there are mutiple phase in grouping sets in this exists sublink. In executing phase, once function ExecAgg return a valid tupleslot, ExecSubPlan won't call exector_node repeatedly, ineed there is no needed. It causes unexpected status in sort_out and sort_in, so they don't pass assertion checking in ExecEndAgg.\nI haven’t thought of a good solution yet. Only in my opinion, it is unreasonable to add processing for other specific types of execution nodes in function ExecSubplan, but there is no way to know in ExecAgg whether it is executed again.\nBest regards,\nTinghai Zhao", "msg_date": "Fri, 22 Mar 2024 16:54:49 +0800", "msg_from": "\"=?UTF-8?B?6LW15bqt5rW3KOW6reeroCk=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?c3VibGluayBbZXhpc3RzIChzZWxlY3QgeHh4IGdyb3VwIGJ5IGdyb3VwaW5nIHNldHMgKCkp?=\n =?UTF-8?B?XSBjYXVzZXMgYW4gYXNzZXJ0aW9uIGVycm9y?=" }, { "msg_contents": "\"=?UTF-8?B?6LW15bqt5rW3KOW6reeroCk=?=\" <[email protected]> writes:\n> I recently notice these sql can lead to a assertion error in pg14 and older version. 
Here is an example:\n> postgres=> CREATE TABLE t1 (a int);\n> postgres=> INSERT INTO t1 VALUES (1);\n> postgres=> SELECT EXISTS ( SELECT * FROM t1 GROUP BY GROUPING SETS ((a), generate_series (1, 262144)) ) AS result;\n> server closed the connection unexpectedly\n\nIn current v14 this produces:\nTRAP: FailedAssertion(\"!lt->writing || lt->buffer == NULL\", File: \"logtape.c\", Line: 1279, PID: 928622)\n\nThanks for the report. I did some bisecting and found that the crash\nappears at Jeff's commit c8aeaf3ab (which introduced this assertion)\nand disappears at Heikki's c4649cce3 (which removed it). So I would\nsay that the problem is \"this assertion is wrong\", and we should fix\nthe problem by fixing the assertion, not by hacking around in distant\ncalling code. On the whole, since this code has been dead for\nseveral versions, I'd be inclined to just remove the assertion.\nI think it's quite risky because of the possibility that we reach\nthis function during post-transaction-abort cleanup, when there's\nno very good reason to assume that the tapeset's been closed down\ncleanly. (To be clear, that's not what's happening in the given\ntest case ... but I fear that it could.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:28:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sublink [exists (select xxx group by grouping sets ())] causes an\n assertion error" }, { "msg_contents": "On Fri, 2024-03-22 at 12:28 -0400, Tom Lane wrote:\n> Thanks for the report.  I did some bisecting and found that the crash\n> appears at Jeff's commit c8aeaf3ab (which introduced this assertion)\n> and disappears at Heikki's c4649cce3 (which removed it).  So I would\n> say that the problem is \"this assertion is wrong\", and we should fix\n> the problem by fixing the assertion, not by hacking around in distant\n> calling code.  On the whole, since this code has been dead for\n> several versions, I'd be inclined to just remove the assertion.\n\nc4649cce3 didn't add additional calls to LogicalTapeSetBlocks(), so I'm\nnot sure if the removal of the Assert was related to his changes, or if\nhe just realized the assertion was wrong and removed it along the way?\n\nAlso, without the assertion, the word \"should\" in the comment is\nambiguous (does it mean \"must not\" or something else), and it still\nexists in master. Do we care about the calculation being wrong if\nthere's an unfinished write? If not, I'll just clarify that the\ncalculation doesn't take into account still-buffered data. If we do\ncare, then something might need to be fixed.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 18:22:21 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sublink [exists (select xxx group by grouping sets ())] causes\n an assertion error" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Fri, 2024-03-22 at 12:28 -0400, Tom Lane wrote:\n>> Thanks for the report.  I did some bisecting and found that the crash\n>> appears at Jeff's commit c8aeaf3ab (which introduced this assertion)\n>> and disappears at Heikki's c4649cce3 (which removed it).  So I would\n>> say that the problem is \"this assertion is wrong\", and we should fix\n>> the problem by fixing the assertion, not by hacking around in distant\n>> calling code.  
On the whole, since this code has been dead for\n>> several versions, I'd be inclined to just remove the assertion.\n\n> c4649cce3 didn't add additional calls to LogicalTapeSetBlocks(), so I'm\n> not sure if the removal of the Assert was related to his changes, or if\n> he just realized the assertion was wrong and removed it along the way?\n\nMy guess is he just zapped it because the code block was dependent\non the \"tape\" abstraction which was going away. Heikki?\n\n> Also, without the assertion, the word \"should\" in the comment is\n> ambiguous (does it mean \"must not\" or something else), and it still\n> exists in master. Do we care about the calculation being wrong if\n> there's an unfinished write?\n\nI think it's actually fine. The callers of LogicalTapeSetBlocks only\nuse the number for statistics or trace reporting, so precision isn't\ncritical in the first place, but I think what they care about is the\namount of data that's really been written out to files. The\ncomment should be clarified, for sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 22:17:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sublink [exists (select xxx group by grouping sets ())] causes an\n assertion error" } ]
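For readers who don't have logtape.c in their heads: the conclusion of the thread is simply that the per-tape assertion seen in the backtrace ("!lt->writing || lt->buffer == NULL") is not a valid invariant when a sort is torn down early, for example under an EXISTS sublink that stopped pulling rows, and that the block count only has to be good enough for statistics. The shape of the resulting fix is roughly the sketch below; it is written from the thread, not copied from the committed change, and the field names are approximations of the pre-v15 logtape code.

```c
/*
 * Obtain total disk space currently used by a LogicalTapeSet, in blocks.
 *
 * Callers use this only for statistics/trace reporting, so we do not insist
 * that every tape has been cleanly frozen or rewound: a sort being discarded
 * early may still hold an unflushed write buffer, and any such still-buffered
 * data is simply not reflected in the result.
 */
long
LogicalTapeSetBlocks(LogicalTapeSet *lts)
{
	/* The old per-tape Assert(!lt->writing || lt->buffer == NULL) is gone. */
	return lts->nBlocksWritten - lts->nHoleBlocks;
}
```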
[ { "msg_contents": "In my queries I often need to do MIN/MAX for tuples, for example:\n\n SELECT MAX(row(year, month))\n FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n\nThis query throws:\n\n ERROR: function max(record) does not exist\n\nIn this case you can replace it with `MAX((year||'-'||month||'-1')::date)`.\nHowever in my case I have an event table with `event_time` and `text`\ncolumns, I'm grouping that table by some key and want to have the text for\nthe newest event. I would do `MAX(ROW(event_time, text)).text`. Workarounds\nfor this are clumsy, e.g. with a subquery with LIMIT 1.\n\nThe lack of this feature is kind of unexpected, because the `>` operator or\n`GREATEST` function are defined for records:\n\n SELECT\n GREATEST((2025, 1), (2024, 2)),\n (2025, 1) > (2024, 2)\n\nWas this ever discussed or is there something preventing the implementation?\n\nViliam\n\nIn my queries I often need to do MIN/MAX for tuples, for example:  SELECT MAX(row(year, month))   FROM (VALUES(2025, 1), (2024,2)) x(year, month);This query throws:    ERROR: function max(record) does not existIn this case you can replace it with `MAX((year||'-'||month||'-1')::date)`. However in my case I have an event table\n with `event_time` and `text` columns, I'm grouping that table by \nsome key and want to have the text for the newest event. I would do \n`MAX(ROW(event_time, text)).text`. Workarounds for this are clumsy, e.g. with a subquery with LIMIT 1.\n\nThe lack of this feature is kind of unexpected, because the `>` operator or `GREATEST` function are defined for records:    SELECT         GREATEST((2025, 1), (2024, 2)),         (2025, 1) > (2024, 2) Was this ever discussed or is there something preventing the implementation?Viliam", "msg_date": "Fri, 22 Mar 2024 12:26:27 +0100", "msg_from": "=?UTF-8?Q?Viliam_=C4=8Eurina?= <[email protected]>", "msg_from_op": true, "msg_subject": "MIN/MAX functions for a record" }, { "msg_contents": "Hi,\n\n> In my queries I often need to do MIN/MAX for tuples, for example:\n>\n> SELECT MAX(row(year, month))\n> FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n>\n> This query throws:\n>\n> ERROR: function max(record) does not exist\n>\n> In this case you can replace it with `MAX((year||'-'||month||'-1')::date)`. However in my case I have an event table with `event_time` and `text` columns, I'm grouping that table by some key and want to have the text for the newest event. I would do `MAX(ROW(event_time, text)).text`. Workarounds for this are clumsy, e.g. with a subquery with LIMIT 1.\n>\n> The lack of this feature is kind of unexpected, because the `>` operator or `GREATEST` function are defined for records:\n>\n> SELECT\n> GREATEST((2025, 1), (2024, 2)),\n> (2025, 1) > (2024, 2)\n>\n> Was this ever discussed or is there something preventing the implementation?\n\nI believe it would be challenging to implement max(record) that would\nwork reasonably well in a general case.\n\nWhat if, for instance, one of the columns is JOSNB or XML? Not to\nmention the fact that Postgres supports user-defined types which don't\nnecessarily have a reasonable order. Take a point in a 2D or 3D space\nas an example. On top of that I doubt that the proposed query will\nperform well since I don't see how it could benefit from using\nindexes. 
I don't claim that this is necessarily true in your case but\ngenerally one could argue that the wrong schema is used here and\ninstead of (year, month) pair a table should have a date/timestamp(tz)\ncolumn.\n\nPersonally I would choose format() function [1] in cases like this in\norder to play it safe. Assuming of course that the table is small and\nthe query is not executed often.\n\n[1]: https://www.postgresql.org/docs/current/functions-string.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Mar 2024 18:02:29 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> In my queries I often need to do MIN/MAX for tuples, for example:\n>> SELECT MAX(row(year, month))\n>> FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n>> This query throws:\n>> ERROR: function max(record) does not exist\n>> Was this ever discussed or is there something preventing the implementation?\n\n> I believe it would be challenging to implement max(record) that would\n> work reasonably well in a general case.\n\nAs long as you define it as \"works the same way record comparison\ndoes\", ie base it on record_cmp(), I don't think it would be much\nmore than a finger exercise [*]. And why would you want it to act\nany differently from record_cmp()? Those semantics have been\nestablished for a long time.\n\n\t\t\tregards, tom lane\n\n[*] Although conceivably there are some challenges in getting\nrecord_cmp's caching logic to work in the context of an aggregate.\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:12:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Exactly Tom, I see no fundamental problem for it not to be implemented,\nsince comparison operator is already implemented. In fact, MIN/MAX should\nwork for all types for which comparison operator is defined.\n\nRegarding index support, there should not be an issue if the index is\ndefined for the record (e.g. `CREATE INDEX ON my_table(ROW(field_a,\nfield_b))`). However such indexes seem not to be supported. Whether a\ncomposite index is compatible with a record created on the indexed fields\nin every edge case I'm not sure...\n\nAlexander, rewriting the year-month example is easy, but how would you\nrewrite this query?\n\nCREATE TABLE events(event_time TIMESTAMP, message VARCHAR, user_id VARCHAR);\n\nYou want a newest message for each user. It's easy with MAX(record):\n\nSELECT user_id, MAX(ROW(event_time, message)).message\nFROM events\nGROUP BY user_id;\n\nOne option is to rewrite to a subquery with LIMIT 1\n\nSELECT user_id, (SELECT message FROM events e2 WHERE e1.user_id=e2.user_id\nORDER BY event_time DESC LIMIT 1)\nFROM events e1\nGROUP BY user_id;\n\nIf your query already has multiple levels of grouping, multiple joins,\nUNIONs etc., it gets much more complex. 
I also wonder if the optimizer\nwould pick the same plan as it would be if the MAX(record) is supported.\n\nViliam\n\nOn Fri, Mar 22, 2024 at 4:12 PM Tom Lane <[email protected]> wrote:\n\n> Aleksander Alekseev <[email protected]> writes:\n> >> In my queries I often need to do MIN/MAX for tuples, for example:\n> >> SELECT MAX(row(year, month))\n> >> FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n> >> This query throws:\n> >> ERROR: function max(record) does not exist\n> >> Was this ever discussed or is there something preventing the\n> implementation?\n>\n> > I believe it would be challenging to implement max(record) that would\n> > work reasonably well in a general case.\n>\n> As long as you define it as \"works the same way record comparison\n> does\", ie base it on record_cmp(), I don't think it would be much\n> more than a finger exercise [*]. And why would you want it to act\n> any differently from record_cmp()? Those semantics have been\n> established for a long time.\n>\n> regards, tom lane\n>\n> [*] Although conceivably there are some challenges in getting\n> record_cmp's caching logic to work in the context of an aggregate.\n>\n\nExactly Tom, I see no fundamental problem for it not to be implemented, since comparison operator is already implemented. In fact, MIN/MAX should work for all types for which comparison operator is defined.Regarding index support, there should not be an issue if the index is defined for the record (e.g. `CREATE INDEX ON my_table(ROW(field_a, field_b))`). However such indexes seem not to be supported. Whether a composite index is compatible with a record created on the indexed fields in every edge case I'm not sure...Alexander, rewriting the year-month example is easy, but how would you rewrite this query?CREATE TABLE events(event_time TIMESTAMP, message VARCHAR, user_id VARCHAR);You want a newest message for each user. It's easy with MAX(record):SELECT user_id, MAX(ROW(event_time, message)).messageFROM eventsGROUP BY user_id;One option is to rewrite to a subquery with LIMIT 1SELECT user_id, (SELECT message FROM events e2 WHERE e1.user_id=e2.user_id ORDER BY event_time DESC LIMIT 1)FROM events e1GROUP BY user_id;If your query already has multiple levels of grouping, multiple joins, UNIONs etc., it gets much more complex. I also wonder if the optimizer would pick the same plan as it would be if the MAX(record) is supported.ViliamOn Fri, Mar 22, 2024 at 4:12 PM Tom Lane <[email protected]> wrote:Aleksander Alekseev <[email protected]> writes:\n>> In my queries I often need to do MIN/MAX for tuples, for example:\n>> SELECT MAX(row(year, month))\n>> FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n>> This query throws:\n>> ERROR: function max(record) does not exist\n>> Was this ever discussed or is there something preventing the implementation?\n\n> I believe it would be challenging to implement max(record) that would\n> work reasonably well in a general case.\n\nAs long as you define it as \"works the same way record comparison\ndoes\", ie base it on record_cmp(), I don't think it would be much\nmore than a finger exercise [*].  And why would you want it to act\nany differently from record_cmp()?  
Those semantics have been\nestablished for a long time.\n\n                        regards, tom lane\n\n[*] Although conceivably there are some challenges in getting\nrecord_cmp's caching logic to work in the context of an aggregate.", "msg_date": "Fri, 22 Mar 2024 16:50:01 +0100", "msg_from": "=?UTF-8?Q?Viliam_=C4=8Eurina?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Hi,\n\n> Exactly Tom, I see no fundamental problem for it not to be implemented, since comparison operator is already implemented. In fact, MIN/MAX should work for all types for which comparison operator is defined.\n\nOn second thought, this should work reasonably well.\n\nPFA a WIP patch. At this point it implements only MAX(record), no MIN, no tests:\n\n```\n=# SELECT MAX(row(year, month)) FROM (VALUES(2025, 1), (2024,2)) x(year, month);\n max\n----------\n (2025,1)\n```\n\nOne thing I'm not 100% sure of is whether record_larger() should make\na copy of its arguments or the current implementation is safe.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Sat, 23 Mar 2024 13:59:17 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> One thing I'm not 100% sure of is whether record_larger() should make\n> a copy of its arguments or the current implementation is safe.\n\nI don't see any copying happening in, say, text_larger or\nnumeric_larger, so this shouldn't need to either.\n\nPersonally I'd write \"record_cmp(fcinfo) > 0\" rather than indirecting\nthrough record_gt. The way you have it is not strictly correct anyhow:\nyou're cheating by not using DirectFunctionCall.\n\nAlso, given that you're passing the fcinfo, there's no need\nto extract the arguments from it before that call. So it\nseems to me that code like\n\n\tif (record_cmp(fcinfo) > 0)\n\t\tPG_RETURN_HEAPTUPLEHEADER(PG_GETARG_HEAPTUPLEHEADER(0));\n\telse\n\t\tPG_RETURN_HEAPTUPLEHEADER(PG_GETARG_HEAPTUPLEHEADER(1));\n\nshould do, and possibly save one useless detoast step. Or you could\ndo\n\n\tif (record_cmp(fcinfo) > 0)\n\t\tPG_RETURN_DATUM(PG_GETARG_DATUM(0));\n\telse\n\t\tPG_RETURN_DATUM(PG_GETARG_DATUM(1));\n\nbecause really there's no point in detoasting at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 23 Mar 2024 11:04:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Hi,\n\n> I don't see any copying happening in, say, text_larger or\n> numeric_larger, so this shouldn't need to either.\n>\n> Personally I'd write \"record_cmp(fcinfo) > 0\" rather than indirecting\n> through record_gt. The way you have it is not strictly correct anyhow:\n> you're cheating by not using DirectFunctionCall.\n>\n> Also, given that you're passing the fcinfo, there's no need\n> to extract the arguments from it before that call. So it\n> seems to me that code like\n>\n> if (record_cmp(fcinfo) > 0)\n> PG_RETURN_HEAPTUPLEHEADER(PG_GETARG_HEAPTUPLEHEADER(0));\n> else\n> PG_RETURN_HEAPTUPLEHEADER(PG_GETARG_HEAPTUPLEHEADER(1));\n>\n> should do, and possibly save one useless detoast step. Or you could\n> do\n>\n> if (record_cmp(fcinfo) > 0)\n> PG_RETURN_DATUM(PG_GETARG_DATUM(0));\n> else\n> PG_RETURN_DATUM(PG_GETARG_DATUM(1));\n>\n> because really there's no point in detoasting at all.\n\nMany thanks. Here is the corrected patch. 
Now it also includes MIN()\nsupport and tests.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 25 Mar 2024 13:38:55 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Hi,\n\n> Many thanks. Here is the corrected patch. Now it also includes MIN()\n> support and tests.\n\nMichael Paquier (cc:'ed) commented offlist that I forgot to change the\ndocumentation.\n\nHere is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 8 Jul 2024 12:20:30 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "On Mon, Jul 08, 2024 at 12:20:30PM +0300, Aleksander Alekseev wrote:\n> Here is the corrected patch.\n\n313f87a17155 is one example of a similar change with pg_lsn, with four\nentries added to pg_proc and two to pg_aggregate. That's what this\npatch is doing from what I can see.\n\n- and arrays of any of these types.\n+ and also arrays and records of any of these types.\n\nThis update of the docs is incorrect, no? Records could include much\nmore types than the ones currently supported for min()/max().\n\nI am not sure to get the concerns of upthread regarding the type\ncaching in the context of an aggregate, which is the business with\nlookup_type_cache(), especially since there is a btree operator\nrelying on record_cmp(). Tom, what were your concerns here?\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 09:44:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> I am not sure to get the concerns of upthread regarding the type\n> caching in the context of an aggregate, which is the business with\n> lookup_type_cache(), especially since there is a btree operator\n> relying on record_cmp(). Tom, what were your concerns here?\n\nDon't recall right at this instant, but I've put myself down as\nreviewer of this patch to remind me to look at it in the next\nday or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 20:54:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "On Mon, Jul 08, 2024 at 08:54:31PM -0400, Tom Lane wrote:\n> Don't recall right at this instant, but I've put myself down as\n> reviewer of this patch to remind me to look at it in the next\n> day or two.\n\nThanks for the update. WFM.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 09:59:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Jul 08, 2024 at 12:20:30PM +0300, Aleksander Alekseev wrote:\n> - and arrays of any of these types.\n> + and also arrays and records of any of these types.\n\n> This update of the docs is incorrect, no? Records could include much\n> more types than the ones currently supported for min()/max().\n\nYeah, actually the contained data types could be anything with\nbtree sort support. This is true for arrays too, so the text\nwas wrong already. 
I changed it to\n\n+ and also arrays and composite types containing sortable data types.\n\n(Using \"composite type\" not \"record\" is a judgment call here, but\nI don't see anyplace else in func.sgml preferring \"record\" for this\nmeaning.)\n\n> I am not sure to get the concerns of upthread regarding the type\n> caching in the context of an aggregate, which is the business with\n> lookup_type_cache(), especially since there is a btree operator\n> relying on record_cmp(). Tom, what were your concerns here?\n\nRe-reading, I was just mentioning that as something to check,\nnot a major problem. It isn't, because array min/max are already\nrelying on the ability to use fcinfo->flinfo->fn_extra as cache space\nin an aggregate. (Indeed, the array aggregate code is almost\nidentical to where we ended up.)\n\nAFAICS this is good to go. I made a couple of tiny cosmetic\nadjustments, added a catversion bump, and pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2024 12:00:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIN/MAX functions for a record" } ]
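Putting the thread together, the C support functions behind the new aggregates end up being tiny -- essentially the shape Tom suggested above, reusing rowtypes.c's internal record_cmp(fcinfo) helper (which caches its per-column comparison data in flinfo->fn_extra, which is what addresses the caching worry in the footnote) and handing back one of the input datums untouched. The sketch below is reconstructed from the messages above rather than copied from the committed patch:

```c
Datum
record_larger(PG_FUNCTION_ARGS)
{
	/* record_cmp() compares the two record arguments and caches in fn_extra */
	if (record_cmp(fcinfo) > 0)
		PG_RETURN_DATUM(PG_GETARG_DATUM(0));
	else
		PG_RETURN_DATUM(PG_GETARG_DATUM(1));
}

Datum
record_smaller(PG_FUNCTION_ARGS)
{
	if (record_cmp(fcinfo) < 0)
		PG_RETURN_DATUM(PG_GETARG_DATUM(0));
	else
		PG_RETURN_DATUM(PG_GETARG_DATUM(1));
}
```

With the corresponding pg_proc and pg_aggregate entries in place, the query from the first message simply works: SELECT MAX(row(year, month)) FROM (VALUES (2025, 1), (2024, 2)) x(year, month) returns (2025,1), as shown in the WIP-patch message above.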
[ { "msg_contents": "Hi,\n\nWhile working on [1], I noticed $SUBJECT: WaitLatchOrSocket in back\nbranches is ignoring the possibility of failing partway through, too.\nI added a PG_FAINALLY block to that function, like commit 555276f85.\nPatch attached.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15DF6EE7O6hTLbe5-fHvPDwEx9vm-BOCN3dsKOjZCo7bw%40mail.gmail.com", "msg_date": "Fri, 22 Mar 2024 21:15:45 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Another WaitEventSet resource leakage in back branches" }, { "msg_contents": "On Fri, Mar 22, 2024 at 9:15 PM Etsuro Fujita <[email protected]> wrote:\n> While working on [1], I noticed $SUBJECT: WaitLatchOrSocket in back\n> branches is ignoring the possibility of failing partway through, too.\n> I added a PG_FAINALLY block to that function, like commit 555276f85.\n> Patch attached.\n\nI noticed that PG_FAINALLY was added in v13. I created a separate\npatch for v12 using PG_CATCH instead. Patch attached. I am attaching\nthe previous patch for later versions as well.\n\nI am planning to back-patch these next week.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 5 Apr 2024 19:55:16 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another WaitEventSet resource leakage in back branches" }, { "msg_contents": "On Fri, Apr 5, 2024 at 7:55 PM Etsuro Fujita <[email protected]> wrote:\n> I am planning to back-patch these next week.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 11 Apr 2024 19:41:04 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another WaitEventSet resource leakage in back branches" }, { "msg_contents": "Hi,\n\nOn 2024-03-22 21:15:45 +0900, Etsuro Fujita wrote:\n> While working on [1], I noticed $SUBJECT: WaitLatchOrSocket in back\n> branches is ignoring the possibility of failing partway through, too.\n> I added a PG_FAINALLY block to that function, like commit 555276f85.\n> Patch attached.\n\nCould you expand a bit on the concrete scenario you're worried about here?\nPG_TRY/CATCH aren't free, so adding something like this to a quite common\npath, in the back branches, without a concrete analysis as to why it's needed,\nseems a bit scary.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 09:29:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another WaitEventSet resource leakage in back branches" }, { "msg_contents": "Hi Andres,\n\nOn Fri, Apr 12, 2024 at 1:29 AM Andres Freund <[email protected]> wrote:\n> On 2024-03-22 21:15:45 +0900, Etsuro Fujita wrote:\n> > While working on [1], I noticed $SUBJECT: WaitLatchOrSocket in back\n> > branches is ignoring the possibility of failing partway through, too.\n> > I added a PG_FAINALLY block to that function, like commit 555276f85.\n> > Patch attached.\n>\n> Could you expand a bit on the concrete scenario you're worried about here?\n> PG_TRY/CATCH aren't free, so adding something like this to a quite common\n> path, in the back branches, without a concrete analysis as to why it's needed,\n> seems a bit scary.\n\nWhat I am worried about is that system calls used in\nWaitLatchOrSocket, like epoll_ctl, might fail, throwing an error\n(epoll_ctl might fail due to eg, ENOMEM or ENOSPC). 
The probability\nof such failures would be pretty low, but not zero.\n\nThis causes more problems than it solves?\n\nThanks for the comment!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 12 Apr 2024 20:21:24 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Another WaitEventSet resource leakage in back branches" } ]
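To make the leak scenario concrete: WaitLatchOrSocket in the back branches builds a throwaway WaitEventSet and then issues several AddWaitEventToSet() calls, each of which can elog(ERROR) if the underlying registration (epoll_ctl() and friends) fails, at which point the already-created set is leaked. The committed fix wraps this in the same pattern as 555276f85; the fragment below is only a schematic of that shape, not the actual patch -- the real function handles more event types, and the v12 variant uses PG_CATCH/PG_RE_THROW because PG_FINALLY only exists from v13 on.

```c
	WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, 3);

	PG_TRY();
	{
		/* Each of these may elog(ERROR), e.g. if epoll_ctl() fails. */
		AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET, latch, NULL);
		if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
			AddWaitEventToSet(set,
							  wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE),
							  sock, NULL, NULL);

		/* ... the existing WaitEventSetWait() loop runs here, unchanged ... */
	}
	PG_FINALLY();
	{
		/* Reached on both normal exit and error, so the set is never leaked. */
		FreeWaitEventSet(set);
	}
	PG_END_TRY();
```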
[ { "msg_contents": "Another thing I noticed while working on [1] is $SUBJECT: this\nfunction checks whether the given query string is non-NULL or not when\ncreating a WARNING message, but the function is always called with the\nquery string set, so it should be non-NULL. I removed the check and\ninstead added an assertion ensuring that the query string is non-NULL.\n(I added the assertion to pgfdw_exec_cleanup_query_begin() as well.)\nAttached is a patch for that.\n\nIf there are no objections, I will apply the patch to HEAD only.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15DF6EE7O6hTLbe5-fHvPDwEx9vm-BOCN3dsKOjZCo7bw%40mail.gmail.com", "msg_date": "Fri, 22 Mar 2024 21:30:09 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "postgres_fdw: Useless test in pgfdw_exec_cleanup_query_end()" }, { "msg_contents": "On Fri, Mar 22, 2024 at 9:30 PM Etsuro Fujita <[email protected]> wrote:\n> If there are no objections, I will apply the patch to HEAD only.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 4 Apr 2024 18:13:00 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: Useless test in pgfdw_exec_cleanup_query_end()" } ]
[ { "msg_contents": "On 2024-03-20 12:11, Alexander Korotkov wrote:\n> On Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan\n> <[email protected]> wrote:\n>> > 4.2 With an unreasonably high future LSN, BEGIN command waits\n>> > unboundedly, shouldn't we check if the specified LSN is more than\n>> > pg_last_wal_receive_lsn() error out?\n> \n> I think limiting wait lsn by current received lsn would destroy the\n> whole value of this feature. The value is to wait till given LSN is\n> replayed, whether it's already received or not.\n\nOk sounds reasonable, I`ll rollback the changes.\n\n> But I don't see a problem here. On the replica, it's out of our\n> control to check which lsn is good and which is not. We can't check\n> whether the lsn, which is in future for the replica, is already issued\n> by primary.\n> \n> For the case of wrong lsn, which could cause potentially infinite\n> wait, there is the timeout and the manual query cancel.\n\nFully agree with this take.\n\n>> > 4.3 With an unreasonably high wait time, BEGIN command waits\n>> > unboundedly, shouldn't we restrict the wait time to some max\n> value,\n>> > say a day or so?\n>> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n>> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n>> \n>> Good idea, I put it 1 day. But this limit we should to discuss.\n> \n> Do you think that specifying timeout in milliseconds is suitable? I\n> would prefer to switch to seconds (with ability to specify fraction of\n> second). This was expressed before by Alexander Lakhin.\n\nIt sounds like an interesting idea. Please review the result.\n\n>> > https://github.com/macdice/redo-bench or similar tools?\n> \n> Ivan, could you do this?\n\nYes, test redo-bench/crash-recovery.sh\nThis patch on master\n91.327, 1.973\n105.907, 3.338\n98.412, 4.579\n95.818, 4.19\n\nREL_13-STABLE\n116.645, 3.005\n113.212, 2.568\n117.644, 3.183\n111.411, 2.782\n\nmaster\n124.712, 2.047\n117.012, 1.736\n116.328, 2.035\n115.662, 1.797\n\nStrange behavior, patched version is faster then REL_13-STABLE and \nmaster.\n\n> I don't see this change in the patch. Normally if a process gets a\n> signal, that causes WaitLatch() to exit immediately. It also exists\n> immediately on query cancel. IIRC, this 1 minute timeout is needed to\n> handle some extreme cases when an interrupt is missing. Other places\n> have it equal to 1 minute. I don't see why we should have it\n> different.\n\nOk, I`ll rollback my changes.\n\n>> 4) added and expanded sections in the documentation\n> \n> I don't see this in the patch. I see only a short description in\n> func.sgml, which is definitely not sufficient. We need at least\n> everything we have in the docs before to be adjusted with the current\n> approach of procedure.\n\nI didn't find another section where to add the description of \npg_wait_lsn().\nSo I extend description on the bottom of the table.\n\n>> 5) add default variant of timeout\n>> pg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)\n>> example: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)\n> \n> Does zero here mean no timeout? I think this should be documented.\n> Also, I would prefer to see the timeout by default. Probably one\n> minute would be good for default.\n\nLets discuss this point. Loop in function WaitForLSN is made that way,\nif we choose delay=0, only then we can wait infinitely to wait LSN\nwithout timeout. 
So default must be 0.\n\nPlease take one more look on the patch.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com", "msg_date": "Fri, 22 Mar 2024 16:45:40 +0300", "msg_from": "Kartyshov Ivan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Thank you for your feedback.\n\nOn 2024-03-20 12:11, Alexander Korotkov wrote:\n> On Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan\n> <[email protected]> wrote:\n>> > 4.2 With an unreasonably high future LSN, BEGIN command waits\n>> > unboundedly, shouldn't we check if the specified LSN is more than\n>> > pg_last_wal_receive_lsn() error out?\n> \n> I think limiting wait lsn by current received lsn would destroy the\n> whole value of this feature. The value is to wait till given LSN is\n> replayed, whether it's already received or not.\n\nOk sounds reasonable, I`ll rollback the changes.\n\n> But I don't see a problem here. On the replica, it's out of our\n> control to check which lsn is good and which is not. We can't check\n> whether the lsn, which is in future for the replica, is already issued\n> by primary.\n> \n> For the case of wrong lsn, which could cause potentially infinite\n> wait, there is the timeout and the manual query cancel.\n\nFully agree with this take.\n\n>> > 4.3 With an unreasonably high wait time, BEGIN command waits\n>> > unboundedly, shouldn't we restrict the wait time to some max\n> value,\n>> > say a day or so?\n>> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n>> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n>> \n>> Good idea, I put it 1 day. But this limit we should to discuss.\n> \n> Do you think that specifying timeout in milliseconds is suitable? I\n> would prefer to switch to seconds (with ability to specify fraction of\n> second). This was expressed before by Alexander Lakhin.\n\nIt sounds like an interesting idea. Please review the result.\n\n>> > https://github.com/macdice/redo-bench or similar tools?\n> \n> Ivan, could you do this?\n\nYes, test redo-bench/crash-recovery.sh\nThis patch on master\n91.327, 1.973\n105.907, 3.338\n98.412, 4.579\n95.818, 4.19\n\nREL_13-STABLE\n116.645, 3.005\n113.212, 2.568\n117.644, 3.183\n111.411, 2.782\n\nmaster\n124.712, 2.047\n117.012, 1.736\n116.328, 2.035\n115.662, 1.797\n\nStrange behavior, patched version is faster then REL_13-STABLE and \nmaster.\n\n> I don't see this change in the patch. Normally if a process gets a\n> signal, that causes WaitLatch() to exit immediately. It also exists\n> immediately on query cancel. IIRC, this 1 minute timeout is needed to\n> handle some extreme cases when an interrupt is missing. Other places\n> have it equal to 1 minute. I don't see why we should have it\n> different.\n\nOk, I`ll rollback my changes.\n\n>> 4) added and expanded sections in the documentation\n> \n> I don't see this in the patch. I see only a short description in\n> func.sgml, which is definitely not sufficient. We need at least\n> everything we have in the docs before to be adjusted with the current\n> approach of procedure.\n\nI didn't find another section where to add the description of \npg_wait_lsn().\nSo I extend description on the bottom of the table.\n\n>> 5) add default variant of timeout\n>> pg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)\n>> example: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)\n> \n> Does zero here mean no timeout? 
I think this should be documented.\n> Also, I would prefer to see the timeout by default. Probably one\n> minute would be good for default.\n\nLets discuss this point. Loop in function WaitForLSN is made that way,\nif we choose delay=0, only then we can wait infinitely to wait LSN\nwithout timeout. So default must be 0.\n\nPlease take one more look on the patch.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com", "msg_date": "Fri, 22 Mar 2024 22:21:20 +0300", "msg_from": "Kartyshov Ivan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:45 PM Kartyshov Ivan\n<[email protected]> wrote:\n> On 2024-03-20 12:11, Alexander Korotkov wrote:\n> > On Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan\n> > <[email protected]> wrote:\n> >> > 4.2 With an unreasonably high future LSN, BEGIN command waits\n> >> > unboundedly, shouldn't we check if the specified LSN is more than\n> >> > pg_last_wal_receive_lsn() error out?\n> >\n> > I think limiting wait lsn by current received lsn would destroy the\n> > whole value of this feature. The value is to wait till given LSN is\n> > replayed, whether it's already received or not.\n>\n> Ok sounds reasonable, I`ll rollback the changes.\n>\n> > But I don't see a problem here. On the replica, it's out of our\n> > control to check which lsn is good and which is not. We can't check\n> > whether the lsn, which is in future for the replica, is already issued\n> > by primary.\n> >\n> > For the case of wrong lsn, which could cause potentially infinite\n> > wait, there is the timeout and the manual query cancel.\n>\n> Fully agree with this take.\n>\n> >> > 4.3 With an unreasonably high wait time, BEGIN command waits\n> >> > unboundedly, shouldn't we restrict the wait time to some max\n> > value,\n> >> > say a day or so?\n> >> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n> >> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n> >>\n> >> Good idea, I put it 1 day. But this limit we should to discuss.\n> >\n> > Do you think that specifying timeout in milliseconds is suitable? I\n> > would prefer to switch to seconds (with ability to specify fraction of\n> > second). This was expressed before by Alexander Lakhin.\n>\n> It sounds like an interesting idea. Please review the result.\n>\n> >> > https://github.com/macdice/redo-bench or similar tools?\n> >\n> > Ivan, could you do this?\n>\n> Yes, test redo-bench/crash-recovery.sh\n> This patch on master\n> 91.327, 1.973\n> 105.907, 3.338\n> 98.412, 4.579\n> 95.818, 4.19\n>\n> REL_13-STABLE\n> 116.645, 3.005\n> 113.212, 2.568\n> 117.644, 3.183\n> 111.411, 2.782\n>\n> master\n> 124.712, 2.047\n> 117.012, 1.736\n> 116.328, 2.035\n> 115.662, 1.797\n>\n> Strange behavior, patched version is faster then REL_13-STABLE and\n> master.\n\nI've run this test on my machine with v13 of the path.\n\npatched\n53.663, 0.466\n53.884, 0.402\n54.102, 0.441\n\nmaster\n55.216, 0.441\n54.52, 0.464\n51.479, 0.438\n\nIt seems that difference is less than variance.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 29 Mar 2024 01:15:13 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" } ]
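For readers following the timeout discussion: the heart of the feature is a simple latch-wait loop on the standby, and the semantics being debated (delay = 0 meaning wait indefinitely, otherwise a deadline, plus the customary one-minute periodic wakeup in case an interrupt is missed) fall naturally out of how the remaining sleep time is recomputed each iteration. The following is only a rough sketch of that shape for orientation -- it is not the patch itself, the timeout unit (milliseconds vs. fractional seconds) is still being settled above, and the wait-event constant is a stand-in:

```c
	TimestampTz deadline = 0;

	if (delay_ms > 0)
		deadline = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), delay_ms);

	for (;;)
	{
		long		timeout;

		/* Done as soon as replay has reached the target LSN. */
		if (GetXLogReplayRecPtr(NULL) >= target_lsn)
			break;

		if (delay_ms > 0)
		{
			timeout = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),
													  deadline);
			if (timeout <= 0)
				break;			/* deadline reached: report a timeout */
		}
		else
			timeout = 60000;	/* no deadline: wake up periodically, like
								 * other wait loops, in case a set-latch or
								 * interrupt was missed */

		(void) WaitLatch(MyLatch,
						 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
						 timeout,
						 PG_WAIT_EXTENSION);	/* placeholder wait event */
		ResetLatch(MyLatch);
		CHECK_FOR_INTERRUPTS();
	}
```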
[ { "msg_contents": "\nHi, hackers,\n\nWhen I try to configure PostgreSQL 16.2 on Illumos using the following command,\nit complains $subject.\n\n ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n --with-python --without-tcl --without-gssapi --with-openssl \\\n --with-ldap --with-libxml --with-libxslt --without-systemd \\\n --with-readline --enable-thread-safety --enable-dtrace \\\n DTRACEFLAGS=-64 CFLAGS=-Werror\n\nHowever, if I remove the `CFLAGS=-Werror`, it works fine.\n\nI'm not sure what happened here.\n\n$ uname -a\nSunOS db_build 5.11 hunghu-20231216T132436Z i86pc i386 i86pc illumos\n$ gcc --version\ngcc (GCC) 10.4.0\nCopyright (C) 2020 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n\n", "msg_date": "Sat, 23 Mar 2024 00:48:05 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Hi,\n\nOn 2024-03-23 00:48:05 +0800, Japin Li wrote:\n> When I try to configure PostgreSQL 16.2 on Illumos using the following command,\n> it complains $subject.\n> \n> ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n> --with-python --without-tcl --without-gssapi --with-openssl \\\n> --with-ldap --with-libxml --with-libxslt --without-systemd \\\n> --with-readline --enable-thread-safety --enable-dtrace \\\n> DTRACEFLAGS=-64 CFLAGS=-Werror\n> \n> However, if I remove the `CFLAGS=-Werror`, it works fine.\n\nLikely there's an unrelated warning triggering the configure test to\nfail. We'd need to see config.log to see what that is.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:53:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> When I try to configure PostgreSQL 16.2 on Illumos using the following command,\n> it complains $subject.\n\n> ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n> --with-python --without-tcl --without-gssapi --with-openssl \\\n> --with-ldap --with-libxml --with-libxslt --without-systemd \\\n> --with-readline --enable-thread-safety --enable-dtrace \\\n> DTRACEFLAGS=-64 CFLAGS=-Werror\n\n> However, if I remove the `CFLAGS=-Werror`, it works fine.\n> I'm not sure what happened here.\n\nCFLAGS=-Werror breaks a whole lot of configure's tests, not only that\none. (We even have this documented, see [1].) So you can't inject\n-Werror that way. What I do on my buildfarm animals is the equivalent\nof\n\n\texport COPT='-Werror'\n\nafter configure and before build. 
I think configure pays no attention\nto COPT, so it'd likely be safe to keep that set all the time, but in\nthe buildfarm client it's just as easy to be conservative.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/install-make.html#CONFIGURE-ENVVARS\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:04:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Sat, 23 Mar 2024 at 00:53, Andres Freund <[email protected]> wrote:\n> Hi,\n>\n> On 2024-03-23 00:48:05 +0800, Japin Li wrote:\n>> When I try to configure PostgreSQL 16.2 on Illumos using the following command,\n>> it complains $subject.\n>>\n>> ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n>> --with-python --without-tcl --without-gssapi --with-openssl \\\n>> --with-ldap --with-libxml --with-libxslt --without-systemd \\\n>> --with-readline --enable-thread-safety --enable-dtrace \\\n>> DTRACEFLAGS=-64 CFLAGS=-Werror\n>>\n>> However, if I remove the `CFLAGS=-Werror`, it works fine.\n>\n> Likely there's an unrelated warning triggering the configure test to\n> fail. We'd need to see config.log to see what that is.\n>\n\nThanks for your quick reply. Attach the config.log.", "msg_date": "Sat, 23 Mar 2024 01:11:31 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "\nOn Sat, 23 Mar 2024 at 01:04, Tom Lane <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>> When I try to configure PostgreSQL 16.2 on Illumos using the following command,\n>> it complains $subject.\n>\n>> ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n>> --with-python --without-tcl --without-gssapi --with-openssl \\\n>> --with-ldap --with-libxml --with-libxslt --without-systemd \\\n>> --with-readline --enable-thread-safety --enable-dtrace \\\n>> DTRACEFLAGS=-64 CFLAGS=-Werror\n>\n>> However, if I remove the `CFLAGS=-Werror`, it works fine.\n>> I'm not sure what happened here.\n>\n> CFLAGS=-Werror breaks a whole lot of configure's tests, not only that\n> one. (We even have this documented, see [1].) So you can't inject\n> -Werror that way. What I do on my buildfarm animals is the equivalent\n> of\n>\n> \texport COPT='-Werror'\n>\n> after configure and before build. I think configure pays no attention\n> to COPT, so it'd likely be safe to keep that set all the time, but in\n> the buildfarm client it's just as easy to be conservative.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/install-make.html#CONFIGURE-ENVVARS\n\nThank you very much! I didn't notice this part before.\n\n\n", "msg_date": "Sat, 23 Mar 2024 01:22:56 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> On Sat, 23 Mar 2024 at 00:53, Andres Freund <[email protected]> wrote:\n>> Likely there's an unrelated warning triggering the configure test to\n>> fail. We'd need to see config.log to see what that is.\n\n> Thanks for your quick reply. Attach the config.log.\n\nYup:\n\nconftest.c:139:5: error: no previous prototype for 'does_int64_work' [-Werror=missing-prototypes]\n 139 | int does_int64_work()\n | ^~~~~~~~~~~~~~~\ncc1: all warnings being treated as errors\nconfigure:17003: $? 
= 1\nconfigure: program exited with status 1\n\nThis warning is harmless normally, but breaks the configure probe if\nyou enable -Werror.\n\nNo doubt we could improve that test snippet so that it does not\ntrigger that warning. But trying to make configure safe for -Werror\nseems like a fool's errand, for these reasons:\n\n* Do you really want to try to make all of configure's probes proof\nagainst every compiler warning everywhere?\n\n* Many of the test snippets aren't readily under our control, as they\nare supplied by Autoconf.\n\n* In the majority of cases, any such failures would be silent, as\nconfigure would just conclude that the feature it is probing for\nisn't there. So even finding there's a problem would be difficult.\n\nThe short answer is that Autoconf is not designed to support -Werror\nand it's not worth it to try to make it do so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:25:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Hi,\n\nOn 2024-03-22 13:25:56 -0400, Tom Lane wrote:\n> The short answer is that Autoconf is not designed to support -Werror\n> and it's not worth it to try to make it do so.\n\nI wonder if we ought to make configure warn if it sees -Werror in CFLAGS -\nthis is far from the first time somebody stumbling with -Werror. Including a\nfew quite senior hackers, if I recall correctly. We could also just filter it\ntemporarily and put it back at the end of configure.\n\nI don't think there's great way of making the autoconf buildsystem use -Werror\ncontinually, today. IIRC the best way is to use Makefile.custom.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:38:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Fri, Mar 22, 2024 at 2:38 PM Andres Freund <[email protected]> wrote:\n> I wonder if we ought to make configure warn if it sees -Werror in CFLAGS -\n> this is far from the first time somebody stumbling with -Werror. Including a\n> few quite senior hackers, if I recall correctly. We could also just filter it\n> temporarily and put it back at the end of configure.\n\nI think I made this mistake at some point, but I just looked at\nconfig.log and corrected my mistake. I'm not strongly against having\nan explicit check for -Werror, but I think the main problem here is\nthat the original poster didn't have a look at config.log to see what\nthe actual problem was, and at least IME that's necessary in pretty\nmuch 100% of cases where configure fails for whatever reason. Perhaps\nautotools could be better-designed in that regard, but we don't\nnecessarily want to work around every problem that can stem from that\ndesign choice in our code, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:02:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Hi,\n\nOn 2024-03-22 15:02:45 -0400, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 2:38 PM Andres Freund <[email protected]> wrote:\n> > I wonder if we ought to make configure warn if it sees -Werror in CFLAGS -\n> > this is far from the first time somebody stumbling with -Werror. Including a\n> > few quite senior hackers, if I recall correctly. 
We could also just filter it\n> > temporarily and put it back at the end of configure.\n> \n> I think I made this mistake at some point, but I just looked at\n> config.log and corrected my mistake.\n\nIME the bigger issue is that sometimes it doesn't lead to outright failures,\njust to lots of stuff not being detected as supported / present.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:31:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:31 PM Andres Freund <[email protected]> wrote:\n> IME the bigger issue is that sometimes it doesn't lead to outright failures,\n> just to lots of stuff not being detected as supported / present.\n\nUgh. That does, indeed, sound not very nice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:34:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 3:31 PM Andres Freund <[email protected]> wrote:\n>> IME the bigger issue is that sometimes it doesn't lead to outright failures,\n>> just to lots of stuff not being detected as supported / present.\n\n> Ugh. That does, indeed, sound not very nice.\n\nI could get behind throwing an error if -Werror is spotted. I think\ntrying to pull it out and put it back is too much work and a bit\ntoo much magic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 16:43:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Sat, Mar 23, 2024 at 6:26 AM Tom Lane <[email protected]> wrote:\n> conftest.c:139:5: error: no previous prototype for 'does_int64_work' [-Werror=missing-prototypes]\n> 139 | int does_int64_work()\n> | ^~~~~~~~~~~~~~~\n> cc1: all warnings being treated as errors\n> configure:17003: $? = 1\n> configure: program exited with status 1\n\n. o O ( int64_t, PRIdi64, etc were standardised a quarter of a century ago )\n\n\n", "msg_date": "Sat, 23 Mar 2024 13:45:28 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> . o O ( int64_t, PRIdi64, etc were standardised a quarter of a century ago )\n\nYeah. Now that we require C99 it's probably reasonable to assume\nthat those things exist. I wouldn't be in favor of ripping out our\nexisting notations like UINT64CONST, because the code churn would be\nsubstantial and the gain minimal. 
But we could imagine reimplementing\nthat stuff atop <stdint.h> and then getting rid of the configure-time\nprobes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 22:23:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "\nOn Sat, 23 Mar 2024 at 01:22, Japin Li <[email protected]> wrote:\n> On Sat, 23 Mar 2024 at 01:04, Tom Lane <[email protected]> wrote:\n>> Japin Li <[email protected]> writes:\n>>> When I try to configure PostgreSQL 16.2 on Illumos using the following command,\n>>> it complains $subject.\n>>\n>>> ./configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n>>> --with-python --without-tcl --without-gssapi --with-openssl \\\n>>> --with-ldap --with-libxml --with-libxslt --without-systemd \\\n>>> --with-readline --enable-thread-safety --enable-dtrace \\\n>>> DTRACEFLAGS=-64 CFLAGS=-Werror\n>>\n>>> However, if I remove the `CFLAGS=-Werror`, it works fine.\n>>> I'm not sure what happened here.\n>>\n>> CFLAGS=-Werror breaks a whole lot of configure's tests, not only that\n>> one. (We even have this documented, see [1].) So you can't inject\n>> -Werror that way. What I do on my buildfarm animals is the equivalent\n>> of\n>>\n>> \texport COPT='-Werror'\n>>\n>> after configure and before build. I think configure pays no attention\n>> to COPT, so it'd likely be safe to keep that set all the time, but in\n>> the buildfarm client it's just as easy to be conservative.\n>>\n>> \t\t\tregards, tom lane\n>>\n>> [1] https://www.postgresql.org/docs/devel/install-make.html#CONFIGURE-ENVVARS\n>\n> Thank you very much! I didn't notice this part before.\n\nI try to use the following to compile it, however, it cannot compile it.\n\n$ ../configure --enable-cassert --enable-debug --enable-nls --with-perl --with-python --without-tcl --without-gssapi --with-openssl --with-ldap --with-libxml --with-libxslt --without-systemd --with-readline --enable-thread-safety --enable-dtrace DTRACEFLAGS=-64\n$ make COPT='-Werror' -s\n/home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c: In function 'repairDependencyLoop':\n/home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c:1276:3: error: format not a string literal and no format arguments [-Werror=format-security]\n 1276 | pg_log_warning(ngettext(\"there are circular foreign-key constraints on this table:\",\n | ^~~~~~~~~~~~~~\ncc1: all warnings being treated as errors\nmake[3]: *** [<builtin>: pg_dump_sort.o] Error 1\nmake[2]: *** [Makefile:43: all-pg_dump-recurse] Error 2\nmake[1]: *** [Makefile:42: all-bin-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:24:58 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> /home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c: In function 'repairDependencyLoop':\n> /home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c:1276:3: error: format not a string literal and no format arguments [-Werror=format-security]\n> 1276 | pg_log_warning(ngettext(\"there are circular foreign-key constraints on this table:\",\n> | ^~~~~~~~~~~~~~\n\nYeah, some of the older buildfarm animals issue that warning too.\nAFAICS it's a bogus compiler heuristic: there is not anything\nwrong with the code as given.\n\n\t\t\tregards, tom lane\n\n\n", 
"msg_date": "Sun, 24 Mar 2024 21:32:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "\nOn Mon, 25 Mar 2024 at 09:32, Tom Lane <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>> /home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c: In function 'repairDependencyLoop':\n>> /home/japin/postgres/debug/../src/bin/pg_dump/pg_dump_sort.c:1276:3: error: format not a string literal and no format arguments [-Werror=format-security]\n>> 1276 | pg_log_warning(ngettext(\"there are circular foreign-key constraints on this table:\",\n>> | ^~~~~~~~~~~~~~\n>\n> Yeah, some of the older buildfarm animals issue that warning too.\n> AFAICS it's a bogus compiler heuristic: there is not anything\n> wrong with the code as given.\n>\n\nThanks! It seems I should remove -Werror option on Illumos.\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:38:41 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Sat, Mar 23, 2024 at 3:23 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > . o O ( int64_t, PRIdi64, etc were standardised a quarter of a century ago )\n>\n> Yeah. Now that we require C99 it's probably reasonable to assume\n> that those things exist. I wouldn't be in favor of ripping out our\n> existing notations like UINT64CONST, because the code churn would be\n> substantial and the gain minimal. But we could imagine reimplementing\n> that stuff atop <stdint.h> and then getting rid of the configure-time\n> probes.\n\nI played around with this a bit, but am not quite there yet.\n\nprintf() is a little tricky. The standard wants us to use\n<inttypes.h>'s PRId64 etc, but that might confuse our snprintf.c (in\ntheory, probably not in practice). \"ll\" should have the right size on\nall systems, but gets warnings from the printf format string checker\non systems where \"l\" is the right type. So I think we still need to\nprobe for INT64_MODIFIER at configure-time. Here's one way, but I can\nsee it's not working on Clang/Linux... perhaps instead of that dubious\nincantation I should try compiling some actual printfs and check for\nwarnings/errors.\n\nI think INT64CONST should just point to standard INT64_C().\n\nFor limits, why do we have this:\n\n- * stdint.h limits aren't guaranteed to have compatible types with our fixed\n- * width types. So just define our own.\n\n? I mean, how could they not have compatible types?\n\nI noticed that configure.ac checks if int64 (no \"_t\") might be defined\nalready by system header pollution, but meson.build doesn't. That's\nan inconsistency that should be fixed, but which way? Hmm, commit\n15abc7788e6 said that was done for BeOS, which we de-supported. So\nmaybe we should get rid of that?", "msg_date": "Thu, 18 Apr 2024 12:31:05 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "\nOn Thu, 18 Apr 2024 at 08:31, Thomas Munro <[email protected]> wrote:\n> On Sat, Mar 23, 2024 at 3:23 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>> > . o O ( int64_t, PRIdi64, etc were standardised a quarter of a century ago )\n>>\n>> Yeah. Now that we require C99 it's probably reasonable to assume\n>> that those things exist. 
I wouldn't be in favor of ripping out our\n>> existing notations like UINT64CONST, because the code churn would be\n>> substantial and the gain minimal. But we could imagine reimplementing\n>> that stuff atop <stdint.h> and then getting rid of the configure-time\n>> probes.\n>\n> I played around with this a bit, but am not quite there yet.\n>\n> printf() is a little tricky. The standard wants us to use\n> <inttypes.h>'s PRId64 etc, but that might confuse our snprintf.c (in\n> theory, probably not in practice). \"ll\" should have the right size on\n> all systems, but gets warnings from the printf format string checker\n> on systems where \"l\" is the right type. So I think we still need to\n> probe for INT64_MODIFIER at configure-time. Here's one way, but I can\n> see it's not working on Clang/Linux... perhaps instead of that dubious\n> incantation I should try compiling some actual printfs and check for\n> warnings/errors.\n>\n> I think INT64CONST should just point to standard INT64_C().\n>\n> For limits, why do we have this:\n>\n> - * stdint.h limits aren't guaranteed to have compatible types with our fixed\n> - * width types. So just define our own.\n>\n> ? I mean, how could they not have compatible types?\n>\n> I noticed that configure.ac checks if int64 (no \"_t\") might be defined\n> already by system header pollution, but meson.build doesn't. That's\n> an inconsistency that should be fixed, but which way? Hmm, commit\n> 15abc7788e6 said that was done for BeOS, which we de-supported. So\n> maybe we should get rid of that?\n\nThanks for working on this! I test the patch and it now works on Illumos when\nconfigure with -Werror option. However, there are some errors when compiling.\n\nIn file included from /home/japin/postgres/build/../src/include/c.h:834,\n from /home/japin/postgres/build/../src/include/postgres_fe.h:25,\n from /home/japin/postgres/build/../src/common/config_info.c:20:\n/home/japin/postgres/build/../src/common/config_info.c: In function 'get_configdata':\n/home/japin/postgres/build/../src/common/config_info.c:198:11: error: comparison of integer expressions of different signedness: 'int' and 'size_t' {aka 'long unsigned int'} [-Werror=sign-compare]\n 198 | Assert(i == *configdata_len);\n | ^~\n/home/japin/postgres/build/../src/common/config_info.c:198:2: note: in expansion of macro 'Assert'\n 198 | Assert(i == *configdata_len);\n | ^~~~~~\nIn file included from /home/japin/postgres/build/../src/common/blkreftable.c:36:\n/home/japin/postgres/build/../src/include/lib/simplehash.h: In function 'blockreftable_stat':\n/home/japin/postgres/build/../src/include/lib/simplehash.h:1138:9: error: format '%llu' expects argument of type 'long long unsigned int', but argument 4 has type 'uint64' {aka 'long unsigned int'} [-Werror=format=]\n 1138 | sh_log(\"size: \" UINT64_FORMAT \", members: %u, filled: %f, total chain: %u, max chain: %u, avg chain: %f, total_collisions: %u, max_collisions: %u, avg_collisions: %f\",\n | ^~~~~~~~\n 1139 | tb->size, tb->members, fillfactor, total_chain_length, max_chain_length, avg_chain_length,\n | ~~~~~~~~\n | |\n | uint64 {aka long unsigned int}\n/home/japin/postgres/build/../src/include/common/logging.h:125:46: note: in definition of macro 'pg_log_info'\n 125 | pg_log_generic(PG_LOG_INFO, PG_LOG_PRIMARY, __VA_ARGS__)\n | ^~~~~~~~~~~\n/home/japin/postgres/build/../src/include/lib/simplehash.h:1138:2: note: in expansion of macro 'sh_log'\n 1138 | sh_log(\"size: \" UINT64_FORMAT \", members: %u, filled: %f, total chain: %u, max chain: %u, avg chain: 
%f, total_collisions: %u, max_collisions: %u, avg_collisions: %f\",\n | ^~~~~~\nIn file included from /home/japin/postgres/build/../src/include/access/xlogrecord.h:14,\n from /home/japin/postgres/build/../src/include/access/xlogreader.h:41,\n from /home/japin/postgres/build/../src/include/access/xlog_internal.h:23,\n from /home/japin/postgres/build/../src/common/controldata_utils.c:28:\n/home/japin/postgres/build/../src/include/access/rmgr.h: In function 'RmgrIdIsCustom':\n/home/japin/postgres/build/../src/include/access/rmgr.h:50:42: error: comparison of integer expressions of different signedness: 'int' and 'unsigned int' [-Werror=sign-compare]\n 50 | return rmid >= RM_MIN_CUSTOM_ID && rmid <= RM_MAX_CUSTOM_ID;\n | ^~\n/home/japin/postgres/build/../src/common/blkreftable.c: In function 'BlockRefTableReaderGetBlocks':\n/home/japin/postgres/build/../src/common/blkreftable.c:716:22: error: comparison of integer expressions of different signedness: 'unsigned int' and 'int' [-Werror=sign-compare]\n 716 | blocks_found < nblocks)\n | ^\n/home/japin/postgres/build/../src/common/blkreftable.c:732:22: error: comparison of integer expressions of different signedness: 'unsigned int' and 'int' [-Werror=sign-compare]\n 732 | blocks_found < nblocks)\n | ^\n/home/japin/postgres/build/../src/common/blkreftable.c:742:20: error: comparison of integer expressions of different signedness: 'unsigned int' and 'int' [-Werror=sign-compare]\n 742 | if (blocks_found >= nblocks)\n | ^~\ncc1: all warnings being treated as errors\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: config_info.o] Error 1\nmake[2]: *** Waiting for unfinished jobs....\nmake[2]: *** [../../src/Makefile.global:947: controldata_utils.o] Error 1\nIn file included from /home/japin/postgres/build/../src/include/postgres_fe.h:25,\n from /home/japin/postgres/build/../src/common/logging.c:15:\n/home/japin/postgres/build/../src/common/logging.c: In function 'pg_log_generic_v':\n/home/japin/postgres/build/../src/include/c.h:523:23: error: format '%llu' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64' {aka 'long unsigned int'} [-Werror=format=]\n 523 | #define UINT64_FORMAT \"%\" INT64_MODIFIER \"u\"\n | ^~~\n/home/japin/postgres/build/../src/common/logging.c:259:21: note: in expansion of macro 'UINT64_FORMAT'\n 259 | fprintf(stderr, UINT64_FORMAT \":\", lineno);\n | ^~~~~~~~~~~~~\n/home/japin/postgres/build/../src/include/c.h:523:43: note: format string is defined here\n 523 | #define UINT64_FORMAT \"%\" INT64_MODIFIER \"u\"\n | ~~~~~~~~~~~~~~~~~~~^\n | |\n | long long unsigned int\n/home/japin/postgres/build/../src/common/unicode_norm.c: In function 'recompose_code':\n/home/japin/postgres/build/../src/common/unicode_norm.c:290:17: error: comparison of integer expressions of different signedness: 'int' and 'long unsigned int' [-Werror=sign-compare]\n 290 | for (i = 0; i < lengthof(UnicodeDecompMain); i++)\n | ^\nIn file included from /home/japin/postgres/build/../src/include/c.h:834,\n from /home/japin/postgres/build/../src/common/encnames.c:13:\n/home/japin/postgres/build/../src/common/encnames.c: In function 'pg_encoding_to_char_private':\n/home/japin/postgres/build/../src/common/encnames.c:593:19: error: comparison of integer expressions of different signedness: 'int' and 'pg_enc' [-Werror=sign-compare]\n 593 | Assert(encoding == p->encoding);\n | ^~\n/home/japin/postgres/build/../src/common/encnames.c:593:3: note: in expansion of macro 'Assert'\n 593 | Assert(encoding == 
p->encoding);\n | ^~~~~~\n/home/japin/postgres/build/../src/common/jsonapi.c: In function 'pg_parse_json_incremental':\n/home/japin/postgres/build/../src/common/jsonapi.c:693:11: error: comparison of integer expressions of different signedness: 'char' and 'JsonTokenType' [-Werror=sign-compare]\n 693 | if (top == tok)\n | ^~\ncc1: all warnings being treated as errors\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: logging.o] Error 1\nmake[2]: *** [../../src/Makefile.global:947: blkreftable.o] Error 1\n/home/japin/postgres/build/../src/common/kwlookup.c: In function 'ScanKeywordLookup':\n/home/japin/postgres/build/../src/common/kwlookup.c:50:10: error: comparison of integer expressions of different signedness: 'size_t' {aka 'long unsigned int'} and 'int' [-Werror=sign-compare]\n 50 | if (len > keywords->max_kw_len)\n | ^\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: encnames.o] Error 1\ncc1: all warnings being treated as errors\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: kwlookup.o] Error 1\nIn file included from /home/japin/postgres/build/../src/include/c.h:834,\n from /home/japin/postgres/build/../src/include/postgres_fe.h:25,\n from /home/japin/postgres/build/../src/common/file_utils.c:19:\n/home/japin/postgres/build/../src/common/file_utils.c: In function 'pg_pwrite_zeros':\n/home/japin/postgres/build/../src/common/file_utils.c:725:23: error: comparison of integer expressions of different signedness: 'ssize_t' {aka 'long int'} and 'size_t' {aka 'long unsigned int'} [-Werror=sign-compare]\n 725 | Assert(total_written == size);\n | ^~\n/home/japin/postgres/build/../src/common/file_utils.c:725:2: note: in expansion of macro 'Assert'\n 725 | Assert(total_written == size);\n | ^~~~~~\nmake[2]: *** [../../src/Makefile.global:947: unicode_norm.o] Error 1\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: file_utils.o] Error 1\n/home/japin/postgres/build/../src/common/unicode_case.c: In function 'convert_case':\n/home/japin/postgres/build/../src/common/unicode_case.c:155:31: error: comparison of integer expressions of different signedness: 'size_t' {aka 'long unsigned int'} and 'ssize_t' {aka 'long int'} [-Werror=sign-compare]\n 155 | while ((srclen < 0 || srcoff < srclen) && src[srcoff] != '\\0')\n | ^\n/home/japin/postgres/build/../src/common/wchar.c: In function 'pg_utf8_verifystr':\n/home/japin/postgres/build/../src/common/wchar.c:1868:10: error: comparison of integer expressions of different signedness: 'int' and 'long unsigned int' [-Werror=sign-compare]\n 1868 | if (len >= STRIDE_LENGTH)\n | ^~\n/home/japin/postgres/build/../src/common/wchar.c:1870:14: error: comparison of integer expressions of different signedness: 'int' and 'long unsigned int' [-Werror=sign-compare]\n 1870 | while (len >= STRIDE_LENGTH)\n | ^~\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: unicode_case.o] Error 1\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: jsonapi.o] Error 1\ncc1: all warnings being treated as errors\nmake[2]: *** [../../src/Makefile.global:947: wchar.o] Error 1\nmake[1]: *** [Makefile:42: all-common-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\nFor rmid >= RM_MIN_CUSTOM_ID && rmid <= RM_MAX_CUSTOM_ID comparison error, I\nfound that UINT8_MAX is defined as '255U' on Illumos, however, Linux glibc\nuses '255' for UINT8_MAX, 
which is signed.\n\n[1] https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/sys/int_limits.h#L92\n[2] https://sourceware.org/git/?p=glibc.git;a=blob;f=stdlib/stdint.h;h=bb3e8b5cc61fb3df8842225d2286de67e6f2ffe2;hb=refs/heads/master#l116\n\n\n--\nRegards,\nJapin Li\n\n\n", "msg_date": "Thu, 18 Apr 2024 14:07:22 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On 18.04.24 02:31, Thomas Munro wrote:\n> On Sat, Mar 23, 2024 at 3:23 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> . o O ( int64_t, PRIdi64, etc were standardised a quarter of a century ago )\n>>\n>> Yeah. Now that we require C99 it's probably reasonable to assume\n>> that those things exist. I wouldn't be in favor of ripping out our\n>> existing notations like UINT64CONST, because the code churn would be\n>> substantial and the gain minimal. But we could imagine reimplementing\n>> that stuff atop <stdint.h> and then getting rid of the configure-time\n>> probes.\n> \n> I played around with this a bit, but am not quite there yet.\n\nLooks promising.\n\n> printf() is a little tricky. The standard wants us to use\n> <inttypes.h>'s PRId64 etc, but that might confuse our snprintf.c (in\n> theory, probably not in practice). \"ll\" should have the right size on\n> all systems, but gets warnings from the printf format string checker\n> on systems where \"l\" is the right type.\n\nI'm not sure I understand the problem here. Do you mean that in theory \na platform's PRId64 could be something other than \"l\" or \"ll\"?\n\n> For limits, why do we have this:\n> \n> - * stdint.h limits aren't guaranteed to have compatible types with our fixed\n> - * width types. So just define our own.\n> \n> ? I mean, how could they not have compatible types?\n\nMaybe this means something like our int64 is long long int but the \nsystem's int64_t is long int underneath, but I don't see how that would \nmatter for the limit macros.\n\n> I noticed that configure.ac checks if int64 (no \"_t\") might be defined\n> already by system header pollution, but meson.build doesn't. That's\n> an inconsistency that should be fixed, but which way? Hmm, commit\n> 15abc7788e6 said that was done for BeOS, which we de-supported. So\n> maybe we should get rid of that?\n\nI had a vague recollection that it was for AIX, but the commit indeed \nmentions BeOS. Could be removed in either case.\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:46:57 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Thu, Apr 18, 2024 at 8:47 PM Peter Eisentraut <[email protected]> wrote:\n> I'm not sure I understand the problem here. Do you mean that in theory\n> a platform's PRId64 could be something other than \"l\" or \"ll\"?\n\nYes. I don't know why anyone would do that, and the systems I checked\nall have the obvious definitions, eg \"ld\", \"lld\" etc. Perhaps it's an\nacceptable risk? 
It certainly gives us a tidier result.\n\nFor discussion, here is a variant that fully embraces <inttypes.h> and\nthe PRI*64 macros.", "msg_date": "Fri, 19 Apr 2024 08:29:27 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Thu, Apr 18, 2024 at 8:47 PM Peter Eisentraut <[email protected]> wrote:\n> Maybe this means something like our int64 is long long int but the\n> system's int64_t is long int underneath, but I don't see how that would\n> matter for the limit macros.\n\nAgreed, so I don't think it's long vs long long (when they have the same width).\n\nI wonder if this comment is a clue:\n\nstatic char *\ninet_net_ntop_ipv6(const u_char *src, int bits, char *dst, size_t size)\n{\n /*\n * Note that int32_t and int16_t need only be \"at least\" large enough to\n * contain a value of the specified size. On some systems, like Crays,\n * there is no such thing as an integer variable with 16 bits. Keep this\n * in mind if you think this function should have been coded to use\n * pointer overlays. All the world's not a VAX.\n */\n\nI'd seen that claim before somewhere else but I can't recall where.\nSo there were systems using those names in an ad hoc unspecified way\nbefore C99 nailed this stuff down? In modern C, int32_t is definitely\nan exact width type (but there are other standardised variants like\nint_fast32_t to allow for Cray-like systems that would prefer to use a\nwider type, ie \"at least\", 32 bits wide, so I guess that's what\nhappened to that idea?).\n\nOr perhaps it's referring to worries about the width of char, short,\nint or the assumption of two's-complement. I think if any of that\nstuff weren't as assumed we'd have many problems in many places, so\nI'm not seeing a problem. (FTR C23 finally nailed down\ntwo's-complement as a requirement, and although C might not say so,\nPOSIX says that char is a byte, and our assumption that int = int32_t\nis pretty deeply baked into PostgreSQL, so it's almost impossible to\nimagine that short has a size other than 16 bits; but these are all\nassumptions made by the OLD coding, not by the patch I posted). In\nshort, I guess that isn't what was meant.\n\n\n", "msg_date": "Fri, 19 Apr 2024 09:00:05 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Thu, Apr 18, 2024 at 6:09 PM Japin Li <[email protected]> wrote:\n> /home/japin/postgres/build/../src/common/config_info.c:198:11: error: comparison of integer expressions of different signedness: 'int' and 'size_t' {aka 'long unsigned int'} [-Werror=sign-compare]\n> 198 | Assert(i == *configdata_len);\n\nRight, PostgreSQL doesn't compile cleanly with the \"sign-compare\"\nwarning. There have been a few threads about that, and someone might\nwant to think harder about it, but it's a different topic unrelated to\n<stdint.h>.\n\n> /home/japin/postgres/build/../src/include/lib/simplehash.h:1138:9: error: format '%llu' expects argument of type 'long long unsigned int', but argument 4 has type 'uint64' {aka 'long unsigned int'} [-Werror=format=]\n\nIt seems my v1 patch's configure probe for INT64_FORMAT was broken.\nIn the v2 patch I tried not doing that probe at all, and instead\ninviting <inttypes.h> into our world (that's the standardised way to\nproduce format strings, which has the slight complication that we are\nintercepting printf calls...). 
I suspect that'll work better for you.\n\n\n", "msg_date": "Fri, 19 Apr 2024 09:22:10 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "\nOn Fri, 19 Apr 2024 at 05:22, Thomas Munro <[email protected]> wrote:\n> On Thu, Apr 18, 2024 at 6:09 PM Japin Li <[email protected]> wrote:\n>> /home/japin/postgres/build/../src/include/lib/simplehash.h:1138:9: error: format '%llu' expects argument of type 'long long unsigned int', but argument 4 has type 'uint64' {aka 'long unsigned int'} [-Werror=format=]\n>\n> It seems my v1 patch's configure probe for INT64_FORMAT was broken.\n> In the v2 patch I tried not doing that probe at all, and instead\n> inviting <inttypes.h> into our world (that's the standardised way to\n> produce format strings, which has the slight complication that we are\n> intercepting printf calls...). I suspect that'll work better for you.\n\nYeah, the v2 patch fixed this problem.\n\n--\nRegards,\nJapin Li\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:25:05 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On 18.04.24 02:31, Thomas Munro wrote:\n> For limits, why do we have this:\n> \n> - * stdint.h limits aren't guaranteed to have compatible types with our fixed\n> - * width types. So just define our own.\n> \n> ? I mean, how could they not have compatible types?\n\nThe commit for this was 62e2a8dc2c7 and the thread was \n<https://www.postgresql.org/message-id/flat/E1YatAv-0007cu-KW%40gemulon.postgresql.org>. \n The problem was that something like\n\n snprintf(bufm, sizeof(bufm), INT64_FORMAT, SEQ_MINVALUE);\n\ncould issue a warning if, say, INT64_FORMAT, which refers to our own \nint64, is based on long int, but SEQ_MINVALUE, which was then INT64_MIN, \nwhich refers to int64_t, which could be long long int.\n\nSo this is correct. If we introduce the use of int64_t, then you need \nto be consistent still:\n\nint64, PG_INT64_MIN, PG_INT64_MAX, INT64_FORMAT\n\nint64_t, INT64_MIN, INT64_MAX, PRId64\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:34:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On 18/04/2024 23:29, Thomas Munro wrote:\n> On Thu, Apr 18, 2024 at 8:47 PM Peter Eisentraut <[email protected]> wrote:\n>> I'm not sure I understand the problem here. Do you mean that in theory\n>> a platform's PRId64 could be something other than \"l\" or \"ll\"?\n> \n> Yes. I don't know why anyone would do that, and the systems I checked\n> all have the obvious definitions, eg \"ld\", \"lld\" etc. Perhaps it's an\n> acceptable risk? It certainly gives us a tidier result.\n\nCould we have a configure check or static assertion for that?\n\n> For discussion, here is a variant that fully embraces <inttypes.h> and\n> the PRI*64 macros.\n\nLooks good to me.\n\nPersonally, I find \"PRId64\" pretty unreadable. \"INT64_MODIFIER\" wasn't \nnice either, though, and following standards is good, so I'm sure I'll \nget used to it.\n\nThey're both less readable than INT64_FORMAT and \"%lld\", which we use in \nmost places, though. 
Perhaps \"%lld\" and casting the arguments to \"long \nlong\" would be more readable in the places where this patch replaces \nINT64_MODIFIER with PRI*64, too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 16:34:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Wed, Jul 3, 2024 at 1:34 AM Heikki Linnakangas <[email protected]> wrote:\n> On 18/04/2024 23:29, Thomas Munro wrote:\n> > On Thu, Apr 18, 2024 at 8:47 PM Peter Eisentraut <[email protected]> wrote:\n> >> I'm not sure I understand the problem here. Do you mean that in theory\n> >> a platform's PRId64 could be something other than \"l\" or \"ll\"?\n> >\n> > Yes. I don't know why anyone would do that, and the systems I checked\n> > all have the obvious definitions, eg \"ld\", \"lld\" etc. Perhaps it's an\n> > acceptable risk? It certainly gives us a tidier result.\n>\n> Could we have a configure check or static assertion for that?\n\nUnfortunately, that theory turned out to be wrong. The usual suspect,\nWindows, uses something else: \"I64\" or something like that. We could\nteach our snprintf to grok that, but I don't like the idea anymore.\nSo let's go back to INT64_MODIFIER, with just a small amount of\nconfigure time work to pick the right value. I couldn't figure out\nany header-only way to do that.\n\n> Personally, I find \"PRId64\" pretty unreadable. \"INT64_MODIFIER\" wasn't\n> nice either, though, and following standards is good, so I'm sure I'll\n> get used to it.\n\nYeah, I like standards a lot but we've painted ourselves into a corner here...\n\nNew version attached. This time I was brave enough to try to tackle\nsrc/timezone too, which had comments planning to drop a lot of small\ndifferences against the upstream tzcode once all supported branches\nrequired C99. I suppose that should make future maintenance easier,\nand C89 disappeared from our window of interest with PostgreSQL 11.\nIt's a little hard to understand what changed, but to try to show it\nbetter I diff'd master against upstream (after filtering through sed\nand pgindent as recommended by README), and then diff'd patched\nagainst upstream, and then ... ehm.. diff'd the two diffs, so that you\ncan see there are some hunks that go away.\n\nIMHO it's a rather scary choice on tzcode's part to use int_fastN_t,\nand hard for us to verify that it works correctly especially when\ncombined with our changes, but on the other hand I don't really expect\nany system that PostgreSQL can run on to have \"fast\" types that really\ndiffer from the \"exact width\" types. My understanding is that this is\nsomething of interest to historical supercomputers and\nmicrocontrollers, but I can't find any evidence of general\npurpose/commodity systems that we target using different sizes (anyone\nknow better?).", "msg_date": "Thu, 4 Jul 2024 14:55:59 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> New version attached. 
This time I was brave enough to try to tackle\n> src/timezone too, which had comments planning to drop a lot of small\n> differences against the upstream tzcode once all supported branches\n> required C99.\n\nUnless you've specifically checked that this reduces diffs against\nupstream tzcode, I'd really prefer not to touch that code right now.\nI know I'm overdue for a round of syncing src/timezone/ with upstream,\nbut I can't see how drive-by changes will make that easier.\n\n> IMHO it's a rather scary choice on tzcode's part to use int_fastN_t,\n\nYeah, I was never pleased with that choice of theirs. OTOH, I've\nseen darn few portability complaints on their mailing list, so\nit seems like they've got it right in isolation. The problem\nfrom our standpoint is that I don't think we want int_fastN_t\nto leak into APIs visible to the rest of Postgres, because then\nwe risk issues related to their configuration methods being\ntotally unlike ours.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2024 23:10:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On Thu, Jul 4, 2024 at 3:10 PM Tom Lane <[email protected]> wrote:\n> Unless you've specifically checked that this reduces diffs against\n> upstream tzcode, I'd really prefer not to touch that code right now.\n> I know I'm overdue for a round of syncing src/timezone/ with upstream,\n> but I can't see how drive-by changes will make that easier.\n\nSure, I'll wait until you say it's a good time. It does remove a\ndozen or so hunks of difference, which should hopefully make that job\neasier eventually but I don't want to get in your way. I can see\nthere are a few more trivialities we could synchronise on, like const\nkeywords, to kill useless diffs (either dropping local improvements or\nsending patches upstream).\n\n> > IMHO it's a rather scary choice on tzcode's part to use int_fastN_t,\n>\n> Yeah, I was never pleased with that choice of theirs. OTOH, I've\n> seen darn few portability complaints on their mailing list, so\n> it seems like they've got it right in isolation. The problem\n> from our standpoint is that I don't think we want int_fastN_t\n> to leak into APIs visible to the rest of Postgres, because then\n> we risk issues related to their configuration methods being\n> totally unlike ours.\n\nYeah. My first swing at this touched only .c files, no .h files, with\nthat in mind.\n\n\n", "msg_date": "Thu, 4 Jul 2024 15:44:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On 04.07.24 03:55, Thomas Munro wrote:\n> Unfortunately, that theory turned out to be wrong. The usual suspect,\n> Windows, uses something else: \"I64\" or something like that. We could\n> teach our snprintf to grok that, but I don't like the idea anymore.\n> So let's go back to INT64_MODIFIER, with just a small amount of\n> configure time work to pick the right value. I couldn't figure out\n> any header-only way to do that.\n\nsrc/port/snprintf.c used to support I64 in the past, but it was later \nremoved. We could probably put it back.\n\n>> Personally, I find \"PRId64\" pretty unreadable. 
\"INT64_MODIFIER\" wasn't\n>> nice either, though, and following standards is good, so I'm sure I'll\n>> get used to it.\n\nUsing PRId64 would be very beneficial because gettext understands it, \nand so we would no longer need the various workarounds for not putting \nINT64_FORMAT into the middle of a translated string.\n\nBut this could be a separate patch. What you have works for now.\n\nHere are some other comments on this patch set:\n\n* v3-0001-Use-int64_t-support-from-stdint.h.patch\n\n- src/include/c.h:\n\nMaybe add a comment that above all the int8_t -> int8 etc. typedefs\nthat these are for backward compatibility or something like that.\nActually, just move the comment\n\n+/* Historical names for limits in stdint.h. */\n\nup a bit to it covers the types as well.\n\nAlso, these /* == 8 bits */ comments could be removed, I think.\n\n- src/include/port/pg_bitutils.h:\n- src/port/pg_bitutils.c:\n- src/port/snprintf.c:\n\nThese changes look functionally correct, but I think I like the old\ncode layout better, like\n\n#if (using long)\n...\n#elif (using long long)\n...\n#else\n#error\n#endif\n\nrather than\n\n#if (using long)\n...\n#else\nstatic assertion\n... // long long\n#endif\n\nwhich seems a bit more complicated. I think you could leave the code\nmostly alone and just change\n\ndefined(HAVE_LONG_INT_64) to SIZEOF_LONG == 8\ndefined(HAVE_LONG_LONG_INT_64) to SIZEOF_LONG_LONG == 8\n\nin each case.\n\n- src/include/postgres_ext.h:\n\n-#define OID_MAX UINT_MAX\n-/* you will need to include <limits.h> to use the above #define */\n+#define OID_MAX UINT32_MAX\n\nIf the type Oid is unsigned int, then the OID_MAX should be UINT_MAX.\nSo this should not be changed. Also, is the comment about <limits.h>\nno longer applicable?\n\n\n- src/interfaces/ecpg/ecpglib/typename.c:\n- src/interfaces/ecpg/include/sqltypes.h:\n- .../test/expected/compat_informix-sqlda.c:\n\n-#ifdef HAVE_LONG_LONG_INT_64\n+#if SIZEOF_LONG < 8\n\nThese changes alter the library behavior unnecessarily. The old code\nwould always prefer to report back long long (ECPGt_long_long etc.),\nbut the new code will report back long (ECPGt_long etc.) if it is\n64-bit. I don't know the impact of these changes, but it seems\npreferable to keep the existing behavior.\n\n- src/interfaces/ecpg/include/ecpg_config.h.in:\n- src/interfaces/ecpg/include/meson.build:\n\nIn the past, we have kept obsolete symbols as always defined in\necpg_config.h. We ought to do the same here.\n\n\n* v3-0002-Remove-traces-of-BeOS.patch\n\nThis looks ok. This could also be committed before 0001.\n\n\n* v3-0003-Allow-tzcode-to-use-stdint.h-and-inttypes.h.patch\n\n- src/timezone/localtime.c:\n\nAddition of #include <stdint.h> is unnecessary, since it's already\nincluded in c.h, and it's also not in the upstream code.\n\nThis looks like a typo:\n\n- * UTC months are at least 28 days long \n(minus 1 second for a\n+ * UTC months are at least 2 days long \n(minus 1 second for a\n\n-getsecs(const char *strp, int32 *const secsp)\n+getsecs(const char *strp, int_fast32_t * const secsp)\n\nNeed to add int_fast32_t (and maybe the other types) to typedefs.list?\n\n- src/timezone/zic.c:\n\n+#include <inttypes.h>\n+#include <stdint.h>\n\nWe don't need both of these. 
Also, this is not in the upstream code\nAFAICT.\n\n\n\n", "msg_date": "Sun, 14 Jul 2024 15:47:50 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 04.07.24 03:55, Thomas Munro wrote:\n>>> Personally, I find \"PRId64\" pretty unreadable. \"INT64_MODIFIER\" wasn't\n>>> nice either, though, and following standards is good, so I'm sure I'll\n>>> get used to it.\n\n> Using PRId64 would be very beneficial because gettext understands it, \n> and so we would no longer need the various workarounds for not putting \n> INT64_FORMAT into the middle of a translated string.\n\nUh, really? The translated strings live in /usr/share, which is\nexpected to be architecture-independent, so how would they make\nthat work?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Jul 2024 10:51:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" }, { "msg_contents": "On 14.07.24 16:51, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 04.07.24 03:55, Thomas Munro wrote:\n>>>> Personally, I find \"PRId64\" pretty unreadable. \"INT64_MODIFIER\" wasn't\n>>>> nice either, though, and following standards is good, so I'm sure I'll\n>>>> get used to it.\n> \n>> Using PRId64 would be very beneficial because gettext understands it,\n>> and so we would no longer need the various workarounds for not putting\n>> INT64_FORMAT into the middle of a translated string.\n> \n> Uh, really? The translated strings live in /usr/share, which is\n> expected to be architecture-independent, so how would they make\n> that work?\n\nGettext has some special run-time support for this. See here: \n<https://www.gnu.org/software/gettext/manual/html_node/Preparing-Strings.html#No-string-concatenation>\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:39:13 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cannot find a working 64-bit integer type on Illumos" } ]
[ { "msg_contents": "Dear hackers,\n\nI was looking at how foreign keys deal with collations, and I came across this comment about not \nre-checking a foreign key if the column type changes in a compatible way:\n\n * Since we require that all collations share the same notion of\n * equality (which they do, because texteq reduces to bitwise\n * equality), we don't compare collation here.\n\nBut now that we have nondeterministic collations, isn't that out of date?\n\nFor instance I could make this foreign key:\n\npaul=# create collation itext (provider = 'icu', locale = 'und-u-ks-level1', deterministic = false);\nCREATE COLLATION\npaul=# create table t1 (id text collate itext primary key);\nCREATE TABLE\npaul=# create table t2 (id text, parent_id text references t1);\nCREATE TABLE\n\nAnd then:\n\npaul=# insert into t1 values ('a');\nINSERT 0 1\npaul=# insert into t2 values ('.', 'A');\nINSERT 0 1\n\nSo far that behavior seems correct, because the user told us 'a' and 'A' were equivalent,\nbut now I can change the collation on the referenced table and the FK doesn't complain:\n\npaul=# alter table t1 alter column id type text collate \"C\";\nALTER TABLE\n\nThe constraint claims to be valid, but I can't drop & add it:\n\npaul=# alter table t2 drop constraint t2_parent_id_fkey;\nALTER TABLE\npaul=# alter table t2 add constraint t2_parent_id_fkey foreign key (parent_id) references t1;\nERROR: insert or update on table \"t2\" violates foreign key constraint \"t2_parent_id_fkey\"\nDETAIL: Key (parent_id)=(A) is not present in table \"t1\".\n\nIsn't that a problem?\n\nPerhaps if the previous collation was nondeterministic we should force a re-check.\n\n(Tested on 17devel 697f8d266c and also 16.)\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Sat, 23 Mar 2024 10:04:04 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On 3/23/24 10:04, Paul Jungwirth wrote:\n> Perhaps if the previous collation was nondeterministic we should force a re-check.\n\nHere is a patch implementing this. It was a bit more fuss than I expected, so maybe someone has a \nbetter way.\n\nWe have had nondeterministic collations since v12, so perhaps it is something to back-patch?\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Sun, 24 Mar 2024 23:47:26 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:47 PM Paul Jungwirth\n<[email protected]> wrote:\n>\n> On 3/23/24 10:04, Paul Jungwirth wrote:\n> > Perhaps if the previous collation was nondeterministic we should force a re-check.\n>\n> Here is a patch implementing this. 
It was a bit more fuss than I expected, so maybe someone has a\n> better way.\n>\n\n+ /* test follows the one in ri_FetchConstraintInfo() */\n+ if (ARR_NDIM(arr) != 1 ||\n+ ARR_HASNULL(arr) ||\n+ ARR_ELEMTYPE(arr) != INT2OID)\n+ elog(ERROR, \"conkey is not a 1-D smallint array\");\n+ attarr = (AttrNumber *) ARR_DATA_PTR(arr);\n+\n+ /* stash a List of the collation Oids in our Constraint node */\n+ for (i = 0; i < numkeys; i++)\n+ con->old_collations = lappend_oid(con->old_collations,\n+ list_nth_oid(changedCollationOids, attarr[i] - 1));\n\nI don't understand the \"ri_FetchConstraintInfo\" comment.\n\n\n+static void\n+RememberCollationForRebuilding(AttrNumber attnum, AlteredTableInfo *tab)\n+{\n+ Oid typid;\n+ int32 typmod;\n+ Oid collid;\n+ ListCell *lc;\n+\n+ /* Fill in the list with InvalidOid if this is our first visit */\n+ if (tab->changedCollationOids == NIL)\n+ {\n+ int len = RelationGetNumberOfAttributes(tab->rel);\n+ int i;\n+\n+ for (i = 0; i < len; i++)\n+ tab->changedCollationOids = lappend_oid(tab->changedCollationOids,\n+ InvalidOid);\n+ }\n+\n+ get_atttypetypmodcoll(RelationGetRelid(tab->rel), attnum,\n+ &typid, &typmod, &collid);\n+\n+ lc = list_nth_cell(tab->changedCollationOids, attnum - 1);\n+ lfirst_oid(lc) = collid;\n+}\n\ndo we need to check if `collid` is a valid collation?\nlike:\n\nif (!OidIsValid(collid))\n{\nlc = list_nth_cell(tab->changedCollationOids, attnum - 1);\nlfirst_oid(lc) = collid;\n}\n\n\n", "msg_date": "Tue, 26 Mar 2024 13:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:00 PM jian he <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 2:47 PM Paul Jungwirth\n> <[email protected]> wrote:\n> >\n> > On 3/23/24 10:04, Paul Jungwirth wrote:\n> > > Perhaps if the previous collation was nondeterministic we should force a re-check.\n> >\n> > Here is a patch implementing this. 
It was a bit more fuss than I expected, so maybe someone has a\n> > better way.\n> >\n>\n> + /* test follows the one in ri_FetchConstraintInfo() */\n> + if (ARR_NDIM(arr) != 1 ||\n> + ARR_HASNULL(arr) ||\n> + ARR_ELEMTYPE(arr) != INT2OID)\n> + elog(ERROR, \"conkey is not a 1-D smallint array\");\n> + attarr = (AttrNumber *) ARR_DATA_PTR(arr);\n> +\n> + /* stash a List of the collation Oids in our Constraint node */\n> + for (i = 0; i < numkeys; i++)\n> + con->old_collations = lappend_oid(con->old_collations,\n> + list_nth_oid(changedCollationOids, attarr[i] - 1));\n>\n> I don't understand the \"ri_FetchConstraintInfo\" comment.\n\nI still don't understand this comment.\n\n>\n> +static void\n> +RememberCollationForRebuilding(AttrNumber attnum, AlteredTableInfo *tab)\n> +{\n> + Oid typid;\n> + int32 typmod;\n> + Oid collid;\n> + ListCell *lc;\n> +\n> + /* Fill in the list with InvalidOid if this is our first visit */\n> + if (tab->changedCollationOids == NIL)\n> + {\n> + int len = RelationGetNumberOfAttributes(tab->rel);\n> + int i;\n> +\n> + for (i = 0; i < len; i++)\n> + tab->changedCollationOids = lappend_oid(tab->changedCollationOids,\n> + InvalidOid);\n> + }\n> +\n> + get_atttypetypmodcoll(RelationGetRelid(tab->rel), attnum,\n> + &typid, &typmod, &collid);\n> +\n> + lc = list_nth_cell(tab->changedCollationOids, attnum - 1);\n> + lfirst_oid(lc) = collid;\n> +}\n>\n> do we need to check if `collid` is a valid collation?\n> like:\n>\n> if (!OidIsValid(collid))\n> {\n> lc = list_nth_cell(tab->changedCollationOids, attnum - 1);\n> lfirst_oid(lc) = collid;\n> }\nnow I think RememberCollationForRebuilding is fine. no need further change.\n\n\nin ATAddForeignKeyConstraint.\nif (old_check_ok)\n{\n/*\n* When a pfeqop changes, revalidate the constraint. We could\n* permit intra-opfamily changes, but that adds subtle complexity\n* without any concrete benefit for core types. We need not\n* assess ppeqop or ffeqop, which RI_Initial_Check() does not use.\n*/\nold_check_ok = (pfeqop == lfirst_oid(old_pfeqop_item));\nold_pfeqop_item = lnext(fkconstraint->old_conpfeqop,\nold_pfeqop_item);\n}\nmaybe we can do the logic right below it. like:\n\nif (old_check_ok)\n{\n\n* All deterministic collations use bitwise equality to resolve\n* ties, but if the previous collation was nondeterministic,\n* we must re-check the foreign key, because some references\n* that use to be \"equal\" may not be anymore. If we have\n* InvalidOid here, either there was no collation or the\n* attribute didn't change.\nold_check_ok = (old_collation == InvalidOid ||\nget_collation_isdeterministic(old_collation));\n}\nthen we don't need to cram it with the other `if (old_check_ok){}`.\n\n\ndo we need to add an entry in\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Open_Issues\n?\n\n\n", "msg_date": "Fri, 12 Apr 2024 17:06:04 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Fri, Apr 12, 2024 at 5:06 PM jian he <[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 1:00 PM jian he <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 2:47 PM Paul Jungwirth\n> > <[email protected]> wrote:\n> > >\n> > > On 3/23/24 10:04, Paul Jungwirth wrote:\n> > > > Perhaps if the previous collation was nondeterministic we should force a re-check.\n> > >\n> > > Here is a patch implementing this. 
It was a bit more fuss than I expected, so maybe someone has a\n> > > better way.\nI think I found a simple way.\n\nthe logic is:\n* ATExecAlterColumnType changes one column once at a time.\n* one column can only have one collation. so we don't need to store a\nlist of collation oid.\n* ATExecAlterColumnType we can get the new collation (targetcollid)\nand original collation info.\n* RememberAllDependentForRebuilding will check the column dependent,\nwhether this column is referenced by a foreign key or not information\nis recorded.\nso AlteredTableInfo->changedConstraintOids have the primary key and\nforeign key oids.\n* ATRewriteCatalogs will call ATPostAlterTypeCleanup (see the comments\nin ATRewriteCatalogs)\n* for tab->changedConstraintOids (foreign key, primary key) will call\nATPostAlterTypeParse, so\nfor foreign key (con->contype == CONSTR_FOREIGN) will call TryReuseForeignKey.\n* in TryReuseForeignKey, we can pass the information that our primary\nkey old collation is nondeterministic\nand old collation != new collation to the foreign key constraint.\nso foreign key can act accordingly at ATAddForeignKeyConstraint (old_check_ok).\n\n\nbased on the above logic, I add one bool in struct AlteredTableInfo,\none bool in struct Constraint.\nbool in AlteredTableInfo is for storing it, later passing it to struct\nConstraint.\nwe need bool in Constraint because ATAddForeignKeyConstraint needs it.", "msg_date": "Sat, 13 Apr 2024 21:13:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Sat, Apr 13, 2024 at 9:13 PM jian he <[email protected]> wrote:\n>\n> > > > Here is a patch implementing this. It was a bit more fuss than I expected, so maybe someone has a\n> > > > better way.\n> I think I found a simple way.\n>\n> the logic is:\n> * ATExecAlterColumnType changes one column once at a time.\n> * one column can only have one collation. 
so we don't need to store a\n> list of collation oid.\n> * ATExecAlterColumnType we can get the new collation (targetcollid)\n> and original collation info.\n> * RememberAllDependentForRebuilding will check the column dependent,\n> whether this column is referenced by a foreign key or not information\n> is recorded.\n> so AlteredTableInfo->changedConstraintOids have the primary key and\n> foreign key oids.\n> * ATRewriteCatalogs will call ATPostAlterTypeCleanup (see the comments\n> in ATRewriteCatalogs)\n> * for tab->changedConstraintOids (foreign key, primary key) will call\n> ATPostAlterTypeParse, so\n> for foreign key (con->contype == CONSTR_FOREIGN) will call TryReuseForeignKey.\n> * in TryReuseForeignKey, we can pass the information that our primary\n> key old collation is nondeterministic\n> and old collation != new collation to the foreign key constraint.\n> so foreign key can act accordingly at ATAddForeignKeyConstraint (old_check_ok).\n>\n>\n> based on the above logic, I add one bool in struct AlteredTableInfo,\n> one bool in struct Constraint.\n> bool in AlteredTableInfo is for storing it, later passing it to struct\n> Constraint.\n> we need bool in Constraint because ATAddForeignKeyConstraint needs it.\n\nI refactored the comments.\nalso added some extra tests hoping to make it more bullet proof, maybe\nit's redundant.", "msg_date": "Fri, 7 Jun 2024 14:39:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "jian he <[email protected]> writes:\n>> * in TryReuseForeignKey, we can pass the information that our primary\n>> key old collation is nondeterministic\n>> and old collation != new collation to the foreign key constraint.\n\nI have a basic question about this: why are we allowing FKs to be\nbased on nondeterministic collations at all? ISTM that that breaks\nthe assumption that there is exactly one referenced row for any\nreferencing row.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Jun 2024 16:12:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Sat, Jun 8, 2024 at 4:12 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> >> * in TryReuseForeignKey, we can pass the information that our primary\n> >> key old collation is nondeterministic\n> >> and old collation != new collation to the foreign key constraint.\n>\n> I have a basic question about this: why are we allowing FKs to be\n> based on nondeterministic collations at all? 
ISTM that that breaks\n> the assumption that there is exactly one referenced row for any\n> referencing row.\n>\n\nfor FKs nondeterministic,\nI think that would require the PRIMARY KEY collation to not be\nindeterministic also.\n\nfor example:\nCREATE COLLATION ignore_accent_case (provider = icu, deterministic =\nfalse, locale = 'und-u-ks-level1');\nDROP TABLE IF EXISTS fktable, pktable;\nCREATE TABLE pktable (x text COLLATE ignore_accent_case PRIMARY KEY);\nCREATE TABLE fktable (x text REFERENCES pktable on update cascade on\ndelete cascade);\nINSERT INTO pktable VALUES ('A');\nINSERT INTO fktable VALUES ('a');\nINSERT INTO fktable VALUES ('A');\nupdate pktable set x = 'Å';\ntable fktable;\n\n\n\nif FK is nondeterministic, then it looks PK more like FK.\nthe following example, one FK row is referenced by two PK rows.\n\nDROP TABLE IF EXISTS fktable, pktable;\nCREATE TABLE pktable (x text COLLATE \"C\" PRIMARY KEY);\nCREATE TABLE fktable (x text COLLATE ignore_accent_case REFERENCES\npktable on update cascade on delete cascade);\nINSERT INTO pktable VALUES ('A'), ('Å');\nINSERT INTO fktable VALUES ('A');\n\nbegin; delete from pktable where x = 'Å'; TABLE fktable; rollback;\nbegin; delete from pktable where x = 'A'; TABLE fktable; rollback;\n\n\n", "msg_date": "Sat, 8 Jun 2024 12:14:53 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On 08.06.24 06:14, jian he wrote:\n> if FK is nondeterministic, then it looks PK more like FK.\n> the following example, one FK row is referenced by two PK rows.\n> \n> DROP TABLE IF EXISTS fktable, pktable;\n> CREATE TABLE pktable (x text COLLATE \"C\" PRIMARY KEY);\n> CREATE TABLE fktable (x text COLLATE ignore_accent_case REFERENCES\n> pktable on update cascade on delete cascade);\n> INSERT INTO pktable VALUES ('A'), ('Å');\n> INSERT INTO fktable VALUES ('A');\n\nYes, this is a problem. The RI checks are done with the collation of \nthe primary key.\n\nThe comment at ri_GenerateQualCollation() says \"the SQL standard \nspecifies that RI comparisons should use the referenced column's \ncollation\". But this is not what it says in my current copy.\n\n... [ digs around ISO archives ] ...\n\nYes, this was changed in SQL:2016 to require the collation on the PK \nside and the FK side to match at constraint creation time. The argument \nmade is exactly the same we have here. 
This was actually already the \nrule in SQL:1999 but was then relaxed in SQL:2003 and then changed back \nbecause it was a mistake.\n\nWe probably don't need to enforce this for deterministic collations, \nwhich would preserve some backward compatibility.\n\nI'll think some more about what steps to take to solve this and what to \ndo about back branches etc.\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:50:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Tue, Jun 18, 2024 at 4:50 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 08.06.24 06:14, jian he wrote:\n> > if FK is nondeterministic, then it looks PK more like FK.\n> > the following example, one FK row is referenced by two PK rows.\n> >\n> > DROP TABLE IF EXISTS fktable, pktable;\n> > CREATE TABLE pktable (x text COLLATE \"C\" PRIMARY KEY);\n> > CREATE TABLE fktable (x text COLLATE ignore_accent_case REFERENCES\n> > pktable on update cascade on delete cascade);\n> > INSERT INTO pktable VALUES ('A'), ('Å');\n> > INSERT INTO fktable VALUES ('A');\n>\n> Yes, this is a problem. The RI checks are done with the collation of\n> the primary key.\n>\n> The comment at ri_GenerateQualCollation() says \"the SQL standard\n> specifies that RI comparisons should use the referenced column's\n> collation\". But this is not what it says in my current copy.\n>\n> ... [ digs around ISO archives ] ...\n>\n> Yes, this was changed in SQL:2016 to require the collation on the PK\n> side and the FK side to match at constraint creation time. The argument\n> made is exactly the same we have here. This was actually already the\n> rule in SQL:1999 but was then relaxed in SQL:2003 and then changed back\n> because it was a mistake.\n>\n> We probably don't need to enforce this for deterministic collations,\n> which would preserve some backward compatibility.\n>\n> I'll think some more about what steps to take to solve this and what to\n> do about back branches etc.\n>\n\nI have come up with 3 corner cases.\n\n---case1. not ok. PK indeterministic, FK default\nDROP TABLE IF EXISTS fktable, pktable;\nCREATE TABLE pktable (x text COLLATE ignore_accent_case PRIMARY KEY);\nCREATE TABLE fktable (x text REFERENCES pktable on update cascade on\ndelete cascade);\nINSERT INTO pktable VALUES ('A');\nINSERT INTO fktable VALUES ('a');\nINSERT INTO fktable VALUES ('A');\n\nRI_FKey_check (Check foreign key existence ) querybuf.data is\nSELECT 1 FROM ONLY \"public\".\"pktable\" x WHERE \"x\"\nOPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\nin here, fktable doesn't have collation.\nwe cannot rewrite to\nSELECT 1 FROM ONLY \"public\".\"pktable\" x WHERE \"x\"\nOPERATOR(pg_catalog.=) $1 collate \"C\" FOR KEY SHARE OF x\nso assumption (only one referenced row for any referencing row) will\nbreak when inserting values to fktable.\nRI_FKey_check already allows invalidate values to happen, not sure how\nri_GenerateQualCollation can help.\noverall i don't know how to stop invalid value while inserting value\nto fktable in this case.\n\n\n\n---case2. 
not ok case PK deterministic, FK indeterministic\nDROP TABLE IF EXISTS fktable, pktable;\nCREATE TABLE pktable (x text COLLATE \"C\" PRIMARY KEY);\nCREATE TABLE fktable (x text COLLATE ignore_accent_case REFERENCES\npktable on update cascade on delete cascade);\nINSERT INTO pktable VALUES ('A'), ('Å');\nINSERT INTO fktable VALUES ('A');\nbegin; update pktable set x = 'B' where x = 'Å'; table fktable; rollback;\nbegin; update pktable set x = 'B' where x = 'A'; table fktable; rollback;\n\nwhen cascade update fktable, in RI_FKey_cascade_upd\nwe can use pktable's collation. but again, a query updating fktable\nonly, using pktable collation seems strange.\n\n\n\n---case3. ---not ok case PK indeterministic, FK deterministic\nDROP TABLE IF EXISTS fktable, pktable;\nCREATE TABLE pktable (x text COLLATE ignore_accent_case PRIMARY KEY);\nCREATE TABLE fktable (x text collate \"C\" REFERENCES pktable on update\ncascade on delete cascade);\nINSERT INTO pktable VALUES ('A');\nINSERT INTO fktable VALUES ('A');\nINSERT INTO fktable VALUES ('a');\nbegin; update pktable set x = 'Å'; table fktable; rollback;\n\nwhen we \"INSERT INTO fktable VALUES ('a');\"\nwe can disallow this invalid query in RI_FKey_check by constructing\nthe following stop query\nSELECT 1 FROM ONLY public.pktable x WHERE x OPERATOR(pg_catalog.=) 'a'\ncollate \"C\" FOR KEY SHARE OF x;\nbut this query associated PK table with fktable collation seems weird?\n\n\n\nsummary:\ncase1 seems hard to resolve\ncase2 can be resolved in ri_GenerateQualCollation, not 100% sure.\ncase3 can be resolved in RI_FKey_check\nbecause case1 is hard to resolve;\nso overall I feel like erroring out PK indeterministic and FK\nindeterministic while creating foreign keys is easier.\n\nWe can mandate foreign keys and primary key columns be deterministic\nin ATAddForeignKeyConstraint.\nThe attached patch does that.\n\nThat means src/test/regress/sql/collate.icu.utf8.sql table test10pk,\ntable test11pk will have a big change.\nso only attach src/backend/commands/tablecmds.c changes.", "msg_date": "Tue, 16 Jul 2024 16:29:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On 07.06.24 08:39, jian he wrote:\n> On Sat, Apr 13, 2024 at 9:13 PM jian he <[email protected]> wrote:\n>>\n>>>>> Here is a patch implementing this. It was a bit more fuss than I expected, so maybe someone has a\n>>>>> better way.\n>> I think I found a simple way.\n>>\n>> the logic is:\n>> * ATExecAlterColumnType changes one column once at a time.\n>> * one column can only have one collation. 
so we don't need to store a\n>> list of collation oid.\n>> * ATExecAlterColumnType we can get the new collation (targetcollid)\n>> and original collation info.\n>> * RememberAllDependentForRebuilding will check the column dependent,\n>> whether this column is referenced by a foreign key or not information\n>> is recorded.\n>> so AlteredTableInfo->changedConstraintOids have the primary key and\n>> foreign key oids.\n>> * ATRewriteCatalogs will call ATPostAlterTypeCleanup (see the comments\n>> in ATRewriteCatalogs)\n>> * for tab->changedConstraintOids (foreign key, primary key) will call\n>> ATPostAlterTypeParse, so\n>> for foreign key (con->contype == CONSTR_FOREIGN) will call TryReuseForeignKey.\n>> * in TryReuseForeignKey, we can pass the information that our primary\n>> key old collation is nondeterministic\n>> and old collation != new collation to the foreign key constraint.\n>> so foreign key can act accordingly at ATAddForeignKeyConstraint (old_check_ok).\n>>\n>>\n>> based on the above logic, I add one bool in struct AlteredTableInfo,\n>> one bool in struct Constraint.\n>> bool in AlteredTableInfo is for storing it, later passing it to struct\n>> Constraint.\n>> we need bool in Constraint because ATAddForeignKeyConstraint needs it.\n> \n> I refactored the comments.\n> also added some extra tests hoping to make it more bullet proof, maybe\n> it's redundant.\n\nI like this patch version (v4). It's the simplest, probably also \neasiest to backpatch.\n\nIt has a flaw: It will also trigger a FK recheck if you alter the \ncollation of the referencing column (foreign key column), even though \nthat is not necessary. (Note that your tests and the examples in this \nthread only discuss altering the PK column collation, because that is \nwhat is actually used during the foreign key checks.) Maybe there is an \neasy way to avoid that, but I couldn't see one in that patch structure.\n\nMaybe that is ok as a compromise. If, in the future, we make it a \nrequirement that the collations on the PK and FK side have to be the \nsame if either collation is nondeterministic, then this case can no \nlonger happen anyway. And so building more infrastructure for this \ncheck might be wasted.\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 11:41:20 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" }, { "msg_contents": "On Tue, Sep 3, 2024 at 5:41 PM Peter Eisentraut <[email protected]> wrote:\n>\n>\n> I like this patch version (v4). It's the simplest, probably also\n> easiest to backpatch.\n>\n\n\nI am actually confused.\nIn this email thread [1], I listed 3 corn cases.\nI thought all these 3 corner cases should not be allowed.\nbut V4 didn't solve these corner case issues.\nwhat do you think of their corner case, should it be allowed?\n\n\n\nAnyway, I thought these corner cases should not be allowed to happen,\nso I made sure PK, FK ties related collation were deterministic.\nPK can have indeterministic collation as long as it does not interact with FK.\n\n\n[1] https://www.postgresql.org/message-id/CACJufxEW6OMBqt8cbr%3D3Jt%2B%2BZd_SL-4YDjfk7Q7DhGKiSLcu4g%40mail.gmail.com", "msg_date": "Wed, 4 Sep 2024 14:54:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: altering a column's collation leaves an invalid foreign key" } ]
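A practical footnote to the thread above: an existing database can be screened for foreign-key constraints that already pair a nondeterministic collation with the referencing or referenced column, which is the situation this discussion (and the SQL:2016 rule cited in it) treats as problematic. The catalog query below is only an illustrative sketch, not part of any patch in the thread; it assumes the standard pg_constraint, pg_attribute and pg_collation catalogs and treats columns with no explicit collation as deterministic.

-- Sketch: list FK constraints where either side uses a nondeterministic collation.
SELECT c.conname,
       c.conrelid::regclass  AS fk_table,
       fa.attname            AS fk_column,
       fcoll.collname        AS fk_collation,
       c.confrelid::regclass AS pk_table,
       pa.attname            AS pk_column,
       pcoll.collname        AS pk_collation
FROM pg_constraint c
CROSS JOIN LATERAL unnest(c.conkey, c.confkey) AS k(fk_attnum, pk_attnum)
JOIN pg_attribute fa ON fa.attrelid = c.conrelid  AND fa.attnum = k.fk_attnum
JOIN pg_attribute pa ON pa.attrelid = c.confrelid AND pa.attnum = k.pk_attnum
LEFT JOIN pg_collation fcoll ON fcoll.oid = fa.attcollation
LEFT JOIN pg_collation pcoll ON pcoll.oid = pa.attcollation
WHERE c.contype = 'f'
  AND (NOT COALESCE(fcoll.collisdeterministic, true)
       OR NOT COALESCE(pcoll.collisdeterministic, true));

Constraints returned by such a query are the ones that a creation-time restriction of the kind discussed above would have rejected.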
[ { "msg_contents": "I have a patch in the queue [1] that among other things tries to\nreduce the number of XIDs consumed during pg_upgrade by making\npg_restore group its commands into batches of a thousand or so\nper transaction. This had been passing tests, so I was quite\nsurprised when the cfbot started to show it as falling over.\nInvestigation showed that it is now failing because 6185c9737\nadded these objects to the regression tests and didn't drop them:\n\nCREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple');\nCREATE DOMAIN rgb AS rainbow CHECK (VALUE IN ('red', 'green', 'blue'));\n\nIn binary-upgrade mode, pg_dump dumps the enum type like this:\n\nCREATE TYPE public.rainbow AS ENUM (\n);\nALTER TYPE public.rainbow ADD VALUE 'red';\nALTER TYPE public.rainbow ADD VALUE 'orange';\nALTER TYPE public.rainbow ADD VALUE 'yellow';\nALTER TYPE public.rainbow ADD VALUE 'green';\nALTER TYPE public.rainbow ADD VALUE 'blue';\nALTER TYPE public.rainbow ADD VALUE 'purple';\n\nand then, if we're within the same transaction, creation of the domain\nfalls over with\n\nERROR: unsafe use of new value \"red\" of enum type rainbow\nHINT: New enum values must be committed before they can be used.\n\nSo I'm glad we found that sooner not later, but something needs\nto be done about it if [1] is to get committed. It doesn't seem\nparticularly hard to fix though: we just have to track the enum\ntype OIDs made in the current transaction, using largely the same\napproach as is already used in pg_enum.c to track enum value OIDs.\nenum.sql contains a comment opining that this is too expensive,\nbut I don't see why it is as long as we don't bother to track\ncommands that are nested within subtransactions. That would be a bit\ncomplicated to do correctly, but pg_dump/pg_restore doesn't need it.\n\nHence, I propose the attached.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/47/4713/", "msg_date": "Sat, 23 Mar 2024 15:00:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump versus enum types, round N+1" }, { "msg_contents": "On Sat, Mar 23, 2024 at 3:00 PM Tom Lane <[email protected]> wrote:\n\n> I have a patch in the queue [1] that among other things tries to\n> reduce the number of XIDs consumed during pg_upgrade by making\n> pg_restore group its commands into batches of a thousand or so\n> per transaction. 
This had been passing tests, so I was quite\n> surprised when the cfbot started to show it as falling over.\n> Investigation showed that it is now failing because 6185c9737\n> added these objects to the regression tests and didn't drop them:\n>\n> CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue',\n> 'purple');\n> CREATE DOMAIN rgb AS rainbow CHECK (VALUE IN ('red', 'green', 'blue'));\n>\n> In binary-upgrade mode, pg_dump dumps the enum type like this:\n>\n> CREATE TYPE public.rainbow AS ENUM (\n> );\n> ALTER TYPE public.rainbow ADD VALUE 'red';\n> ALTER TYPE public.rainbow ADD VALUE 'orange';\n> ALTER TYPE public.rainbow ADD VALUE 'yellow';\n> ALTER TYPE public.rainbow ADD VALUE 'green';\n> ALTER TYPE public.rainbow ADD VALUE 'blue';\n> ALTER TYPE public.rainbow ADD VALUE 'purple';\n>\n> and then, if we're within the same transaction, creation of the domain\n> falls over with\n>\n> ERROR: unsafe use of new value \"red\" of enum type rainbow\n> HINT: New enum values must be committed before they can be used.\n>\n> So I'm glad we found that sooner not later, but something needs\n> to be done about it if [1] is to get committed. It doesn't seem\n> particularly hard to fix though: we just have to track the enum\n> type OIDs made in the current transaction, using largely the same\n> approach as is already used in pg_enum.c to track enum value OIDs.\n> enum.sql contains a comment opining that this is too expensive,\n> but I don't see why it is as long as we don't bother to track\n> commands that are nested within subtransactions. That would be a bit\n> complicated to do correctly, but pg_dump/pg_restore doesn't need it.\n>\n> Hence, I propose the attached.\n>\n>\n>\n\n\nMakes sense, Nice clear comments.\n\ncheers\n\nandrew\n\nOn Sat, Mar 23, 2024 at 3:00 PM Tom Lane <[email protected]> wrote:I have a patch in the queue [1] that among other things tries to\nreduce the number of XIDs consumed during pg_upgrade by making\npg_restore group its commands into batches of a thousand or so\nper transaction.  This had been passing tests, so I was quite\nsurprised when the cfbot started to show it as falling over.\nInvestigation showed that it is now failing because 6185c9737\nadded these objects to the regression tests and didn't drop them:\n\nCREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple');\nCREATE DOMAIN rgb AS rainbow CHECK (VALUE IN ('red', 'green', 'blue'));\n\nIn binary-upgrade mode, pg_dump dumps the enum type like this:\n\nCREATE TYPE public.rainbow AS ENUM (\n);\nALTER TYPE public.rainbow ADD VALUE 'red';\nALTER TYPE public.rainbow ADD VALUE 'orange';\nALTER TYPE public.rainbow ADD VALUE 'yellow';\nALTER TYPE public.rainbow ADD VALUE 'green';\nALTER TYPE public.rainbow ADD VALUE 'blue';\nALTER TYPE public.rainbow ADD VALUE 'purple';\n\nand then, if we're within the same transaction, creation of the domain\nfalls over with\n\nERROR:  unsafe use of new value \"red\" of enum type rainbow\nHINT:  New enum values must be committed before they can be used.\n\nSo I'm glad we found that sooner not later, but something needs\nto be done about it if [1] is to get committed.  
It doesn't seem\nparticularly hard to fix though: we just have to track the enum\ntype OIDs made in the current transaction, using largely the same\napproach as is already used in pg_enum.c to track enum value OIDs.\nenum.sql contains a comment opining that this is too expensive,\nbut I don't see why it is as long as we don't bother to track\ncommands that are nested within subtransactions.  That would be a bit\ncomplicated to do correctly, but pg_dump/pg_restore doesn't need it.\n\nHence, I propose the attached.\n\n                Makes sense, Nice clear comments.cheersandrew", "msg_date": "Sat, 23 Mar 2024 18:10:35 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus enum types, round N+1" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On Sat, Mar 23, 2024 at 3:00 PM Tom Lane <[email protected]> wrote:\n>> So I'm glad we found that sooner not later, but something needs\n>> to be done about it if [1] is to get committed. It doesn't seem\n>> particularly hard to fix though: we just have to track the enum\n>> type OIDs made in the current transaction, using largely the same\n>> approach as is already used in pg_enum.c to track enum value OIDs.\n\n> Makes sense, Nice clear comments.\n\nThanks for looking. Pushed after a bit more work on the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 24 Mar 2024 14:32:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus enum types, round N+1" } ]
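The restriction behind the failure described above can be reproduced directly in psql, without pg_upgrade, by issuing binary-upgrade-style statements inside one transaction. The snippet below is only an illustrative sketch of the behavior discussed in the thread, with object names borrowed from the example above: on releases without the fix, the CREATE DOMAIN step fails as shown, while with the fix it succeeds because the enum type itself was created in the same transaction.

BEGIN;
CREATE TYPE rainbow AS ENUM ();
ALTER TYPE rainbow ADD VALUE 'red';
ALTER TYPE rainbow ADD VALUE 'green';
ALTER TYPE rainbow ADD VALUE 'blue';
-- Without the fix, the next statement reports:
--   ERROR:  unsafe use of new value "red" of enum type rainbow
--   HINT:  New enum values must be committed before they can be used.
CREATE DOMAIN rgb AS rainbow CHECK (VALUE IN ('red', 'green', 'blue'));
COMMIT;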
[ { "msg_contents": "Hi,\n\nI am aware of few previous attempts and discussions on this topic\n(eventually shelved or didn't materialize):\n\n- https://www.postgresql.org/message-id/[email protected]\n-\nhttps://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\n\n-\nhttps://www.postgresql.org/message-id/flat/CAD21AoBqfMVWdk7Odh4A4OpF-m5GytRjXME5E8cEGXvhSJb8zw@mail.gmail.com\n\n\nAnd still I want to revise this topic for the obvious benefits.\n\nI do not have any patch or code changes ready. The changes could be tricky\nand might need efforts, possibly some re-factoring. Hence, before starting\nthe effort, I would like to get the proposal reviewed and consensus built\nto avoid redundancy of efforts.\n\n*Why do we need it? *\n\nSince more and more large businesses are on-boarding PostgreSQL, it is only\nfair that we make the essential utilities like vacuum more manageable and\nscalable. The data sizes are definitely going to increase and maintenance\nwindows will reduce with businesses operating across the time zones and\n24x7. Making the database more manageable with the least overhead is going\nto be definitely a pressing need.\n\nTo avoid the repetition and duplicate efforts, I have picked up the snippet\nbelow from the previous email conversation on the community (Ref:\nhttps://www.postgresql.org/message-id/[email protected])\n\n<Quote>\n*For a large table, although it can be vacuumed by enabling vacuum\ncost-based delay, the processing may last for several days (maybe hours).\nIt definitely has a negative effect on system performance. So if systems\nwhich have maintenance time, it is preferred to vacuum in the maintenance\nwindow. Vacuum tasks can be split into small subtasks, and they can be\nscheduled into maintenance window time slots. This can reduce the impact of\nvacuum to system service.*\n\n*But currently vacuum tasks can not be split: if an interrupt or error\noccurs during vacuum processing, vacuum totally forgets what it has done\nand terminates itself. Following vacuum on the same table has to scan from\nthe beginning of the heap block. This proposal enable vacuum has capability\nto stop and resume.*\n</Quote>\n\n*External Interface*\n\nThis feature is especially useful when the size of table/s is quite large\nand their bloat is quite high and it is expected vacuum runs will take long\ntime.\n\nRef: https://www.postgresql.org/docs/current/sql-vacuum.html\nvacuum [ ( *option* [, ...], *[{ for time = hh:mm}| {resume [for time =\nhh:mm]}] *) ] [ *table_and_columns* [, ...] ]\n\nThe additional options give flexibility to run the vacuum for a limited\ntime and stop or resume the vacuum from the last time when it was stopped\nfor a given time.\n\nWhen vacuum is invoked with ‘for time ...’ option it will store the\nintermediate state of the dead tuples accumulated periodically on the disk\nas it progresses. 
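For concreteness, invocations under this proposed grammar might look like the sketch below. None of these options exist in PostgreSQL today; the spelling simply follows the syntax sketch above, and the table name is arbitrary.

-- Hypothetical usage of the proposed options (not implemented):
VACUUM (VERBOSE, FOR TIME = '02:00') big_table;        -- run for at most two hours, saving progress to disk
VACUUM (VERBOSE, RESUME FOR TIME = '02:00') big_table; -- next maintenance window: continue from the saved state
VACUUM (VERBOSE, RESUME) big_table;                    -- resume and run to completion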
It will run for a specified time and stop after that\nduration.\n\nWhen vacuum is invoked with ‘for time ...’ option and is stopped\nautomatically after the specified time or interrupted manually and if it is\ninvoked next time with ‘resume’ option, it will try to check the stored\nstate of the last run and try to start as closely as possible from where it\nleft last time and avoid repetition of work.\n\nWhen resumed, it can either run for a specified time again (if the duration\nis specified) or run till completion if the duration is not specified.\n\nWhen vacuum is invoked with ‘resume for’ option when there was no earlier\nincomplete run or an earlier run with ‘for time’ option, the ‘for resume’\noption will be ignored with a message in the errorlog.\n\nWhen vacuum is invoked without ‘for time’ or ‘resume for’ options after\npreceding incomplete runs with those options , then the persisted data from\nthe previous runs is discarded and deleted. This is important because\nsuccessive runs with ‘for time’ or ‘resume for’ assume the persisted data\nis valid and there’s no run in between to invalidate it and the state of\nheap pages in terms of vacuum is the same for the given saved vacuum\nhorizon.\n\nIn further discussion in the rest of this proposal, we will refer to vacuum\ninvoked with ‘for time’ or ‘resume for’ option as listed above as\n‘resumable vacuum’.\n\nInternal Changes (High level)For each table, vacuum progresses in the\nfollowing steps or phases (taken from the documentation)\nhttps://www.postgresql.org/docs/current/progress-reporting.html#VACUUM-PHASES\n<https://www.postgresql.org/docs/current/progress-reporting.html> :\n\n1. Initializing - VACUUM is preparing to begin scanning the heap. This\nphase is expected to be very brief.\n\n2. Scanning heap - VACUUM is currently scanning the heap. It will prune and\ndefragment each page if required, and possibly perform freezing activity.\nThe heap_blks_scanned column can be used to monitor the progress of the\nscan.\n\n3. Vacuuming Indexes - VACUUM is currently vacuuming the indexes. If a\ntable has any indexes, this will happen at least once per vacuum, after the\nheap has been completely scanned. It may happen multiple times per vacuum\nif maintenance_work_mem\n<https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM>\n(or,\nin the case of autovacuum, autovacuum_work_mem\n<https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM>\nif\nset) is insufficient to store the number of dead tuples found.\n\n4. Vacuuming Heap - VACUUM is currently vacuuming the heap. Vacuuming the\nheap is distinct from scanning the heap, and occurs after each instance of\nvacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the\nsystem will return to scanning the heap after this phase is completed;\notherwise, it will begin cleaning up indexes after this phase is completed.\n\n5. Cleaning up indexes - VACUUM is currently vacuuming the heap. Vacuuming\nthe heap is distinct from scanning the heap, and occurs after each instance\nof vacuuming indexes. If heap_blks_scanned is less than heap_blks_total,\nthe system will return to scanning the heap after this phase is completed;\notherwise, it will begin cleaning up indexes after this phase is completed.\n\n6. Truncating heap - VACUUM is currently truncating the heap so as to\nreturn empty pages at the end of the relation to the operating system. This\noccurs after cleaning up indexes.\n\n7. 
Performing final cleanup - VACUUM is performing final cleanup. During\nthis phase, VACUUM will vacuum the free space map, update statistics in\npg_class, and report statistics to the cumulative statistics system. When\nthis phase is completed, VACUUM will end.\n\nThe resumable vacuum will store the following information during its run on\nthe disk:\n\n 1. Database Name/Id, Table Name/Oid\n 2. Phase of its run (from the ones listed in the table above)\n 3. Vacuum horizon (created during ‘initializing’ phase)\n 4. Array of dead tuple-ids accumulated in the ‘heap scan’ phase.\n 5. Number of times the dead tuple ids array was spilled over due to\n memory limitation.\n\nOut of the above information, #2 and #3 will be updated as it progresses.\n\nWhen it is resumed, vacuum will first check the persisted information from\nthe last run. It will\n\n 1. Retrieve all the stored information on the disk from the last run\n (listed above)\n 2. Check what phase the vacuum was during the last run when it stopped\n 3. Based on the phase, it will adapt the further actions based on that.\n 1. If the previous run was stopped in the initialization phase, then\n the new run will start from scratch.\n 2. If the previous run was stopped during the heap scan phase, it\n will use the persisted array of dead tids and it will start the scan from\n the last accumulated dead tuple.\n 3. If the previous run was stopped during the ‘vacuuming index’\n phase, it will start this phase all over, but from the beginning of the\n stored dead tuple ids.\n 4. If the previous run was stopped during the ’vacuuming heap’ phase,\n and without previous spillovers, it will continue vacuuming heap for the\n rest of the dead tuple array and proceed for further phases. If\nit was with\n previous spillovers, then it will continue vacuuming the heap\nfor the rest\n of the dead tuple array and go again for phase 2 if the heap\nscan is still\n incomplete and follow the loop which is already there.\n 5. If the previous run was stopped during any of the remaining\n phases, it will just complete the remaining work and exit.\n\nIndexes can change across the runs. Please note that in this proposal any\nre-run above does not depend on the last state of the indices. Any actions\nin this whole proposal does not depend on the last state of indices nor\ndoes it store it.\n\nThis approach does not add any overhead in the DML code path. The changes\nare limited only to vacuum operation and just enough to make it resumable.\nThere are no drastic changes to the regular flow.\n\nThis approach doesn’t change the core functions apart from conditionally\npersisting the vacuum progress information to the disk. Thus, any future\nenhancements to the core functions can be easily accommodated.\n\nPlease let me know or comment on this so that we can conclude if this does\nlook like a reasonable enhancement.\n\nHi,I am aware of few previous attempts and discussions on this topic (eventually shelved or didn't materialize): - https://www.postgresql.org/message-id/[email protected] - https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com - https://www.postgresql.org/message-id/flat/CAD21AoBqfMVWdk7Odh4A4OpF-m5GytRjXME5E8cEGXvhSJb8zw@mail.gmail.com And still I want to revise this topic for the obvious benefits. I do not have any patch or code changes ready. The changes could be tricky and might need efforts, possibly some re-factoring. 
Hence, before starting the effort, I would like to get the proposal reviewed and consensus built to avoid redundancy of efforts. Why do we need it? Since more and more large businesses are on-boarding PostgreSQL, it is only fair that we make the essential utilities like vacuum more manageable and scalable. The data sizes are definitely going to increase and maintenance windows will reduce with businesses operating across the time zones and 24x7. Making the database more manageable with the least overhead is going to be definitely a pressing need. To avoid the repetition and duplicate efforts, I have picked up the snippet below from the previous email conversation on the community (Ref: https://www.postgresql.org/message-id/[email protected])<Quote>For a large table, although it can be vacuumed by enabling vacuum cost-based delay, the processing may last for several days (maybe hours). It definitely has a negative effect on system performance. So if systems which have maintenance time, it is preferred to vacuum in the maintenance window. Vacuum tasks can be split into small subtasks, and they can be scheduled into maintenance window time slots. This can reduce the impact of vacuum to system service.But currently vacuum tasks can not be split: if an interrupt or error occurs during vacuum processing, vacuum totally forgets what it has done and terminates itself. Following vacuum on the same table has to scan from the beginning of the heap block. This proposal enable vacuum has capability to stop and resume.</Quote> External InterfaceThis feature is especially useful when the size of table/s is quite large and their bloat is quite high and it is expected vacuum runs will take long time.Ref: https://www.postgresql.org/docs/current/sql-vacuum.htmlvacuum [ ( option [, ...], [{ for time = hh:mm}| {resume [for time = hh:mm]}] ) ] [ table_and_columns [, ...] ]The additional options give flexibility to run the vacuum for a limited time and stop or resume the vacuum from the last time when it was stopped for a given time.When vacuum is invoked with ‘for time ...’ option it will store the intermediate state of the dead tuples accumulated periodically on the disk as it progresses. It will run for a specified time and stop after that duration. When vacuum is invoked with ‘for time ...’ option and is stopped automatically after the specified time or interrupted manually and if it is invoked next time with ‘resume’ option, it will try to check the stored state of the last run and try to start as closely as possible from where it left last time and avoid repetition of work. When resumed, it can either run for a specified time again (if the duration is specified) or run till completion if the duration is not specified.When vacuum is invoked with ‘resume for’ option when there was no earlier incomplete run or an earlier run with ‘for time’ option, the ‘for resume’ option will be ignored with a message in the errorlog. When vacuum is invoked without ‘for time’ or ‘resume for’ options after preceding incomplete runs with those options , then the persisted data from the previous runs is discarded and deleted. This is important because successive runs with ‘for time’ or ‘resume for’ assume the persisted data is valid and there’s no run in between to invalidate it and the state of heap pages in terms of vacuum is the same for the given saved vacuum horizon. 
In further discussion in the rest of this proposal, we will refer to vacuum invoked with ‘for time’ or ‘resume for’ option as listed above as ‘resumable vacuum’.Internal Changes (High level)For each table, vacuum progresses in the following steps or phases (taken from the documentation)https://www.postgresql.org/docs/current/progress-reporting.html#VACUUM-PHASES :1. Initializing -   VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief.2. Scanning heap - VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing activity. The heap_blks_scanned column can be used to monitor the progress of the scan.3. Vacuuming Indexes - VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if maintenance_work_mem (or, in the case of autovacuum, autovacuum_work_mem if set) is insufficient to store the number of dead tuples found.4. Vacuuming Heap - VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed. 5. Cleaning up indexes - VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed. 6. Truncating heap - VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes.7. Performing final cleanup - VACUUM is performing final cleanup. During this phase, VACUUM will vacuum the free space map, update statistics in pg_class, and report statistics to the cumulative statistics system. When this phase is completed, VACUUM will end.The resumable vacuum will store the following information during its run on the disk: Database Name/Id, Table Name/OidPhase of its run (from the ones listed in the table above)Vacuum horizon (created during ‘initializing’ phase)Array of dead tuple-ids accumulated in the ‘heap scan’ phase. Number of times the dead tuple ids array was spilled over due to memory limitation.Out of the above information, #2 and #3 will be updated as it progresses. When it is resumed, vacuum will first check the persisted information from the last run. It will Retrieve all the stored information on the disk from the last run (listed above)Check what phase the vacuum was during the last run when it stopped Based on the phase, it will adapt the further actions based on that.If the previous run was stopped in the initialization phase, then the new run will start from scratch. If the previous run was stopped during the heap scan phase, it will use the persisted array of dead tids and it will start the scan from the last accumulated dead tuple. If the previous run was stopped during the ‘vacuuming index’ phase, it will start this phase all over, but from the beginning of the stored dead tuple ids. 
If the previous run was stopped during the ’vacuuming heap’ phase, and without previous spillovers, it will continue vacuuming heap for the rest of the dead tuple array and proceed for further phases. If it was with previous spillovers, then it will continue vacuuming the heap for the rest of the dead tuple array and go again for phase 2 if the heap scan is still incomplete and follow the loop which is already there. If the previous run was stopped during any of the remaining phases, it will just complete the remaining work and exit. Indexes can change across the runs. Please note that in this proposal any re-run above does not depend on the last state of the indices. Any actions in this whole proposal does not depend on the last state of indices nor does it store it. This approach does not add any overhead in the DML code path. The changes are limited only to vacuum operation and just enough to make it resumable. There are no drastic changes to the regular flow.This approach doesn’t change the core functions apart from conditionally persisting the vacuum progress information to the disk. Thus, any future enhancements to the core functions can be easily accommodated. Please let me know or comment on this so that we can conclude if this does look like a reasonable enhancement.", "msg_date": "Sun, 24 Mar 2024 09:57:15 +0530", "msg_from": "Jay <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for Resumable Vacuum (again ...)" }, { "msg_contents": "Hi All,\n\nA revised proposal with few minor corrections (thanks to\[email protected] for pointing the error):\n\n<Start>\n\nI am aware of few previous attempts and discussions on this topic\n(eventually shelved or didn't materialize):\n\n- https://www.postgresql.org/message-id/[email protected]\n-\nhttps://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\n\n-\nhttps://www.postgresql.org/message-id/flat/CAD21AoBqfMVWdk7Odh4A4OpF-m5GytRjXME5E8cEGXvhSJb8zw@mail.gmail.com\n\n\nAnd still I want to revise this topic for the obvious benefits.\n\nI do not have any patch or code changes ready. The changes could be tricky\nand might need efforts, possibly some re-factoring. Hence, before starting\nthe effort, I would like to get the proposal reviewed and consensus built\nto avoid redundancy of efforts.\n\n*Why do we need it? *\n\nSince more and more large businesses are on-boarding PostgreSQL, it is only\nfair that we make the essential utilities like vacuum more manageable and\nscalable. The data sizes are definitely going to increase and maintenance\nwindows will reduce with businesses operating across the time zones and\n24x7. Making the database more manageable with the least overhead is going\nto be definitely a pressing need.\n\nTo avoid the repetition and duplicate efforts, I have picked up the snippet\nbelow from the previous email conversation on the community (Ref:\nhttps://www.postgresql.org/message-id/[email protected])\n\n<Quote>\n*For a large table, although it can be vacuumed by enabling vacuum\ncost-based delay, the processing may last for several days (maybe hours).\nIt definitely has a negative effect on system performance. So if systems\nwhich have maintenance time, it is preferred to vacuum in the maintenance\nwindow. Vacuum tasks can be split into small subtasks, and they can be\nscheduled into maintenance window time slots. 
This can reduce the impact of\nvacuum to system service.*\n\n*But currently vacuum tasks can not be split: if an interrupt or error\noccurs during vacuum processing, vacuum totally forgets what it has done\nand terminates itself. Following vacuum on the same table has to scan from\nthe beginning of the heap block. This proposal enable vacuum has capability\nto stop and resume.*\n</Quote>\n\n*External Interface*\n\nThis feature is especially useful when the size of table/s is quite large\nand their bloat is quite high and it is expected vacuum runs will take long\ntime.\n\nRef: https://www.postgresql.org/docs/current/sql-vacuum.html\nvacuum [ ( *option* [, ...], *[{ for time = hh:mm}| {resume [for time =\nhh:mm]}] *) ] [ *table_and_columns* [, ...] ]\n\nThe additional options give flexibility to run the vacuum for a limited\ntime and stop or resume the vacuum from the last time when it was stopped\nfor a given time.\n\nWhen vacuum is invoked with ‘for time ...’ option it will store the\nintermediate state of the dead tuples accumulated periodically on the disk\nas it progresses. It will run for a specified time and stop after that\nduration.\n\nWhen vacuum is invoked with ‘for time ...’ option and is stopped\nautomatically after the specified time or interrupted manually and if it is\ninvoked next time with ‘resume’ option, it will try to check the stored\nstate of the last run and try to start as closely as possible from where it\nleft last time and avoid repetition of work.\n\nWhen resumed, it can either run for a specified time again (if the duration\nis specified) or run till completion if the duration is not specified.\n\nWhen vacuum is invoked with ‘resume for’ option when there was no earlier\nincomplete run or an earlier run with ‘for time’ option, the ‘for resume’\noption will be ignored with a message in the errorlog.\n\nWhen vacuum is invoked without ‘for time’ or ‘resume for’ options after\npreceding incomplete runs with those options , then the persisted data from\nthe previous runs is discarded and deleted. This is important because\nsuccessive runs with ‘for time’ or ‘resume for’ assume the persisted data\nis valid and there’s no run in between to invalidate it and the state of\nheap pages in terms of vacuum is the same for the given saved vacuum\nhorizon.\n\nIn further discussion in the rest of this proposal, we will refer to vacuum\ninvoked with ‘for time’ or ‘resume for’ option as listed above as\n‘resumable vacuum’.\n\nInternal Changes (High level)For each table, vacuum progresses in the\nfollowing steps or phases (taken from the documentation)\nhttps://www.postgresql.org/docs/current/progress-reporting.html#VACUUM-PHASES\n<https://www.postgresql.org/docs/current/progress-reporting.html> :\n\n1. Initializing - VACUUM is preparing to begin scanning the heap. This\nphase is expected to be very brief.\n\n2. Scanning heap - VACUUM is currently scanning the heap. It will prune and\ndefragment each page if required, and possibly perform freezing activity.\nThe heap_blks_scanned column can be used to monitor the progress of the\nscan.\n\n3. Vacuuming Indexes - VACUUM is currently vacuuming the indexes. If a\ntable has any indexes, this will happen at least once per vacuum, after the\nheap has been completely scanned. 
It may happen multiple times per vacuum\nif maintenance_work_mem\n<https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM>\n(or,\nin the case of autovacuum, autovacuum_work_mem\n<https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM>\nif\nset) is insufficient to store the number of dead tuples found.\n\n4. Vacuuming Heap - VACUUM is currently vacuuming the heap. Vacuuming the\nheap is distinct from scanning the heap, and occurs after each instance of\nvacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the\nsystem will return to scanning the heap after this phase is completed;\notherwise, it will begin cleaning up indexes after this phase is completed.\n\n5. Cleaning up indexes - VACUUM is currently cleaning up indexes. This\noccurs after the heap has been completely scanned and all vacuuming of the\nindexes and the heap has been completed.\n\n6. Truncating heap - VACUUM is currently truncating the heap so as to\nreturn empty pages at the end of the relation to the operating system. This\noccurs after cleaning up indexes.\n\n7. Performing final cleanup - VACUUM is performing final cleanup. During\nthis phase, VACUUM will vacuum the free space map, update statistics in\npg_class, and report statistics to the cumulative statistics system. When\nthis phase is completed, VACUUM will end.\n\nThe resumable vacuum will store the following information during its run on\nthe disk:\n\n 1. Database Name/Id, Table Name/Oid\n 2. Vacuum horizon (created during ‘initializing’ phase)\n 3. Phase of its run (from the ones listed in the table above)\n 4. Array of dead tuple ids accumulated in ‘heap scan’ phase.\n 5. In case the phase is #4 or later above, then the progress of vacuum\n in dead tuple id array (the index of tuple id up to which the processing is\n done, which can be refreshed with some frequency e.g. per heap page)\n 6. Number of times the dead tuple ids array was spilled over due to\n memory limitation.\n\nOut of the above information, #3 to #6 will be updated as it progresses.\n\nWhen it is resumed, vacuum will first check the persisted information from\nthe last run. It will\n\n 1. Retrieve all the stored information on the disk from the last run\n (listed above)\n 2. Check what phase the vacuum was during the last run when it stopped\n 3. Based on the phase, it will adapt the further actions based on that.\n 1. If the previous run was stopped in the initialization phase, then\n the new run will start from scratch.\n 2. If the previous run was stopped during the heap scan phase, it\n will use the persisted array of dead tids and it will start the scan from\n the last accumulated dead tuple.\n 3. If the previous run was stopped during the ‘vacuuming index’\n phase, it will start this phase all over, but from the beginning of the\n stored dead tuple ids in the array.\n 4. If the previous run was stopped during the ’vacuuming heap’ phase,\n and without previous spillovers, it will continue vacuuming heap for the\n rest of the dead tuple array and proceed for further phases. If\nit was with\n previous spillovers, then it will continue vacuuming the heap\nfor the rest\n of the dead tuple array and go again for phase 2 if the heap\nscan is still\n incomplete and follow the loop which is already there.\n 5. If the previous run was stopped during any of the remaining\n phases, it will just complete the remaining work and exit.\n\nHighlights of the proposal\n\n - Indexes can change across the runs. 
Please note that in this proposal\n any re-run above does not depend on the last state of the indices. Any\n actions in this whole proposal does not depend on the last state of indices\n nor does it store it.\n - This approach does not add any overhead in the DML code path. The\n changes are limited only to vacuum operation and just enough to make it\n resumable. There are no drastic changes to the regular flow.\n - This approach doesn’t change the core functions apart from\n conditionally persisting the vacuum progress information to the disk. Thus,\n any future enhancements to the core functions can be easily accommodated.\n\n\nPlease let me know or comment on this so that we can conclude if this does\nlook like a reasonable enhancement.\n\n<End>\n\nOn Sun, Mar 24, 2024 at 9:57 AM Jay <[email protected]> wrote:\n\n> Hi,\n>\n> I am aware of few previous attempts and discussions on this topic\n> (eventually shelved or didn't materialize):\n>\n> - https://www.postgresql.org/message-id/[email protected]\n> -\n> https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\n>\n> -\n> https://www.postgresql.org/message-id/flat/CAD21AoBqfMVWdk7Odh4A4OpF-m5GytRjXME5E8cEGXvhSJb8zw@mail.gmail.com\n>\n>\n> And still I want to revise this topic for the obvious benefits.\n>\n> I do not have any patch or code changes ready. The changes could be tricky\n> and might need efforts, possibly some re-factoring. Hence, before starting\n> the effort, I would like to get the proposal reviewed and consensus built\n> to avoid redundancy of efforts.\n>\n> *Why do we need it? *\n>\n> Since more and more large businesses are on-boarding PostgreSQL, it is\n> only fair that we make the essential utilities like vacuum more manageable\n> and scalable. The data sizes are definitely going to increase and\n> maintenance windows will reduce with businesses operating across the time\n> zones and 24x7. Making the database more manageable with the least overhead\n> is going to be definitely a pressing need.\n>\n> To avoid the repetition and duplicate efforts, I have picked up the\n> snippet below from the previous email conversation on the community (Ref:\n> https://www.postgresql.org/message-id/[email protected])\n>\n> <Quote>\n> *For a large table, although it can be vacuumed by enabling vacuum\n> cost-based delay, the processing may last for several days (maybe hours).\n> It definitely has a negative effect on system performance. So if systems\n> which have maintenance time, it is preferred to vacuum in the maintenance\n> window. Vacuum tasks can be split into small subtasks, and they can be\n> scheduled into maintenance window time slots. This can reduce the impact of\n> vacuum to system service.*\n>\n> *But currently vacuum tasks can not be split: if an interrupt or error\n> occurs during vacuum processing, vacuum totally forgets what it has done\n> and terminates itself. Following vacuum on the same table has to scan from\n> the beginning of the heap block. This proposal enable vacuum has capability\n> to stop and resume.*\n> </Quote>\n>\n> *External Interface*\n>\n> This feature is especially useful when the size of table/s is quite large\n> and their bloat is quite high and it is expected vacuum runs will take long\n> time.\n>\n> Ref: https://www.postgresql.org/docs/current/sql-vacuum.html\n> vacuum [ ( *option* [, ...], *[{ for time = hh:mm}| {resume [for time =\n> hh:mm]}] *) ] [ *table_and_columns* [, ...] 
]\n>\n> The additional options give flexibility to run the vacuum for a limited\n> time and stop or resume the vacuum from the last time when it was stopped\n> for a given time.\n>\n> When vacuum is invoked with ‘for time ...’ option it will store the\n> intermediate state of the dead tuples accumulated periodically on the disk\n> as it progresses. It will run for a specified time and stop after that\n> duration.\n>\n> When vacuum is invoked with ‘for time ...’ option and is stopped\n> automatically after the specified time or interrupted manually and if it is\n> invoked next time with ‘resume’ option, it will try to check the stored\n> state of the last run and try to start as closely as possible from where it\n> left last time and avoid repetition of work.\n>\n> When resumed, it can either run for a specified time again (if the\n> duration is specified) or run till completion if the duration is not\n> specified.\n>\n> When vacuum is invoked with ‘resume for’ option when there was no earlier\n> incomplete run or an earlier run with ‘for time’ option, the ‘for resume’\n> option will be ignored with a message in the errorlog.\n>\n> When vacuum is invoked without ‘for time’ or ‘resume for’ options after\n> preceding incomplete runs with those options , then the persisted data from\n> the previous runs is discarded and deleted. This is important because\n> successive runs with ‘for time’ or ‘resume for’ assume the persisted data\n> is valid and there’s no run in between to invalidate it and the state of\n> heap pages in terms of vacuum is the same for the given saved vacuum\n> horizon.\n>\n> In further discussion in the rest of this proposal, we will refer to\n> vacuum invoked with ‘for time’ or ‘resume for’ option as listed above as\n> ‘resumable vacuum’.\n>\n> Internal Changes (High level)For each table, vacuum progresses in the\n> following steps or phases (taken from the documentation)\n>\n> https://www.postgresql.org/docs/current/progress-reporting.html#VACUUM-PHASES\n> <https://www.postgresql.org/docs/current/progress-reporting.html> :\n>\n> 1. Initializing - VACUUM is preparing to begin scanning the heap. This\n> phase is expected to be very brief.\n>\n> 2. Scanning heap - VACUUM is currently scanning the heap. It will prune\n> and defragment each page if required, and possibly perform freezing\n> activity. The heap_blks_scanned column can be used to monitor the\n> progress of the scan.\n>\n> 3. Vacuuming Indexes - VACUUM is currently vacuuming the indexes. If a\n> table has any indexes, this will happen at least once per vacuum, after the\n> heap has been completely scanned. It may happen multiple times per vacuum\n> if maintenance_work_mem\n> <https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM> (or,\n> in the case of autovacuum, autovacuum_work_mem\n> <https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM> if\n> set) is insufficient to store the number of dead tuples found.\n>\n> 4. Vacuuming Heap - VACUUM is currently vacuuming the heap. Vacuuming the\n> heap is distinct from scanning the heap, and occurs after each instance of\n> vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the\n> system will return to scanning the heap after this phase is completed;\n> otherwise, it will begin cleaning up indexes after this phase is completed.\n>\n> 5. 
Cleaning up indexes - VACUUM is currently vacuuming the heap.\n> Vacuuming the heap is distinct from scanning the heap, and occurs after\n> each instance of vacuuming indexes. If heap_blks_scanned is less than\n> heap_blks_total, the system will return to scanning the heap after this\n> phase is completed; otherwise, it will begin cleaning up indexes after this\n> phase is completed.\n>\n> 6. Truncating heap - VACUUM is currently truncating the heap so as to\n> return empty pages at the end of the relation to the operating system. This\n> occurs after cleaning up indexes.\n>\n> 7. Performing final cleanup - VACUUM is performing final cleanup. During\n> this phase, VACUUM will vacuum the free space map, update statistics in\n> pg_class, and report statistics to the cumulative statistics system. When\n> this phase is completed, VACUUM will end.\n>\n> The resumable vacuum will store the following information during its run\n> on the disk:\n>\n> 1. Database Name/Id, Table Name/Oid\n> 2. Phase of its run (from the ones listed in the table above)\n> 3. Vacuum horizon (created during ‘initializing’ phase)\n> 4. Array of dead tuple-ids accumulated in the ‘heap scan’ phase.\n> 5. Number of times the dead tuple ids array was spilled over due to\n> memory limitation.\n>\n> Out of the above information, #2 and #3 will be updated as it progresses.\n>\n> When it is resumed, vacuum will first check the persisted information from\n> the last run. It will\n>\n> 1. Retrieve all the stored information on the disk from the last run\n> (listed above)\n> 2. Check what phase the vacuum was during the last run when it stopped\n> 3. Based on the phase, it will adapt the further actions based on that.\n> 1. If the previous run was stopped in the initialization phase,\n> then the new run will start from scratch.\n> 2. If the previous run was stopped during the heap scan phase, it\n> will use the persisted array of dead tids and it will start the scan from\n> the last accumulated dead tuple.\n> 3. If the previous run was stopped during the ‘vacuuming index’\n> phase, it will start this phase all over, but from the beginning of the\n> stored dead tuple ids.\n> 4. If the previous run was stopped during the ’vacuuming heap’\n> phase, and without previous spillovers, it will continue vacuuming heap for\n> the rest of the dead tuple array and proceed for further phases. If it was\n> with previous spillovers, then it will continue vacuuming the heap for the\n> rest of the dead tuple array and go again for phase 2 if the heap scan is\n> still incomplete and follow the loop which is already there.\n> 5. If the previous run was stopped during any of the remaining\n> phases, it will just complete the remaining work and exit.\n>\n> Indexes can change across the runs. Please note that in this proposal any\n> re-run above does not depend on the last state of the indices. Any actions\n> in this whole proposal does not depend on the last state of indices nor\n> does it store it.\n>\n> This approach does not add any overhead in the DML code path. The changes\n> are limited only to vacuum operation and just enough to make it resumable.\n> There are no drastic changes to the regular flow.\n>\n> This approach doesn’t change the core functions apart from conditionally\n> persisting the vacuum progress information to the disk. 
Thus, any future\n> enhancements to the core functions can be easily accommodated.\n>\n> Please let me know or comment on this so that we can conclude if this does\n> look like a reasonable enhancement.\n>\n>\n", "msg_date": "Mon, 25 Mar 2024 14:34:41 +0530", "msg_from": "Jay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for Resumable Vacuum (again ...)" } ]
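For illustration, the external interface proposed in the thread above could be exercised along these lines (this is the proposal's suggested grammar, not syntax that current PostgreSQL accepts, and the table name and the exact form of the duration literal are assumptions):

-- First maintenance window: vacuum for at most two hours, persisting the
-- accumulated dead-tuple state on disk when the time is up.
VACUUM (VERBOSE, FOR TIME = '02:00') big_table;

-- Next window: resume from the saved state and run for one more hour.
VACUUM (RESUME FOR TIME = '01:00') big_table;

-- Resume and run to completion (no time limit given). Per the proposal, a
-- later plain VACUUM of the same table would discard any persisted state
-- left behind by earlier partial runs.
VACUUM (RESUME) big_table;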
[ { "msg_contents": "Cast_jsonb_to_hstore WIP\nv1\nThis extension add function that can cast jsonb to hstore.\nThat link to my github where does my extension lie\nhttps://github.com/antuanviolin/cast_jsonb_to_hstore\n\nCast_jsonb_to_hstore WIPv1This extension add function that can cast jsonb to hstore.That link to my github where does my extension lie https://github.com/antuanviolin/cast_jsonb_to_hstore", "msg_date": "Sun, 24 Mar 2024 19:48:38 +0700", "msg_from": "ShadowGhost <[email protected]>", "msg_from_op": true, "msg_subject": "Extension for PostgreSQL WIP" }, { "msg_contents": "On Sun, Mar 24, 2024 at 5:49 AM ShadowGhost <[email protected]>\nwrote:\n\n> Cast_jsonb_to_hstore WIP\n> v1\n> This extension add function that can cast jsonb to hstore.\n> That link to my github where does my extension lie\n> https://github.com/antuanviolin/cast_jsonb_to_hstore\n>\n\nIf you are intending to submit this to the project you need to follow the\ncorrect procedures.\n\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nOtherwise, this is not the correct place to be promoting your extension.\n\nDavid J.\n\np.s. I would advise that including the whole bit about jsonb and hstore in\nthe email subject line (if you resubmit in a proper format) instead of only\nin the body of the email. Subject lines are very important on a mailing\nlist such as this and should be fairly precise - not just the word\nextension.\n\nOn Sun, Mar 24, 2024 at 5:49 AM ShadowGhost <[email protected]> wrote:Cast_jsonb_to_hstore WIPv1This extension add function that can cast jsonb to hstore.That link to my github where does my extension lie https://github.com/antuanviolin/cast_jsonb_to_hstoreIf you are intending to submit this to the project you need to follow the correct procedures.https://wiki.postgresql.org/wiki/Submitting_a_PatchOtherwise, this is not the correct place to be promoting your extension.David J.p.s. I would advise that including the whole bit about jsonb and hstore in the email subject line (if you resubmit in a proper format) instead of only in the body of the email.  Subject lines are very important on a mailing list such as this and should be fairly precise - not just the word extension.", "msg_date": "Sun, 24 Mar 2024 06:48:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension for PostgreSQL WIP" } ]
[ { "msg_contents": "Track last_inactive_time in pg_replication_slots.\n\nThis commit adds a new property called last_inactive_time for slots. It is\nset to 0 whenever a slot is made active/acquired and set to the current\ntimestamp whenever the slot is inactive/released or restored from the disk.\nNote that we don't set the last_inactive_time for the slots currently being\nsynced from the primary to the standby because such slots are typically\ninactive as decoding is not allowed on those.\n\nThe 'last_inactive_time' will be useful on production servers to debug and\nanalyze inactive replication slots. It will also help to know the lifetime\nof a replication slot - one can know how long a streaming standby, logical\nsubscriber, or replication slot consumer is down.\n\nThe 'last_inactive_time' will also be useful to implement inactive\ntimeout-based replication slot invalidation in a future commit.\n\nAuthor: Bharath Rupireddy\nReviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik\nDiscussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/a11f330b5584f2430371d68871e00f5c63735299\n\nModified Files\n--------------\ndoc/src/sgml/system-views.sgml | 10 ++\nsrc/backend/catalog/system_views.sql | 1 +\nsrc/backend/replication/slot.c | 35 +++++++\nsrc/backend/replication/slotfuncs.c | 7 +-\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 6 +-\nsrc/include/replication/slot.h | 3 +\nsrc/test/recovery/t/019_replslot_limit.pl | 152 ++++++++++++++++++++++++++++++\nsrc/test/regress/expected/rules.out | 3 +-\n9 files changed, 213 insertions(+), 6 deletions(-)", "msg_date": "Mon, 25 Mar 2024 11:11:27 +0000", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 7:11 AM Amit Kapila <[email protected]> wrote:\n> Track last_inactive_time in pg_replication_slots.\n>\n> This commit adds a new property called last_inactive_time for slots. It is\n> set to 0 whenever a slot is made active/acquired and set to the current\n> timestamp whenever the slot is inactive/released or restored from the disk.\n> Note that we don't set the last_inactive_time for the slots currently being\n> synced from the primary to the standby because such slots are typically\n> inactive as decoding is not allowed on those.\n\nSo the field name is last_inactive_time, but if I'm reading this\ndescription right, it's actually the last time the slot was active,\nexcept for the weird exception for slots being synced. I'm wondering\nif this field needs to be renamed.\n\nAnd I'm suspicious that having an exception for slots being synced is\na bad idea. That makes too much of a judgement about how the user will\nuse this field. It's usually better to just expose the data, and if\nthe user needs helps to make sense of that data, then give them that\nhelp separately. In this case, that would mean removing the exception,\nbut making it easy to tell the difference between slots are inactive\nbecause they're being synced and slots that are inactive for some\nother reason.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:26:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." 
}, { "msg_contents": "On Mon, Mar 25, 2024 at 6:57 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 7:11 AM Amit Kapila <[email protected]> wrote:\n> > Track last_inactive_time in pg_replication_slots.\n> >\n> > This commit adds a new property called last_inactive_time for slots. It is\n> > set to 0 whenever a slot is made active/acquired and set to the current\n> > timestamp whenever the slot is inactive/released or restored from the disk.\n> > Note that we don't set the last_inactive_time for the slots currently being\n> > synced from the primary to the standby because such slots are typically\n> > inactive as decoding is not allowed on those.\n>\n> So the field name is last_inactive_time, but if I'm reading this\n> description right, it's actually the last time the slot was active,\n> except for the weird exception for slots being synced. I'm wondering\n> if this field needs to be renamed.\n>\n\nWe considered the other two names as last_inactive_at and\nlast_active_time. For the first (last_inactive_at), there was an\nargument that most other fields that display time ends with _time. For\nthe second (last_active_time), there was an argument that it could be\nmisleading as one could think that it should be updated each time WAL\nrecord decoding is happening [1]. The other possibility is to name it\nlast_used_time but I think it won't be much different from\nlast_active_time.\n\n> And I'm suspicious that having an exception for slots being synced is\n> a bad idea. That makes too much of a judgement about how the user will\n> use this field. It's usually better to just expose the data, and if\n> the user needs helps to make sense of that data, then give them that\n> help separately.\n\nThe reason we didn't set this for sync slots is that they won't be\nusable (one can't use them to decode WAL) unless standby is promoted\n[2]. But I see your point as well. So, I have copied the others\ninvolved in this discussion to see what they think.\n\n>\n> In this case, that would mean removing the exception,\n> but making it easy to tell the difference between slots are inactive\n> because they're being synced and slots that are inactive for some\n> other reason.\n>\n\nI think this can be differentiated with the help of 'synced' column in\npg_replication_slots.\n\n[1] - https://www.postgresql.org/message-id/Zf1yx9QMbhgJ/Lfy%40ip-10-97-1-34.eu-west-3.compute.internal\n[2] - https://www.postgresql.org/message-id/CAJpy0uBGv85dFiWMnNLm6NuEs3eTVicsJCyRvMGbR8H%2BfOVBnA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 19:32:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 10:02 AM Amit Kapila <[email protected]> wrote:\n> We considered the other two names as last_inactive_at and\n> last_active_time. For the first (last_inactive_at), there was an\n> argument that most other fields that display time ends with _time. For\n> the second (last_active_time), there was an argument that it could be\n> misleading as one could think that it should be updated each time WAL\n> record decoding is happening [1]. The other possibility is to name it\n> last_used_time but I think it won't be much different from\n> last_active_time.\n\nI don't understand the bit about updating it each time WAL record\ndecoding is happening. 
If it's the last active time, and the slot is\ncurrently active, then the answer is either \"right now\" or \"currently\nundefined.\" I'd expect to see NULL in the system view in such a case.\nAnd if that's so, then there's nothing to update each time a record is\ndecoded, because it's just still going to show NULL.\n\nWhy does this field get set to the current time when the slot is\nrestored from disk?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:20:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 7:51 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 10:02 AM Amit Kapila <[email protected]> wrote:\n> > We considered the other two names as last_inactive_at and\n> > last_active_time. For the first (last_inactive_at), there was an\n> > argument that most other fields that display time ends with _time. For\n> > the second (last_active_time), there was an argument that it could be\n> > misleading as one could think that it should be updated each time WAL\n> > record decoding is happening [1]. The other possibility is to name it\n> > last_used_time but I think it won't be much different from\n> > last_active_time.\n>\n> I don't understand the bit about updating it each time WAL record\n> decoding is happening. If it's the last active time, and the slot is\n> currently active, then the answer is either \"right now\" or \"currently\n> undefined.\" I'd expect to see NULL in the system view in such a case.\n> And if that's so, then there's nothing to update each time a record is\n> decoded, because it's just still going to show NULL.\n>\n\nIIUC, Bertrand's point was that users can interpret last_active_time\nas a value that gets updated each time they decode a change which is\nnot what we are doing. So, this can confuse users. Your expectation of\nanswer (NULL) when the slot is active is correct and that is what will\nhappen.\n\n> Why does this field get set to the current time when the slot is\n> restored from disk?\n>\n\nIt is because we don't want to include the time the server is down in\nthe last_inactive_time. Say, if we are shutting down the server at\ntime X and the server remains down for another two hours, we don't\nwant to include those two hours as the slot inactive time. The related\ntheory is that this field will be used to invalidate inactive slots\nbased on a threshold (say inactive_timeout). Say, before the shutdown,\nwe release the slot and set the current_time for last_inactive_time\nfor each slot and persist that information as well. Now, if the server\nis down for a long time, we may invalidate the slots as soon as the\nserver comes up. So, instead, we just set this field at the time we\nread slots for disk and then reset it to 0/NULL as soon as the slot\nbecame active.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 20:38:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 08:38:16PM +0530, Amit Kapila wrote:\n> On Mon, Mar 25, 2024 at 7:51 PM Robert Haas <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 10:02 AM Amit Kapila <[email protected]> wrote:\n> > > We considered the other two names as last_inactive_at and\n> > > last_active_time. 
For the first (last_inactive_at), there was an\n> > > argument that most other fields that display time ends with _time. For\n> > > the second (last_active_time), there was an argument that it could be\n> > > misleading as one could think that it should be updated each time WAL\n> > > record decoding is happening [1]. The other possibility is to name it\n> > > last_used_time but I think it won't be much different from\n> > > last_active_time.\n> >\n> > I don't understand the bit about updating it each time WAL record\n> > decoding is happening. If it's the last active time, and the slot is\n> > currently active, then the answer is either \"right now\" or \"currently\n> > undefined.\" I'd expect to see NULL in the system view in such a case.\n> > And if that's so, then there's nothing to update each time a record is\n> > decoded, because it's just still going to show NULL.\n> >\n> \n> IIUC, Bertrand's point was that users can interpret last_active_time\n> as a value that gets updated each time they decode a change which is\n> not what we are doing. So, this can confuse users. Your expectation of\n> answer (NULL) when the slot is active is correct and that is what will\n> happen.\n\nYeah, and so would be the confusion: why is last_active_time NULL while one is\nusing the slot?\n\n> > Why does this field get set to the current time when the slot is\n> > restored from disk?\n> >\n> \n> It is because we don't want to include the time the server is down in\n> the last_inactive_time. Say, if we are shutting down the server at\n> time X and the server remains down for another two hours, we don't\n> want to include those two hours as the slot inactive time. The related\n> theory is that this field will be used to invalidate inactive slots\n> based on a threshold (say inactive_timeout). Say, before the shutdown,\n> we release the slot and set the current_time for last_inactive_time\n> for each slot and persist that information as well. Now, if the server\n> is down for a long time, we may invalidate the slots as soon as the\n> server comes up. So, instead, we just set this field at the time we\n> read slots for disk and then reset it to 0/NULL as soon as the slot\n> became active.\n\nRight, and we also want to invalidate the slot if not used duration > timeout,\nso that setting the field to zero when the slot is restored from disk is also not\nan option.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:16:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 8:46 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 08:38:16PM +0530, Amit Kapila wrote:\n> > On Mon, Mar 25, 2024 at 7:51 PM Robert Haas <[email protected]> wrote:\n> > >\n> > > On Mon, Mar 25, 2024 at 10:02 AM Amit Kapila <[email protected]> wrote:\n> > > > We considered the other two names as last_inactive_at and\n> > > > last_active_time. For the first (last_inactive_at), there was an\n> > > > argument that most other fields that display time ends with _time. For\n> > > > the second (last_active_time), there was an argument that it could be\n> > > > misleading as one could think that it should be updated each time WAL\n> > > > record decoding is happening [1]. 
The other possibility is to name it\n> > > > last_used_time but I think it won't be much different from\n> > > > last_active_time.\n> > >\n> > > I don't understand the bit about updating it each time WAL record\n> > > decoding is happening. If it's the last active time, and the slot is\n> > > currently active, then the answer is either \"right now\" or \"currently\n> > > undefined.\" I'd expect to see NULL in the system view in such a case.\n> > > And if that's so, then there's nothing to update each time a record is\n> > > decoded, because it's just still going to show NULL.\n> > >\n> >\n> > IIUC, Bertrand's point was that users can interpret last_active_time\n> > as a value that gets updated each time they decode a change which is\n> > not what we are doing. So, this can confuse users. Your expectation of\n> > answer (NULL) when the slot is active is correct and that is what will\n> > happen.\n>\n> Yeah, and so would be the confusion: why is last_active_time NULL while one is\n> using the slot?\n>\n\nIt is because we set it to zero when we acquire the slot and that\nvalue will remain the same till the slot is active. I am not sure if I\nunderstood your question so what I am saying might not make sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 20:59:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 08:59:55PM +0530, Amit Kapila wrote:\n> On Mon, Mar 25, 2024 at 8:46 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 08:38:16PM +0530, Amit Kapila wrote:\n> > > On Mon, Mar 25, 2024 at 7:51 PM Robert Haas <[email protected]> wrote:\n> > > >\n> > > > On Mon, Mar 25, 2024 at 10:02 AM Amit Kapila <[email protected]> wrote:\n> > > > > We considered the other two names as last_inactive_at and\n> > > > > last_active_time. For the first (last_inactive_at), there was an\n> > > > > argument that most other fields that display time ends with _time. For\n> > > > > the second (last_active_time), there was an argument that it could be\n> > > > > misleading as one could think that it should be updated each time WAL\n> > > > > record decoding is happening [1]. The other possibility is to name it\n> > > > > last_used_time but I think it won't be much different from\n> > > > > last_active_time.\n> > > >\n> > > > I don't understand the bit about updating it each time WAL record\n> > > > decoding is happening. If it's the last active time, and the slot is\n> > > > currently active, then the answer is either \"right now\" or \"currently\n> > > > undefined.\" I'd expect to see NULL in the system view in such a case.\n> > > > And if that's so, then there's nothing to update each time a record is\n> > > > decoded, because it's just still going to show NULL.\n> > > >\n> > >\n> > > IIUC, Bertrand's point was that users can interpret last_active_time\n> > > as a value that gets updated each time they decode a change which is\n> > > not what we are doing. So, this can confuse users. Your expectation of\n> > > answer (NULL) when the slot is active is correct and that is what will\n> > > happen.\n> >\n> > Yeah, and so would be the confusion: why is last_active_time NULL while one is\n> > using the slot?\n> >\n> \n> It is because we set it to zero when we acquire the slot and that\n> value will remain the same till the slot is active. 
I am not sure if I\n> understood your question so what I am saying might not make sense.\n\nThere is no \"real\" question, I was just highlighting the confusion in case we\nname the field \"last_active_time\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:37:37 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:16 AM Bertrand Drouvot\n<[email protected]> wrote:\n> > IIUC, Bertrand's point was that users can interpret last_active_time\n> > as a value that gets updated each time they decode a change which is\n> > not what we are doing. So, this can confuse users. Your expectation of\n> > answer (NULL) when the slot is active is correct and that is what will\n> > happen.\n>\n> Yeah, and so would be the confusion: why is last_active_time NULL while one is\n> using the slot?\n\nI agree that users could get confused here, but the solution to that\nshouldn't be to give the field a name that is the opposite of what it\nactually does. I expect a field called last_inactive_time to tell me\nthe last time that the slot was inactive. Here, it tells me the last\ntime that a currently-inactive slot previously *WAS* active. How can\nyou justify calling that the last *INACTIVE* time?\n\nAFAICS, the user who has the confusion that you mention here is simply\nwrong. If they are looking at a field called \"last active time\" and\nthe slot is active, then the correct answer is \"right now\" or\n\"undefined\" and that is what they will see. Sure, they might not\nunderstand that. But flipping the name of the field on its head cannot\nbe the right way to help them.\n\nWith the current naming, I expect to have the exact opposite confusion\nas your hypothetical confused user. I'm going to be looking at a slot\nthat's currently inactive, and it's going to tell me that the\nlast_inactive_time was at some time in the past. And I'm going to say\n\"what the heck is going on here, the slot is inactive *right now*!\"\n\nHalf of me wonders whether we should avoid this whole problem by\nrenaming it to something like last_state_change or\nlast_state_change_time, or maybe just state_change like we do in\npg_stat_activity, and making it mean the last time the slot flipped\nbetween active and inactive in either direction. I'm not sure if this\nis better, but unless I'm misunderstanding something, the current\nsituation is terrible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:49:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 11:49:00AM -0400, Robert Haas wrote:\n> On Mon, Mar 25, 2024 at 11:16 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > > IIUC, Bertrand's point was that users can interpret last_active_time\n> > > as a value that gets updated each time they decode a change which is\n> > > not what we are doing. So, this can confuse users. 
Your expectation of\n> > > answer (NULL) when the slot is active is correct and that is what will\n> > > happen.\n> >\n> > Yeah, and so would be the confusion: why is last_active_time NULL while one is\n> > using the slot?\n> \n> I agree that users could get confused here, but the solution to that\n> shouldn't be to give the field a name that is the opposite of what it\n> actually does. I expect a field called last_inactive_time to tell me\n> the last time that the slot was inactive. Here, it tells me the last\n> time that a currently-inactive slot previously *WAS* active. How can\n> you justify calling that the last *INACTIVE* time?\n> \n> AFAICS, the user who has the confusion that you mention here is simply\n> wrong. If they are looking at a field called \"last active time\" and\n> the slot is active, then the correct answer is \"right now\" or\n> \"undefined\" and that is what they will see. Sure, they might not\n> understand that. But flipping the name of the field on its head cannot\n> be the right way to help them.\n> \n> With the current naming, I expect to have the exact opposite confusion\n> as your hypothetical confused user. I'm going to be looking at a slot\n> that's currently inactive, and it's going to tell me that the\n> last_inactive_time was at some time in the past. And I'm going to say\n> \"what the heck is going on here, the slot is inactive *right now*!\"\n> \n> Half of me wonders whether we should avoid this whole problem by\n> renaming it to something like last_state_change or\n> last_state_change_time, or maybe just state_change like we do in\n> pg_stat_activity, and making it mean the last time the slot flipped\n> between active and inactive in either direction. I'm not sure if this\n> is better, but unless I'm misunderstanding something, the current\n> situation is terrible.\n> \n\nNow that I read your arguments I think that last_<active|inactive>_time could be\nboth missleading because at the end they rely on users \"expectation\".\n\nWould \"released_time\" sounds better? (at the end this is exactly what it does \nrepresent unless for the case where it is restored from disk for which the meaning\nwould still makes sense to me though). It seems to me that released_time does not\nlead to any expectation then removing any confusion.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:12:29 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 07:32:11PM +0530, Amit Kapila wrote:\n> On Mon, Mar 25, 2024 at 6:57 PM Robert Haas <[email protected]> wrote:\n> > And I'm suspicious that having an exception for slots being synced is\n> > a bad idea. That makes too much of a judgement about how the user will\n> > use this field. It's usually better to just expose the data, and if\n> > the user needs helps to make sense of that data, then give them that\n> > help separately.\n> \n> The reason we didn't set this for sync slots is that they won't be\n> usable (one can't use them to decode WAL) unless standby is promoted\n> [2]. But I see your point as well. So, I have copied the others\n> involved in this discussion to see what they think.\n\nYeah I also see Robert's point. 
If we also sync the \"last inactive time\" field then\nwe would need to take care of the corner case mentioned by Shveta in [1] during\npromotion.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uCLu%2BmqAwAMum%3DpXE9YYsy0BE7hOSw_Wno5vjwpFY%3D63g%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:24:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 12:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n> Now that I read your arguments I think that last_<active|inactive>_time could be\n> both missleading because at the end they rely on users \"expectation\".\n\nWell, the user is always going to expect *something* -- that's just\nhow language works.\n\n> Would \"released_time\" sounds better? (at the end this is exactly what it does\n> represent unless for the case where it is restored from disk for which the meaning\n> would still makes sense to me though). It seems to me that released_time does not\n> lead to any expectation then removing any confusion.\n\nYeah, that's not bad. I mean, I don't agree that released_time doesn't\nlead to any expectation, but what it leads me to expect is that you're\ngoing to tell me the time at which the slot was released. So if it's\ncurrently active, then I see NULL, because it's not released; but if\nit's inactive, then I see the time at which it became so.\n\nIn the same vein, I think deactivated_at or inactive_since might be\ngood names to consider. I think they get at the same thing as\nreleased_time, but they avoid introducing a completely new word\n(release, as opposed to active/inactive).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 12:25:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 12:25:37PM -0400, Robert Haas wrote:\n> On Mon, Mar 25, 2024 at 12:12 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Would \"released_time\" sounds better? (at the end this is exactly what it does\n> > represent unless for the case where it is restored from disk for which the meaning\n> > would still makes sense to me though). It seems to me that released_time does not\n> > lead to any expectation then removing any confusion.\n> \n> Yeah, that's not bad. I mean, I don't agree that released_time doesn't\n> lead to any expectation,\n> but what it leads me to expect is that you're\n> going to tell me the time at which the slot was released. So if it's\n> currently active, then I see NULL, because it's not released; but if\n> it's inactive, then I see the time at which it became so.\n> \n> In the same vein, I think deactivated_at or inactive_since might be\n> good names to consider. 
I think they get at the same thing as\n> released_time, but they avoid introducing a completely new word\n> (release, as opposed to active/inactive).\n> \n\nYeah, I'd vote for inactive_since then.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:49:12 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 04:49:12PM +0000, Bertrand Drouvot wrote:\n> On Mon, Mar 25, 2024 at 12:25:37PM -0400, Robert Haas wrote:\n>> In the same vein, I think deactivated_at or inactive_since might be\n>> good names to consider. I think they get at the same thing as\n>> released_time, but they avoid introducing a completely new word\n>> (release, as opposed to active/inactive).\n> \n> Yeah, I'd vote for inactive_since then.\n\nHaving only skimmed some of the related discussions, I'm inclined to agree\nthat inactive_since provides the clearest description for the column.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:00:26 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:30 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 04:49:12PM +0000, Bertrand Drouvot wrote:\n> > On Mon, Mar 25, 2024 at 12:25:37PM -0400, Robert Haas wrote:\n> >> In the same vein, I think deactivated_at or inactive_since might be\n> >> good names to consider. I think they get at the same thing as\n> >> released_time, but they avoid introducing a completely new word\n> >> (release, as opposed to active/inactive).\n> >\n> > Yeah, I'd vote for inactive_since then.\n>\n> Having only skimmed some of the related discussions, I'm inclined to agree\n> that inactive_since provides the clearest description for the column.\n\nI think we all have some agreement on inactive_since. So, I'm\nattaching the patch for that change.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 01:50:42 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 9:55 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 12:12 PM Bertrand Drouvot\n> <[email protected]> wrote:\n>\n> > Would \"released_time\" sounds better? (at the end this is exactly what it does\n> > represent unless for the case where it is restored from disk for which the meaning\n> > would still makes sense to me though). It seems to me that released_time does not\n> > lead to any expectation then removing any confusion.\n>\n> Yeah, that's not bad. I mean, I don't agree that released_time doesn't\n> lead to any expectation, but what it leads me to expect is that you're\n> going to tell me the time at which the slot was released. So if it's\n> currently active, then I see NULL, because it's not released; but if\n> it's inactive, then I see the time at which it became so.\n>\n> In the same vein, I think deactivated_at or inactive_since might be\n> good names to consider. 
I think they get at the same thing as\n> released_time, but they avoid introducing a completely new word\n> (release, as opposed to active/inactive).\n>\n\nWe have a consensus on inactive_since, so I'll make that change. I\nwould also like to solicit your opinion on the other slot-level\nparameter we are planning to introduce. This new slot-level parameter\nwill be named as inactive_timeout. This will indicate that once the\nslot is inactive for the inactive_timeout period, we will invalidate\nthe slot. We are also discussing to have this parameter\n(inactive_timeout) as GUC [1]. We can have this new parameter both at\nthe slot level and as well as a GUC, or just one of those.\n\n[1] - https://www.postgresql.org/message-id/20240325195443.GA2923888%40nathanxps13\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 06:11:47 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Mon, Mar 25, 2024 at 9:54 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Mar 25, 2024 at 07:32:11PM +0530, Amit Kapila wrote:\n> > On Mon, Mar 25, 2024 at 6:57 PM Robert Haas <[email protected]> wrote:\n> > > And I'm suspicious that having an exception for slots being synced is\n> > > a bad idea. That makes too much of a judgement about how the user will\n> > > use this field. It's usually better to just expose the data, and if\n> > > the user needs helps to make sense of that data, then give them that\n> > > help separately.\n> >\n> > The reason we didn't set this for sync slots is that they won't be\n> > usable (one can't use them to decode WAL) unless standby is promoted\n> > [2]. But I see your point as well. So, I have copied the others\n> > involved in this discussion to see what they think.\n>\n> Yeah I also see Robert's point. If we also sync the \"last inactive time\" field then\n> we would need to take care of the corner case mentioned by Shveta in [1] during\n> promotion.\n\nI have suggested one potential solution for that in [1]. Please have a look.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uB-yE%2BRiw7JQ4hW0%2BigJxvPc%2Brq%2B9c7WyTa1Jz7%2B2gAiA%40mail.gmail.com\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:37:48 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:50 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 1:30 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 04:49:12PM +0000, Bertrand Drouvot wrote:\n> > > On Mon, Mar 25, 2024 at 12:25:37PM -0400, Robert Haas wrote:\n> > >> In the same vein, I think deactivated_at or inactive_since might be\n> > >> good names to consider. I think they get at the same thing as\n> > >> released_time, but they avoid introducing a completely new word\n> > >> (release, as opposed to active/inactive).\n> > >\n> > > Yeah, I'd vote for inactive_since then.\n> >\n> > Having only skimmed some of the related discussions, I'm inclined to agree\n> > that inactive_since provides the clearest description for the column.\n>\n> I think we all have some agreement on inactive_since. 
So, I'm\n> attaching the patch for that change.\n\npg_proc.dat needs to be changed to refer to 'inactive_since' instead\nof 'last_inactive_time' in the attached patch.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:55:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 9:38 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 9:54 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Mon, Mar 25, 2024 at 07:32:11PM +0530, Amit Kapila wrote:\n> > > On Mon, Mar 25, 2024 at 6:57 PM Robert Haas <[email protected]> wrote:\n> > > > And I'm suspicious that having an exception for slots being synced is\n> > > > a bad idea. That makes too much of a judgement about how the user will\n> > > > use this field. It's usually better to just expose the data, and if\n> > > > the user needs helps to make sense of that data, then give them that\n> > > > help separately.\n> > >\n> > > The reason we didn't set this for sync slots is that they won't be\n> > > usable (one can't use them to decode WAL) unless standby is promoted\n> > > [2]. But I see your point as well. So, I have copied the others\n> > > involved in this discussion to see what they think.\n> >\n> > Yeah I also see Robert's point. If we also sync the \"last inactive time\" field then\n> > we would need to take care of the corner case mentioned by Shveta in [1] during\n> > promotion.\n>\n> I have suggested one potential solution for that in [1]. Please have a look.\n>\n> [1]: https://www.postgresql.org/message-id/CAJpy0uB-yE%2BRiw7JQ4hW0%2BigJxvPc%2Brq%2B9c7WyTa1Jz7%2B2gAiA%40mail.gmail.com\n\nI posted the v21 patch implementing the above idea in the other thread\n- https://www.postgresql.org/message-id/CALj2ACXRFx9g7A9RFJZF7eBe%3Dzxk7%3DapMRFuCgJJKYB7O%3Dvgwg%40mail.gmail.com.\nFor ease, I'm also attaching the patch in here.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 11:13:55 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On 2024-Mar-26, Amit Kapila wrote:\n\n> We have a consensus on inactive_since, so I'll make that change.\n\nSounds reasonable. So this is a timestamptz if the slot is inactive,\nNULL if active, right? What value is it going to have for sync slots?\n\n> I would also like to solicit your opinion on the other slot-level\n> parameter we are planning to introduce. This new slot-level parameter\n> will be named as inactive_timeout.\n\nMaybe inactivity_timeout?\n\n> This will indicate that once the slot is inactive for the\n> inactive_timeout period, we will invalidate the slot. We are also\n> discussing to have this parameter (inactive_timeout) as GUC [1]. We\n> can have this new parameter both at the slot level and as well as a\n> GUC, or just one of those.\n\nreplication_slot_inactivity_timeout?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... 
Fixed.\n http://postgr.es/m/[email protected]\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:39:52 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:09 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-26, Amit Kapila wrote:\n>\n> > We have a consensus on inactive_since, so I'll make that change.\n>\n> Sounds reasonable. So this is a timestamptz if the slot is inactive,\n> NULL if active, right?\n>\n\nYes.\n\n> What value is it going to have for sync slots?\n>\n\nThe behavior will be the same for non-sync slots. In each sync cycle,\nwe acquire/release the sync slots. So at the time of release,\ninactive_since will be updated. See email [1].\n\n> > I would also like to solicit your opinion on the other slot-level\n> > parameter we are planning to introduce. This new slot-level parameter\n> > will be named as inactive_timeout.\n>\n> Maybe inactivity_timeout?\n>\n> > This will indicate that once the slot is inactive for the\n> > inactive_timeout period, we will invalidate the slot. We are also\n> > discussing to have this parameter (inactive_timeout) as GUC [1]. We\n> > can have this new parameter both at the slot level and as well as a\n> > GUC, or just one of those.\n>\n> replication_slot_inactivity_timeout?\n>\n\nSo, it seems you are okay to have this parameter both at slot level\nand as a GUC. About names, let us see what others think.\n\nThanks for the input on the names.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KrPGwfZV9LYGidjxHeW%2BrxJ%3DE2ThjXvwRGLO%3DiLNuo%3DQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 13:45:23 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On 2024-Mar-26, Amit Kapila wrote:\n\n> On Tue, Mar 26, 2024 at 1:09 PM Alvaro Herrera <[email protected]> wrote:\n> > On 2024-Mar-26, Amit Kapila wrote:\n> > > I would also like to solicit your opinion on the other slot-level\n> > > parameter we are planning to introduce. This new slot-level parameter\n> > > will be named as inactive_timeout.\n> >\n> > Maybe inactivity_timeout?\n> >\n> > > This will indicate that once the slot is inactive for the\n> > > inactive_timeout period, we will invalidate the slot. We are also\n> > > discussing to have this parameter (inactive_timeout) as GUC [1]. We\n> > > can have this new parameter both at the slot level and as well as a\n> > > GUC, or just one of those.\n> >\n> > replication_slot_inactivity_timeout?\n> \n> So, it seems you are okay to have this parameter both at slot level\n> and as a GUC.\n\nWell, I think a GUC is good to have regardless of the slot parameter,\nbecause the GUC can be used as an instance-wide protection against going\nout of disk space because of broken replication. However, now that I\nthink about it, I'm not really sure about invalidating a slot based on\ntime rather on disk space, for which we already have a parameter; what's\nyour rationale for that? 
The passage of time is not a very good\nmeasure, really, because the amount of WAL being protected has wildly\nvarying production rate across time.\n\nI can only see a timeout being useful as a parameter if its default\nvalue is not the special disable value; say, the default timeout is 3\ndays (to be more precise -- the period from Friday to Monday, that is,\nbetween DBA leaving the office one week until discovering a problem when\nhe returns early next week). This way we have a built-in mechanism that\ninvalidates slots regardless of how big the WAL partition is.\n\n\nI'm less sure about the slot parameter; in what situation do you need to\nextend the life of one individual slot further than the life of all the\nother slots? (Of course, it makes no sense to set the per-slot param to\na shorter period than the GUC: invalidating one slot ahead of the others\nis completely pointless.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:41:05 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 01:45:23PM +0530, Amit Kapila wrote:\n> On Tue, Mar 26, 2024 at 1:09 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2024-Mar-26, Amit Kapila wrote:\n> >\n> > > We have a consensus on inactive_since, so I'll make that change.\n> >\n> > Sounds reasonable. So this is a timestamptz if the slot is inactive,\n> > NULL if active, right?\n> >\n> \n> Yes.\n> \n> > What value is it going to have for sync slots?\n> >\n> \n> The behavior will be the same for non-sync slots. In each sync cycle,\n> we acquire/release the sync slots. So at the time of release,\n> inactive_since will be updated. See email [1].\n\nI don't think we should set inactive_since to the current time at each sync cycle,\nsee [1] as to why. What do you think?\n\n[1]: https://www.postgresql.org/message-id/ZgKGIDC5lttWTdJH%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:04:33 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 2:11 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-26, Amit Kapila wrote:\n>\n> > On Tue, Mar 26, 2024 at 1:09 PM Alvaro Herrera <[email protected]> wrote:\n> > > On 2024-Mar-26, Amit Kapila wrote:\n> > > > I would also like to solicit your opinion on the other slot-level\n> > > > parameter we are planning to introduce. This new slot-level parameter\n> > > > will be named as inactive_timeout.\n> > >\n> > > Maybe inactivity_timeout?\n> > >\n> > > > This will indicate that once the slot is inactive for the\n> > > > inactive_timeout period, we will invalidate the slot. We are also\n> > > > discussing to have this parameter (inactive_timeout) as GUC [1]. 
We\n> > > > can have this new parameter both at the slot level and as well as a\n> > > > GUC, or just one of those.\n> > >\n> > > replication_slot_inactivity_timeout?\n> >\n> > So, it seems you are okay to have this parameter both at slot level\n> > and as a GUC.\n>\n> Well, I think a GUC is good to have regardless of the slot parameter,\n> because the GUC can be used as an instance-wide protection against going\n> out of disk space because of broken replication. However, now that I\n> think about it, I'm not really sure about invalidating a slot based on\n> time rather on disk space, for which we already have a parameter; what's\n> your rationale for that? The passage of time is not a very good\n> measure, really, because the amount of WAL being protected has wildly\n> varying production rate across time.\n>\n\nThe inactive slot not only blocks WAL from being removed but prevents\nthe vacuum from proceeding. Also, there is a risk of transaction Id\nwraparound. See email [1] for more context.\n\n> I can only see a timeout being useful as a parameter if its default\n> value is not the special disable value; say, the default timeout is 3\n> days (to be more precise -- the period from Friday to Monday, that is,\n> between DBA leaving the office one week until discovering a problem when\n> he returns early next week). This way we have a built-in mechanism that\n> invalidates slots regardless of how big the WAL partition is.\n>\n\nWe can have a default value for this parameter but it has the\npotential to break the replication, so not sure what could be a good\ndefault value.\n\n>\n> I'm less sure about the slot parameter; in what situation do you need to\n> extend the life of one individual slot further than the life of all the\n> other slots?\n\nI was thinking of an idle slot scenario where a slot from one\nparticular subscriber (or output plugin) is inactive due to some\nmaintenance activity. But it should be okay to have a GUC for this for\nnow.\n\n[1] - https://www.postgresql.org/message-id/20240325195443.GA2923888%40nathanxps13\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 15:44:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 03:44:29PM +0530, Amit Kapila wrote:\n> On Tue, Mar 26, 2024 at 2:11 PM Alvaro Herrera <[email protected]> wrote:\n>> Well, I think a GUC is good to have regardless of the slot parameter,\n>> because the GUC can be used as an instance-wide protection against going\n>> out of disk space because of broken replication. However, now that I\n>> think about it, I'm not really sure about invalidating a slot based on\n>> time rather on disk space, for which we already have a parameter; what's\n>> your rationale for that? The passage of time is not a very good\n>> measure, really, because the amount of WAL being protected has wildly\n>> varying production rate across time.\n> \n> The inactive slot not only blocks WAL from being removed but prevents\n> the vacuum from proceeding. Also, there is a risk of transaction Id\n> wraparound. See email [1] for more context.\n\nFWIW I'd really prefer to have something like max_slot_xid_age for this. A\ntime-based parameter would likely help with most cases, but transaction ID\nusage will vary widely from server to server, so it'd be nice to have\nsomething to protect against wraparound more directly. 
I don't object to a\ntime-based setting as well, but that won't always work as well for this\nparticular use-case, especially if we are relying on users to set a\nslot-level parameter.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 10:09:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On 2024-Mar-26, Nathan Bossart wrote:\n\n> FWIW I'd really prefer to have something like max_slot_xid_age for this. A\n> time-based parameter would likely help with most cases, but transaction ID\n> usage will vary widely from server to server, so it'd be nice to have\n> something to protect against wraparound more directly.\n\nYeah, I tend to agree that an XID-based limit makes more sense than a\ntime-based one.\n\n> I don't object to a\n> time-based setting as well, but that won't always work as well for this\n> particular use-case, especially if we are relying on users to set a\n> slot-level parameter.\n\nI think slot-level parameters are mostly useless, because it takes just\none slot where you forget to set it for disaster to strike.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:39:55 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 04:39:55PM +0100, Alvaro Herrera wrote:\n> On 2024-Mar-26, Nathan Bossart wrote:\n> > I don't object to a\n> > time-based setting as well, but that won't always work as well for this\n> > particular use-case, especially if we are relying on users to set a\n> > slot-level parameter.\n> \n> I think slot-level parameters are mostly useless, because it takes just\n> one slot where you forget to set it for disaster to strike.\n\nI think that's a fair point. So maybe we should focus on having a GUC first and\nlater on re-think about having (or not) a slot based one (in addition to the GUC).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:13:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Tue, Mar 26, 2024 at 9:10 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-26, Nathan Bossart wrote:\n>\n> > FWIW I'd really prefer to have something like max_slot_xid_age for this. A\n> > time-based parameter would likely help with most cases, but transaction ID\n> > usage will vary widely from server to server, so it'd be nice to have\n> > something to protect against wraparound more directly.\n>\n> Yeah, I tend to agree that an XID-based limit makes more sense than a\n> time-based one.\n>\n\nSo, do we want the time-based parameter or just max_slot_xid_age\nconsidering both will be GUC's? 
Yes, the xid_age based parameter\nsounds to be directly helpful for transaction ID wraparound or dead\nrow cleanup, OTOH having a lot of inactive slots (which is possible in\nuse cases where a tool creates a lot of slots and forgets to remove\nthem, or the tool exits without cleaning up slots (say due to server\nshutdown)) also prohibit removing dead space which is not nice either?\n\nThe one example that comes to mind is the pg_createsubscriber\n(committed for PG17) which creates one slot per database to convert\nstandby to subscriber, now say it exits due to power shutdown then\nthere could be a lot of dangling slots on the primary server. Also,\nsay there is some bug in such a tool that doesn't allow proper cleanup\nof slots, the same problem can happen; yeah, this would be a problem\nof the tool but I think there is no harm in giving a way to avoid\nproblems at the server end due to such slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:10:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:10 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 9:10 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2024-Mar-26, Nathan Bossart wrote:\n> >\n> > > FWIW I'd really prefer to have something like max_slot_xid_age for this. A\n> > > time-based parameter would likely help with most cases, but transaction ID\n> > > usage will vary widely from server to server, so it'd be nice to have\n> > > something to protect against wraparound more directly.\n> >\n> > Yeah, I tend to agree that an XID-based limit makes more sense than a\n> > time-based one.\n> >\n> So, do we want the time-based parameter or just max_slot_xid_age\n> considering both will be GUC's? Yes, the xid_age based parameter\n> sounds to be directly helpful for transaction ID wraparound or dead\n> row cleanup, OTOH having a lot of inactive slots (which is possible in\n> use cases where a tool creates a lot of slots and forgets to remove\n> them, or the tool exits without cleaning up slots (say due to server\n> shutdown)) also prohibit removing dead space which is not nice either?\n\nI've personally seen the leftover slots problem on production systems\nwhere a timeout based invalidation mechanism could have been of more\nhelp to save time and reduce manual intervention. Usually, most if not\nall, migration/upgrade/other tools create slots, and if at all any\nerrors occur or the operation gets cancelled midway, there's a chance\nthat the slots can be leftover if such tools forgets to clean them up\neither because there was a bug or for whatever reasons. These are\nunintended/ghost slots for the database user unnecessarily holding up\nresources such as XIDs, dead rows and WAL; which might lead to XID\nwraparound or server crash if unnoticed. 
Although XID based GUC helps\na lot, why do these unintended and unnoticed slots need to hold up the\nresources even before the XID age of say 1.5 or 2 billion is reached.\n\nWith both GUCs max_slot_xid_age and replication_slot_inactive_timeout\nin place, I can set max_slot_xid_age = 1.5 billion or so and also set\nreplication_slot_inactive_timeout = 2 days or so to make the database\nfoolproof.\n\n> The one example that comes to mind is the pg_createsubscriber\n> (committed for PG17) which creates one slot per database to convert\n> standby to subscriber, now say it exits due to power shutdown then\n> there could be a lot of dangling slots on the primary server. Also,\n> say there is some bug in such a tool that doesn't allow proper cleanup\n> of slots, the same problem can happen; yeah, this would be a problem\n> of the tool but I think there is no harm in giving a way to avoid\n> problems at the server end due to such slots.\n\nRight. I can personally connect to this problem of leftover slots\nwhere manual intervention was needed to drop all such slots which is\ntime-consuming and painful sometimes.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 12:28:06 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 12:28:06PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 27, 2024 at 10:10 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Mar 26, 2024 at 9:10 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > On 2024-Mar-26, Nathan Bossart wrote:\n> > >\n> > > > FWIW I'd really prefer to have something like max_slot_xid_age for this. A\n> > > > time-based parameter would likely help with most cases, but transaction ID\n> > > > usage will vary widely from server to server, so it'd be nice to have\n> > > > something to protect against wraparound more directly.\n> > >\n> > > Yeah, I tend to agree that an XID-based limit makes more sense than a\n> > > time-based one.\n> > >\n> > So, do we want the time-based parameter or just max_slot_xid_age\n> > considering both will be GUC's? Yes, the xid_age based parameter\n> > sounds to be directly helpful for transaction ID wraparound or dead\n> > row cleanup, OTOH having a lot of inactive slots (which is possible in\n> > use cases where a tool creates a lot of slots and forgets to remove\n> > them, or the tool exits without cleaning up slots (say due to server\n> > shutdown)) also prohibit removing dead space which is not nice either?\n> \n> I've personally seen the leftover slots problem on production systems\n> where a timeout based invalidation mechanism could have been of more\n> help to save time and reduce manual intervention. Usually, most if not\n> all, migration/upgrade/other tools create slots, and if at all any\n> errors occur or the operation gets cancelled midway, there's a chance\n> that the slots can be leftover if such tools forgets to clean them up\n> either because there was a bug or for whatever reasons. These are\n> unintended/ghost slots for the database user unnecessarily holding up\n> resources such as XIDs, dead rows and WAL; which might lead to XID\n> wraparound or server crash if unnoticed. 
Although XID based GUC helps\n> a lot, why do these unintended and unnoticed slots need to hold up the\n> resources even before the XID age of say 1.5 or 2 billion is reached.\n> \n> With both GUCs max_slot_xid_age and replication_slot_inactive_timeout\n> in place, I can set max_slot_xid_age = 1.5 billion or so and also set\n> replication_slot_inactive_timeout = 2 days or so to make the database\n> foolproof.\n\nYeah, I think that both makes senses. The reason is that one depends of the\ndatabase activity and slot activity (the xid age one) while the other (the\ntimeout one) depends only of the slot activity.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 07:13:16 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Wed, Mar 27, 2024 at 3:13 AM Bertrand Drouvot\n<[email protected]> wrote:\n> Yeah, I think that both makes senses. The reason is that one depends of the\n> database activity and slot activity (the xid age one) while the other (the\n> timeout one) depends only of the slot activity.\n\nFWIW, I thought the time-based one sounded more useful. I think it\nwould be poor planning to say \"well, if the slot reaches an XID age of\na billion, kill it so we don't wrap around,\" because while that likely\nwill prevent me from getting into wraparound trouble, my database is\nlikely to become horribly bloated long before the cutoff is reached. I\nthought it would be easier to reason in terms of time: I don't expect\na slave to ever be down for more than X period of time, say an hour or\nwhatever, so if it is, forget about it. Or alternatively, I know that\nif a slave does go down for more than X period of time, I start to get\nbloat, so cut it off at that point and I'll rebuild it later. I feel\nlike these are things where people's intuition is going to be much\nstronger when reckoning in units of wall-clock time, which everyone\ndeals with every day in one way or another, rather than in XID-based\nunits that are, at least in my view, just a lot less intuitive.\n\nFor a previous example of where an XID threshold turned out not to be\ngreat, see vacuum_defer_cleanup_age, and in particular the commit\nmessage from where it was removed in\n1118cd37eb61e6a2428f457a8b2026a7bb3f801a. The case here might not turn\nout to be quite comparable for one reason or another, but I do think\nthat case is a cautionary tale.\n\nI'm sure the world won't end or anything if we end up with both\nthresholds, and I may be missing some reason why the XID threshold\nwould be really great here. I just can't quite see why I'd ever\nrecommend it to anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:33:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:33:28AM -0400, Robert Haas wrote:\n> FWIW, I thought the time-based one sounded more useful. I think it\n> would be poor planning to say \"well, if the slot reaches an XID age of\n> a billion, kill it so we don't wrap around,\" because while that likely\n> will prevent me from getting into wraparound trouble, my database is\n> likely to become horribly bloated long before the cutoff is reached. 
I\n> thought it would be easier to reason in terms of time: I don't expect\n> a slave to ever be down for more than X period of time, say an hour or\n> whatever, so if it is, forget about it. Or alternatively, I know that\n> if a slave does go down for more than X period of time, I start to get\n> bloat, so cut it off at that point and I'll rebuild it later. I feel\n> like these are things where people's intuition is going to be much\n> stronger when reckoning in units of wall-clock time, which everyone\n> deals with every day in one way or another, rather than in XID-based\n> units that are, at least in my view, just a lot less intuitive.\n\nI don't disagree with this point in the context of a user who is managing a\nsingle server or just a handful of servers. They are going to understand\ntheir workload best and can reason about the right value for the timeout.\nI think they'd still benefit from having an XID-based setting as a backstop\nin case the timeout is still not sufficient to prevent wraparound, but it's\nnot nearly as important in that case.\n\nIMHO the use-case where this doesn't work so well is when you have many,\nmany servers to administer (e.g., a cloud provider). In those cases,\npicking a default timeout to try to prevent wraparound is going to be much\nless accurate, as any reasonable value you pick is still going to be\ninsufficient in some cases. I think the XID-based parameter would be\nbetter here; if the server is at imminent risk of an outage due to\nwraparound, invalidating the slots is probably a reasonable course of\naction.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:05:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." }, { "msg_contents": "On Wed, Mar 27, 2024 at 11:06 AM Nathan Bossart\n<[email protected]> wrote:\n> IMHO the use-case where this doesn't work so well is when you have many,\n> many servers to administer (e.g., a cloud provider). In those cases,\n> picking a default timeout to try to prevent wraparound is going to be much\n> less accurate, as any reasonable value you pick is still going to be\n> insufficient in some cases. I think the XID-based parameter would be\n> better here; if the server is at imminent risk of an outage due to\n> wraparound, invalidating the slots is probably a reasonable course of\n> action.\n\nWell, I'm certainly willing to believe that your experience with\nadministering servers in the cloud is superior to mine. I don't really\nunderstand why it makes a difference, though. Whether you have one\nserver or many, I agree that it is reasonable to invalidate slots when\nXID wraparound looms. But also, whether you have one server or many,\nby the time wraparound looms, you will typically have crippling bloat\nas well. If preventing that bloat isn't important or there are other\ndefenses against it, then I see the value of the XID-based cutoff as a\nbackstop. And I will admit that in an on-prem installation, I've\noccasionally seen situations where lots of XIDs got burned without\nreally causing a lot of bloat -- say, because there are heavily\nupdated staging tables which are periodically truncated, and very\nlittle modification to long-lived data.\n\nI'm not really strongly against having an XID-based threshold if smart\npeople such as yourself want it. 
I just think for a lot of users it's\ngoing to be fiddly to get right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 11:26:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Track last_inactive_time in pg_replication_slots." } ]
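(Illustration of the points settled in the thread above; a minimal sketch, not part of the archived messages. The column name inactive_since is the one the discussion converged on, while replication_slot_inactive_timeout and max_slot_xid_age are only the GUC names proposed in the thread, so the settings shown are hypothetical.)

-- Spot slots that have been sitting idle for more than two days,
-- the "leftover slots" case described above:
SELECT slot_name, slot_type, active, inactive_since
FROM pg_replication_slots
WHERE NOT active
  AND inactive_since < now() - interval '2 days';

-- Hypothetical postgresql.conf entries combining both proposed backstops:
-- replication_slot_inactive_timeout = '2d'   -- time-based limit (proposed)
-- max_slot_xid_age = 1500000000              -- XID-age limit (proposed)

Checking the view this way is cheap and works regardless of which of the two proposed invalidation GUCs, if either, ends up being adopted.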
[ { "msg_contents": "I happened to notice that the set_cheapest() calls in functions\nset_namedtuplestore_pathlist() and set_result_pathlist() are redundant,\nas we've centralized the set_cheapest() calls in set_rel_pathlist().\n\nAttached is a trivial patch to remove these calls.\n\nBTW, I suspect that the set_cheapest() call in set_dummy_rel_pathlist()\nis also redundant. The comment there says \"This is redundant when we're\ncalled from set_rel_size(), but not when called from elsewhere\". I\ndoubt it. The other places where it is called are set_append_rel_size()\nand set_subquery_pathlist(), both being called from set_rel_size(). So\nset_cheapest() would ultimately be called in set_rel_pathlist().\n\nThoughts?\n\nThanks\nRichard", "msg_date": "Tue, 26 Mar 2024 19:02:48 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Remove some redundant set_cheapest() calls" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> I happened to notice that the set_cheapest() calls in functions\n> set_namedtuplestore_pathlist() and set_result_pathlist() are redundant,\n> as we've centralized the set_cheapest() calls in set_rel_pathlist().\n\n> Attached is a trivial patch to remove these calls.\n\nAgreed, and pushed.\n\n> BTW, I suspect that the set_cheapest() call in set_dummy_rel_pathlist()\n> is also redundant. The comment there says \"This is redundant when we're\n> called from set_rel_size(), but not when called from elsewhere\". I\n> doubt it. The other places where it is called are set_append_rel_size()\n> and set_subquery_pathlist(), both being called from set_rel_size(). So\n> set_cheapest() would ultimately be called in set_rel_pathlist().\n\nI'm less convinced about changing this. I'd rather keep it consistent\nwith mark_dummy_rel.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:06:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove some redundant set_cheapest() calls" }, { "msg_contents": "On Wed, Mar 27, 2024 at 4:06 AM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > I happened to notice that the set_cheapest() calls in functions\n> > set_namedtuplestore_pathlist() and set_result_pathlist() are redundant,\n> > as we've centralized the set_cheapest() calls in set_rel_pathlist().\n>\n> > Attached is a trivial patch to remove these calls.\n>\n> Agreed, and pushed.\n\n\nThanks for pushing!\n\n\n> > BTW, I suspect that the set_cheapest() call in set_dummy_rel_pathlist()\n> > is also redundant. The comment there says \"This is redundant when we're\n> > called from set_rel_size(), but not when called from elsewhere\". I\n> > doubt it. The other places where it is called are set_append_rel_size()\n> > and set_subquery_pathlist(), both being called from set_rel_size(). So\n> > set_cheapest() would ultimately be called in set_rel_pathlist().\n>\n> I'm less convinced about changing this. 
I'd rather keep it consistent\n> with mark_dummy_rel.\n\n\nHm, I wonder if we should revise the comment there that states \"but not\nwhen called from elsewhere\", as it does not seem to be true.\n\nThanks\nRichard\n\nOn Wed, Mar 27, 2024 at 4:06 AM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> I happened to notice that the set_cheapest() calls in functions\n> set_namedtuplestore_pathlist() and set_result_pathlist() are redundant,\n> as we've centralized the set_cheapest() calls in set_rel_pathlist().\n\n> Attached is a trivial patch to remove these calls.\n\nAgreed, and pushed.Thanks for pushing! \n> BTW, I suspect that the set_cheapest() call in set_dummy_rel_pathlist()\n> is also redundant.  The comment there says \"This is redundant when we're\n> called from set_rel_size(), but not when called from elsewhere\".  I\n> doubt it.  The other places where it is called are set_append_rel_size()\n> and set_subquery_pathlist(), both being called from set_rel_size().  So\n> set_cheapest() would ultimately be called in set_rel_pathlist().\n\nI'm less convinced about changing this.  I'd rather keep it consistent\nwith mark_dummy_rel.Hm, I wonder if we should revise the comment there that states \"but notwhen called from elsewhere\", as it does not seem to be true.ThanksRichard", "msg_date": "Wed, 27 Mar 2024 15:06:50 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove some redundant set_cheapest() calls" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Mar 27, 2024 at 4:06 AM Tom Lane <[email protected]> wrote:\n>> I'm less convinced about changing this. I'd rather keep it consistent\n>> with mark_dummy_rel.\n\n> Hm, I wonder if we should revise the comment there that states \"but not\n> when called from elsewhere\", as it does not seem to be true.\n\nI'd be okay with wording like \"This is redundant in current usage\nbecause set_rel_pathlist will do it later, but it's cheap so we keep\nit for consistency with mark_dummy_rel\". What do you think?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:59:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove some redundant set_cheapest() calls" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:59 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > On Wed, Mar 27, 2024 at 4:06 AM Tom Lane <[email protected]> wrote:\n> >> I'm less convinced about changing this. I'd rather keep it consistent\n> >> with mark_dummy_rel.\n>\n> > Hm, I wonder if we should revise the comment there that states \"but not\n> > when called from elsewhere\", as it does not seem to be true.\n>\n> I'd be okay with wording like \"This is redundant in current usage\n> because set_rel_pathlist will do it later, but it's cheap so we keep\n> it for consistency with mark_dummy_rel\". What do you think?\n\n\nThat works for me. Thanks for the wording.\n\nThanks\nRichard\n\nOn Wed, Mar 27, 2024 at 10:59 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> On Wed, Mar 27, 2024 at 4:06 AM Tom Lane <[email protected]> wrote:\n>> I'm less convinced about changing this.  
I'd rather keep it consistent\n>> with mark_dummy_rel.\n\n> Hm, I wonder if we should revise the comment there that states \"but not\n> when called from elsewhere\", as it does not seem to be true.\n\nI'd be okay with wording like \"This is redundant in current usage\nbecause set_rel_pathlist will do it later, but it's cheap so we keep\nit for consistency with mark_dummy_rel\".  What do you think?\n\nThat works for me.  Thanks for the wording.\n\nThanks\nRichard", "msg_date": "Thu, 28 Mar 2024 10:27:10 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove some redundant set_cheapest() calls" } ]
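(A small aside on exercising the code path touched in this thread; a sketch only, not from the archived messages. A FROM-less SELECT is planned through an RTE_RESULT relation, i.e. via set_result_pathlist(), so it makes a convenient smoke test that the centralized set_cheapest() call in set_rel_pathlist() still picks a path once the local calls are gone; reaching set_namedtuplestore_pathlist() needs a trigger transition table, which is more setup than is worth showing here.)

EXPLAIN (COSTS OFF) SELECT 1 WHERE false;
-- Expected plan shape, roughly:
--  Result
--    One-Time Filter: false

Getting any plan back at all is the check, since the rel's cheapest_total_path is only filled in by set_cheapest().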
[ { "msg_contents": "Hi Alvaro,\n\nI met an issue related to Catalog not-null commit on HEAD.\n\npostgres=# CREATE TABLE t1(c0 int, c1 int);\nCREATE TABLE\npostgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\nALTER TABLE\npostgres=# \\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n c0 | integer | | not null | | plain |\n| |\n c1 | integer | | not null | | plain |\n| |\nIndexes:\n \"q\" PRIMARY KEY, btree (c0, c1)\nAccess method: heap\n\npostgres=# ALTER TABLE t1 DROP c1;\nALTER TABLE\npostgres=# \\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n c0 | integer | | not null | | plain |\n| |\nAccess method: heap\n\npostgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\nERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\npostgres=# insert into t1 values (NULL);\nERROR: null value in column \"c0\" of relation \"t1\" violates not-null\nconstraint\nDETAIL: Failing row contains (null).\n\nI couldn't reproduce aboved issue on older version(12.12 ...16.1).\nto be more precisely, since b0e96f3119 commit.\n\nWithout the b0e96f3119, when we drop not null constraint, we just update\nthe pg_attribute attnotnull to false\nin ATExecDropNotNull(). But now we first check pg_constraint if has the\ntuple. if attnotnull is ture, but pg_constraint\ndoesn't has that tuple. Aboved error will report.\n\nIt will be confuesed for users. Because \\d shows the column c0 has not\nnull, and we cann't insert NULL value. But it\nreports errore when users drop the NOT NULL constraint.\n\nThe attached patch is my workaround solution. Look forward your apply.\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/", "msg_date": "Tue, 26 Mar 2024 20:00:36 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Mar-26, Tender Wang wrote:\n\n> postgres=# CREATE TABLE t1(c0 int, c1 int);\n> postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> postgres=# ALTER TABLE t1 DROP c1;\n> \n> postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n\nOoh, hah, what happens here is that we drop the PK constraint\nindirectly, so we only go via doDeletion rather than the tablecmds.c\ncode, so we don't check the attnotnull flags that the PK was protecting.\n\n> The attached patch is my workaround solution. Look forward your apply.\n\nYeah, this is not a very good approach -- I think you're just guessing\nthat the column is marked NOT NULL because a PK was dropped in the\npast -- but really what this catalog state is, is corrupted contents\nbecause the PK drop was mishandled. At least in theory there are other\nways to drop a constraint other than dropping one of its columns (for\nexample, maybe this could happen if you drop a collation that the PK\ndepends on). The right approach is to ensure that the PK drop always\ndoes the dance that ATExecDropConstraint does. 
A good fix probably just\nmoves some code from dropconstraint_internal to RemoveConstraintById.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:25:42 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n\n> On 2024-Mar-26, Tender Wang wrote:\n>\n> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > postgres=# ALTER TABLE t1 DROP c1;\n> >\n> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> > ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n>\n> Ooh, hah, what happens here is that we drop the PK constraint\n> indirectly, so we only go via doDeletion rather than the tablecmds.c\n> code, so we don't check the attnotnull flags that the PK was protecting.\n>\n\nYeah, Indeed, as you said.\n\n> The attached patch is my workaround solution. Look forward your apply.\n>\n> Yeah, this is not a very good approach -- I think you're just guessing\n> that the column is marked NOT NULL because a PK was dropped in the\n> past -- but really what this catalog state is, is corrupted contents\n> because the PK drop was mishandled. At least in theory there are other\n> ways to drop a constraint other than dropping one of its columns (for\n> example, maybe this could happen if you drop a collation that the PK\n> depends on). The right approach is to ensure that the PK drop always\n> does the dance that ATExecDropConstraint does. A good fix probably just\n> moves some code from dropconstraint_internal to RemoveConstraintById.\n>\n\nAgreed. It is look better. But it will not work if simply move some codes\nfrom dropconstraint_internal\nto RemoveConstraintById. I have tried this fix before 0001 patch, but\nfailed.\n\nFor example:\ncreate table skip_wal_skip_rewrite_index (c varchar(10) primary key);\nalter table skip_wal_skip_rewrite_index alter c type varchar(20);\nERROR: primary key column \"c\" is not marked NOT NULL\n\nindex_check_primary_key() in index.c has below comments;\n\n\"We check for a pre-existing primary key, and that all columns of the index\nare simple column references (not expressions), and that all those columns\nare marked NOT NULL. If not, fail.\"\n\nSo in aboved example, RemoveConstraintById() can't reset attnotnull. 
We can\npass some information to\nRemoveConstraintById() like a bool var to indicate that attnotnull should\nbe reset or not.\n\n\n\n--\nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:On 2024-Mar-26, Tender Wang wrote:\n\n> postgres=# CREATE TABLE t1(c0 int, c1 int);\n> postgres=# ALTER TABLE  t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> postgres=# ALTER TABLE  t1 DROP c1;\n> \n> postgres=# ALTER TABLE  t1  ALTER c0 DROP NOT NULL;\n> ERROR:  could not find not-null constraint on column \"c0\", relation \"t1\"\n\nOoh, hah, what happens here is that we drop the PK constraint\nindirectly, so we only go via doDeletion rather than the tablecmds.c\ncode, so we don't check the attnotnull flags that the PK was protecting.Yeah,   Indeed, as you said.\n> The attached patch is my workaround solution.  Look forward your apply.\n\nYeah, this is not a very good approach -- I think you're just guessing\nthat the column is marked NOT NULL because a PK was dropped in the\npast -- but really what this catalog state is, is corrupted contents\nbecause the PK drop was mishandled.  At least in theory there are other\nways to drop a constraint other than dropping one of its columns (for\nexample, maybe this could happen if you drop a collation that the PK\ndepends on).  The right approach is to ensure that the PK drop always\ndoes the dance that ATExecDropConstraint does.  A good fix probably just\nmoves some code from dropconstraint_internal to RemoveConstraintById. Agreed. It is look better.  But it will not work if simply move some codes from dropconstraint_internalto RemoveConstraintById. I have tried this fix before 0001 patch, but failed.For example:create table skip_wal_skip_rewrite_index (c varchar(10) primary key);alter table skip_wal_skip_rewrite_index alter c type varchar(20);ERROR:  primary key column \"c\" is not marked NOT NULLindex_check_primary_key() in index.c has below comments;\"We check for a pre-existing primary key, and that all columns of the index are simple column references (not expressions), and that all those columns are marked NOT NULL.  If not, fail.\"So in aboved example, RemoveConstraintById() can't reset attnotnull. We can pass some information  toRemoveConstraintById() like a bool var to indicate that attnotnull should be reset or not.--Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Wed, 27 Mar 2024 11:33:29 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n\n> On 2024-Mar-26, Tender Wang wrote:\n>\n> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > postgres=# ALTER TABLE t1 DROP c1;\n> >\n> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> > ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n>\n> Ooh, hah, what happens here is that we drop the PK constraint\n> indirectly, so we only go via doDeletion rather than the tablecmds.c\n> code, so we don't check the attnotnull flags that the PK was protecting.\n>\n> > The attached patch is my workaround solution. 
Look forward your apply.\n>\n> Yeah, this is not a very good approach -- I think you're just guessing\n> that the column is marked NOT NULL because a PK was dropped in the\n> past -- but really what this catalog state is, is corrupted contents\n> because the PK drop was mishandled. At least in theory there are other\n> ways to drop a constraint other than dropping one of its columns (for\n> example, maybe this could happen if you drop a collation that the PK\n> depends on). The right approach is to ensure that the PK drop always\n> does the dance that ATExecDropConstraint does. A good fix probably just\n> moves some code from dropconstraint_internal to RemoveConstraintById.\n>\n\nI think again, below solutin maybe looks more better:\ni. move some code from dropconstraint_internal to RemoveConstraintById,\n not change the RemoveConstraintById interface. Ensure that the PK drop\nalways\n does the dance that ATExecDropConstraint does.\n\nii. After i phase, the attnotnull of some column of primary key may be\nreset to false as I provided example in last email.\n We can set attnotnull to true again in some place.\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:On 2024-Mar-26, Tender Wang wrote:\n\n> postgres=# CREATE TABLE t1(c0 int, c1 int);\n> postgres=# ALTER TABLE  t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> postgres=# ALTER TABLE  t1 DROP c1;\n> \n> postgres=# ALTER TABLE  t1  ALTER c0 DROP NOT NULL;\n> ERROR:  could not find not-null constraint on column \"c0\", relation \"t1\"\n\nOoh, hah, what happens here is that we drop the PK constraint\nindirectly, so we only go via doDeletion rather than the tablecmds.c\ncode, so we don't check the attnotnull flags that the PK was protecting.\n\n> The attached patch is my workaround solution.  Look forward your apply.\n\nYeah, this is not a very good approach -- I think you're just guessing\nthat the column is marked NOT NULL because a PK was dropped in the\npast -- but really what this catalog state is, is corrupted contents\nbecause the PK drop was mishandled.  At least in theory there are other\nways to drop a constraint other than dropping one of its columns (for\nexample, maybe this could happen if you drop a collation that the PK\ndepends on).  The right approach is to ensure that the PK drop always\ndoes the dance that ATExecDropConstraint does.  A good fix probably just\nmoves some code from dropconstraint_internal to RemoveConstraintById.I think again, below solutin maybe looks more better:i. move some code from  dropconstraint_internal to RemoveConstraintById,   not change the RemoveConstraintById interface. Ensure that the PK drop always  does the dance that ATExecDropConstraint does.ii. After i phase, the attnotnull of some column of primary key  may be reset to false as I provided example in last email.   
We can set attnotnull to true again in some place.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Wed, 27 Mar 2024 16:21:35 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n\n> On 2024-Mar-26, Tender Wang wrote:\n>\n> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > postgres=# ALTER TABLE t1 DROP c1;\n> >\n> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> > ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n>\n> Ooh, hah, what happens here is that we drop the PK constraint\n> indirectly, so we only go via doDeletion rather than the tablecmds.c\n> code, so we don't check the attnotnull flags that the PK was protecting.\n>\n> > The attached patch is my workaround solution. Look forward your apply.\n>\n> Yeah, this is not a very good approach -- I think you're just guessing\n> that the column is marked NOT NULL because a PK was dropped in the\n> past -- but really what this catalog state is, is corrupted contents\n> because the PK drop was mishandled. At least in theory there are other\n> ways to drop a constraint other than dropping one of its columns (for\n> example, maybe this could happen if you drop a collation that the PK\n> depends on). The right approach is to ensure that the PK drop always\n> does the dance that ATExecDropConstraint does. A good fix probably just\n> moves some code from dropconstraint_internal to RemoveConstraintById.\n>\n\nI found some types ddl would check the attnotnull of column is true, for\nexample: AT_ReAddIndex, AT_ReplicaIdentity.\nSo we should add AT_SetAttNotNull sub-command to the wqueue. I add a\nnew AT_PASS_OLD_COL_ATTRS to make sure\nAT_SetAttNotNull will have done when do AT_ReAddIndex or\nAT_ReplicaIdentity.\n\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/", "msg_date": "Wed, 27 Mar 2024 22:26:10 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:26 PM Tender Wang <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n>>\n>> On 2024-Mar-26, Tender Wang wrote:\n>>\n>> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n>> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n>> > postgres=# ALTER TABLE t1 DROP c1;\n>> >\n>> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n>> > ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n>>\n>> Ooh, hah, what happens here is that we drop the PK constraint\n>> indirectly, so we only go via doDeletion rather than the tablecmds.c\n>> code, so we don't check the attnotnull flags that the PK was protecting.\n>>\n>> > The attached patch is my workaround solution. Look forward your apply.\n>>\n\nafter applying v2-0001-Fix-pg_attribute-attnotnull-not-reset-when-droppe.patch\n\nsomething is off, now i cannot drop a table.\ndemo:\nCREATE TABLE t2(c0 int, c1 int);\nALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\nALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\nDROP TABLE t2 cascade;\nSimilarly, maybe there will be some issue with replica identity.\n\n\n+ /*\n+ * If this was a NOT NULL or the primary key, the constrained columns must\n+ * have had pg_attribute.attnotnull set. 
See if we need to reset it, and\n+ * do so.\n+ */\n+ if (unconstrained_cols)\nit should be if (unconstrained_cols != NIL)?,\ngiven unconstrained_cols is a List, also \"unconstrained_cols\" naming\nseems not intuitive.\nmaybe pk_attnums or pk_cols or pk_columns.\n\n\n+ attrel = table_open(AttributeRelationId, RowExclusiveLock);\n+ rel = table_open(con->conrelid, RowExclusiveLock);\nI am not sure why we using RowExclusiveLock for con->conrelid?\ngiven we use AccessExclusiveLock at:\n/*\n* If the constraint is for a relation, open and exclusive-lock the\n* relation it's for.\n*/\nrel = table_open(con->conrelid, AccessExclusiveLock);\n\n\n+ /*\n+ * Since the above deletion has been made visible, we can now\n+ * search for any remaining constraints on this column (or these\n+ * columns, in the case we're dropping a multicol primary key.)\n+ * Then, verify whether any further NOT NULL or primary key\n+ * exists, and reset attnotnull if none.\n+ *\n+ * However, if this is a generated identity column, abort the\n+ * whole thing with a specific error message, because the\n+ * constraint is required in that case.\n+ */\n+ contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n+ if (contup ||\n+ bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber,\n+ pkcols))\n+ continue;\n\nI didn't get this part.\nif you drop delete a primary key,\nthe \"NOT NULL\" constraint within pg_constraint should definitely be removed?\ntherefore contup should be pretty sure is NULL?\n\n\n /*\n- * The parser will create AT_AttSetNotNull subcommands for\n- * columns of PRIMARY KEY indexes/constraints, but we need\n- * not do anything with them here, because the columns'\n- * NOT NULL marks will already have been propagated into\n- * the new table definition.\n+ * PK drop now will reset pg_attribute attnotnull to false.\n+ * We should set attnotnull to true again.\n */\nPK drop now will reset pg_attribute attnotnull to false,\nwhich is what we should be expecting.\nthe comment didn't explain why should set attnotnull to true again?\n\n\n", "msg_date": "Thu, 28 Mar 2024 13:18:41 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年3月28日周四 13:18写道:\n\n> On Wed, Mar 27, 2024 at 10:26 PM Tender Wang <[email protected]> wrote:\n> >\n> > Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n> >>\n> >> On 2024-Mar-26, Tender Wang wrote:\n> >>\n> >> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n> >> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> >> > postgres=# ALTER TABLE t1 DROP c1;\n> >> >\n> >> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> >> > ERROR: could not find not-null constraint on column \"c0\", relation\n> \"t1\"\n> >>\n> >> Ooh, hah, what happens here is that we drop the PK constraint\n> >> indirectly, so we only go via doDeletion rather than the tablecmds.c\n> >> code, so we don't check the attnotnull flags that the PK was protecting.\n> >>\n> >> > The attached patch is my workaround solution. 
Look forward your\n> apply.\n> >>\n>\n> after applying\n> v2-0001-Fix-pg_attribute-attnotnull-not-reset-when-droppe.patch\n>\n> something is off, now i cannot drop a table.\n> demo:\n> CREATE TABLE t2(c0 int, c1 int);\n> ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\n> ALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\n> DROP TABLE t2 cascade;\n> Similarly, maybe there will be some issue with replica identity.\n>\nThanks for review this patch. Yeah, I can reproduce it. The error reported\nin RemoveConstraintById(), where I moved\nsome codes from dropconstraint_internal(). But some check seems to no need\nin RemoveConstraintById(). For example:\n\n/*\n* It's not valid to drop the not-null constraint for a GENERATED\n* AS IDENTITY column.\n*/\nif (attForm->attidentity)\nereport(ERROR,\nerrcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\nerrmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" is an identity column\",\n get_attname(RelationGetRelid(rel), attnum,\n false),\n RelationGetRelationName(rel)));\n\n/*\n* It's not valid to drop the not-null constraint for a column in\n* the replica identity index, either. (FULL is not affected.)\n*/\nif (bms_is_member(lfirst_int(lc) - FirstLowInvalidHeapAttributeNumber,\nircols))\nereport(ERROR,\nerrcode(ERRCODE_INVALID_TABLE_DEFINITION),\nerrmsg(\"column \\\"%s\\\" is in index used as replica identity\",\n get_attname(RelationGetRelid(rel), lfirst_int(lc), false)));\n\nAbove two check can remove from RemoveConstraintById()? I need more test.\n\n>\n> + /*\n> + * If this was a NOT NULL or the primary key, the constrained columns must\n> + * have had pg_attribute.attnotnull set. See if we need to reset it, and\n> + * do so.\n> + */\n> + if (unconstrained_cols)\n> it should be if (unconstrained_cols != NIL)?,\n> given unconstrained_cols is a List, also \"unconstrained_cols\" naming\n> seems not intuitive.\n> maybe pk_attnums or pk_cols or pk_columns.\n>\nAs I said above, the codes were copied from dropconstraint_internal(). 
NOT\nNULL columns were not alwayls PK.\nSo I thinks \"unconstrained_cols\" is OK.\n\n>\n> + attrel = table_open(AttributeRelationId, RowExclusiveLock);\n> + rel = table_open(con->conrelid, RowExclusiveLock);\n> I am not sure why we using RowExclusiveLock for con->conrelid?\n> given we use AccessExclusiveLock at:\n> /*\n> * If the constraint is for a relation, open and exclusive-lock the\n> * relation it's for.\n> */\n> rel = table_open(con->conrelid, AccessExclusiveLock);\n>\nYeah, you are right.\n\n>\n>\n> + /*\n> + * Since the above deletion has been made visible, we can now\n> + * search for any remaining constraints on this column (or these\n> + * columns, in the case we're dropping a multicol primary key.)\n> + * Then, verify whether any further NOT NULL or primary key\n> + * exists, and reset attnotnull if none.\n> + *\n> + * However, if this is a generated identity column, abort the\n> + * whole thing with a specific error message, because the\n> + * constraint is required in that case.\n> + */\n> + contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n> + if (contup ||\n> + bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber,\n> + pkcols))\n> + continue;\n>\n> I didn't get this part.\n> if you drop delete a primary key,\n> the \"NOT NULL\" constraint within pg_constraint should definitely be\n> removed?\n> therefore contup should be pretty sure is NULL?\n>\n\nNo, If the original definaiton of column includes NOT NULL, we can't reset\nattnotnull to false when\nWe we drop PK.\n\n\n>\n> /*\n> - * The parser will create AT_AttSetNotNull subcommands for\n> - * columns of PRIMARY KEY indexes/constraints, but we need\n> - * not do anything with them here, because the columns'\n> - * NOT NULL marks will already have been propagated into\n> - * the new table definition.\n> + * PK drop now will reset pg_attribute attnotnull to false.\n> + * We should set attnotnull to true again.\n> */\n> PK drop now will reset pg_attribute attnotnull to false,\n> which is what we should be expecting.\n> the comment didn't explain why should set attnotnull to true again?\n>\n\nThe V2 patch still needs more cases to test, Probably not right solution.\nAnyway, I will send a v3 version patch after I do more test.\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年3月28日周四 13:18写道:On Wed, Mar 27, 2024 at 10:26 PM Tender Wang <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n>>\n>> On 2024-Mar-26, Tender Wang wrote:\n>>\n>> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n>> > postgres=# ALTER TABLE  t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n>> > postgres=# ALTER TABLE  t1 DROP c1;\n>> >\n>> > postgres=# ALTER TABLE  t1  ALTER c0 DROP NOT NULL;\n>> > ERROR:  could not find not-null constraint on column \"c0\", relation \"t1\"\n>>\n>> Ooh, hah, what happens here is that we drop the PK constraint\n>> indirectly, so we only go via doDeletion rather than the tablecmds.c\n>> code, so we don't check the attnotnull flags that the PK was protecting.\n>>\n>> > The attached patch is my workaround solution.  
Look forward your apply.\n>>\n\nafter applying v2-0001-Fix-pg_attribute-attnotnull-not-reset-when-droppe.patch\n\nsomething is off, now i cannot drop a table.\ndemo:\nCREATE TABLE t2(c0 int, c1 int);\nALTER TABLE  t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\nALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\nDROP TABLE t2 cascade;\nSimilarly, maybe there will be some issue with replica identity.Thanks for review this patch. Yeah, I can reproduce it.  The error reported in RemoveConstraintById(), where I movedsome codes from dropconstraint_internal(). But some check seems to no need in RemoveConstraintById(). For example:/*\t\t\t * It's not valid to drop the not-null constraint for a GENERATED\t\t\t * AS IDENTITY column.\t\t\t */\t\t\tif (attForm->attidentity)\t\t\t\tereport(ERROR,\t\t\t\t\t\terrcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\t\t\t\t\t\terrmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" is an identity column\",\t\t\t\t\t\t\t   get_attname(RelationGetRelid(rel), attnum,\t\t\t\t\t\t\t\t\t\t   false),\t\t\t\t\t\t\t   RelationGetRelationName(rel)));\t\t\t/*\t\t\t * It's not valid to drop the not-null constraint for a column in\t\t\t * the replica identity index, either. (FULL is not affected.)\t\t\t */\t\t\tif (bms_is_member(lfirst_int(lc) - FirstLowInvalidHeapAttributeNumber, ircols))\t\t\t\tereport(ERROR,\t\t\t\t\t\terrcode(ERRCODE_INVALID_TABLE_DEFINITION),\t\t\t\t\t\terrmsg(\"column \\\"%s\\\" is in index used as replica identity\",\t\t\t\t\t\t\t   get_attname(RelationGetRelid(rel), lfirst_int(lc), false)));Above two check can remove from RemoveConstraintById()? I need more test.\n\n+ /*\n+ * If this was a NOT NULL or the primary key, the constrained columns must\n+ * have had pg_attribute.attnotnull set.  See if we need to reset it, and\n+ * do so.\n+ */\n+ if (unconstrained_cols)\nit should be if (unconstrained_cols != NIL)?,\ngiven unconstrained_cols is a List, also \"unconstrained_cols\" naming\nseems not intuitive.\nmaybe pk_attnums or pk_cols or pk_columns.As I said above, the codes were copied from  dropconstraint_internal(). NOT NULL columns were not alwayls PK.So I thinks \"unconstrained_cols\" is OK.\n\n+ attrel = table_open(AttributeRelationId, RowExclusiveLock);\n+ rel = table_open(con->conrelid, RowExclusiveLock);\nI am not sure why we using RowExclusiveLock for con->conrelid?\ngiven we use AccessExclusiveLock at:\n/*\n* If the constraint is for a relation, open and exclusive-lock the\n* relation it's for.\n*/\nrel = table_open(con->conrelid, AccessExclusiveLock);Yeah, you are right. \n\n\n+ /*\n+ * Since the above deletion has been made visible, we can now\n+ * search for any remaining constraints on this column (or these\n+ * columns, in the case we're dropping a multicol primary key.)\n+ * Then, verify whether any further NOT NULL or primary key\n+ * exists, and reset attnotnull if none.\n+ *\n+ * However, if this is a generated identity column, abort the\n+ * whole thing with a specific error message, because the\n+ * constraint is required in that case.\n+ */\n+ contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n+ if (contup ||\n+ bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber,\n+  pkcols))\n+ continue;\n\nI didn't get this part.\nif you drop delete a primary key,\nthe \"NOT NULL\" constraint within pg_constraint should definitely be removed?\ntherefore contup should be pretty sure is NULL? 
No,  If the original definaiton of column includes NOT NULL, we can't reset attnotnull to false whenWe we drop PK.\n\n\n  /*\n- * The parser will create AT_AttSetNotNull subcommands for\n- * columns of PRIMARY KEY indexes/constraints, but we need\n- * not do anything with them here, because the columns'\n- * NOT NULL marks will already have been propagated into\n- * the new table definition.\n+ * PK drop now will reset pg_attribute attnotnull to false.\n+ * We should set attnotnull to true again.\n  */\nPK drop now will reset pg_attribute attnotnull to false,\nwhich is what we should be expecting.\nthe comment didn't explain why should set attnotnull to true again?The V2 patch still needs more cases to test, Probably not right solution. Anyway, I will  send a v3 version patch  after I do more test. -- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Thu, 28 Mar 2024 14:13:48 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:\n\n> On 2024-Mar-26, Tender Wang wrote:\n>\n> > postgres=# CREATE TABLE t1(c0 int, c1 int);\n> > postgres=# ALTER TABLE t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > postgres=# ALTER TABLE t1 DROP c1;\n> >\n> > postgres=# ALTER TABLE t1 ALTER c0 DROP NOT NULL;\n> > ERROR: could not find not-null constraint on column \"c0\", relation \"t1\"\n>\n> Ooh, hah, what happens here is that we drop the PK constraint\n> indirectly, so we only go via doDeletion rather than the tablecmds.c\n> code, so we don't check the attnotnull flags that the PK was protecting.\n>\n> > The attached patch is my workaround solution. Look forward your apply.\n>\n> Yeah, this is not a very good approach -- I think you're just guessing\n> that the column is marked NOT NULL because a PK was dropped in the\n> past -- but really what this catalog state is, is corrupted contents\n> because the PK drop was mishandled. At least in theory there are other\n> ways to drop a constraint other than dropping one of its columns (for\n> example, maybe this could happen if you drop a collation that the PK\n> depends on). The right approach is to ensure that the PK drop always\n> does the dance that ATExecDropConstraint does. A good fix probably just\n> moves some code from dropconstraint_internal to RemoveConstraintById.\n>\n\n RemoveConstraintById() should think recurse(e.g. partition table)? I'm not\nsure now.\n If we should think process recurse in RemoveConstraintById(), the\nfunction will look complicated than before.\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年3月26日周二 23:25写道:On 2024-Mar-26, Tender Wang wrote:\n\n> postgres=# CREATE TABLE t1(c0 int, c1 int);\n> postgres=# ALTER TABLE  t1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> postgres=# ALTER TABLE  t1 DROP c1;\n> \n> postgres=# ALTER TABLE  t1  ALTER c0 DROP NOT NULL;\n> ERROR:  could not find not-null constraint on column \"c0\", relation \"t1\"\n\nOoh, hah, what happens here is that we drop the PK constraint\nindirectly, so we only go via doDeletion rather than the tablecmds.c\ncode, so we don't check the attnotnull flags that the PK was protecting.\n\n> The attached patch is my workaround solution.  
Look forward your apply.\n\nYeah, this is not a very good approach -- I think you're just guessing\nthat the column is marked NOT NULL because a PK was dropped in the\npast -- but really what this catalog state is, is corrupted contents\nbecause the PK drop was mishandled.  At least in theory there are other\nways to drop a constraint other than dropping one of its columns (for\nexample, maybe this could happen if you drop a collation that the PK\ndepends on).  The right approach is to ensure that the PK drop always\ndoes the dance that ATExecDropConstraint does.  A good fix probably just\nmoves some code from dropconstraint_internal to RemoveConstraintById. RemoveConstraintById() should think recurse(e.g. partition table)? I'm not sure now. If we should think process recurse in RemoveConstraintById(),  the function will look complicated than before.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Thu, 28 Mar 2024 16:32:41 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Mar-28, Tender Wang wrote:\n\n> RemoveConstraintById() should think recurse(e.g. partition table)? I'm not\n> sure now.\n> If we should think process recurse in RemoveConstraintById(), the\n> function will look complicated than before.\n\nNo -- this function handles just a single constraint, as identified by\nOID. The recursion is handled by upper layers, which can be either\ndependency.c or tablecmds.c. I think the problem you found is caused by\nthe fact that I worked with the tablecmds.c recursion and neglected the\none in dependency.c.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n", "msg_date": "Thu, 28 Mar 2024 10:18:38 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年3月28日周四 17:18写道:\n\n> On 2024-Mar-28, Tender Wang wrote:\n>\n> > RemoveConstraintById() should think recurse(e.g. partition table)? I'm\n> not\n> > sure now.\n> > If we should think process recurse in RemoveConstraintById(), the\n> > function will look complicated than before.\n>\n> No -- this function handles just a single constraint, as identified by\n> OID. The recursion is handled by upper layers, which can be either\n> dependency.c or tablecmds.c. I think the problem you found is caused by\n> the fact that I worked with the tablecmds.c recursion and neglected the\n> one in dependency.c.\n>\n\nIndeed.\n\ncreate table skip_wal_skip_rewrite_index (c varchar(10) primary key);\nalter table skip_wal_skip_rewrite_index alter c type varchar(20);\n\nAbove SQL need attnotnull to be true when re-add index, but\nRemoveConstraintById() is hard to recognize this scenario as I know.\nWe should re-set attnotnull to be true before re-add index. 
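(While testing this, a quick way to watch the flag is a plain catalog query -- just an illustration, not part of the patch:\n\nSELECT attname, attnotnull\nFROM pg_attribute\nWHERE attrelid = 'skip_wal_skip_rewrite_index'::regclass\n  AND attnum > 0 AND NOT attisdropped;\n\nBefore the ALTER TYPE, \"c\" shows attnotnull = t; after the table rewrite it must still be t at the point the PK index is re-added.)\n\n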
I add a new\nAT_PASS in attached patch.\nAny thoughts?\n--\nTender Wang\nOpenPie: https://en.openpie.com/", "msg_date": "Thu, 28 Mar 2024 20:05:04 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "hi.\nabout v4, i think, i understand the changes you made.\nRemoveConstraintById(Oid conId)\nwill drop a single constraint record.\nif the constraint is primary key, then primary key associated\nattnotnull should set to false.\nbut sometimes it shouldn't.\n\n\nfor example:\ndrop table if exists t2;\nCREATE TABLE t2(c0 int, c1 int);\nALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\nALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\nALTER TABLE t2 DROP c1;\n\n+ * If this was a NOT NULL or the primary key, the constrained columns must\n+ * have had pg_attribute.attnotnull set. See if we need to reset it, and\n+ * do so.\n+ */\n+ if (unconstrained_cols != NIL)\n\nunconstrained_cols is not NIL, which means we have dropped a primary key.\nI am confused by \"If this was a NOT NULL or the primary key\".\n\n+\n+ /*\n+ * Since the above deletion has been made visible, we can now\n+ * search for any remaining constraints on this column (or these\n+ * columns, in the case we're dropping a multicol primary key.)\n+ * Then, verify whether any further NOT NULL exists, and reset\n+ * attnotnull if none.\n+ */\n+ contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n+ if (HeapTupleIsValid(contup))\n+ {\n+ heap_freetuple(contup);\n+ heap_freetuple(atttup);\n+ continue;\n+ }\n\nI am a little bit confused by the above comment.\nI think the comments should say,\nif contup is valid, that means, we already have one \"not null\"\nconstraint associate with the attnum\nin that condition, we must not set attnotnull, otherwise the\nconsistency between attnotnull and \"not null\"\ntable constraint will be broken.\n\nother than that, the change in RemoveConstraintById looks sane.\n\n\n", "msg_date": "Fri, 29 Mar 2024 14:56:47 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年3月29日周五 14:56写道:\n\n> hi.\n> about v4, i think, i understand the changes you made.\n> RemoveConstraintById(Oid conId)\n> will drop a single constraint record.\n> if the constraint is primary key, then primary key associated\n> attnotnull should set to false.\n> but sometimes it shouldn't.\n>\n\nYeah, indeed.\n\n>\n>\n> for example:\n> drop table if exists t2;\n> CREATE TABLE t2(c0 int, c1 int);\n> ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\n> ALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\n> ALTER TABLE t2 DROP c1;\n>\n> + * If this was a NOT NULL or the primary key, the constrained columns must\n> + * have had pg_attribute.attnotnull set. See if we need to reset it, and\n> + * do so.\n> + */\n> + if (unconstrained_cols != NIL)\n>\n> unconstrained_cols is not NIL, which means we have dropped a primary key.\n> I am confused by \"If this was a NOT NULL or the primary key\".\n>\n\nNOT NULL means the definition of column having not-null constranit. 
For\nexample:\ncreate table t1(a int not null);\nthe pg_attribute.attnotnull is set to true.\nprimary key case like below:\ncreate table t2(a int primary key);\nthe pg_attribute.attnotnull is set to true.\n\nI think aboved case can explain what's meaning about comments in\ndropconstraint_internal.\nBut here, in RemoveConstraintById() , we only care about primary key case,\nso NOT NULL is better\nto removed from comments.\n\n\n>\n> +\n> + /*\n> + * Since the above deletion has been made visible, we can now\n> + * search for any remaining constraints on this column (or these\n> + * columns, in the case we're dropping a multicol primary key.)\n> + * Then, verify whether any further NOT NULL exists, and reset\n> + * attnotnull if none.\n> + */\n> + contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n> + if (HeapTupleIsValid(contup))\n> + {\n> + heap_freetuple(contup);\n> + heap_freetuple(atttup);\n> + continue;\n> + }\n>\n> I am a little bit confused by the above comment.\n> I think the comments should say,\n> if contup is valid, that means, we already have one \"not null\"\n> constraint associate with the attnum\n> in that condition, we must not set attnotnull, otherwise the\n> consistency between attnotnull and \"not null\"\n> table constraint will be broken.\n>\n> other than that, the change in RemoveConstraintById looks sane.\n>\n Above comments want to say that after pk constranit dropped, if there are\ntuples in\npg_constraint, that means the definition of column has not-null constraint.\nSo we can't\nset pg_attribute.attnotnull to false.\n\nFor example:\ncreate table t1(a int not null);\nalter table t1 add constraint t1_pk primary key(a);\nalter table t1 drop constraint t1_pk;\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年3月29日周五 14:56写道:hi.\nabout v4, i think, i understand the changes you made.\nRemoveConstraintById(Oid conId)\nwill drop a single constraint record.\nif the constraint is primary key, then primary key associated\nattnotnull should set to false.\nbut sometimes it shouldn't.Yeah, indeed. \n\n\nfor example:\ndrop table if exists t2;\nCREATE TABLE t2(c0 int, c1 int);\nALTER TABLE  t2 ADD CONSTRAINT t2_pk PRIMARY KEY(c0, c1);\nALTER TABLE t2 ALTER COLUMN c0 ADD GENERATED ALWAYS AS IDENTITY;\nALTER TABLE  t2 DROP c1;\n\n+ * If this was a NOT NULL or the primary key, the constrained columns must\n+ * have had pg_attribute.attnotnull set.  See if we need to reset it, and\n+ * do so.\n+ */\n+ if (unconstrained_cols != NIL)\n\nunconstrained_cols is not NIL, which means we have dropped a primary key.\nI am confused by \"If this was a NOT NULL or the primary key\".NOT NULL means the definition of column having not-null constranit. For example:create table t1(a int not null);the pg_attribute.attnotnull is set to true.primary key case like below:create table t2(a int primary key);the pg_attribute.attnotnull is set to true.I think aboved case can explain what's meaning about comments in dropconstraint_internal.But here, in RemoveConstraintById() , we only care about primary key case, so NOT NULL is betterto removed from comments. 
\n\n+\n+ /*\n+ * Since the above deletion has been made visible, we can now\n+ * search for any remaining constraints on this column (or these\n+ * columns, in the case we're dropping a multicol primary key.)\n+ * Then, verify whether any further NOT NULL exists, and reset\n+ * attnotnull if none.\n+ */\n+ contup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);\n+ if (HeapTupleIsValid(contup))\n+ {\n+ heap_freetuple(contup);\n+ heap_freetuple(atttup);\n+ continue;\n+ }\n\nI am a little bit confused by the above comment.\nI think the comments should say,\nif contup is valid, that means, we already have one  \"not null\"\nconstraint associate with the attnum\nin that condition, we must not set attnotnull, otherwise the\nconsistency between attnotnull and \"not null\"\ntable constraint will be broken.\n\nother than that, the change in RemoveConstraintById looks sane. Above comments want to say that after pk constranit dropped, if there are tuples inpg_constraint, that means the definition of column has not-null constraint. So we can'tset pg_attribute.attnotnull to false.For example:create table t1(a int not null);alter table t1 add constraint t1_pk primary key(a);alter table t1 drop constraint t1_pk;-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Fri, 29 Mar 2024 15:51:39 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "It has been several days since the last email. Do you have any\nsuggestions, please?\n\nWhat I'm concerned about is that adding a new AT_PASS is good fix? Is it\na big try?\nMore concerned is that it can cover all ALTER TABLE cases?\n\nAny thoughts.\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nIt has been several days since the last email.  Do you have any suggestions, please?What I'm concerned about  is that adding a new AT_PASS is good fix?  
Is it a big try?More concerned is that it can cover all ALTER TABLE cases?Any thoughts.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Sun, 7 Apr 2024 18:33:30 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Mar-29, Tender Wang wrote:\n\n> I think aboved case can explain what's meaning about comments in\n> dropconstraint_internal.\n> But here, in RemoveConstraintById() , we only care about primary key case,\n> so NOT NULL is better to removed from comments.\n\nActually, I think it's better if all the resets of attnotnull occur in\nRemoveConstraintById, for both types of constraints; we would keep that\nblock in dropconstraint_internal only to raise errors in the cases where\nthe constraint is protecting a replica identity or a generated column.\nSomething like the attached, perhaps, may need more polish.\n\nI'm not really sure about the business of adding a new pass value\n-- it's clear that things don't work if we don't do *something* -- I'm\njust not sure if this is the something that we want to do.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)", "msg_date": "Tue, 9 Apr 2024 19:29:03 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Wed, Apr 10, 2024 at 1:29 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-29, Tender Wang wrote:\n>\n> > I think aboved case can explain what's meaning about comments in\n> > dropconstraint_internal.\n> > But here, in RemoveConstraintById() , we only care about primary key case,\n> > so NOT NULL is better to removed from comments.\n>\n> Actually, I think it's better if all the resets of attnotnull occur in\n> RemoveConstraintById, for both types of constraints; we would keep that\n> block in dropconstraint_internal only to raise errors in the cases where\n> the constraint is protecting a replica identity or a generated column.\n> Something like the attached, perhaps, may need more polish.\n>\n\nDROP TABLE if exists notnull_tbl2;\nCREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1 int);\nALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\nALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\nALTER TABLE notnull_tbl2 DROP c1;\n\\d notnull_tbl2\n\nlast \"\\d notnull_tbl2\" command, master output is:\n Table \"public.notnull_tbl2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+----------------------------------\n c0 | integer | | not null | generated by default as identity\n\n\n\nlast \"\\d notnull_tbl2\" command, applying\n0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch\noutput:\n Table \"public.notnull_tbl2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+----------------------------------\n c0 | integer | | | generated by default as identity\n\n\nthere may also have similar issues with the replicate identity.\n\n\n", "msg_date": "Wed, 10 Apr 2024 14:10:23 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年4月10日周三 14:10写道:\n\n> On Wed, Apr 10, 2024 at 1:29 AM Alvaro Herrera <[email 
protected]>\n> wrote:\n> >\n> > On 2024-Mar-29, Tender Wang wrote:\n> >\n> > > I think aboved case can explain what's meaning about comments in\n> > > dropconstraint_internal.\n> > > But here, in RemoveConstraintById() , we only care about primary key\n> case,\n> > > so NOT NULL is better to removed from comments.\n> >\n> > Actually, I think it's better if all the resets of attnotnull occur in\n> > RemoveConstraintById, for both types of constraints; we would keep that\n> > block in dropconstraint_internal only to raise errors in the cases where\n> > the constraint is protecting a replica identity or a generated column.\n> > Something like the attached, perhaps, may need more polish.\n> >\n>\n> DROP TABLE if exists notnull_tbl2;\n> CREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1\n> int);\n> ALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\n> ALTER TABLE notnull_tbl2 DROP c1;\n> \\d notnull_tbl2\n>\n> last \"\\d notnull_tbl2\" command, master output is:\n> Table \"public.notnull_tbl2\"\n> Column | Type | Collation | Nullable | Default\n>\n> --------+---------+-----------+----------+----------------------------------\n> c0 | integer | | not null | generated by default as identity\n>\n>\n>\n> last \"\\d notnull_tbl2\" command, applying\n> 0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch\n> output:\n> Table \"public.notnull_tbl2\"\n> Column | Type | Collation | Nullable | Default\n>\n> --------+---------+-----------+----------+----------------------------------\n> c0 | integer | | | generated by default as identity\n>\n\nHmm,\nALTER TABLE notnull_tbl2 DROP c1; will not call dropconstraint_internal().\nWhen dropping PK constraint indirectly, c0's attnotnull was set to false in\nRemoveConstraintById().\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年4月10日周三 14:10写道:On Wed, Apr 10, 2024 at 1:29 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-29, Tender Wang wrote:\n>\n> > I think aboved case can explain what's meaning about comments in\n> > dropconstraint_internal.\n> > But here, in RemoveConstraintById() , we only care about primary key case,\n> > so NOT NULL is better to removed from comments.\n>\n> Actually, I think it's better if all the resets of attnotnull occur in\n> RemoveConstraintById, for both types of constraints; we would keep that\n> block in dropconstraint_internal only to raise errors in the cases where\n> the constraint is protecting a replica identity or a generated column.\n> Something like the attached, perhaps, may need more polish.\n>\n\nDROP TABLE if exists notnull_tbl2;\nCREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1 int);\nALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\nALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\nALTER TABLE notnull_tbl2 DROP c1;\n\\d notnull_tbl2\n\nlast \"\\d notnull_tbl2\" command, master output is:\n                        Table \"public.notnull_tbl2\"\n Column |  Type   | Collation | Nullable |             Default\n--------+---------+-----------+----------+----------------------------------\n c0     | integer |           | not null | generated by default as identity\n\n\n\nlast \"\\d notnull_tbl2\" command, applying\n0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch\noutput:\n                        Table \"public.notnull_tbl2\"\n Column |  Type   | Collation | Nullable |             
Default\n--------+---------+-----------+----------+----------------------------------\n c0     | integer |           |          | generated by default as identityHmm,  ALTER TABLE notnull_tbl2 DROP c1; will not call dropconstraint_internal(). When dropping PK constraint indirectly, c0's attnotnull was set to false in RemoveConstraintById().-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Wed, 10 Apr 2024 14:36:33 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "another related bug, in master.\n\ndrop table if exists notnull_tbl1;\nCREATE TABLE notnull_tbl1 (c0 int not null, c1 int);\nALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n\\d+ notnull_tbl1\nALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\nALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\n\n\"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"\nshould fail?\n\nI didn't investigate deep enough.\n\nanother related bug, in master.drop table if exists notnull_tbl1;CREATE TABLE notnull_tbl1 (c0 int not null, c1 int);ALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\\d+ notnull_tbl1ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;ALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"should fail?I didn't investigate deep enough.", "msg_date": "Wed, 10 Apr 2024 17:34:30 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年4月10日周三 17:34写道:\n\n>\n> another related bug, in master.\n>\n> drop table if exists notnull_tbl1;\n> CREATE TABLE notnull_tbl1 (c0 int not null, c1 int);\n> ALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> \\d+ notnull_tbl1\n> ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\n> ALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\n>\n> \"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"\n> should fail?\n>\n\nYeah, it should fail as before, because c0 is primary key.\nIn master, although c0's pg_attribute.attnotnull is still true, but its\nnot-null constraint has been deleted\nin dropconstraint_internal().\n\nIf we drop column c1 after dropping c0 not null, the primary key will be\ndropped indirectly.\nAnd now you can see c0 is still not-null if you do \\d+ notnull_tbl1. 
But it\nwill report error \"not found not-null\"\nif you alter c0 drop not null.\n\npostgres=# ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\nALTER TABLE\npostgres=# \\d+ notnull_tbl1\n Table \"public.notnull_tbl1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n c0 | integer | | not null | | plain |\n| |\n c1 | integer | | not null | | plain |\n| |\nIndexes:\n \"q\" PRIMARY KEY, btree (c0, c1)\nAccess method: heap\n\npostgres=# alter table notnull_tbl1 drop c1;\nALTER TABLE\npostgres=# \\d notnull_tbl1\n Table \"public.notnull_tbl1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n c0 | integer | | not null |\n\npostgres=# alter table notnull_tbl1 alter c0 drop not null;\nERROR: could not find not-null constraint on column \"c0\", relation\n\"notnull_tbl1\"\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年4月10日周三 17:34写道:another related bug, in master.drop table if exists notnull_tbl1;CREATE TABLE notnull_tbl1 (c0 int not null, c1 int);ALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\\d+ notnull_tbl1ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;ALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"should fail?Yeah, it should fail as before, because c0 is primary key.In master, although c0's pg_attribute.attnotnull is still true, but its not-null constraint has been deletedin dropconstraint_internal(). If we drop column c1 after dropping c0 not null, the primary key will be dropped indirectly.And now you can see c0 is still not-null if you do \\d+ notnull_tbl1. But it will report error \"not found not-null\"if you alter c0 drop not null. 
postgres=# ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;ALTER TABLEpostgres=# \\d+ notnull_tbl1                                      Table \"public.notnull_tbl1\" Column |  Type   | Collation | Nullable | Default | Storage | Compression | Stats target | Description--------+---------+-----------+----------+---------+---------+-------------+--------------+------------- c0     | integer |           | not null |         | plain   |             |              | c1     | integer |           | not null |         | plain   |             |              |Indexes:    \"q\" PRIMARY KEY, btree (c0, c1)Access method: heappostgres=# alter table notnull_tbl1 drop c1;ALTER TABLEpostgres=# \\d notnull_tbl1            Table \"public.notnull_tbl1\" Column |  Type   | Collation | Nullable | Default--------+---------+-----------+----------+--------- c0     | integer |           | not null |postgres=# alter table notnull_tbl1 alter c0 drop not null;ERROR:  could not find not-null constraint on column \"c0\", relation \"notnull_tbl1\" -- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Wed, 10 Apr 2024 18:11:02 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-10, Tender Wang wrote:\n\n> Yeah, it should fail as before, because c0 is primary key.\n> In master, although c0's pg_attribute.attnotnull is still true, but its\n> not-null constraint has been deleted\n> in dropconstraint_internal().\n\nYeah, the problem here is that we need to do the checks whenever the\nconstraints are dropped, either directly or indirectly ... but we cannot\ndo them in RemoveConstraintById, because if you elog(ERROR) there, it\nwon't let you use DROP TABLE (which will also arrive at\nRemoveConstraintById):\n\n55490 17devel 2220048=# drop table notnull_tbl2;\nERROR: column \"c0\" of relation \"notnull_tbl2\" is an identity column\n\n... which is of course kinda ridiculous, so this is not a viable\nalternative. The problem is that RemoveConstraintById doesn't have\nsufficient context about the whole operation. Worse: it cannot feed\nits operations back into the alter table state.\n\n\nI had a thought yesterday about making the resets of attnotnull and the\ntests for replica identity and PKs to a separate ALTER TABLE pass,\nindependent of RemoveConstraintById (which would continue to be\nresponsible only for dropping the catalog row, as currently).\nThis may make the whole thing simpler. I'm on it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:54:13 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-10, jian he wrote:\n\n> another related bug, in master.\n> \n> drop table if exists notnull_tbl1;\n> CREATE TABLE notnull_tbl1 (c0 int not null, c1 int);\n> ALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> \\d+ notnull_tbl1\n> ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\n> ALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\n> \n> \"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"\n> should fail?\n\nNo, this should not fail, and it is working correctly in master. 
You\ncan drop the not-null constraint, but the column will still be\nnon-nullable, because the primary key still exists. If you drop the\nprimary key later, then the column becomes nullable. This is by design.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n", "msg_date": "Wed, 10 Apr 2024 13:01:52 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "It turns out that trying to close all holes that lead to columns marked\nnot-null without a pg_constraint row is not possible within the ALTER\nTABLE framework, because it can happen outside it also. Consider this\n\nCREATE DOMAIN dom1 AS integer;\nCREATE TABLE notnull_tbl (a dom1, b int, PRIMARY KEY (a, b));\nDROP DOMAIN dom1 CASCADE;\n\nIn this case you'll end up with b having attnotnull=true and no\nconstraint; and no amount of messing with tablecmds.c will fix it.\n\nSo I propose to instead allow those constraints, and treat them as\nsecond-class citizens. We allow dropping them with ALTER TABLE DROP NOT\nNULL, and we allow to create a backing full-fledged constraint with SET\nNOT NULL or ADD CONSTRAINT. So here's a partial crude initial patch to\ndo that.\n\n\nOne thing missing here is pg_dump support. If you just dump this table,\nit'll end up with no constraint at all. That's obviously bad, so I\npropose we have pg_dump add a regular NOT NULL constraint for those, to\navoid perpetuating the weird situation further.\n\nAnother thing I wonder if whether I should use the existing\nset_attnotnull() instead of adding drop_orphaned_notnull(). Or we could\njust inline the code in ATExecDropNotNull, since it's small and\nself-contained.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)", "msg_date": "Wed, 10 Apr 2024 15:58:49 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Wed, Apr 10, 2024 at 7:01 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Apr-10, jian he wrote:\n>\n> > another related bug, in master.\n> >\n> > drop table if exists notnull_tbl1;\n> > CREATE TABLE notnull_tbl1 (c0 int not null, c1 int);\n> > ALTER TABLE notnull_tbl1 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > \\d+ notnull_tbl1\n> > ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\n> > ALTER TABLE notnull_tbl1 ALTER c1 DROP NOT NULL;\n> >\n> > \"ALTER TABLE notnull_tbl1 ALTER c0 DROP NOT NULL;\"\n> > should fail?\n>\n> No, this should not fail, and it is working correctly in master. You\n> can drop the not-null constraint, but the column will still be\n> non-nullable, because the primary key still exists. If you drop the\n> primary key later, then the column becomes nullable. This is by design.\n>\n\nnow I got it. 
the second time, it will fail.\nit should be the expected behavior.\n\nper commit:\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\nIn the function dropconstraint_internal, I changed \"foreach\" to\n\"foreach_int\" in some places,\nand other minor cosmetic changes within the function\ndropconstraint_internal only.\n\nSince I saw your changes in dropconstraint_internal, I posted here.\nI will review your latest patch later.", "msg_date": "Wed, 10 Apr 2024 23:03:30 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-10, Alvaro Herrera wrote:\n\n> One thing missing here is pg_dump support. If you just dump this table,\n> it'll end up with no constraint at all. That's obviously bad, so I\n> propose we have pg_dump add a regular NOT NULL constraint for those, to\n> avoid perpetuating the weird situation further.\n\nHere's another crude patchset, this time including the pg_dump aspect.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)", "msg_date": "Wed, 10 Apr 2024 19:23:25 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年4月10日周三 21:58写道:\n\n> It turns out that trying to close all holes that lead to columns marked\n> not-null without a pg_constraint row is not possible within the ALTER\n> TABLE framework, because it can happen outside it also. Consider this\n>\n> CREATE DOMAIN dom1 AS integer;\n> CREATE TABLE notnull_tbl (a dom1, b int, PRIMARY KEY (a, b));\n> DROP DOMAIN dom1 CASCADE;\n>\n> In this case you'll end up with b having attnotnull=true and no\n> constraint; and no amount of messing with tablecmds.c will fix it.\n>\n\nI try above case on my v4 patch[1], and it seems no result as what you said.\nBut, anyway, I now don't like updating other catalog in\nRemoveConstraintById().\nBecause it will not be friendly for others who call RemoveConstraintById()\nwant only\nto remove pg_constraint tuple, but actually it do more works stealthily.\n\n\n> So I propose to instead allow those constraints, and treat them as\n> second-class citizens. We allow dropping them with ALTER TABLE DROP NOT\n> NULL, and we allow to create a backing full-fledged constraint with SET\n> NOT NULL or ADD CONSTRAINT. So here's a partial crude initial patch to\n> do that.\n>\n\nHmm, the patch looks like the patch in my first email in this thread. But\nmy v1 patch seem\na poc at most.\n\n>\n>\n> One thing missing here is pg_dump support. If you just dump this table,\n> it'll end up with no constraint at all. That's obviously bad, so I\n> propose we have pg_dump add a regular NOT NULL constraint for those, to\n> avoid perpetuating the weird situation further.\n>\n> Another thing I wonder if whether I should use the existing\n> set_attnotnull() instead of adding drop_orphaned_notnull(). 
Or we could\n> just inline the code in ATExecDropNotNull, since it's small and\n> self-contained.\n>\n\nI like just inline the code in ATExecDropNotNull, as you said, it's small\nand self-contained.\nin ATExecDropNotNull(), we had open the pg_attributed table and hold\nRowExclusiveLock,\nthe tuple we also get.\nWhat we do is set attnotnull = false, and call CatalogTupleUpdate.\n\n-- \n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"Postgres is bloatware by design: it was built to house\n> PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n>\n\n[1]\nhttps://www.postgresql.org/message-id/CAHewXNn_So7LUCxxxyDNfdvCQp1TnD3gTVECBZX2bT_nbPgraQ%40mail.gmail.com\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年4月10日周三 21:58写道:It turns out that trying to close all holes that lead to columns marked\nnot-null without a pg_constraint row is not possible within the ALTER\nTABLE framework, because it can happen outside it also.  Consider this\n\nCREATE DOMAIN dom1 AS integer;\nCREATE TABLE notnull_tbl (a dom1, b int, PRIMARY KEY (a, b));\nDROP DOMAIN dom1 CASCADE;\n\nIn this case you'll end up with b having attnotnull=true and no\nconstraint; and no amount of messing with tablecmds.c will fix it.I try above case on my v4 patch[1], and it seems no result as what you said.But, anyway, I now don't like updating other catalog in RemoveConstraintById().Because it will not be friendly for others who call RemoveConstraintById() want onlyto remove pg_constraint tuple, but actually it do more works stealthily.\n\nSo I propose to instead allow those constraints, and treat them as\nsecond-class citizens.  We allow dropping them with ALTER TABLE DROP NOT\nNULL, and we allow to create a backing full-fledged constraint with SET\nNOT NULL or ADD CONSTRAINT.  So here's a partial crude initial patch to\ndo that.Hmm, the patch looks like the patch in my first email in this thread. But my v1 patch seema poc at most. \n\n\nOne thing missing here is pg_dump support.  If you just dump this table,\nit'll end up with no constraint at all.  That's obviously bad, so I\npropose we have pg_dump add a regular NOT NULL constraint for those, to\navoid perpetuating the weird situation further.\n\nAnother thing I wonder if whether I should use the existing\nset_attnotnull() instead of adding drop_orphaned_notnull().  Or we could\njust inline the code in ATExecDropNotNull, since it's small and\nself-contained.I like just inline the code in  ATExecDropNotNull, as you said, it's small and self-contained.in ATExecDropNotNull(), we had open the pg_attributed table and hold RowExclusiveLock,the tuple we also get. 
What we do is set attnotnull = false, and call CatalogTupleUpdate.\n-- \nÁlvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n[1] https://www.postgresql.org/message-id/CAHewXNn_So7LUCxxxyDNfdvCQp1TnD3gTVECBZX2bT_nbPgraQ%40mail.gmail.com-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Thu, 11 Apr 2024 10:18:21 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Wed, Apr 10, 2024 at 2:10 PM jian he <[email protected]> wrote:\n>\n> DROP TABLE if exists notnull_tbl2;\n> CREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1 int);\n> ALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\n> ALTER TABLE notnull_tbl2 DROP c1;\n> \\d notnull_tbl2\n\n> ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\nper above sequence execution order, this should error out?\n\notherwise which \"not null\" (attribute|constraint) to anchor \"generated\nby default as identity\" not null property?\n\"DROP c1\" will drop the not null property for \"c0\" and \"c1\".\nif \"DROP CONSTRAINT notnull_tbl2_c0_not_nul\" not error out, then\n\" ALTER TABLE notnull_tbl2 DROP c1;\"\nshould either error out\nor transform \"c0\" from \"c0 int generated by default as identity\"\nto\n\"c0 int\"\n\n\nOn Thu, Apr 11, 2024 at 1:23 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Apr-10, Alvaro Herrera wrote:\n>\n> > One thing missing here is pg_dump support. If you just dump this table,\n> > it'll end up with no constraint at all. 
That's obviously bad, so I\n> > propose we have pg_dump add a regular NOT NULL constraint for those, to\n> > avoid perpetuating the weird situation further.\n>\n> Here's another crude patchset, this time including the pg_dump aspect.\n>\n\n+DROP TABLE notnull_tbl1;\n+-- make sure attnotnull is reset correctly when a PK is dropped indirectly\n+CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n+ALTER TABLE notnull_tbl1 DROP c1;\n+\\d+ notnull_tbl1\n+ Table \"public.notnull_tbl1\"\n+ Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n+--------+---------+-----------+----------+---------+---------+--------------+-------------\n+ c0 | integer | | not null | | plain | |\n+\n\nthis is not what we expected?\n\"not null\" for \"c0\" now should be false?\nam I missing something?\n\n\n", "msg_date": "Thu, 11 Apr 2024 14:40:04 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年4月11日周四 14:40写道:\n\n> On Wed, Apr 10, 2024 at 2:10 PM jian he <[email protected]>\n> wrote:\n> >\n> > DROP TABLE if exists notnull_tbl2;\n> > CREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1\n> int);\n> > ALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> > ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\n> > ALTER TABLE notnull_tbl2 DROP c1;\n> > \\d notnull_tbl2\n>\n> > ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\n> per above sequence execution order, this should error out?\n>\n> otherwise which \"not null\" (attribute|constraint) to anchor \"generated\n> by default as identity\" not null property?\n> \"DROP c1\" will drop the not null property for \"c0\" and \"c1\".\n> if \"DROP CONSTRAINT notnull_tbl2_c0_not_nul\" not error out, then\n> \" ALTER TABLE notnull_tbl2 DROP c1;\"\n> should either error out\n> or transform \"c0\" from \"c0 int generated by default as identity\"\n> to\n> \"c0 int\"\n>\n> I try above case on MASTER and MASTER with Alvaro V2 patch, and all work\ncorrectly.\n\\d+ notnull_tbl2 will see not-null of \"c0\".\n\n\n>\n> On Thu, Apr 11, 2024 at 1:23 AM Alvaro Herrera <[email protected]>\n> wrote:\n> >\n> > On 2024-Apr-10, Alvaro Herrera wrote:\n> >\n> > > One thing missing here is pg_dump support. If you just dump this\n> table,\n> > > it'll end up with no constraint at all. 
That's obviously bad, so I\n> > > propose we have pg_dump add a regular NOT NULL constraint for those, to\n> > > avoid perpetuating the weird situation further.\n> >\n> > Here's another crude patchset, this time including the pg_dump aspect.\n> >\n>\n> +DROP TABLE notnull_tbl1;\n> +-- make sure attnotnull is reset correctly when a PK is dropped indirectly\n> +CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n> +ALTER TABLE notnull_tbl1 DROP c1;\n> +\\d+ notnull_tbl1\n> + Table \"public.notnull_tbl1\"\n> + Column | Type | Collation | Nullable | Default | Storage | Stats\n> target | Description\n>\n> +--------+---------+-----------+----------+---------+---------+--------------+-------------\n> + c0 | integer | | not null | | plain |\n> |\n> +\n>\n> this is not what we expected?\n> \"not null\" for \"c0\" now should be false?\n> am I missing something?\n>\nYeah, now this is expected behavior.\nUsers can drop manually not-null of \"c0\" if they want, and no error\nreporte.\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年4月11日周四 14:40写道:On Wed, Apr 10, 2024 at 2:10 PM jian he <[email protected]> wrote:\n>\n> DROP TABLE if exists notnull_tbl2;\n> CREATE TABLE notnull_tbl2 (c0 int generated by default as IDENTITY, c1 int);\n> ALTER TABLE notnull_tbl2 ADD CONSTRAINT Q PRIMARY KEY(c0, c1);\n> ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\n> ALTER TABLE notnull_tbl2 DROP c1;\n> \\d notnull_tbl2\n\n> ALTER TABLE notnull_tbl2 DROP CONSTRAINT notnull_tbl2_c0_not_null;\nper above sequence execution order, this should error out?\n\notherwise which \"not null\" (attribute|constraint) to anchor \"generated\nby default as identity\" not null property?\n\"DROP c1\" will drop the not null property for \"c0\" and \"c1\".\nif \"DROP CONSTRAINT notnull_tbl2_c0_not_nul\" not error out, then\n\" ALTER TABLE notnull_tbl2 DROP c1;\"\nshould either error out\nor transform \"c0\" from \"c0 int generated by default as identity\"\nto\n\"c0 int\"\nI try above case on MASTER and MASTER with Alvaro V2 patch, and all work correctly.\\d+ notnull_tbl2 will see not-null of \"c0\". \n\nOn Thu, Apr 11, 2024 at 1:23 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Apr-10, Alvaro Herrera wrote:\n>\n> > One thing missing here is pg_dump support.  If you just dump this table,\n> > it'll end up with no constraint at all.  That's obviously bad, so I\n> > propose we have pg_dump add a regular NOT NULL constraint for those, to\n> > avoid perpetuating the weird situation further.\n>\n> Here's another crude patchset, this time including the pg_dump aspect.\n>\n\n+DROP TABLE notnull_tbl1;\n+-- make sure attnotnull is reset correctly when a PK is dropped indirectly\n+CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n+ALTER TABLE  notnull_tbl1 DROP c1;\n+\\d+ notnull_tbl1\n+                               Table \"public.notnull_tbl1\"\n+ Column |  Type   | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n+--------+---------+-----------+----------+---------+---------+--------------+-------------\n+ c0     | integer |           | not null |         | plain   |              |\n+\n\nthis is not what we expected?\n\"not null\" for \"c0\" now should be false?\nam I missing something?Yeah, now this is expected behavior.Users can  drop manually not-null of \"c0\" if they want, and no error reporte. 
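For example (using the notnull_tbl1 case above; an illustrative sketch, not taken from the patch's regression tests):\n\nALTER TABLE notnull_tbl1 ALTER COLUMN c0 SET NOT NULL;   -- put the marking back by hand\nALTER TABLE notnull_tbl1 ALTER COLUMN c0 DROP NOT NULL;  -- or drop it again, no error\n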
-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Thu, 11 Apr 2024 15:19:35 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Thu, Apr 11, 2024 at 3:19 PM Tender Wang <[email protected]> wrote:\n>\n>> +DROP TABLE notnull_tbl1;\n>> +-- make sure attnotnull is reset correctly when a PK is dropped indirectly\n>> +CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n>> +ALTER TABLE notnull_tbl1 DROP c1;\n>> +\\d+ notnull_tbl1\n>> + Table \"public.notnull_tbl1\"\n>> + Column | Type | Collation | Nullable | Default | Storage | Stats\n>> target | Description\n>> +--------+---------+-----------+----------+---------+---------+--------------+-------------\n>> + c0 | integer | | not null | | plain | |\n>> +\n>>\n>> this is not what we expected?\n>> \"not null\" for \"c0\" now should be false?\n>> am I missing something?\n>\n> Yeah, now this is expected behavior.\n> Users can drop manually not-null of \"c0\" if they want, and no error reporte.\n>\n\nsorry for the noise.\nthese two past patches confused me:\n0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch\nv4-0001-Fix-pg_attribute-attnotnull-not-reset-when-droppe.patch\n\nI thought dropping a column of primary key (ALTER TABLE notnull_tbl2 DROP c1)\nwill make the others key columns to not have \"not null\" property.\n\nnow I figured out that\ndropping a column of primary key columns will not change other key\ncolumns' \"not null\" property.\ndropping the primary key associated constraint will make all key\ncolumns \"not null\" property disappear.\n\nv2-0001-Handle-ALTER-.-DROP-NOT-NULL-when-no-pg_constrain.patch\nbehavior looks fine to me now.\n\n\ninline drop_orphaned_notnull in ATExecDropNotNull looks fine to me.\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:49:35 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-11, jian he wrote:\n\n> now I figured out that\n> dropping a column of primary key columns will not change other key\n> columns' \"not null\" property.\n> dropping the primary key associated constraint will make all key\n> columns \"not null\" property disappear.\n\nWell, I think you were right that we should try to handle the situation\nof unmarking attnotnull as much as possible, to decrease the chances\nthat the problematic situation occurs. That means, we can use the\nearlier code to handle DROP COLUMN when it causes a PK to be dropped --\neven though we still need to handle the situation of an attnotnull flag\nset with no pg_constraint row. I mean, we still have to handle DROP\nDOMAIN correctly (and there might be other cases that I haven't thought\nabout) ... but surely this is a much less common situation than the one\nyou reported. So I'll merge everything and post an updated patch.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. 
(Don Knuth)\n\n\n", "msg_date": "Thu, 11 Apr 2024 11:10:17 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-11, Alvaro Herrera wrote:\n\n> Well, I think you were right that we should try to handle the situation\n> of unmarking attnotnull as much as possible, to decrease the chances\n> that the problematic situation occurs. That means, we can use the\n> earlier code to handle DROP COLUMN when it causes a PK to be dropped --\n> even though we still need to handle the situation of an attnotnull flag\n> set with no pg_constraint row. I mean, we still have to handle DROP\n> DOMAIN correctly (and there might be other cases that I haven't thought\n> about) ... but surely this is a much less common situation than the one\n> you reported. So I'll merge everything and post an updated patch.\n\nHere's roughly what I'm thinking. If we drop a constraint, we can still\nreset attnotnull in RemoveConstraintById(), but only after checking that\nit's not a generated column or a replica identity. If they are, we\ndon't error out -- just skip the attnotnull update.\n\nNow, about the code to allow ALTER TABLE DROP NOT NULL in case there's\nno pg_constraint row, I think at this point it's mostly dead code,\nbecause it can only happen when you have a replica identity or generated\ncolumn ... and the DROP NOT NULL should still prevent you from dropping\nthe flag anyway. But the case can still arise, if you change the\nreplica identity or ALTER TABLE ALTER COLUMN DROP DEFAULT, respectively.\n\nI'm still not ready with this -- still not convinced about the new AT\npass. Also, I want to add a test for the pg_dump behavior, and there's\nan XXX comment.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)", "msg_date": "Thu, 11 Apr 2024 16:48:00 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Thu, Apr 11, 2024 at 10:48 PM Alvaro Herrera <[email protected]> wrote:\n>\n>\n> I'm still not ready with this -- still not convinced about the new AT\n> pass. 
Also, I want to add a test for the pg_dump behavior, and there's\n> an XXX comment.\n>\nNow I am more confused...\n\n+-- make sure attnotnull is reset correctly when a PK is dropped indirectly,\n+-- or kept if there's a reason for that\n+CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n+ALTER TABLE notnull_tbl1 DROP c1;\n+\\d+ notnull_tbl1\n+ Table \"public.notnull_tbl1\"\n+ Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n+--------+---------+-----------+----------+---------+---------+--------------+-------------\n+ c0 | integer | | | | plain | |\n+\n+DROP TABLE notnull_tbl1;\n\nsame query, mysql make let \"c0\" be not null\nmysql https://dbfiddle.uk/_ltoU7PO\n\nfor postgre\nhttps://dbfiddle.uk/ZHJXEqL1\nfrom 9.3 to 16 (click the link (https://dbfiddle.uk/ZHJXEqL1), then\nclick \"9.3\" choose which version you like)\nall will make the remaining column \"co\" be not null.\n\nlatest\n0001-Better-handle-indirect-constraint-drops.patch make c0 attnotnull be false.\n\nprevious patches:\nv2-0001-Handle-ALTER-.-DROP-NOT-NULL-when-no-pg_constrain.patch make\nc0 attnotnull be true.\n0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch make\nc0 attnotnull be false.\n\n\n", "msg_date": "Fri, 12 Apr 2024 10:11:54 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "jian he <[email protected]> 于2024年4月12日周五 10:12写道:\n\n> On Thu, Apr 11, 2024 at 10:48 PM Alvaro Herrera <[email protected]>\n> wrote:\n> >\n> >\n> > I'm still not ready with this -- still not convinced about the new AT\n> > pass. Also, I want to add a test for the pg_dump behavior, and there's\n> > an XXX comment.\n> >\n> Now I am more confused...\n>\n> +-- make sure attnotnull is reset correctly when a PK is dropped\n> indirectly,\n> +-- or kept if there's a reason for that\n> +CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n> +ALTER TABLE notnull_tbl1 DROP c1;\n> +\\d+ notnull_tbl1\n> + Table \"public.notnull_tbl1\"\n> + Column | Type | Collation | Nullable | Default | Storage | Stats\n> target | Description\n>\n> +--------+---------+-----------+----------+---------+---------+--------------+-------------\n> + c0 | integer | | | | plain |\n> |\n> +\n> +DROP TABLE notnull_tbl1;\n>\n> same query, mysql make let \"c0\" be not null\n> mysql https://dbfiddle.uk/_ltoU7PO\n>\n> for postgre\n> https://dbfiddle.uk/ZHJXEqL1\n> from 9.3 to 16 (click the link (https://dbfiddle.uk/ZHJXEqL1), then\n> click \"9.3\" choose which version you like)\n> all will make the remaining column \"co\" be not null.\n>\n> latest\n> 0001-Better-handle-indirect-constraint-drops.patch make c0 attnotnull be\n> false.\n>\n> previous patches:\n> v2-0001-Handle-ALTER-.-DROP-NOT-NULL-when-no-pg_constrain.patch make\n> c0 attnotnull be true.\n> 0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch make\n> c0 attnotnull be false.\n>\n\nI'm not sure that SQL standard specifies what database must do for this\ncase.\nIf the standard does not specify, then it depends on each database vendor's\ndecision.\n\nSome people like not-null retained, other people may like not-null removed.\nI think it will be ok if people can drop not-null or add not-null back\nagain after dropping pk.\n\nIn Master, not-null will reset when we drop PK directly. 
I hope dropping pk\nindirectly\nis consistent with dropping PK directly.\n\n--\nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年4月12日周五 10:12写道:On Thu, Apr 11, 2024 at 10:48 PM Alvaro Herrera <[email protected]> wrote:\n>\n>\n> I'm still not ready with this -- still not convinced about the new AT\n> pass.  Also, I want to add a test for the pg_dump behavior, and there's\n> an XXX comment.\n>\nNow I am more confused...\n\n+-- make sure attnotnull is reset correctly when a PK is dropped indirectly,\n+-- or kept if there's a reason for that\n+CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n+ALTER TABLE  notnull_tbl1 DROP c1;\n+\\d+ notnull_tbl1\n+                               Table \"public.notnull_tbl1\"\n+ Column |  Type   | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n+--------+---------+-----------+----------+---------+---------+--------------+-------------\n+ c0     | integer |           |          |         | plain   |              |\n+\n+DROP TABLE notnull_tbl1;\n\nsame query, mysql make let \"c0\" be not null\nmysql https://dbfiddle.uk/_ltoU7PO\n\nfor postgre\nhttps://dbfiddle.uk/ZHJXEqL1\nfrom 9.3 to 16 (click the link (https://dbfiddle.uk/ZHJXEqL1), then\nclick \"9.3\" choose which version you like)\nall will make the remaining column \"co\" be not null.\n\nlatest\n0001-Better-handle-indirect-constraint-drops.patch make c0 attnotnull be false.\n\nprevious patches:\nv2-0001-Handle-ALTER-.-DROP-NOT-NULL-when-no-pg_constrain.patch  make\nc0 attnotnull be true.\n0001-Correctly-reset-attnotnull-when-constraints-dropped-.patch make\nc0 attnotnull be false.\nI'm not sure that SQL standard specifies what database must do for this case.If the standard does not specify, then it depends on each database vendor's decision.Some people like not-null retained, other people may like not-null removed.I think it will be ok if people can drop not-null or add not-null back again after dropping pk.In Master, not-null will reset when we drop PK directly. I hope dropping pk indirectlyis consistent with dropping PK directly.--Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Fri, 12 Apr 2024 11:26:00 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年4月11日周四 22:48写道:\n\n> On 2024-Apr-11, Alvaro Herrera wrote:\n>\n> > Well, I think you were right that we should try to handle the situation\n> > of unmarking attnotnull as much as possible, to decrease the chances\n> > that the problematic situation occurs. That means, we can use the\n> > earlier code to handle DROP COLUMN when it causes a PK to be dropped --\n> > even though we still need to handle the situation of an attnotnull flag\n> > set with no pg_constraint row. I mean, we still have to handle DROP\n> > DOMAIN correctly (and there might be other cases that I haven't thought\n> > about) ... but surely this is a much less common situation than the one\n> > you reported. So I'll merge everything and post an updated patch.\n>\n> Here's roughly what I'm thinking. If we drop a constraint, we can still\n> reset attnotnull in RemoveConstraintById(), but only after checking that\n> it's not a generated column or a replica identity. 
If they are, we\n> don't error out -- just skip the attnotnull update.\n>\n> Now, about the code to allow ALTER TABLE DROP NOT NULL in case there's\n> no pg_constraint row, I think at this point it's mostly dead code,\n> because it can only happen when you have a replica identity or generated\n> column ... and the DROP NOT NULL should still prevent you from dropping\n> the flag anyway. But the case can still arise, if you change the\n> replica identity or ALTER TABLE ALTER COLUMN DROP DEFAULT, respectively.\n>\n> I'm still not ready with this -- still not convinced about the new AT\n> pass.\n\n\nYeah, at first, I was also hesitant. Two reasons make me convinced.\nin ATPostAlterTypeParse()\n-----\n else if (cmd->subtype == AT_SetAttNotNull)\n {\n /*\n * The parser will create\nAT_AttSetNotNull subcommands for\n * columns of PRIMARY KEY\nindexes/constraints, but we need\n * not do anything with them here,\nbecause the columns'\n * NOT NULL marks will already have\nbeen propagated into\n * the new table definition.\n */\n }\n-------\nThe new table difinition continues to use old column not-null, so here does\nnothing.\nIf we reset NOT NULL marks in RemoveConstrainById() when dropping PK\nindirectly,\nwe need to do something here or somewhere else.\n\nExcept AT_SetAttNotNull type, other types add a AT pass to tab->subcmds.\nBecause\nnot-null should be added before re-adding index, there is no right AT pass\nin current AlterTablePass.\nSo a new AT pass ahead AT_PASS_OLD_INDEX is needed.\n\nAnother reason is that it can use ALTER TABLE frame to set not-null.\nThis way looks simpler and better than hardcode to re-install not-null in\nsome funciton.\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年4月11日周四 22:48写道:On 2024-Apr-11, Alvaro Herrera wrote:\n\n> Well, I think you were right that we should try to handle the situation\n> of unmarking attnotnull as much as possible, to decrease the chances\n> that the problematic situation occurs.  That means, we can use the\n> earlier code to handle DROP COLUMN when it causes a PK to be dropped --\n> even though we still need to handle the situation of an attnotnull flag\n> set with no pg_constraint row.  I mean, we still have to handle DROP\n> DOMAIN correctly (and there might be other cases that I haven't thought\n> about) ... but surely this is a much less common situation than the one\n> you reported.  So I'll merge everything and post an updated patch.\n\nHere's roughly what I'm thinking.  If we drop a constraint, we can still\nreset attnotnull in RemoveConstraintById(), but only after checking that\nit's not a generated column or a replica identity.  If they are, we\ndon't error out -- just skip the attnotnull update.\n\nNow, about the code to allow ALTER TABLE DROP NOT NULL in case there's\nno pg_constraint row, I think at this point it's mostly dead code,\nbecause it can only happen when you have a replica identity or generated\ncolumn ... and the DROP NOT NULL should still prevent you from dropping\nthe flag anyway.  But the case can still arise, if you change the\nreplica identity or ALTER TABLE ALTER COLUMN DROP DEFAULT, respectively.\n\nI'm still not ready with this -- still not convinced about the new AT\npass. Yeah, at first, I was also hesitant. 
Two reasons make me convinced.in ATPostAlterTypeParse()-----                               else if (cmd->subtype == AT_SetAttNotNull)                                {                                        /*                                         * The parser will create AT_AttSetNotNull subcommands for                                         * columns of PRIMARY KEY indexes/constraints, but we need                                         * not do anything with them here, because the columns'                                         * NOT NULL marks will already have been propagated into                                         * the new table definition.                                         */                                }-------The new table difinition continues to use old column not-null, so here does nothing.If we reset NOT NULL marks in RemoveConstrainById() when dropping PK indirectly, we need to do something here or somewhere else.Except AT_SetAttNotNull type, other types add a AT pass to tab->subcmds. Becausenot-null should be added before re-adding index, there is no right AT pass in current AlterTablePass.So a new AT pass ahead AT_PASS_OLD_INDEX  is needed.Another reason is that it can use ALTER TABLE frame to set not-null. This way looks simpler and better than hardcode to re-install not-null in some funciton.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Fri, 12 Apr 2024 14:02:51 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-12, jian he wrote:\n\n> Now I am more confused...\n\n> +CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n> +ALTER TABLE notnull_tbl1 DROP c1;\n\n> same query, mysql make let \"c0\" be not null\n\nYes, that was Postgres' old model. But the way we think of it now, is\nthat a column is marked attnotnull when a pg_constraint entry exists to\nsupport that flag, which can be a not-null constraint, or a primary key\nconstraint. In the old Postgres model, you're right that we would\ncontinue to have c0 as not-null, just like mysql. In the new model,\nthat flag no longer has no reason to be there, because the backing\nprimary key constraint has been removed, which is why we reset it.\n\nSo what I was saying in the cases with replica identity and generated\ncolumns, is that there's an attnotnull flag we cannot remove, because of\neither of those things, but we don't have any backing constraint for it,\nwhich is an inconsistency with the view of the world that I described\nabove. I would like to manufacture one not-null constraint at that\npoint, or just abort the drop of the PK ... 
but I don't see how to do\neither of those things.\n\n\nIf you want the c0 column to be still not-null after dropping the\nprimary key, you need to SET NOT NULL:\n\nCREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1)); \nALTER TABLE notnull_tbl1 ALTER c0 SET NOT NULL;\nALTER TABLE notnull_tbl1 DROP c1;\n\\d+ notnull_tbl1\n Table \"public.notnull_tbl1\"\n Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n c0 │ integer │ │ not null │ │ plain │ │ │ \nNot-null constraints:\n \"notnull_tbl1_c0_not_null\" NOT NULL \"c0\"\nAccess method: heap\n\n\nOne thing that's not quite ideal, is that the \"Nullable\" column doesn't\nmake it obvious that the flag is going to be removed if you drop the PK;\nyou have to infer that that's going to happen by noticing that there's\nno explicit not-null constraint listed for that column -- maybe too\nsubtle, especially if you have a lot of columns (luckily, PKs normally\ndon't have too many columns). This is why I suggested to change the\ncontents of that column if the flag is sustained by the PK. Something\nlike this, perhaps:\n\n=# CREATE TABLE notnull_tbl1 (c0 int not null, c1 int, PRIMARY KEY (c0, c1)); \n=# \\d+ notnull_tbl1\n Table \"public.notnull_tbl1\"\n Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n────────┼─────────┼───────────┼─────────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n c0 │ integer │ │ not null │ │ plain │ │ │ \n c1 │ integer │ │ primary key │ │ plain │ │ │ \nIndexes:\n \"notnull_tbl1_pkey\" PRIMARY KEY, btree (c0, c1)\nNot-null constraints:\n \"notnull_tbl1_c0_not_null\" NOT NULL \"c0\"\nAccess method: heap\n\nwhich should make it obvious.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)\n\n\n", "msg_date": "Fri, 12 Apr 2024 09:52:05 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On Fri, Apr 12, 2024 at 3:52 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Apr-12, jian he wrote:\n>\n> > Now I am more confused...\n>\n> > +CREATE TABLE notnull_tbl1 (c0 int, c1 int, PRIMARY KEY (c0, c1));\n> > +ALTER TABLE notnull_tbl1 DROP c1;\n>\n> > same query, mysql make let \"c0\" be not null\n>\n> Yes, that was Postgres' old model. But the way we think of it now, is\n> that a column is marked attnotnull when a pg_constraint entry exists to\n> support that flag, which can be a not-null constraint, or a primary key\n> constraint. In the old Postgres model, you're right that we would\n> continue to have c0 as not-null, just like mysql. In the new model,\n> that flag no longer has no reason to be there, because the backing\n> primary key constraint has been removed, which is why we reset it.\n>\n> So what I was saying in the cases with replica identity and generated\n> columns, is that there's an attnotnull flag we cannot remove, because of\n> either of those things, but we don't have any backing constraint for it,\n> which is an inconsistency with the view of the world that I described\n> above. 
I would like to manufacture one not-null constraint at that\n> point, or just abort the drop of the PK ... but I don't see how to do\n> either of those things.\n>\n\nthanks for your explanation.\nnow I understand it.\nI wonder is there any incompatibility issue, or do we need to say something\nabout the new behavior when dropping a key column?\n\nthe comments look good to me.\n\nonly minor cosmetic issue:\n+ if (unconstrained_cols)\ni would like change it to\n+ if (unconstrained_cols != NIL)\n\n+ foreach(lc, unconstrained_cols)\nwe can change to\n+ foreach_int(attnum, unconstrained_cols)\nper commit\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n\n\n", "msg_date": "Sat, 13 Apr 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-13, jian he wrote:\n\n> I wonder is there any incompatibility issue, or do we need to say something\n> about the new behavior when dropping a key column?\n\nUmm, yeah, maybe we should document it in ALTER TABLE DROP PRIMARY KEY\nand in the release notes to note the different behavior.\n\n> only minor cosmetic issue:\n> + if (unconstrained_cols)\n> i would like change it to\n> + if (unconstrained_cols != NIL)\n> \n> + foreach(lc, unconstrained_cols)\n> we can change to\n> + foreach_int(attnum, unconstrained_cols)\n> per commit\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n\nAh, yeah. I did that, rewrote some comments and refined the tests a\nlittle bit to ensure the pg_upgrade behavior is sane. I intend to get\nthis pushed tomorrow, if nothing ugly comes up.\n\nCI run: https://cirrus-ci.com/build/5471117953990656\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"", "msg_date": "Thu, 18 Apr 2024 20:49:52 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2024年4月19日周五 02:49写道:\n\n> On 2024-Apr-13, jian he wrote:\n>\n> > I wonder is there any incompatibility issue, or do we need to say\n> something\n> > about the new behavior when dropping a key column?\n>\n> Umm, yeah, maybe we should document it in ALTER TABLE DROP PRIMARY KEY\n> and in the release notes to note the different behavior.\n>\n> > only minor cosmetic issue:\n> > + if (unconstrained_cols)\n> > i would like change it to\n> > + if (unconstrained_cols != NIL)\n> >\n> > + foreach(lc, unconstrained_cols)\n> > we can change to\n> > + foreach_int(attnum, unconstrained_cols)\n> > per commit\n> >\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n>\n> Ah, yeah. I did that, rewrote some comments and refined the tests a\n> little bit to ensure the pg_upgrade behavior is sane. 
I intend to get\n> this pushed tomorrow, if nothing ugly comes up.\n>\n\nThe new patch looks good to me.\n\n\n>\n> CI run: https://cirrus-ci.com/build/5471117953990656\n>\n>\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAlvaro Herrera <[email protected]> 于2024年4月19日周五 02:49写道:On 2024-Apr-13, jian he wrote:\n\n> I wonder is there any incompatibility issue, or do we need to say something\n> about the new behavior when dropping a key column?\n\nUmm, yeah, maybe we should document it in ALTER TABLE DROP PRIMARY KEY\nand in the release notes to note the different behavior.\n\n> only minor cosmetic issue:\n> + if (unconstrained_cols)\n> i would like change it to\n> + if (unconstrained_cols != NIL)\n> \n> + foreach(lc, unconstrained_cols)\n> we can change to\n> + foreach_int(attnum, unconstrained_cols)\n> per commit\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n\nAh, yeah.  I did that, rewrote some comments and refined the tests a\nlittle bit to ensure the pg_upgrade behavior is sane.  I intend to get\nthis pushed tomorrow, if nothing ugly comes up.The new patch looks good to me. \n\nCI run: https://cirrus-ci.com/build/5471117953990656\n-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Fri, 19 Apr 2024 14:41:26 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" }, { "msg_contents": "On 2024-Apr-19, Tender Wang wrote:\n\n> The new patch looks good to me.\n\nThanks for looking once more. I have pushed it now. I didn't try\npg_upgrade other than running the tests, so maybe buildfarm member crake\nwill have more to complain about -- we'll see.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n", "msg_date": "Fri, 19 Apr 2024 12:40:11 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find not null constraint, but \\d+ shows that" } ]
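
For illustration, here is a minimal C sketch of the kind of attnotnull reset discussed in the thread above. It is not the committed PostgreSQL code: the helper name is invented, and the real change also has to consider replica-identity columns, which this sketch ignores. It only shows the shape of the check, namely clearing pg_attribute.attnotnull once the backing constraint is gone while leaving identity and generated columns alone.

/*
 * Illustrative sketch only, not the committed code.  Clear attnotnull for a
 * column whose backing constraint went away, unless the column is an
 * identity or generated column, which must stay NOT NULL.
 */
#include "postgres.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "catalog/indexing.h"
#include "catalog/pg_attribute.h"
#include "utils/rel.h"
#include "utils/syscache.h"

static void
maybe_reset_attnotnull(Oid relid, int16 attnum)		/* hypothetical helper */
{
	Relation	attrel = table_open(AttributeRelationId, RowExclusiveLock);
	HeapTuple	atttup = SearchSysCacheCopyAttNum(relid, attnum);
	Form_pg_attribute attForm;

	if (!HeapTupleIsValid(atttup))
		elog(ERROR, "cache lookup failed for attribute %d of relation %u",
			 attnum, relid);
	attForm = (Form_pg_attribute) GETSTRUCT(atttup);

	/* identity and generated columns keep their NOT NULL marking */
	if (attForm->attnotnull &&
		attForm->attidentity == '\0' &&
		attForm->attgenerated == '\0')
	{
		attForm->attnotnull = false;
		CatalogTupleUpdate(attrel, &atttup->t_self, atttup);
	}

	heap_freetuple(atttup);
	table_close(attrel, RowExclusiveLock);
}
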
[ { "msg_contents": "Hi,\n\nAt present time, an existing pg_is_in_recovery() method is not enough\nto distinguish a server being in point in time recovery (PITR) mode and \nan ordinary replica\nbecause it returns true in both cases.\n\nThat is why pg_is_standby_requested() function introduced in attached \npatch might help.\nIt reports whether a standby.signal file was found in the data directory \nat startup process.\nInstructions for reproducing the possible use case are also attached.\n\nHope it will be usefull.\n\nRespectfully,\n\nMikhail Litsarev\nPostgres Professional: https://postgrespro.com", "msg_date": "Tue, 26 Mar 2024 17:28:01 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "SQL function which allows to distinguish a server being in point in\n time recovery mode and an ordinary replica" }, { "msg_contents": "On Tue Mar 26, 2024 at 9:28 AM CDT, m.litsarev wrote:\n> Hi,\n>\n> At present time, an existing pg_is_in_recovery() method is not enough\n> to distinguish a server being in point in time recovery (PITR) mode and \n> an ordinary replica\n> because it returns true in both cases.\n>\n> That is why pg_is_standby_requested() function introduced in attached \n> patch might help.\n> It reports whether a standby.signal file was found in the data directory \n> at startup process.\n> Instructions for reproducing the possible use case are also attached.\n>\n> Hope it will be usefull.\n\nHey Mikhail,\n\nSaw your patch for the first time today. Looks like your patch is messed \nup? You seem to have more of the diff at the bottom which seems to add \na test. Want to send a v2 with a properly formatted patch?\n\nExample command:\n\ngit format-patch -v2 -M HEAD^\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:06:03 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL function which allows to distinguish a server being in\n point in time recovery mode and an ordinary replica" }, { "msg_contents": "On Mon, Apr 15, 2024 at 04:06:03PM -0500, Tristan Partin wrote:\n> Saw your patch for the first time today. Looks like your patch is messed up?\n> You seem to have more of the diff at the bottom which seems to add a test.\n> Want to send a v2 with a properly formatted patch?\n\nFWIW, complicating more XLogRecoveryCtlData sends me shivers, these\ndays, because we have already a lot of recovery state to track within\nit.\n\nMore seriously, I'm not much a fan of introducing more branches at the\nbottom of readRecoverySignalFile() for the boolean flags tracking if\nstandby and/or archive recovery are triggered, even if these are\nsimple there are already too many of them. Perhaps we should begin\ntracking all that as a set of bitmasks, then plug in the tracked state\nin shmem for consumption in some SQL function.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 08:39:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" }, { "msg_contents": "On 2024-Apr-16, Michael Paquier wrote:\n\n> there are already too many of them. 
Perhaps we should begin\n> tracking all that as a set of bitmasks, then plug in the tracked state\n> in shmem for consumption in some SQL function.\n\nYes, it sounds reasonable.\nLet me implement some initial draft and come back with it after a while.\n\nRespectfully,\n\nMikhail Litsarev\nPostgres Professional: https://postgrespro.com\n\n\n", "msg_date": "Wed, 17 Apr 2024 11:57:47 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" }, { "msg_contents": "> simple there are already too many of them. Perhaps we should begin\n> tracking all that as a set of bitmasks, then plug in the tracked state\n> in shmem for consumption in some SQL function.\n\nHi!\n\nMichael, Tristan\nas a first step I have introduced the `SharedRecoveryDataFlags` bitmask \ninstead of three boolean SharedHotStandbyActive, \nSharedPromoteIsTriggered and SharedStandbyModeRequested flags (the last \none from my previous patch) and made minimal updates in corresponding \ncode based on that change.\n\nRespectfully,\n\nMikhail Litsarev\nPostgres Professional: https://postgrespro.com", "msg_date": "Mon, 06 May 2024 18:55:46 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" }, { "msg_contents": "On Mon, May 06, 2024 at 06:55:46PM +0300, [email protected] wrote:\n> as a first step I have introduced the `SharedRecoveryDataFlags` bitmask\n> instead of three boolean SharedHotStandbyActive, SharedPromoteIsTriggered\n> and SharedStandbyModeRequested flags (the last one from my previous patch)\n> and made minimal updates in corresponding code based on that change.\n\nThanks for the patch.\n\n /*\n- * Local copy of SharedHotStandbyActive variable. False actually means \"not\n+ * Local copy of XLR_HOT_STANDBY_ACTIVE flag. False actually means \"not\n * known, need to check the shared state\".\n */\n static bool LocalHotStandbyActive = false;\n \n /*\n- * Local copy of SharedPromoteIsTriggered variable. False actually means \"not\n+ * Local copy of XLR_PROMOTE_IS_TRIGGERED flag. False actually means \"not\n * known, need to check the shared state\".\n */\n static bool LocalPromoteIsTriggered = false;\n\nIt's a bit strange to have a bitwise set of flags in shmem while we\nkeep these local copies as booleans. Perhaps it would be cleaner to\nmerge both local variables into a single bits32 store?\n\n+ uint32 SharedRecoveryDataFlags;\n\nI'd switch to bits32 for flags here.\n\n+bool\n+StandbyModeIsRequested(void)\n+{\n+\t/*\n+\t * Spinlock is not needed here because XLR_STANDBY_MODE_REQUESTED flag\n+\t * can only be read after startup process is done.\n+\t */\n+\treturn (XLogRecoveryCtl->SharedRecoveryDataFlags & XLR_STANDBY_MODE_REQUESTED) != 0;\n+}\n\nHow about introducing a single wrapper function that returns the whole\nvalue SharedRecoveryDataFlags, with the flags published in a header?\nSure, XLR_HOT_STANDBY_ACTIVE is not really exciting because being able\nto query a standby implies it, but XLR_PROMOTE_IS_TRIGGERED could be\ninteresting? 
Then this could be used with a function that returns a\ntext[] array with all the states retrieved?\n\nThe refactoring pieces and the function pieces should be split, for\nclarity.\n--\nMichael", "msg_date": "Wed, 8 May 2024 13:24:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" }, { "msg_contents": "Hi!\n\nMichael,\nI have fixed the patches according to your comments.\n\n> merge both local variables into a single bits32 store?\nThis is done in v3-0001-Standby-mode-requested.patch\n\n> Then this could be used with a function that returns a\n> text[] array with all the states retrieved?\nPlaced this in the v3-0002-Text-array-sql-wrapper.patch\n\n> The refactoring pieces and the function pieces should be split, for\n> clarity.\nSure. I also added the third patch with some tests. Perhaps it would be \nusefull.\n\nRespectfully,\n\nMikhail Litsarev\nPostgres Professional: https://postgrespro.com", "msg_date": "Thu, 13 Jun 2024 21:07:42 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" }, { "msg_contents": "On Thu, Jun 13, 2024 at 09:07:42PM +0300, [email protected] wrote:\n> Hi!\n> \n> Michael,\n> I have fixed the patches according to your comments.\n> \n> > merge both local variables into a single bits32 store?\n> This is done in v3-0001-Standby-mode-requested.patch\n> \n> > Then this could be used with a function that returns a\n> > text[] array with all the states retrieved?\n> Placed this in the v3-0002-Text-array-sql-wrapper.patch\n> \n> > The refactoring pieces and the function pieces should be split, for\n> > clarity.\n> Sure. I also added the third patch with some tests. Perhaps it would be\n> usefull.\n\n+ * -- XLR_PROMOTE_IS_TRIGGERED indicates if a standby promotion\n+ * has been triggered. Protected by info_lck.\n+ *\n+ * -- XLR_STANDBY_MODE_REQUESTED indicates if we're in a standby mode\n+ * at start, while recovery mode is on. No info_lck protection.\n+ *\n+ * and can be extended in future.\n\nThis comment is incorrect for XLR_STANDBY_MODE_REQUESTED? A startup\nwe would unlikely be in standby mode, most likely in crash recovery,\nthen switch to standby mode.\n\n- LocalPromoteIsTriggered = XLogRecoveryCtl->SharedPromoteIsTriggered;\n+ LocalRecoveryDataFlags &= ~XLR_PROMOTE_IS_TRIGGERED;\n+ LocalRecoveryDataFlags |=\n+ (XLogRecoveryCtl->SharedRecoveryDataFlags & XLR_PROMOTE_IS_TRIGGERED)\n\nAre these complications really needed? All these flags are false,\nthen switched to true. true -> false is not possible.\n\n StandbyModeRequested = true;\n ArchiveRecoveryRequested = true;\n+ XLogRecoveryCtl->SharedRecoveryDataFlags |= XLR_STANDBY_MODE_REQUESTED;\n\nShouldn't STANDBY_MODE be only used in the local flag, as well as an\nARCHIVE_RECOVERY_REQUESTED? It looks like this could push a bit more\nforward the removal of more of these booleans, with a bit more work..\n\n return (LocalRecoveryDataFlags & XLR_HOT_STANDBY_ACTIVE);\n }\n \n+\n /*\nSome noise lying around.\n\n+\t/* Returns bit array as Datum */\n+\ttxt_arr = construct_array_builtin(flags, cnt, TEXTOID);\n\nYep, that's the correct way to do it.\n\n+is($ret_mode_primary, '{}', \"master is not a replica\"); \n\nThe test additions are welcome. 
Note that we avoid the word \"master\",\nsee 229f8c219f8f.\n--\nMichael", "msg_date": "Fri, 14 Jun 2024 13:51:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL function which allows to distinguish a server being in point\n in time recovery mode and an ordinary replica" } ]
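
For context, here is a compact C sketch of the bitmask-plus-text[] idea discussed in this thread. The flag values, the pg_recovery_flags function name, and the get_shared_recovery_flags() accessor are placeholders rather than the actual patch: in the patch the state lives in XLogRecoveryCtl shared memory, and the quoted review already shows construct_array_builtin() being used to build the resulting array in the same way.

/*
 * Sketch only: expose a set of recovery-state bits as a text[] value.
 * Flag names and bit values are assumed for illustration.
 */
#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type_d.h"
#include "utils/array.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

#define XLR_HOT_STANDBY_ACTIVE		0x01	/* assumed bit values */
#define XLR_PROMOTE_IS_TRIGGERED	0x02
#define XLR_STANDBY_MODE_REQUESTED	0x04

static bits32
get_shared_recovery_flags(void)
{
	/* Placeholder: the real code reads XLogRecoveryCtl under its spinlock */
	return XLR_HOT_STANDBY_ACTIVE | XLR_STANDBY_MODE_REQUESTED;
}

PG_FUNCTION_INFO_V1(pg_recovery_flags);		/* hypothetical SQL function */

Datum
pg_recovery_flags(PG_FUNCTION_ARGS)
{
	bits32		flags = get_shared_recovery_flags();
	Datum		elems[3];
	int			n = 0;

	if (flags & XLR_HOT_STANDBY_ACTIVE)
		elems[n++] = CStringGetTextDatum("hot_standby_active");
	if (flags & XLR_PROMOTE_IS_TRIGGERED)
		elems[n++] = CStringGetTextDatum("promote_is_triggered");
	if (flags & XLR_STANDBY_MODE_REQUESTED)
		elems[n++] = CStringGetTextDatum("standby_mode_requested");

	/* Build and return a text[] listing the states that are currently set */
	PG_RETURN_ARRAYTYPE_P(construct_array_builtin(elems, n, TEXTOID));
}
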
[ { "msg_contents": "Hi Team,\n\n     I am currently referring to the Postgres source code (psql (PostgreSQL) 14.3) and came across a particular section related to window aggregate initialization that has left me with a question. Specifically, I noticed a conditional case in the initialization of per aggregate (initialize_peraggregate in nodeWindowAgg.c) where the winstate frameOptions is being checked, but it appears that frameOptions is not set before this conditional case. I observed the same behaviour in PG version 16 as well.\n\n\n\n\n\n/*\n\n* Figure out whether we want to use the moving-aggregate implementation,\n\n* and collect the right set of fields from the pg_attribute entry.\n\n*\n\n* It's possible that an aggregate would supply a safe moving-aggregate\n\n* implementation and an unsafe normal one, in which case our hand is\n\n* forced. Otherwise, if the frame head can't move, we don't need\n\n* moving-aggregate code. Even if we'd like to use it, don't do so if the\n\n* aggregate's arguments (and FILTER clause if any) contain any calls to\n\n* volatile functions. Otherwise, the difference between restarting and\n\n* not restarting the aggregation would be user-visible.\n\n*/\n\nif (!OidIsValid(aggform->aggminvtransfn))\n\nuse_ma_code = false; /* sine qua non */\n\nelse if (aggform->aggmfinalmodify == AGGMODIFY_READ_ONLY &&\n\naggform->aggfinalmodify != AGGMODIFY_READ_ONLY)\n\nuse_ma_code = true; /* decision forced by safety */\n\nelse if (winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)\n\nuse_ma_code = false; /* non-moving frame head */\n\nelse if (contain_volatile_functions((Node *) wfunc))\n\nuse_ma_code = false; /* avoid possible behavioral change */\n\nelse\n\nuse_ma_code = true; /* yes, let's use it */\n\nif (use_ma_code)\n\n{\n\nperaggstate->transfn_oid = transfn_oid = aggform->aggmtransfn;\n\nperaggstate->invtransfn_oid = invtransfn_oid = aggform->aggminvtransfn;\n\nperaggstate->finalfn_oid = finalfn_oid = aggform->aggmfinalfn;\n\nfinalextra = aggform->aggmfinalextra;\n\nfinalmodify = aggform->aggmfinalmodify;\n\naggtranstype = aggform->aggmtranstype;\n\ninitvalAttNo = Anum_pg_aggregate_aggminitval;\n\n}\n\n\n   \n\nFrame options are being set to winstate after the initialization of peraggregate. (line 2504 in nodeWindowAgg.c)\n\n\n\n/* copy frame options to state node for easy access */\n\nwinstate->frameOptions = frameOptions;\n\n\n\n\nCould you kindly share the reason why this conditional case was added during the initialization of per aggregate, especially considering that winstate frameOptions is set after this initialization?\n\n\n\nThanks\nVallimaharajan G\nHi Team,     I am currently referring to the Postgres source code (psql (PostgreSQL) 14.3) and came across a particular section related to window aggregate initialization that has left me with a question. Specifically, I noticed a conditional case in the initialization of per aggregate (initialize_peraggregate in nodeWindowAgg.c) where the winstate frameOptions is being checked, but it appears that frameOptions is not set before this conditional case. I observed the same behaviour in PG version 16 as well./** Figure out whether we want to use the moving-aggregate implementation,* and collect the right set of fields from the pg_attribute entry.** It's possible that an aggregate would supply a safe moving-aggregate* implementation and an unsafe normal one, in which case our hand is* forced. Otherwise, if the frame head can't move, we don't need* moving-aggregate code. 
Even if we'd like to use it, don't do so if the* aggregate's arguments (and FILTER clause if any) contain any calls to* volatile functions. Otherwise, the difference between restarting and* not restarting the aggregation would be user-visible.*/if (!OidIsValid(aggform->aggminvtransfn))use_ma_code = false; /* sine qua non */else if (aggform->aggmfinalmodify == AGGMODIFY_READ_ONLY &&aggform->aggfinalmodify != AGGMODIFY_READ_ONLY)use_ma_code = true; /* decision forced by safety */else if (winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)use_ma_code = false; /* non-moving frame head */else if (contain_volatile_functions((Node *) wfunc))use_ma_code = false; /* avoid possible behavioral change */elseuse_ma_code = true; /* yes, let's use it */if (use_ma_code){peraggstate->transfn_oid = transfn_oid = aggform->aggmtransfn;peraggstate->invtransfn_oid = invtransfn_oid = aggform->aggminvtransfn;peraggstate->finalfn_oid = finalfn_oid = aggform->aggmfinalfn;finalextra = aggform->aggmfinalextra;finalmodify = aggform->aggmfinalmodify;aggtranstype = aggform->aggmtranstype;initvalAttNo = Anum_pg_aggregate_aggminitval;}   Frame options are being set to winstate after the initialization of peraggregate. (line 2504 in nodeWindowAgg.c)/* copy frame options to state node for easy access */winstate->frameOptions = frameOptions;Could you kindly share the reason why this conditional case was added during the initialization of per aggregate, especially considering that winstate frameOptions is set after this initialization?ThanksVallimaharajan G", "msg_date": "Wed, 27 Mar 2024 14:37:43 +0530", "msg_from": "Vallimaharajan G <[email protected]>", "msg_from_op": true, "msg_subject": "Query Regarding frame options initialization in Window aggregate\n state" }, { "msg_contents": "Vallimaharajan G <[email protected]> writes:\n> I am currently referring to the Postgres source code (psql (PostgreSQL) 14.3) and came across a particular section related to window aggregate initialization that has left me with a question. Specifically, I noticed a conditional case in the initialization of per aggregate (initialize_peraggregate in nodeWindowAgg.c) where the winstate frameOptions is being checked, but it appears that frameOptions is not set before this conditional case.\n\nYou are right, and that's a bug. It's not of major consequence ---\nit would just cause us to set up for moving-aggregate mode when\nwe won't ever actually move the frame head -- but it's at least a\nlittle bit inefficient.\n\nWhile I'm looking at it, there's a pretty obvious typo in the\nadjacent comment:\n\n- * and collect the right set of fields from the pg_attribute entry.\n+ * and collect the right set of fields from the pg_aggregate entry.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Mar 2024 12:56:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Regarding frame options initialization in Window aggregate\n state" } ]
[ { "msg_contents": "Hi!\n\nWe take care to always set application_name to improve our log lines\nwhere we use %a in log_line_prefix to log application name, per [1].\nBut notices which are sent to the client do not have the application\nname and are thus hard to attribute correctly. Could \"a\" be added with\nthe application name (when available) to the error and notice message\nfields [2]?\n\n\nMitar\n\n[1] https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LINE-PREFIX\n[2] https://www.postgresql.org/docs/current/protocol-error-fields.html\n\n-- \nhttps://mitar.tnode.com/\nhttps://twitter.com/mitar_m\nhttps://noc.social/@mitar\n\n\n", "msg_date": "Wed, 27 Mar 2024 16:22:03 +0100", "msg_from": "Mitar <[email protected]>", "msg_from_op": true, "msg_subject": "Adding application_name to the error and notice message fields" }, { "msg_contents": "Hi!\n\nOh, I can use PQparameterStatus to obtain application_name of the\ncurrent connection. It seems then it is not needed to add this\ninformation into notice message.\n\n\nMitar\n\nOn Wed, Mar 27, 2024 at 4:22 PM Mitar <[email protected]> wrote:\n>\n> Hi!\n>\n> We take care to always set application_name to improve our log lines\n> where we use %a in log_line_prefix to log application name, per [1].\n> But notices which are sent to the client do not have the application\n> name and are thus hard to attribute correctly. Could \"a\" be added with\n> the application name (when available) to the error and notice message\n> fields [2]?\n>\n>\n> Mitar\n>\n> [1] https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LINE-PREFIX\n> [2] https://www.postgresql.org/docs/current/protocol-error-fields.html\n>\n> --\n> https://mitar.tnode.com/\n> https://twitter.com/mitar_m\n> https://noc.social/@mitar\n\n\n\n-- \nhttps://mitar.tnode.com/\nhttps://twitter.com/mitar_m\nhttps://noc.social/@mitar\n\n\n", "msg_date": "Wed, 27 Mar 2024 18:22:27 +0100", "msg_from": "Mitar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding application_name to the error and notice message fields" } ]
[ { "msg_contents": "Our PostGIS bot that follows master branch has been crashing for past couple\nof days on one of our tests\n\nhttps://trac.osgeo.org/postgis/ticket/5701\n\nI traced the issue down to this commit:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=66c0185a3d14b\nbbf51d0fc9d267093ffec735231\n\n\nThe issue can be exercised without postgis installed as follows:\n\n\nCREATE TABLE edge_data AS \nSELECT i AS edge_id, i + 1 AS start_node, i + 2 As end_node\nFROM generate_series(1,10) AS i;\n\n WITH edge AS (\n SELECT start_node, end_node\n FROM edge_data\n WHERE edge_id = 1\n )\n SELECT start_node id FROM edge UNION\n SELECT end_node FROM edge;\n\n\nIf I run using UNION ALL, this works fine:\n\n WITH edge AS (\n SELECT start_node, end_node\n FROM edge_data\n WHERE edge_id = 1\n )\n SELECT start_node id FROM edge UNION ALL\n SELECT end_node FROM edge;\n\n\nThanks,\nRegina\n\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 11:33:55 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Crash on UNION with PG 17" }, { "msg_contents": "On Wed, Mar 27, 2024 at 11:33:55AM -0400, Regina Obe wrote:\n> Our PostGIS bot that follows master branch has been crashing for past couple\n> of days on one of our tests\n> \n> https://trac.osgeo.org/postgis/ticket/5701\n> \n> I traced the issue down to this commit:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=66c0185a3d14b\n> bbf51d0fc9d267093ffec735231\n> \n> \n> The issue can be exercised without postgis installed as follows:\n> \n> \n> CREATE TABLE edge_data AS \n> SELECT i AS edge_id, i + 1 AS start_node, i + 2 As end_node\n> FROM generate_series(1,10) AS i;\n> \n> WITH edge AS (\n> SELECT start_node, end_node\n> FROM edge_data\n> WHERE edge_id = 1\n> )\n> SELECT start_node id FROM edge UNION\n> SELECT end_node FROM edge;\n\nI can confirm the crash in git master.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 27 Mar 2024 18:28:26 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash on UNION with PG 17" }, { "msg_contents": "On Thu, 28 Mar 2024 at 04:34, Regina Obe <[email protected]> wrote:\n> The issue can be exercised without postgis installed as follows:\n>\n>\n> CREATE TABLE edge_data AS\n> SELECT i AS edge_id, i + 1 AS start_node, i + 2 As end_node\n> FROM generate_series(1,10) AS i;\n>\n> WITH edge AS (\n> SELECT start_node, end_node\n> FROM edge_data\n> WHERE edge_id = 1\n> )\n> SELECT start_node id FROM edge UNION\n> SELECT end_node FROM edge;\n\nThanks for the report.\n\nThere's some discussion about this in [1] along with a proposed way to\nfix it. 
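
As a client-side illustration of the workaround Mitar settles on above: libpq already exposes the connection's application_name through PQparameterStatus(), so a notice receiver can tag incoming notices with it without any protocol change. The connection string and the RAISE NOTICE statement below are made up for the example.

/*
 * Minimal libpq example: pair every received NOTICE with the connection's
 * application_name obtained from PQparameterStatus().
 */
#include <stdio.h>
#include <libpq-fe.h>

static void
notice_receiver(void *arg, const PGresult *res)
{
	PGconn	   *conn = (PGconn *) arg;
	const char *appname = PQparameterStatus(conn, "application_name");
	const char *msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

	fprintf(stderr, "[%s] NOTICE: %s\n",
			appname ? appname : "(unknown)",
			msg ? msg : "(no message)");
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("application_name=myapp dbname=postgres");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Hand the connection to the receiver as its private argument */
	PQsetNoticeReceiver(conn, notice_receiver, conn);

	/* Any statement raising a notice is delivered through the receiver */
	PQclear(PQexec(conn, "DO $$ BEGIN RAISE NOTICE 'hello'; END $$;"));

	PQfinish(conn);
	return 0;
}
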
The proposed fix does alter the function signature of an\nimportant and externally visible planner function, so will be waiting\nfor some feedback on that before moving ahead with fixing.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\nDavid\n\n\n", "msg_date": "Thu, 28 Mar 2024 12:46:28 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash on UNION with PG 17" }, { "msg_contents": "On Thu, 28 Mar 2024 at 04:34, Regina Obe <[email protected]> wrote:\n> CREATE TABLE edge_data AS\n> SELECT i AS edge_id, i + 1 AS start_node, i + 2 As end_node\n> FROM generate_series(1,10) AS i;\n>\n> WITH edge AS (\n> SELECT start_node, end_node\n> FROM edge_data\n> WHERE edge_id = 1\n> )\n> SELECT start_node id FROM edge UNION\n> SELECT end_node FROM edge;\n\nAs of d5d2205c8, this query should work as expected.\n\nThank you for letting us know about this.\n\nDavid\n\n\n", "msg_date": "Tue, 2 Apr 2024 12:25:09 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash on UNION with PG 17" }, { "msg_contents": "> On Thu, 28 Mar 2024 at 04:34, Regina Obe <[email protected]> wrote:\n> > CREATE TABLE edge_data AS\n> > SELECT i AS edge_id, i + 1 AS start_node, i + 2 As end_node FROM\n> > generate_series(1,10) AS i;\n> >\n> > WITH edge AS (\n> > SELECT start_node, end_node\n> > FROM edge_data\n> > WHERE edge_id = 1\n> > )\n> > SELECT start_node id FROM edge UNION\n> > SELECT end_node FROM edge;\n> \n> As of d5d2205c8, this query should work as expected.\n> \n> Thank you for letting us know about this.\n> \n> David\n\nThanks for the fix. I confirm it works now and our bots are green again.\n\nRegina\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 15:58:36 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Crash on UNION with PG 17" } ]
[ { "msg_contents": "I've been having trouble compiling PG 17 using msys2 / mingw64 (sorry my\nmeson setup is a bit broken at moment, so couldn't test that.).\n\nBoth my msys2 envs (Rev2, Built by MSYS2 project) 13.2.0 and my older\nsetup (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0\n\nHave the same issue:\n\nThe error is \n\nrm -f '../../src/include/storage/lwlocknames.h'\ncp -pR ../../backend/storage/lmgr/lwlocknames.h\n'../../src/include/storage/lwlocknames.h'\ncp: cannot stat '../../backend/storage/lmgr/lwlocknames.h': No such file or\ndirectory\nmake[1]: *** [Makefile:143: ../../src/include/storage/lwlocknames.h] Error 1\nmake[1]: Leaving directory '/projects/postgresql/postgresql-git/src/backend'\nmake: *** [../../src/Makefile.global:382: submake-generated-headers] Error 2\n\n\nI traced the issue down to this change in\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=721856ff24b\n\n$(top_builddir)/src/include/storage/lwlocknames.h:\nstorage/lmgr/lwlocknames.h\n- prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n- cd '$(dir $@)' && rm -f $(notdir $@) && \\\n- $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n+ rm -f '$@'\n+ $(LN_S) ../../backend/$< '$@'\n \n $(top_builddir)/src/include/utils/wait_event_types.h:\nutils/activity/wait_event_types.h\n- prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n- cd '$(dir $@)' && rm -f $(notdir $@) && \\\n- $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n+ rm -f '$@'\n+ $(LN_S) ../../backend/$< '$@'\n\n\nReverting just the above change fixes the issue. I'm not sure what all that\ndoes to be honest, so not sure the best way to move forward.\nMy linux autoconf build systems do not have this issue.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 11:50:55 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "Hi Regina,\n\nOn 2024-Mar-27, Regina Obe wrote:\n\n> The error is \n> \n> rm -f '../../src/include/storage/lwlocknames.h'\n> cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> '../../src/include/storage/lwlocknames.h'\n> cp: cannot stat '../../backend/storage/lmgr/lwlocknames.h': No such file or\n> directory\n> make[1]: *** [Makefile:143: ../../src/include/storage/lwlocknames.h] Error 1\n> make[1]: Leaving directory '/projects/postgresql/postgresql-git/src/backend'\n> make: *** [../../src/Makefile.global:382: submake-generated-headers] Error 2\n\nHmm, I changed these rules again in commit da952b415f44, maybe your\nproblem is with that one? 
I wonder if changing the references to\n../include/storage/lwlocklist.h to\n$(topdir)/src/include/storage/lwlocklist.h (and similar things in\nsrc/backend/storage/lmgr/Makefile) would fix it.\n\nDo you run configure in the source directory or a separate build one?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\n\n", "msg_date": "Wed, 3 Apr 2024 19:52:32 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "> Hi Regina,\n> \n> On 2024-Mar-27, Regina Obe wrote:\n> \n> > The error is\n> >\n> > rm -f '../../src/include/storage/lwlocknames.h'\n> > cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> > '../../src/include/storage/lwlocknames.h'\n> > cp: cannot stat '../../backend/storage/lmgr/lwlocknames.h': No such\n> > file or directory\n> > make[1]: *** [Makefile:143: ../../src/include/storage/lwlocknames.h]\n> > Error 1\n> > make[1]: Leaving directory '/projects/postgresql/postgresql-\n> git/src/backend'\n> > make: *** [../../src/Makefile.global:382: submake-generated-headers]\n> > Error 2\n> \n> Hmm, I changed these rules again in commit da952b415f44, maybe your\n> problem is with that one? I wonder if changing the references to\n> ../include/storage/lwlocklist.h to $(topdir)/src/include/storage/lwlocklist.h\n> (and similar things in\n> src/backend/storage/lmgr/Makefile) would fix it.\n> \n> Do you run configure in the source directory or a separate build one?\n> \n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"If it is not right, do not do it.\n> If it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\nI run in the source directory, but tried doing in a separate build directory and ran into the same issue.\n\nLet me look at that commit and get back to you if that makes a difference.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 16:34:48 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "> Hi Regina,\n> \n> On 2024-Mar-27, Regina Obe wrote:\n> \n> > The error is\n> >\n> > rm -f '../../src/include/storage/lwlocknames.h'\n> > cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> > '../../src/include/storage/lwlocknames.h'\n> > cp: cannot stat '../../backend/storage/lmgr/lwlocknames.h': No such\n> > file or directory\n> > make[1]: *** [Makefile:143: ../../src/include/storage/lwlocknames.h]\n> > Error 1\n> > make[1]: Leaving directory '/projects/postgresql/postgresql-\n> git/src/backend'\n> > make: *** [../../src/Makefile.global:382: submake-generated-headers]\n> > Error 2\n> \n> Hmm, I changed these rules again in commit da952b415f44, maybe your\n> problem is with that one? 
I wonder if changing the references to\n> ../include/storage/lwlocklist.h to $(topdir)/src/include/storage/lwlocklist.h\n> (and similar things in\n> src/backend/storage/lmgr/Makefile) would fix it.\n> \n> Do you run configure in the source directory or a separate build one?\n> \n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"If it is not right, do not do it.\n> If it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\nI tried the change\n> ../include/storage/lwlocklist.h to $(top_builddir)/src/include/storage/lwlocklist.h\n\nI assume you meant that instead of $(topdir)\n\nBut nah that made no difference. Your change was already in my patched version so isn't causing any issues.\n\nBut as stated, rolling back this change in src/backend/Makefile in 0a475f01a4a (November 6th commit) makes it work for me.\n\n$(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n- prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n- cd '$(dir $@)' && rm -f $(notdir $@) && \\\n- $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n+ rm -f '$@'\n+ $(LN_S) ../../backend/$< '$@'\n \n $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n- prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n- cd '$(dir $@)' && rm -f $(notdir $@) && \\\n- $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n+ rm -f '$@'\n+ $(LN_S) ../../backend/$< '$@'\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 18:50:21 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "> > Hi Regina,\n> >\n> > On 2024-Mar-27, Regina Obe wrote:\n> >\n> > > The error is\n> > >\n> > > rm -f '../../src/include/storage/lwlocknames.h'\n> > > cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> > > '../../src/include/storage/lwlocknames.h'\n> > > cp: cannot stat '../../backend/storage/lmgr/lwlocknames.h': No such\n> > > file or directory\n> > > make[1]: *** [Makefile:143: ../../src/include/storage/lwlocknames.h]\n> > > Error 1\n> > > make[1]: Leaving directory '/projects/postgresql/postgresql-\n> > git/src/backend'\n> > > make: *** [../../src/Makefile.global:382: submake-generated-headers]\n> > > Error 2\n> >\n> > Hmm, I changed these rules again in commit da952b415f44, maybe your\n> > problem is with that one? I wonder if changing the references to\n> > ../include/storage/lwlocklist.h to\n> > $(topdir)/src/include/storage/lwlocklist.h\n> > (and similar things in\n> > src/backend/storage/lmgr/Makefile) would fix it.\n> >\n> > Do you run configure in the source directory or a separate build one?\n> >\n> > --\n> > Álvaro Herrera Breisgau, Deutschland —\n> > https://www.EnterpriseDB.com/\n> > \"If it is not right, do not do it.\n> > If it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n> \n> I tried the change\n> > ../include/storage/lwlocklist.h to\n> > $(top_builddir)/src/include/storage/lwlocklist.h\n> \n> I assume you meant that instead of $(topdir)\n> \n> But nah that made no difference. Your change was already in my patched\n> version so isn't causing any issues.\n\nI think I got something not too far off from what's there now that works under my msys2 setup again. 
This is partly using your idea of using $(top_builddir) to qualify the path but in the LN_S section that is causing me grief.\nThis seems to work okay building in tree and out of tree.\nBy changing these lines in src/backend/Makefile\n\n$(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n rm -f '$@'\n- $(LN_S) ../../backend/$< '$@'\n+ $(LN_S) $(top_builddir)/src/backend/$< '$@'\n\n $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n rm -f '$@'\n- $(LN_S) ../../backend/$< '$@'\n+ $(LN_S) $(top_builddir)/src/backend/$< '$@'\n\nI've also attached as a patch.\n\nThanks,\nRegina", "msg_date": "Thu, 4 Apr 2024 22:36:04 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "On 2024-Apr-04, Regina Obe wrote:\n\n> I think I got something not too far off from what's there now that works under my msys2 setup again. This is partly using your idea of using $(top_builddir) to qualify the path but in the LN_S section that is causing me grief.\n> This seems to work okay building in tree and out of tree.\n> By changing these lines in src/backend/Makefile\n> \n> $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n> rm -f '$@'\n> - $(LN_S) ../../backend/$< '$@'\n> + $(LN_S) $(top_builddir)/src/backend/$< '$@'\n> \n> $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n> rm -f '$@'\n> - $(LN_S) ../../backend/$< '$@'\n> + $(LN_S) $(top_builddir)/src/backend/$< '$@'\n> \n> I've also attached as a patch.\n\nHmm, that's quite strange ... it certainly doesn't work for me. Maybe\nthe LN_S utility is resolving the symlink at creation time, rather than\nletting it be a reference to be resolved later. Apparently, the only Msys2 buildfarm animal is now using Meson,\nso we don't have any representative animal for your situation.\n\nWhat does LN_S do for you anyway? Looking at\nhttps://stackoverflow.com/questions/61594025/symlink-in-msys2-copy-or-hard-link\nit sounds like this would work if the MSYS environment variable was set\nto winsymlinks (or maybe not. I don't know if a \"windows shortcut\" would\nbe usable in this case.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n", "msg_date": "Fri, 5 Apr 2024 19:41:56 +0200", "msg_from": "'Alvaro Herrera' <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "> > I think I got something not too far off from what's there now that works\n> under my msys2 setup again. This is partly using your idea of using\n> $(top_builddir) to qualify the path but in the LN_S section that is causing me\n> grief.\n> > This seems to work okay building in tree and out of tree.\n> > By changing these lines in src/backend/Makefile\n> >\n> > $(top_builddir)/src/include/storage/lwlocknames.h:\n> storage/lmgr/lwlocknames.h\n> > rm -f '$@'\n> > - $(LN_S) ../../backend/$< '$@'\n> > + $(LN_S) $(top_builddir)/src/backend/$< '$@'\n> >\n> > $(top_builddir)/src/include/utils/wait_event_types.h:\n> utils/activity/wait_event_types.h\n> > rm -f '$@'\n> > - $(LN_S) ../../backend/$< '$@'\n> > + $(LN_S) $(top_builddir)/src/backend/$< '$@'\n> >\n> > I've also attached as a patch.\n> \n> Hmm, that's quite strange ... it certainly doesn't work for me. 
Maybe the\n> LN_S utility is resolving the symlink at creation time, rather than letting it be a\n> reference to be resolved later. Apparently, the only Msys2 buildfarm animal is\n> now using Meson, so we don't have any representative animal for your\n> situation.\n> \n> What does LN_S do for you anyway? Looking at\n> https://stackoverflow.com/questions/61594025/symlink-in-msys2-copy-or-\n> hard-link\n> it sounds like this would work if the MSYS environment variable was set to\n> winsymlinks (or maybe not. I don't know if a \"windows shortcut\" would be\n> usable in this case.)\n> \n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\nI think it ends up doing a copy thus the copy error in my log failures which don't exist anywhere in the Makefil\n\ncp -pR ../../backend/storage/lmgr/lwlocknames.h\n\nSorry for not checking on a linux system. I was thinking I should have done that first.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Fri, 5 Apr 2024 13:59:26 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "On 2024-Apr-05, Regina Obe wrote:\n\n> I think it ends up doing a copy thus the copy error in my log failures which don't exist anywhere in the Makefil\n> \n> cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> \n> Sorry for not checking on a linux system. I was thinking I should have done that first.\n\nAh yeah, that's per configure:\n\n if ln -s conf$$.file conf$$ 2>/dev/null; then\n as_ln_s='ln -s'\n # ... but there are two gotchas:\n # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.\n # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.\n # In both cases, we have to default to `cp -pR'.\n ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||\n as_ln_s='cp -pR'\n\nI guess we need to patch the rule so that the LN_S is called so that\nit'd resolve correctly in both cases. I guess the easy option is to go\nback to the original recipe and update the comment to indicate that we\ndo it to placate Msys2. Here it is with the old comment:\n\n -# The point of the prereqdir incantation in some of the rules below is to\n -# force the symlink to use an absolute path rather than a relative path.\n -# For headers which are generated by make distprep, the actual header within\n -# src/backend will be in the source tree, while the symlink in src/include\n -# will be in the build tree, so a simple ../.. 
reference won't work.\n -# For headers generated during regular builds, we prefer a relative symlink.\n\n $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n - prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n - cd '$(dir $@)' && rm -f $(notdir $@) && \\\n - $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n\n\nMaybe it's possible to make this simpler, as it looks overly baroque,\nand we don't really need absolute paths anyway -- we just need the path\nresolved at the right time.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 5 Apr 2024 20:10:01 +0200", "msg_from": "'Alvaro Herrera' <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't compile PG 17 (master) from git under Msys2 autoconf" }, { "msg_contents": "> > I think it ends up doing a copy thus the copy error in my log failures\n> > which don't exist anywhere in the Makefil\n> >\n> > cp -pR ../../backend/storage/lmgr/lwlocknames.h\n> >\n> > Sorry for not checking on a linux system. I was thinking I should have done\n> that first.\n> \n> Ah yeah, that's per configure:\n> \n> if ln -s conf$$.file conf$$ 2>/dev/null; then\n> as_ln_s='ln -s'\n> # ... but there are two gotchas:\n> # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.\n> # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.\n> # In both cases, we have to default to `cp -pR'.\n> ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||\n> as_ln_s='cp -pR'\n> \n> I guess we need to patch the rule so that the LN_S is called so that it'd resolve\n> correctly in both cases. I guess the easy option is to go back to the original\n> recipe and update the comment to indicate that we do it to placate Msys2.\n> Here it is with the old comment:\n> \n> -# The point of the prereqdir incantation in some of the rules below is to\n> -# force the symlink to use an absolute path rather than a relative path.\n> -# For headers which are generated by make distprep, the actual header\n> within\n> -# src/backend will be in the source tree, while the symlink in src/include\n> -# will be in the build tree, so a simple ../.. reference won't work.\n> -# For headers generated during regular builds, we prefer a relative symlink.\n> \n> $(top_builddir)/src/include/storage/lwlocknames.h:\n> storage/lmgr/lwlocknames.h\n> - prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n> - cd '$(dir $@)' && rm -f $(notdir $@) && \\\n> - $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n> \n> \n> Maybe it's possible to make this simpler, as it looks overly baroque, and we\n> don't really need absolute paths anyway -- we just need the path resolved at\n> the right time.\n> \n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n\nYah I was thinking it was removed cause \nno one could figure out why it was so complicated and decided to make it more readable.\n\n\n\n\n", "msg_date": "Fri, 5 Apr 2024 18:41:15 -0400", "msg_from": "\"Regina Obe\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can't compile PG 17 (master) from git under Msys2 autoconf" } ]
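[Editor's sketch, between threads] The failure quoted earlier in this thread ("cp: cannot stat '../../backend/storage/lmgr/lwlocknames.h'") follows from MSYS2's configure falling back to LN_S='cp -pR': a target written relative to the destination directory (../../backend/...) is then resolved against make's working directory (src/backend) instead of against the directory the link is created in, so cp cannot find the file. A rule that first resolves the prerequisite to an absolute path works for both a real "ln -s" and the cp fallback. A minimal sketch of that shape for one of the affected headers, essentially the pre-0a475f01a4a incantation shown above as a removed diff hunk, written out in full here (not the committed rule):

# Sketch only: resolve the source header to an absolute path, then
# create the link (or copy, when LN_S is 'cp -pR') from inside the
# destination directory, so the path is valid in either case.
$(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h
	prereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \
	cd '$(dir $@)' && rm -f $(notdir $@) && \
	$(LN_S) "$$prereqdir/$(notdir $<)" .

The same shape would apply to the wait_event_types.h rule discussed in the patch.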
[ { "msg_contents": "Hi,\n\nNathan Bossart <nathandbossart(at)gmail(dot)com> writes:\n>Committed with that change. Thanks for the guidance on this one.\n\nI think that left an oversight in a commit d365ae7\n<https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512>\nIf the admin_role is a NULL pointer, so, can be dereferenced\nin the main loop of the function roles_is_member_of and\nworst, IMO, can be destroying aleatory memory?\n\nFirst, is a better shortcut test to check if admin_role is NOT NULL.\nSecond, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n\nOr am I losing something?\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 27 Mar 2024 13:21:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Wed, Mar 27, 2024 at 01:21:23PM -0300, Ranier Vilela wrote:\n> Nathan Bossart <nathandbossart(at)gmail(dot)com> writes:\n>>Committed with that change. Thanks for the guidance on this one.\n> \n> I think that left an oversight in a commit d365ae7\n> <https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512>\n> If the admin_role is a NULL pointer, so, can be dereferenced\n> in the main loop of the function roles_is_member_of and\n> worst, IMO, can be destroying aleatory memory?\n> \n> First, is a better shortcut test to check if admin_role is NOT NULL.\n> Second, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n> \n> Or am I losing something?\n\nIf admin_role is NULL, then admin_of is expected to be set to InvalidOid.\nSee the assertion at the top of the function. AFAICT the code that\ndereferences admin_role short-circuits if admin_of is invalid.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 11:41:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Em qua., 27 de mar. de 2024 às 13:41, Nathan Bossart <\[email protected]> escreveu:\n\n> On Wed, Mar 27, 2024 at 01:21:23PM -0300, Ranier Vilela wrote:\n> > Nathan Bossart <nathandbossart(at)gmail(dot)com> writes:\n> >>Committed with that change. Thanks for the guidance on this one.\n> >\n> > I think that left an oversight in a commit d365ae7\n> > <\n> https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512\n> >\n> > If the admin_role is a NULL pointer, so, can be dereferenced\n> > in the main loop of the function roles_is_member_of and\n> > worst, IMO, can be destroying aleatory memory?\n> >\n> > First, is a better shortcut test to check if admin_role is NOT NULL.\n> > Second, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n> >\n> > Or am I losing something?\n>\n> If admin_role is NULL, then admin_of is expected to be set to InvalidOid.\n> See the assertion at the top of the function. AFAICT the code that\n> dereferences admin_role short-circuits if admin_of is invalid.\n>\nThese conditions seem a little fragile and confusing to me.\nWhen a simple test, it protects the pointer and avoids a series of tests,\nwhich are unnecessary if the pointer is invalid.\n\nbest regards,\nRanier Vilela\n\nEm qua., 27 de mar. 
de 2024 às 13:41, Nathan Bossart <[email protected]> escreveu:On Wed, Mar 27, 2024 at 01:21:23PM -0300, Ranier Vilela wrote:\n> Nathan Bossart <nathandbossart(at)gmail(dot)com> writes:\n>>Committed with that change. Thanks for the guidance on this one.\n> \n> I think that left an oversight in a commit d365ae7\n> <https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512>\n> If the admin_role is a NULL pointer, so, can be dereferenced\n> in the main loop of the function roles_is_member_of and\n> worst, IMO, can be destroying aleatory memory?\n> \n> First, is a better shortcut test to check if admin_role is NOT NULL.\n> Second, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n> \n> Or am I losing something?\n\nIf admin_role is NULL, then admin_of is expected to be set to InvalidOid.\nSee the assertion at the top of the function.  AFAICT the code that\ndereferences admin_role short-circuits if admin_of is invalid.These conditions seem a little fragile and confusing to me.When a simple test, it protects the pointer and avoids a series of tests, which are unnecessary if the pointer is invalid.best regards,Ranier Vilela", "msg_date": "Wed, 27 Mar 2024 13:47:38 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "On Wed, Mar 27, 2024 at 01:47:38PM -0300, Ranier Vilela wrote:\n> Em qua., 27 de mar. de 2024 �s 13:41, Nathan Bossart <\n> [email protected]> escreveu:\n>> On Wed, Mar 27, 2024 at 01:21:23PM -0300, Ranier Vilela wrote:\n>> > I think that left an oversight in a commit d365ae7\n>> > <\n>> https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512\n>> >\n>> > If the admin_role is a NULL pointer, so, can be dereferenced\n>> > in the main loop of the function roles_is_member_of and\n>> > worst, IMO, can be destroying aleatory memory?\n>> >\n>> > First, is a better shortcut test to check if admin_role is NOT NULL.\n>> > Second, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n>> >\n>> > Or am I losing something?\n>>\n>> If admin_role is NULL, then admin_of is expected to be set to InvalidOid.\n>> See the assertion at the top of the function. AFAICT the code that\n>> dereferences admin_role short-circuits if admin_of is invalid.\n>>\n> These conditions seem a little fragile and confusing to me.\n> When a simple test, it protects the pointer and avoids a series of tests,\n> which are unnecessary if the pointer is invalid.\n\nMaybe. But that doesn't seem like an oversight in commit d365ae7.\n\n-\t\t\tif (otherid == admin_of && form->admin_option &&\n-\t\t\t\tOidIsValid(admin_of) && !OidIsValid(*admin_role))\n+\t\t\tif (admin_role != NULL && otherid == admin_of && form->admin_option &&\n+\t\t\t\tOidIsValid(admin_of))\n \t\t\t\t*admin_role = memberid;\n\nI'm not following why it's safe to remove the !OidIsValid(*admin_role)\ncheck here. 
We don't want to overwrite a previously-set value of\n*admin_role, as per the comment above roles_is_member_of():\n\n * If admin_of is not InvalidOid, this function sets *admin_role, either\n * to the OID of the first role in the result list that directly possesses\n * ADMIN OPTION on the role corresponding to admin_of, or to InvalidOid if\n * there is no such role.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 12:35:09 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" }, { "msg_contents": "Em qua., 27 de mar. de 2024 às 14:35, Nathan Bossart <\[email protected]> escreveu:\n\n> On Wed, Mar 27, 2024 at 01:47:38PM -0300, Ranier Vilela wrote:\n> > Em qua., 27 de mar. de 2024 às 13:41, Nathan Bossart <\n> > [email protected]> escreveu:\n> >> On Wed, Mar 27, 2024 at 01:21:23PM -0300, Ranier Vilela wrote:\n> >> > I think that left an oversight in a commit d365ae7\n> >> > <\n> >>\n> https://github.com/postgres/postgres/commit/d365ae705409f5d9c81da4b668f59c3598feb512\n> >> >\n> >> > If the admin_role is a NULL pointer, so, can be dereferenced\n> >> > in the main loop of the function roles_is_member_of and\n> >> > worst, IMO, can be destroying aleatory memory?\n> >> >\n> >> > First, is a better shortcut test to check if admin_role is NOT NULL.\n> >> > Second, !OidIsValid(*admin_role), It doesn't seem necessary anymore.\n> >> >\n> >> > Or am I losing something?\n> >>\n> >> If admin_role is NULL, then admin_of is expected to be set to\n> InvalidOid.\n> >> See the assertion at the top of the function. AFAICT the code that\n> >> dereferences admin_role short-circuits if admin_of is invalid.\n> >>\n> > These conditions seem a little fragile and confusing to me.\n> > When a simple test, it protects the pointer and avoids a series of tests,\n> > which are unnecessary if the pointer is invalid.\n>\n> Maybe. But that doesn't seem like an oversight in commit d365ae7.\n>\nSorry for exceeding.\n\n>\n> - if (otherid == admin_of && form->admin_option &&\n> - OidIsValid(admin_of) &&\n> !OidIsValid(*admin_role))\n> + if (admin_role != NULL && otherid == admin_of &&\n> form->admin_option &&\n> + OidIsValid(admin_of))\n> *admin_role = memberid;\n>\n> I'm not following why it's safe to remove the !OidIsValid(*admin_role)\n> check here. We don't want to overwrite a previously-set value of\n> *admin_role, as per the comment above roles_is_member_of():\n>\n> * If admin_of is not InvalidOid, this function sets *admin_role, either\n> * to the OID of the first role in the result list that directly possesses\n> * ADMIN OPTION on the role corresponding to admin_of, or to InvalidOid if\n> * there is no such role.\n>\nOk. If admin_role is NOT NULL, so *admin_role is InvalidOid, by\ninitialization\nin the head of function.\n\nI think that a cheap test *admin_role == InvalidOid, is enough?\nWhat do you think?\n\nv1 attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 27 Mar 2024 16:46:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow GRANT ROLE on PostgreSQL 16 with thousands of ROLEs" } ]
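[Editor's sketch, between threads] To make the comparison in this thread concrete, below is a small self-contained mock-up of the two condition shapes being discussed. It uses stub Oid definitions and is illustrative only, not the real roles_is_member_of() code. The point of keeping the !OidIsValid(*admin_role) test (or the equivalent *admin_role == InvalidOid) is that the first directly granted admin role found must not be overwritten by a later match; the explicit admin_role != NULL test trades the asserted invariant (admin_role == NULL implies admin_of == InvalidOid) for a runtime guard.

/* Illustrative mock-up with stub types; not backend code. */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)
#define OidIsValid(oid) ((oid) != InvalidOid)

/* Committed shape: relies on the invariant asserted at function entry,
 * so the dereference is never reached when admin_role is NULL. */
static void
check_committed(Oid otherid, Oid admin_of, bool admin_option,
                Oid memberid, Oid *admin_role)
{
    if (otherid == admin_of && admin_option &&
        OidIsValid(admin_of) && !OidIsValid(*admin_role))
        *admin_role = memberid;
}

/* Proposed shape: guard the pointer explicitly, but keep the
 * "first match wins" test so an already-found admin role is kept. */
static void
check_proposed(Oid otherid, Oid admin_of, bool admin_option,
               Oid memberid, Oid *admin_role)
{
    if (admin_role != NULL && otherid == admin_of && admin_option &&
        OidIsValid(admin_of) && !OidIsValid(*admin_role))
        *admin_role = memberid;
}

int
main(void)
{
    Oid admin_role = InvalidOid;

    check_committed(10, 10, true, 42, &admin_role);  /* sets 42 */
    check_proposed(10, 10, true, 43, &admin_role);   /* must not overwrite */
    printf("admin_role = %u\n", admin_role);         /* prints 42 */
    return 0;
}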
[ { "msg_contents": "Greetings,\n\nThere is a report on the pgjdbc github JDBC Driver shows erratic behavior\nwhen filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184\n(github.com) <https://github.com/pgjdbc/pgjdbc/discussions/3184>\n\nHere are the plans.\n\nJDBC - Nested Loop (incorrect result)\n\nSort (cost=1071.31..1071.60 rows=114 width=83) (actual time=2.894..2.912\nrows=330 loops=1)\n Sort Key: p.partno\n Sort Method: quicksort Memory: 70kB\n -> Nested Loop Left Join (cost=9.46..1067.42 rows=114 width=83) (actual\ntime=0.082..2.446 rows=330 loops=1)\n -> Bitmap Heap Scan on part p (cost=9.18..295.79 rows=114\nwidth=29) (actual time=0.064..0.502 rows=330 loops=1)\n Recheck Cond: (mutation >= ((CURRENT_DATE -\n'1971-12-31'::date) - 28))\n Heap Blocks: exact=181\n -> Bitmap Index Scan on i_42609 (cost=0.00..9.15 rows=114\nwidth=0) (actual time=0.041..0.041 rows=344 loops=1)\n Index Cond: (mutation >= ((CURRENT_DATE -\n'1971-12-31'::date) - 28))\n -> Index Scan using i_39773 on part_fa_entity pfe\n(cost=0.29..6.76 rows=1 width=65) (actual time=0.005..0.005 rows=1\nloops=330)\n Index Cond: ((partno)::text = (p.partno)::text)\nPlanning Time: 0.418 ms\nExecution Time: 2.971 ms\n\nJDBC - Hash Right (correct result)\n\nSort (cost=1352.35..1352.94 rows=238 width=83) (actual time=5.214..5.236\nrows=345 loops=1)\n Sort Key: p.partno\n Sort Method: quicksort Memory: 73kB\n -> Hash Right Join (cost=472.00..1342.95 rows=238 width=83) (actual\ntime=0.654..4.714 rows=345 loops=1)\n Hash Cond: ((pfe.partno)::text = (p.partno)::text)\n -> Seq Scan on part_fa_entity pfe (cost=0.00..837.27 rows=12827\nwidth=65) (actual time=0.009..2.191 rows=12827 loops=1)\n -> Hash (cost=469.03..469.03 rows=238 width=29) (actual\ntime=0.623..0.624 rows=345 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n -> Bitmap Heap Scan on part p (cost=18.14..469.03 rows=238\nwidth=29) (actual time=0.073..0.532 rows=345 loops=1)\n Recheck Cond: (mutation >= ((CURRENT_DATE -\n'1971-12-31'::date) - 29))\n Heap Blocks: exact=186\n -> Bitmap Index Scan on i_42609 (cost=0.00..18.08\nrows=238 width=0) (actual time=0.049..0.049 rows=359 loops=1)\n Index Cond: (mutation >= ((CURRENT_DATE -\n'1971-12-31'::date) - 29))\nPlanning Time: 0.304 ms\nExecution Time: 5.292 ms\n\nAppX - Nested Loop (correct result)\n\nSort (cost=1071.31..1071.60 rows=114 width=83) (actual time=3.083..3.102\nrows=330 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,\npfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,\npfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status\n Sort Key: p.partno\n Sort Method: quicksort Memory: 71kB\n -> Nested Loop Left Join (cost=9.46..1067.42 rows=114 width=83) (actual\ntime=0.069..2.471 rows=330 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,\npfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,\npfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status\n -> Bitmap Heap Scan on .part p (cost=9.18..295.79 rows=114\nwidth=29) (actual time=0.054..0.308 rows=330 loops=1)\n Output: p.min_safety_stock, p.manual_safety_stock,\np.extended_stateno_i, p.partno, p.partmatch, p.partseqno_i, p.description,\np.remarks, p.specification, p.ata_chapter, p.vendor, p.weight, p.storetime,\np.alert_qty, p.measure_unit, p.waste_code, p.reord_level, p.safety_stock,\np.max_purch, p.ac_typ, p.mat_class, p.mat_type, p.country_origin,\np.reorder, p.tool, p.repairable, p.avg_ta_time, p.default_supplier,\np.default_repair, p.special_contract, 
p.fixed_asset,\np.reorder_last_mutator, p.reorder_last_mutation, p.max_shop_visit,\np.shop_visit_reset_condition, p.special_measure_unit, p.manufacturer,\np.pma, p.resource_type_id, p.counter_template_groupno_i, p.mutation,\np.mutator, p.status, p.mutation_time, p.created_by, p.created_date\n Recheck Cond: (p.mutation >= ((CURRENT_DATE -\n`1971-12-31`::date) - 28))\n Heap Blocks: exact=181\n -> Bitmap Index Scan on i_42609 (cost=0.00..9.15 rows=114\nwidth=0) (actual time=0.033..0.034 rows=341 loops=1)\n Index Cond: (p.mutation >= ((CURRENT_DATE -\n`1971-12-31`::date) - 28))\n -> Index Scan using i_39773 on .part_fa_entity pfe\n(cost=0.29..6.76 rows=1 width=65) (actual time=0.005..0.006 rows=1\nloops=330)\n Output: pfe.part_fa_entityno_i, pfe.partno, pfe.entityno_i,\npfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2,\npfe.avg_repair_cost, pfe.avg_repair_cost_func, pfe.fa_qty,\npfe.fa_open_iv_qty, pfe.fa_start_qty, pfe.fa_start_price,\npfe.fa_start_price_2, pfe.mutation, pfe.mutator, pfe.status,\npfe.mutation_time, pfe.created_by, pfe.created_date,\npfe.average_price_func, pfe.fa_start_price_func, pfe.fsv, pfe.fsv_func\n Index Cond: ((pfe.partno)::text = (p.partno)::text)\nPlanning Time: 0.361 ms\nExecution Time: 3.157 ms\n\nAppX - Hash Join (correct result)\n\nSort (cost=1352.35..1352.94 rows=238 width=83) (actual time=5.361..5.384\nrows=345 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,\npfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,\npfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status\n Sort Key: p.partno\n Sort Method: quicksort Memory: 73kB\n -> Hash Right Join (cost=472.00..1342.95 rows=238 width=83) (actual\ntime=0.594..4.669 rows=345 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,\npfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,\npfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status\n Inner Unique: true\n Hash Cond: ((pfe.partno)::text = (p.partno)::text)\n -> Seq Scan on amos.part_fa_entity pfe (cost=0.00..837.27\nrows=12827 width=65) (actual time=0.006..1.581 rows=12827 loops=1)\n Output: pfe.part_fa_entityno_i, pfe.partno, pfe.entityno_i,\npfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2,\npfe.avg_repair_cost, pfe.avg_repair_cost_func, pfe.fa_qty,\npfe.fa_open_iv_qty, pfe.fa_start_qty, pfe.fa_start_price,\npfe.fa_start_price_2, pfe.mutation, pfe.mutator, pfe.status,\npfe.mutation_time, pfe.created_by, pfe.created_date,\npfe.average_price_func, pfe.fa_start_price_func, pfe.fsv, pfe.fsv_func\n -> Hash (cost=469.03..469.03 rows=238 width=29) (actual\ntime=0.564..0.566 rows=345 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, p.status\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n -> Bitmap Heap Scan on amos.part p (cost=18.14..469.03\nrows=238 width=29) (actual time=0.075..0.488 rows=345 loops=1)\n Output: p.partseqno_i, p.partno, p.partmatch, p.status\n Recheck Cond: (p.mutation >= ((CURRENT_DATE -\n`1971-12-31`::date) - 29))\n Heap Blocks: exact=186\n -> Bitmap Index Scan on i_42609 (cost=0.00..18.08\nrows=238 width=0) (actual time=0.035..0.035 rows=356 loops=1)\n Index Cond: (p.mutation >= ((CURRENT_DATE -\n`1971-12-31`::date) - 29))\nPlanning Time: 0.379 ms\nExecution Time: 5.443 ms\n<https://github.com/pgjdbc/pgjdbc/discussions/3184>\n\nDave Cramer\n\nGreetings,There is a report on the pgjdbc github JDBC Driver shows erratic behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 
(github.com)Here are the plans.JDBC - Nested Loop (incorrect result)Sort  (cost=1071.31..1071.60 rows=114 width=83) (actual time=2.894..2.912 rows=330 loops=1)  Sort Key: p.partno  Sort Method: quicksort  Memory: 70kB  ->  Nested Loop Left Join  (cost=9.46..1067.42 rows=114 width=83) (actual time=0.082..2.446 rows=330 loops=1)        ->  Bitmap Heap Scan on part p  (cost=9.18..295.79 rows=114 width=29) (actual time=0.064..0.502 rows=330 loops=1)              Recheck Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 28))              Heap Blocks: exact=181              ->  Bitmap Index Scan on i_42609  (cost=0.00..9.15 rows=114 width=0) (actual time=0.041..0.041 rows=344 loops=1)                    Index Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 28))        ->  Index Scan using i_39773 on part_fa_entity pfe  (cost=0.29..6.76 rows=1 width=65) (actual time=0.005..0.005 rows=1 loops=330)              Index Cond: ((partno)::text = (p.partno)::text)Planning Time: 0.418 msExecution Time: 2.971 msJDBC - Hash Right (correct result)Sort  (cost=1352.35..1352.94 rows=238 width=83) (actual time=5.214..5.236 rows=345 loops=1)  Sort Key: p.partno  Sort Method: quicksort  Memory: 73kB  ->  Hash Right Join  (cost=472.00..1342.95 rows=238 width=83) (actual time=0.654..4.714 rows=345 loops=1)        Hash Cond: ((pfe.partno)::text = (p.partno)::text)        ->  Seq Scan on part_fa_entity pfe  (cost=0.00..837.27 rows=12827 width=65) (actual time=0.009..2.191 rows=12827 loops=1)        ->  Hash  (cost=469.03..469.03 rows=238 width=29) (actual time=0.623..0.624 rows=345 loops=1)              Buckets: 1024  Batches: 1  Memory Usage: 30kB              ->  Bitmap Heap Scan on part p  (cost=18.14..469.03 rows=238 width=29) (actual time=0.073..0.532 rows=345 loops=1)                    Recheck Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 29))                    Heap Blocks: exact=186                    ->  Bitmap Index Scan on i_42609  (cost=0.00..18.08 rows=238 width=0) (actual time=0.049..0.049 rows=359 loops=1)                          Index Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 29))Planning Time: 0.304 msExecution Time: 5.292 msAppX - Nested Loop (correct result)Sort  (cost=1071.31..1071.60 rows=114 width=83) (actual time=3.083..3.102 rows=330 loops=1)  Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status  Sort Key: p.partno  Sort Method: quicksort  Memory: 71kB  ->  Nested Loop Left Join  (cost=9.46..1067.42 rows=114 width=83) (actual time=0.069..2.471 rows=330 loops=1)        Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status        ->  Bitmap Heap Scan on .part p  (cost=9.18..295.79 rows=114 width=29) (actual time=0.054..0.308 rows=330 loops=1)              Output: p.min_safety_stock, p.manual_safety_stock, p.extended_stateno_i, p.partno, p.partmatch, p.partseqno_i, p.description, p.remarks, p.specification, p.ata_chapter, p.vendor, p.weight, p.storetime, p.alert_qty, p.measure_unit, p.waste_code, p.reord_level, p.safety_stock, p.max_purch, p.ac_typ, p.mat_class, p.mat_type, p.country_origin, p.reorder, p.tool, p.repairable, p.avg_ta_time, p.default_supplier, p.default_repair, p.special_contract, p.fixed_asset, p.reorder_last_mutator, p.reorder_last_mutation, 
p.max_shop_visit, p.shop_visit_reset_condition, p.special_measure_unit, p.manufacturer, p.pma, p.resource_type_id, p.counter_template_groupno_i, p.mutation, p.mutator, p.status, p.mutation_time, p.created_by, p.created_date              Recheck Cond: (p.mutation >= ((CURRENT_DATE - `1971-12-31`::date) - 28))              Heap Blocks: exact=181              ->  Bitmap Index Scan on i_42609  (cost=0.00..9.15 rows=114 width=0) (actual time=0.033..0.034 rows=341 loops=1)                    Index Cond: (p.mutation >= ((CURRENT_DATE - `1971-12-31`::date) - 28))        ->  Index Scan using i_39773 on .part_fa_entity pfe  (cost=0.29..6.76 rows=1 width=65) (actual time=0.005..0.006 rows=1 loops=330)              Output: pfe.part_fa_entityno_i, pfe.partno, pfe.entityno_i, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.avg_repair_cost_func, pfe.fa_qty, pfe.fa_open_iv_qty, pfe.fa_start_qty, pfe.fa_start_price, pfe.fa_start_price_2, pfe.mutation, pfe.mutator, pfe.status, pfe.mutation_time, pfe.created_by, pfe.created_date, pfe.average_price_func, pfe.fa_start_price_func, pfe.fsv, pfe.fsv_func              Index Cond: ((pfe.partno)::text = (p.partno)::text)Planning Time: 0.361 msExecution Time: 3.157 msAppX - Hash Join (correct result)Sort  (cost=1352.35..1352.94 rows=238 width=83) (actual time=5.361..5.384 rows=345 loops=1)  Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status  Sort Key: p.partno  Sort Method: quicksort  Memory: 73kB  ->  Hash Right Join  (cost=472.00..1342.95 rows=238 width=83) (actual time=0.594..4.669 rows=345 loops=1)        Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status        Inner Unique: true        Hash Cond: ((pfe.partno)::text = (p.partno)::text)        ->  Seq Scan on amos.part_fa_entity pfe  (cost=0.00..837.27 rows=12827 width=65) (actual time=0.006..1.581 rows=12827 loops=1)              Output: pfe.part_fa_entityno_i, pfe.partno, pfe.entityno_i, pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost, pfe.avg_repair_cost_func, pfe.fa_qty, pfe.fa_open_iv_qty, pfe.fa_start_qty, pfe.fa_start_price, pfe.fa_start_price_2, pfe.mutation, pfe.mutator, pfe.status, pfe.mutation_time, pfe.created_by, pfe.created_date, pfe.average_price_func, pfe.fa_start_price_func, pfe.fsv, pfe.fsv_func        ->  Hash  (cost=469.03..469.03 rows=238 width=29) (actual time=0.564..0.566 rows=345 loops=1)              Output: p.partseqno_i, p.partno, p.partmatch, p.status              Buckets: 1024  Batches: 1  Memory Usage: 30kB              ->  Bitmap Heap Scan on amos.part p  (cost=18.14..469.03 rows=238 width=29) (actual time=0.075..0.488 rows=345 loops=1)                    Output: p.partseqno_i, p.partno, p.partmatch, p.status                    Recheck Cond: (p.mutation >= ((CURRENT_DATE - `1971-12-31`::date) - 29))                    Heap Blocks: exact=186                    ->  Bitmap Index Scan on i_42609  (cost=0.00..18.08 rows=238 width=0) (actual time=0.035..0.035 rows=356 loops=1)                          Index Cond: (p.mutation >= ((CURRENT_DATE - `1971-12-31`::date) - 29))Planning Time: 0.379 msExecution Time: 5.443 msDave Cramer", "msg_date": "Wed, 27 Mar 2024 17:33:17 -0400", "msg_from": "Dave Cramer 
<[email protected]>", "msg_from_op": true, "msg_subject": "incorrect results and different plan with 2 very similar queries" }, { "msg_contents": "On Thu, 28 Mar 2024 at 10:33, Dave Cramer <[email protected]> wrote:\n> There is a report on the pgjdbc github JDBC Driver shows erratic behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 (github.com)\n>\n> Here are the plans.\n>\n> JDBC - Nested Loop (incorrect result)\n>\n> Index Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 28))\n\n> JDBC - Hash Right (correct result)\n>\n> Recheck Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 29))\n\nI don't see any version details or queries, but going by the\nconditions above, the queries don't appear to be the same, so\ndifferent results aren't too surprising and not a demonstration that\nthere's any sort of bug.\n\nDavid\n\n\n", "msg_date": "Thu, 28 Mar 2024 10:56:54 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: incorrect results and different plan with 2 very similar queries" }, { "msg_contents": "Dave Cramer\n\n\nOn Wed, 27 Mar 2024 at 17:57, David Rowley <[email protected]> wrote:\n\n> On Thu, 28 Mar 2024 at 10:33, Dave Cramer <[email protected]> wrote:\n> > There is a report on the pgjdbc github JDBC Driver shows erratic\n> behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 (\n> github.com)\n> >\n> > Here are the plans.\n> >\n> > JDBC - Nested Loop (incorrect result)\n> >\n> > Index Cond: (mutation >= ((CURRENT_DATE -\n> '1971-12-31'::date) - 28))\n>\n> > JDBC - Hash Right (correct result)\n> >\n> > Recheck Cond: (mutation >= ((CURRENT_DATE -\n> '1971-12-31'::date) - 29))\n>\n> I don't see any version details or queries, but going by the\n> conditions above, the queries don't appear to be the same, so\n> different results aren't too surprising and not a demonstration that\n> there's any sort of bug.\n>\n\nSorry, you are correct. Version is 12.14. Here is the query\n\nSELECT\n p.partseqno_i\n, p.partno\n, p.partmatch\n, pfe.average_price\n, pfe.sales_price\n, pfe.purch_price\n, pfe.average_price_2\n, pfe.avg_repair_cost\n, pfe.average_price_func\n, pfe.fsv\n, pfe.fsv_func\n, p.status\n\nFROM part p\nLEFT JOIN part_fa_entity pfe ON (p.partno = pfe.partno)\nWHERE 1=1\nAND (p.mutation >= (CURRENT_DATE - '1971-12-31'::date)-27) ORDER BY p.partno\n\nDave\n\n\n> David\n>\n\nDave CramerOn Wed, 27 Mar 2024 at 17:57, David Rowley <[email protected]> wrote:On Thu, 28 Mar 2024 at 10:33, Dave Cramer <[email protected]> wrote:\n> There is a report on the pgjdbc github JDBC Driver shows erratic behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 (github.com)\n>\n> Here are the plans.\n>\n> JDBC - Nested Loop (incorrect result)\n>\n>                     Index Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 28))\n\n> JDBC - Hash Right (correct result)\n>\n>                     Recheck Cond: (mutation >= ((CURRENT_DATE - '1971-12-31'::date) - 29))\n\nI don't see any version details or queries, but going by the\nconditions above, the queries don't appear to be the same, so\ndifferent results aren't too surprising and not a demonstration that\nthere's any sort of bug.Sorry, you are correct. Version is 12.14. 
Here is the querySELECT\n p.partseqno_i\n, p.partno\n, p.partmatch\n, pfe.average_price\n, pfe.sales_price\n, pfe.purch_price\n, pfe.average_price_2\n, pfe.avg_repair_cost\n, pfe.average_price_func\n, pfe.fsv\n, pfe.fsv_func\n, p.status\n\nFROM part p\nLEFT JOIN part_fa_entity pfe ON (p.partno = pfe.partno)\nWHERE 1=1\nAND (p.mutation >= (CURRENT_DATE - '1971-12-31'::date)-27) ORDER BY p.partno Dave\n\nDavid", "msg_date": "Wed, 27 Mar 2024 18:10:20 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: incorrect results and different plan with 2 very similar queries" }, { "msg_contents": "\n\nOn 3/27/24 23:10, Dave Cramer wrote:\n> Dave Cramer\n> \n> \n> On Wed, 27 Mar 2024 at 17:57, David Rowley <[email protected]> wrote:\n> \n>> On Thu, 28 Mar 2024 at 10:33, Dave Cramer <[email protected]> wrote:\n>>> There is a report on the pgjdbc github JDBC Driver shows erratic\n>> behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 (\n>> github.com)\n>>>\n>>> Here are the plans.\n>>>\n>>> JDBC - Nested Loop (incorrect result)\n>>>\n>>> Index Cond: (mutation >= ((CURRENT_DATE -\n>> '1971-12-31'::date) - 28))\n>>\n>>> JDBC - Hash Right (correct result)\n>>>\n>>> Recheck Cond: (mutation >= ((CURRENT_DATE -\n>> '1971-12-31'::date) - 29))\n>>\n>> I don't see any version details or queries, but going by the\n>> conditions above, the queries don't appear to be the same, so\n>> different results aren't too surprising and not a demonstration that\n>> there's any sort of bug.\n>>\n> \n> Sorry, you are correct. Version is 12.14. Here is the query\n> \n> SELECT\n> p.partseqno_i\n> , p.partno\n> , p.partmatch\n> , pfe.average_price\n> , pfe.sales_price\n> , pfe.purch_price\n> , pfe.average_price_2\n> , pfe.avg_repair_cost\n> , pfe.average_price_func\n> , pfe.fsv\n> , pfe.fsv_func\n> , p.status\n> \n> FROM part p\n> LEFT JOIN part_fa_entity pfe ON (p.partno = pfe.partno)\n> WHERE 1=1\n> AND (p.mutation >= (CURRENT_DATE - '1971-12-31'::date)-27) ORDER BY p.partno\n> \n\nI guess the confusing bit is that the report does not claim that those\nqueries are expected to produce the same result, but that the parameter\nvalue affects which plan gets selected, and one of those plans produces\nincorrect result.\n\nI think the simplest explanation might be that one of the indexes on\npart_fa_entity is corrupted and fails to lookup some rows by partno.\nThat would explain why the plan with seqscan works fine, but nestloop\nwith index scan is missing these rows.\n\nThey might try a couple things:\n\n1) set enable_nestloop=off, see if the results get correct\n\n2) try bt_index_check on i_39773, might notice some corruption\n\n3) rebuild the index\n\nIf it's not this, they'll need to build a reproducer. It's really\ndifficult to deduce what's going on just from query plans for different\nparameter values.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 Mar 2024 23:04:33 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: incorrect results and different plan with 2 very similar queries" } ]
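[Editor's sketch, between threads] Spelled out as SQL, the three checks suggested at the end of this thread could look like the following. Index, table, and column names are taken from the plans and query quoted above, so adjust to the actual schema; amcheck must be installed, and bt_index_check() with heapallindexed = true verifies that every heap tuple has a matching index entry, which is exactly the suspected failure mode.

-- 1) Does the result become correct once the planner cannot choose the
--    nested-loop / index-scan plan?
SET enable_nestloop = off;
SELECT p.partno, pfe.average_price            -- trimmed column list of the reported query
FROM part p
LEFT JOIN part_fa_entity pfe ON (p.partno = pfe.partno)
WHERE p.mutation >= (CURRENT_DATE - '1971-12-31'::date) - 27
ORDER BY p.partno;
RESET enable_nestloop;

-- 2) Check the index the nested-loop plan probes (PostgreSQL 12 ships
--    bt_index_check(index, heapallindexed) in the amcheck extension).
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('i_39773'::regclass, true);

-- 3) If corruption is reported, rebuild the index and re-run step 1.
REINDEX INDEX CONCURRENTLY i_39773;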
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nCoverity has some reports in the new code\npg_createsubscriber.c\nI think that Coverity is right.\n\n1.\nCID 1542682: (#1 of 1): Resource leak (RESOURCE_LEAK)\nleaked_storage: Variable buf going out of scope leaks the storage it points\nto.\n\n2.\nCID 1542704: (#1 of 1): Resource leak (RESOURCE_LEAK)\nleaked_storage: Variable conn going out of scope leaks the storage it\npoints to.\n\n3.\nCID 1542691: (#1 of 1): Resource leak (RESOURCE_LEAK)\nleaked_storage: Variable str going out of scope leaks the storage it points\nto.\n\nPatch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 27 Mar 2024 20:50:09 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix some resources leaks\n (src/bin/pg_basebackup/pg_createsubscriber.c)" }, { "msg_contents": "On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:\n> Coverity has some reports in the new code\n> pg_createsubscriber.c\n> I think that Coverity is right.\n\nIt depends on your \"right\" definition. If your program execution is ephemeral\nand the leak is just before exiting, do you think it's worth it?\n\n> 1.\n> CID 1542682: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable buf going out of scope leaks the storage it points to.\n\nIt will exit in the next instruction.\n\n> 2.\n> CID 1542704: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable conn going out of scope leaks the storage it points to.\n\nThe connect_database function whose exit_on_error is false is used in 2 routines:\n\n* cleanup_objects_atexit: that's about to exit;\n* drop_primary_replication_slot: that will execute a few routines before exiting.\n\n> 3.\n> CID 1542691: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable str going out of scope leaks the storage it points to.\n\nIt will exit in the next instruction.\n\nHaving said that, applying this patch is just a matter of style.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:Coverity has some reports in the new codepg_createsubscriber.cI think that Coverity is right.It depends on your \"right\" definition. If your program execution is ephemeraland the leak is just before exiting, do you think it's worth it?1.CID 1542682: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable buf going out of scope leaks the storage it points to.It will exit in the next instruction.2.CID 1542704: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable conn going out of scope leaks the storage it points to.The connect_database function whose exit_on_error is false is used in 2 routines:* cleanup_objects_atexit: that's about to exit;* drop_primary_replication_slot: that will execute a few routines before exiting.3.CID 1542691: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable str going out of scope leaks the storage it points to.It will exit in the next instruction.Having said that, applying this patch is just a matter of style.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 27 Mar 2024 23:07:44 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix some resources leaks\n (src/bin/pg_basebackup/pg_createsubscriber.c)" }, { "msg_contents": "Em qua., 27 de mar. 
de 2024 às 23:08, Euler Taveira <[email protected]>\nescreveu:\n\n> On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:\n>\n> Coverity has some reports in the new code\n> pg_createsubscriber.c\n> I think that Coverity is right.\n>\n>\n> It depends on your \"right\" definition.\n>\nI do not think so.\n\nIf your program execution is ephemeral\n> and the leak is just before exiting, do you think it's worth it?\n>\nI think it's worth it.\nEven an ephemeral execution can allocate resources, for example, and\nmainly, in Windows,\nand block these resources for other programs, and on a highly loaded server\nwith very few free resources,\nreleasing resources as quickly as possible makes a difference.\n\n\n> 1.\n> CID 1542682: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable buf going out of scope leaks the storage it\n> points to.\n>\n>\n> It will exit in the next instruction.\n>\nYes, by exit() call function.\n\n\n>\n> 2.\n> CID 1542704: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable conn going out of scope leaks the storage it\n> points to.\n>\n>\n> The connect_database function whose exit_on_error is false is used in 2\n> routines:\n>\nCalling connect_database with false, per example:\nconn = connect_database(dbinfo[i].pubconninfo, false);\n\nIf the connection status != CONNECTION_OK, the function returns NULL,\nbut does not free connection struct, memory and data.\n\nIn the loop with thousands of \"number of specified databases\",\nIt would quickly end up in problems, especially on Windows.\n\n\n> * cleanup_objects_atexit: that's about to exit;\n> * drop_primary_replication_slot: that will execute a few routines before\n> exiting.\n>\n> 3.\n> CID 1542691: (#1 of 1): Resource leak (RESOURCE_LEAK)\n> leaked_storage: Variable str going out of scope leaks the storage it\n> points to.\n>\n>\n> It will exit in the next instruction.\n>\nYes, by exit() call function.\n\nAbout exit() function:\n\ndeallocating-memory-when-using-exit1-c\n<https://stackoverflow.com/questions/7414051/deallocating-memory-when-using-exit1-c>\n\"exit *does not* call the destructors of any stack based objects so if\nthose objects have allocated any memory internally then yes that memory\nwill be leaked. \"\n\nreference/cstdlib/exit/ <https://cplusplus.com/reference/cstdlib/exit/>\n\"Note that objects with automatic storage are not destroyed by calling exit\n(C++).\"\n\nI think that relying on the exit function for cleaning is extremely\nfragile, especially on Windows.\n\n\n> Having said that, applying this patch is just a matter of style.\n>\nIMO, a better and more correct style.\n\nbest regards,\nRanier Vilela\n\nEm qua., 27 de mar. de 2024 às 23:08, Euler Taveira <[email protected]> escreveu:On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:Coverity has some reports in the new codepg_createsubscriber.cI think that Coverity is right.It depends on your \"right\" definition.I do not think so. If your program execution is ephemeraland the leak is just before exiting, do you think it's worth it?I think it's worth it.Even an ephemeral execution can allocate resources, for example, and mainly, in Windows, and block these resources for other programs, and on a highly loaded server with very few free resources, releasing resources as quickly as possible makes a difference. 1.CID 1542682: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable buf going out of scope leaks the storage it points to.It will exit in the next instruction.Yes, by exit() call function. 
2.CID 1542704: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable conn going out of scope leaks the storage it points to.The connect_database function whose exit_on_error is false is used in 2 routines:Calling connect_database with false, per example:conn = connect_database(dbinfo[i].pubconninfo, false);If the connection status != CONNECTION_OK, the function returns NULL,but does not free connection struct, memory and data.In the loop with thousands of \"number of specified databases\",It would quickly end up in problems, especially on Windows.* cleanup_objects_atexit: that's about to exit;* drop_primary_replication_slot: that will execute a few routines before exiting.3.CID 1542691: (#1 of 1): Resource leak (RESOURCE_LEAK)leaked_storage: Variable str going out of scope leaks the storage it points to.It will exit in the next instruction.\nYes, by exit() call function.  About exit() function:deallocating-memory-when-using-exit1-c\n\"exit does not call the destructors of any stack based objects \nso if those objects have allocated any memory internally then yes that \nmemory will be leaked. \"reference/cstdlib/exit/\"Note that objects with automatic storage are not destroyed by calling exit (C++).\"I think that relying on the exit function for cleaning is extremely fragile, especially on Windows.Having said that, applying this patch is just a matter of style.IMO, a better and more correct style.best regards,Ranier Vilela", "msg_date": "Thu, 28 Mar 2024 08:30:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix some resources leaks\n (src/bin/pg_basebackup/pg_createsubscriber.c)" }, { "msg_contents": "\"Euler Taveira\" <[email protected]> writes:\n> On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:\n>> Coverity has some reports in the new code\n>> pg_createsubscriber.c\n>> I think that Coverity is right.\n\n> It depends on your \"right\" definition. If your program execution is ephemeral\n> and the leak is just before exiting, do you think it's worth it?\n\nI agree with Ranier, actually. The functions in question don't\nexit() themselves, but return control to a caller that might or\nmight not choose to exit. I don't think they should be assuming\nthat an exit will happen promptly, even if that's true today.\n\nThe community Coverity run found a few additional leaks of the same\nkind in that file. I pushed a merged fix for all of them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 13:52:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix some resources leaks\n (src/bin/pg_basebackup/pg_createsubscriber.c)" }, { "msg_contents": "Em seg., 1 de abr. de 2024 às 14:52, Tom Lane <[email protected]> escreveu:\n\n> \"Euler Taveira\" <[email protected]> writes:\n> > On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:\n> >> Coverity has some reports in the new code\n> >> pg_createsubscriber.c\n> >> I think that Coverity is right.\n>\n> > It depends on your \"right\" definition. If your program execution is\n> ephemeral\n> > and the leak is just before exiting, do you think it's worth it?\n>\n> I agree with Ranier, actually. The functions in question don't\n> exit() themselves, but return control to a caller that might or\n> might not choose to exit. I don't think they should be assuming\n> that an exit will happen promptly, even if that's true today.\n>\n> The community Coverity run found a few additional leaks of the same\n> kind in that file. 
I pushed a merged fix for all of them.\n>\nThanks Tom, for the commit.\n\nbest regards,\nRanier Vilela\n\nEm seg., 1 de abr. de 2024 às 14:52, Tom Lane <[email protected]> escreveu:\"Euler Taveira\" <[email protected]> writes:\n> On Wed, Mar 27, 2024, at 8:50 PM, Ranier Vilela wrote:\n>> Coverity has some reports in the new code\n>> pg_createsubscriber.c\n>> I think that Coverity is right.\n\n> It depends on your \"right\" definition. If your program execution is ephemeral\n> and the leak is just before exiting, do you think it's worth it?\n\nI agree with Ranier, actually.  The functions in question don't\nexit() themselves, but return control to a caller that might or\nmight not choose to exit.  I don't think they should be assuming\nthat an exit will happen promptly, even if that's true today.\n\nThe community Coverity run found a few additional leaks of the same\nkind in that file.  I pushed a merged fix for all of them.Thanks Tom, for the commit.best regards,Ranier Vilela", "msg_date": "Mon, 1 Apr 2024 15:01:38 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix some resources leaks\n (src/bin/pg_basebackup/pg_createsubscriber.c)" } ]
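[Editor's sketch, between threads] As a concrete illustration of the pattern the fix applies, a connect-or-return-NULL helper that releases the PGconn on the non-exiting failure path could look roughly like this. The helper name and structure are hypothetical, shown only to illustrate the shape; this is not the committed pg_createsubscriber.c code. Build against libpq (e.g. with -lpq).

/* Hypothetical sketch, not the committed code. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include "libpq-fe.h"

static PGconn *
connect_database_sketch(const char *conninfo, bool exit_on_error)
{
    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection to \"%s\" failed: %s",
                conninfo, PQerrorMessage(conn));
        PQfinish(conn);         /* free the PGconn before bailing out */
        if (exit_on_error)
            exit(1);
        return NULL;            /* caller decides what to do next */
    }
    return conn;
}

int
main(int argc, char **argv)
{
    PGconn *conn = connect_database_sketch(argc > 1 ? argv[1] : "dbname=postgres",
                                           false);

    if (conn == NULL)
        return 1;
    printf("connected, server version %d\n", PQserverVersion(conn));
    PQfinish(conn);
    return 0;
}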
[ { "msg_contents": "Hi -hackers,\n\nWhile chasing some other bug I've learned that backtrace_functions\nmight be misleading with top elog/ereport() address.\n\nReproducer:\n\n# using Tom's reproducer on master:\nwget https://www.postgresql.org/message-id/attachment/112394/ri-collation-bug-example.sql\necho '' >> ri-collation-bug-example.sql\necho '\\errverbose' >> ri-collation-bug-example.sql\n-- based on grepping the source code locations many others could go in here:\npsql -p 5437 -c \"alter system set backtrace_functions =\n'RI_FKey_cascade_del,get_collation_isdeterministic';\"\npsql -p 5437 -c \"select pg_reload_conf();\"\ndropdb -p 5437 test1 ; createdb -p 5437 test1 ; psql -p 5437 -d test1\n-f ri-collation-bug-example.sql\n\ngives (note \"get_collation_isdeterministic\"):\n psql:ri-collation-bug-example.sql:42: ERROR: cache lookup failed\nfor collation 0\n psql:ri-collation-bug-example.sql:44: error: ERROR: XX000: cache\nlookup failed for collation 0\n LOCATION: get_collation_isdeterministic, lsyscache.c:1088\n\nand in log note the 0x14c6bb:\n2024-03-27 14:39:13.065 CET [12899] postgres@test1 ERROR: cache\nlookup failed for collation 0\n2024-03-27 14:39:13.065 CET [12899] postgres@test1 BACKTRACE:\n postgres: 16/main: postgres test1 [local] DELETE(+0x14c6bb)\n[0x5565c5a9c6bb]\n postgres: 16/main: postgres test1 [local]\nDELETE(RI_FKey_cascade_del+0x323) [0x5565c5ec0973]\n postgres: 16/main: postgres test1 [local] DELETE(+0x2d401f)\n[0x5565c5c2401f]\n postgres: 16/main: postgres test1 [local] DELETE(+0x2d5a04)\n[0x5565c5c25a04]\n postgres: 16/main: postgres test1 [local]\nDELETE(AfterTriggerEndQuery+0x78) [0x5565c5c2a918]\n[..]\n2024-03-27 14:39:13.065 CET [12899] postgres@test1 STATEMENT: delete\nfrom revisions where id='5c617ce7-688d-4bea-9d66-c0f0ebc635da';\n\nbased on \\errverbose it is OK (error matches the code, thanks to\nAlvaro for this hint):\n\n 9 bool\n 8 get_collation_isdeterministic(Oid colloid)\n 7 {\n 6 │ HeapTuple> tp;\n 5 │ Form_pg_collation colltup;\n 4 │ bool> > result;\n 3 │\n 2 │ tp = SearchSysCache1(COLLOID, ObjectIdGetDatum(colloid));\n 1 │ if (!HeapTupleIsValid(tp))\n 1088 │ │ elog(ERROR, \"cache lookup failed for collation %u\", colloid);\n[..]\n\n\nbut based on backtrace address 0x14c6bb (!) 
and it resolves\ndifferently which seems to be highly misleading (note the\n\"get_language_name.cold\"):\n\n$ addr2line -e /usr/lib/postgresql/16/bin/postgres -a -f 0x14c6bb\n0x000000000014c6bb\nget_language_name.cold\n./build/src/backend/utils/cache/./build/../src/backend/utils/cache/lsyscache.c:1181\n\nWhen i disassemble the get_collation_isdeterministic() (and I know the\nname from \\errverbose):\n\nDump of assembler code for function get_collation_isdeterministic:\nAddress range 0x5c7680 to 0x5c76c1:\n 0x00000000005c7680 <+0>: push %rbp\n[..]\n 0x00000000005c7694 <+20>: call 0x5d63b0 <SearchSysCache1>\n 0x00000000005c7699 <+25>: test %rax,%rax\n 0x00000000005c769c <+28>: je 0x14c686\n<get_collation_isdeterministic.cold>\n 0x00000000005c76a2 <+34>: mov %rax,%rdi\n[..]\n 0x00000000005c76bf <+63>: leave\n 0x00000000005c76c0 <+64>: ret\nAddress range 0x14c686 to 0x14c6bb:\n 0x000000000014c686 <-4698106>: xor %esi,%esi\n 0x000000000014c688 <-4698104>: mov $0x15,%edi\n 0x000000000014c68d <-4698099>: call 0x14ec86 <errstart_cold>\n 0x000000000014c692 <-4698094>: mov %r12d,%esi\n 0x000000000014c695 <-4698091>: lea 0x5028dc(%rip),%rdi\n # 0x64ef78\n 0x000000000014c69c <-4698084>: xor %eax,%eax\n 0x000000000014c69e <-4698082>: call 0x5df320 <errmsg_internal>\n 0x000000000014c6a3 <-4698077>: lea 0x6311a6(%rip),%rdx\n # 0x77d850 <__func__.34>\n 0x000000000014c6aa <-4698070>: mov $0x440,%esi\n 0x000000000014c6af <-4698065>: lea 0x630c8a(%rip),%rdi\n # 0x77d340\n 0x000000000014c6b6 <-4698058>: call 0x5df100 <errfinish>\n << NOTE the exact 0x14c6bb is even missing here(!)\n\nnotice the interesting thing here: according to GDB\nget_collation_isdeterministic() is @ 0x5c7680 + jump to 0x14c686\n<get_collation_isdeterministic.cold> till 0x14c6bb (but without it)\nwithout any stack setup for that .cold. But backtrace() just captured\nthe elog/ereport (cold) at the end of 0x14c6bb instead, so if I take\nthat exact address from backtrace_functions output as it is\n(\"DELETE(+0x14c6bb)\"), it also shows WRONG function (just like\naddr2line):\n\n(gdb) disassemble 0x14c6bb\nDump of assembler code for function get_language_name:\nAddress range 0x5c7780 to 0x5c77ee:\n[..]\n 0x00000000005c77ed <+109>: ret\nAddress range 0x14c6bb to 0x14c6f0:\n 0x000000000014c6bb <-4698309>: xor %esi,%esi\n[..]\n 0x000000000014c6eb <-4698261>: call 0x5df100 <errfinish>\nEnd of assembler dump.\n\nso this is wrong (as are failing in \"get_collation_isdeterministic\"\nnot in \"get_language_name\").\n\nI was very confused at the beginning with the main question being: why\nare we compiling elog/ereport() so that it is incompatible with\nbacktrace? When looking for it I've found two threads [1] by David\nthat were actually helpful in understanding that this was done for\nperformance (unlikley(), cold attribute and similiar type of\ndiscussions). Now my thought is that for >= ERROR in ereport_domain we\ncould add something more (?) 
before pg_unreachable() that would help\nthe backtrace to resolve the symbol correctly\n\nIf I try non-portable PoC with x86 nop instruction:\n\n--- a/src/include/utils/elog.h\n+++ b/src/include/utils/elog.h\n@@ -145,8 +145,10 @@ struct Node;\n errstart_cold(elevel, domain) : \\\n errstart(elevel, domain)) \\\n __VA_ARGS__, errfinish(__FILE__, __LINE__, __func__); \\\n- if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n+ if (__builtin_constant_p(elevel) && (elevel) >= ERROR) { \\\n+ __asm__ __volatile__(\"nop\"); \\\n pg_unreachable(); \\\n+ } \\\n\nit partially works and the address can be now properly resolved!\n\nSomewhat backtrace() still is unable to lookup the to do so in log itself:\n postgres: postgres test1 [local] DELETE(+0x15f84c) [0x55703bf5284c]\n postgres: postgres test1 [local]\nDELETE(RI_FKey_cascade_del+0x2b0) [0x55703c34c850]\n\nbut at least:\n\npostgres@hive:~$ addr2line -e /usr/pgsql17/bin/postgres -a -f 0x15f84c\n0x000000000015f84c\nget_collation_isdeterministic\n/git/postgres/src/backend/utils/cache/lsyscache.c:1062 (discriminator 4)\npostgres@hive:~$\n\nin gdb it will point right to our new nop:\n 0x000000000015f840 <-4490944>: lea 0x608306(%rip),%rdi\n # 0x767b4d\n 0x000000000015f847 <-4490937>: call 0x5be1d0 <errfinish>\n 0x000000000015f84c <-4490932>: nop\n\nThoughts? Does it make sense to post a patch (for pg18?)? How to do it\nin $arch-independent way? I've tried also to try to find a way with\ne.g. -rdynamic to show that real function name, but it looks like\nwithout some more serious unwind library it seems unrealistic (?) to\nresolve that get_collation_isdeterministic.cold\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/flat/CAApHDvoWoxtbWiqZxrhO%2Bi9NoG56AWHDzuDDD%2B1OEf4PxzFhig%40mail.gmail.com#611566bd6fd06f27e51dbc3148a673d3\n[2] - https://www.postgresql.org/message-id/flat/CAKJS1f8yqRW3qx2CO9r4bqqvA2Vx68%3D3awbh8CJWTP9zXeoHMw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 13:01:28 +0100", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "Jakub Wartak <[email protected]> writes:\n> While chasing some other bug I've learned that backtrace_functions\n> might be misleading with top elog/ereport() address.\n\nThat was understood from the beginning: this type of backtrace is\ninherently pretty imprecise, and I doubt there is much that can\nbe done to make it better. 
IIRC the fundamental problem is it only\nlooks at global symbols, so static functions inherently defeat it.\nIt was argued that this is better than nothing, which is true, but\nyou have to take the info with a mountain of salt.\n\nI recall speculating about whether we could somehow invoke gdb\nto get a more comprehensive and accurate backtrace, but I don't\nreally have a concrete idea how that could be made to work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 14:36:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "Hi Tom and -hackers!\n\nOn Thu, Mar 28, 2024 at 7:36 PM Tom Lane <[email protected]> wrote:\n>\n> Jakub Wartak <[email protected]> writes:\n> > While chasing some other bug I've learned that backtrace_functions\n> > might be misleading with top elog/ereport() address.\n>\n> That was understood from the beginning: this type of backtrace is\n> inherently pretty imprecise, and I doubt there is much that can\n> be done to make it better. IIRC the fundamental problem is it only\n> looks at global symbols, so static functions inherently defeat it.\n> It was argued that this is better than nothing, which is true, but\n> you have to take the info with a mountain of salt.\n\nOK, point taken, thanks for describing the limitations, still I find\nbacktrace_functions often the best thing we have primarily due its\nsimplicity and ease of use (for customers). I found out simplest\nportable way to generate NOP (or any other instruction that makes the\nproblem go away):\n\nwith the reproducer, psql returns:\n\npsql:ri-collation-bug-example.sql:48: error: ERROR: XX000: cache\nlookup failed for collation 0\nLOCATION: get_collation_isdeterministic, lsyscache.c:1062\n\nlogfile without patch:\n\n2024-05-07 09:05:43.043 CEST [44720] ERROR: cache lookup failed for collation 0\n2024-05-07 09:05:43.043 CEST [44720] BACKTRACE:\n postgres: postgres postgres [local] DELETE(+0x155883) [0x55ce5a86a883]\n postgres: postgres postgres [local]\nDELETE(RI_FKey_cascade_del+0x2b0) [0x55ce5ac6dfd0]\n postgres: postgres postgres [local] DELETE(+0x2d488b) [0x55ce5a9e988b]\n[..]\n\n$ addr2line -e /usr/pgsql18/bin/postgres -a -f 0x155883\n0x0000000000155883\nget_constraint_type.cold <<<<<< so it's wrong as the real function\nshould be get_collation_isdeterministic\n\nlogfile with attached patch:\n\n2024-05-07 09:11:06.596 CEST [51185] ERROR: cache lookup failed for collation 0\n2024-05-07 09:11:06.596 CEST [51185] BACKTRACE:\n postgres: postgres postgres [local] DELETE(+0x168bf0) [0x560e1cdfabf0]\n postgres: postgres postgres [local]\nDELETE(RI_FKey_cascade_del+0x2b0) [0x560e1d200c00]\n postgres: postgres postgres [local] DELETE(+0x2e90fb) [0x560e1cf7b0fb]\n[..]\n\n$ addr2line -e /usr/pgsql18/bin/postgres -a -f 0x168bf0\n0x0000000000168bf0\nget_collation_isdeterministic.cold <<< It's ok with the patch\n\nNOTE: in case one will be testing this: one cannot ./configure with\n--enable-debug as it prevents the compiler optimizations that actually\nend up with the \".cold\" branch optimizations that cause backtrace() to\nreturn the wrong address.\n\n> I recall speculating about whether we could somehow invoke gdb\n> to get a more comprehensive and accurate backtrace, but I don't\n> really have a concrete idea how that could be made to work.\n\nOh no, I'm more about micro-fix rather than doing some bigger\nrevolution. 
The patch simply adds this one instruction in macro, so\nthat now backtrace_functions behaves better:\n\n 0x0000000000773d28 <+105>: call 0x79243f <errfinish>\n 0x0000000000773d2d <+110>: movzbl -0x12(%rbp),%eax << this ends\nup being added by the patch\n 0x0000000000773d31 <+114>: call 0xdc1a0 <abort@plt>\n\nI'll put that as for PG18 in CF, but maybe it could be backpatched too\n- no hard opinion on that (I don't see how that ERROR/FATAL path could\ncause any performance regressions)\n\n-J.", "msg_date": "Tue, 7 May 2024 09:43:36 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "On 07.05.24 09:43, Jakub Wartak wrote:\n> NOTE: in case one will be testing this: one cannot ./configure with\n> --enable-debug as it prevents the compiler optimizations that actually\n> end up with the \".cold\" branch optimizations that cause backtrace() to\n> return the wrong address.\n\nIs that configuration useful? If you're interested in backtraces, \nwouldn't you also want debug symbols? Don't production builds use debug \nsymbols nowadays as well?\n\n>> I recall speculating about whether we could somehow invoke gdb\n>> to get a more comprehensive and accurate backtrace, but I don't\n>> really have a concrete idea how that could be made to work.\n> Oh no, I'm more about micro-fix rather than doing some bigger\n> revolution. The patch simply adds this one instruction in macro, so\n> that now backtrace_functions behaves better:\n> \n> 0x0000000000773d28 <+105>: call 0x79243f <errfinish>\n> 0x0000000000773d2d <+110>: movzbl -0x12(%rbp),%eax << this ends\n> up being added by the patch\n> 0x0000000000773d31 <+114>: call 0xdc1a0 <abort@plt>\n> \n> I'll put that as for PG18 in CF, but maybe it could be backpatched too\n> - no hard opinion on that (I don't see how that ERROR/FATAL path could\n> cause any performance regressions)\n\nI'm missing a principled explanation of what this does. I just see that \nit sprinkles some no-op code to make this particular thing work a bit \ndifferently, but this might just behave differently with the next \ncompiler next year. I'd like to see a bit more of an abstract \nexplanation of what is happening here.\n\n\n\n", "msg_date": "Sun, 12 May 2024 22:33:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "Hi Peter!\n\nOn Sun, May 12, 2024 at 10:33 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 07.05.24 09:43, Jakub Wartak wrote:\n> > NOTE: in case one will be testing this: one cannot ./configure with\n> > --enable-debug as it prevents the compiler optimizations that actually\n> > end up with the \".cold\" branch optimizations that cause backtrace() to\n> > return the wrong address.\n>\n> Is that configuration useful? 
If you're interested in backtraces,\n> wouldn't you also want debug symbols?\n\nThe use case here is that backtrace_functions is unreliable when we\nask customers on production builds to use it (so it's useful not just\nfor local tests)\n\n> Don't production builds use debug\n> symbols nowadays as well?\n\nReality is apparently mixed,at least from what I have checked :\n- all RHEL 7.x/8.x (PGDG and our forks) do NOT come with\n--enable-debug according to pg_config.\n- on Debian 11/12 PGDG does come with --enable-debug\n\n> >> I recall speculating about whether we could somehow invoke gdb\n> >> to get a more comprehensive and accurate backtrace, but I don't\n> >> really have a concrete idea how that could be made to work.\n> > Oh no, I'm more about micro-fix rather than doing some bigger\n> > revolution. The patch simply adds this one instruction in macro, so\n> > that now backtrace_functions behaves better:\n> >\n> > 0x0000000000773d28 <+105>: call 0x79243f <errfinish>\n> > 0x0000000000773d2d <+110>: movzbl -0x12(%rbp),%eax << this ends\n> > up being added by the patch\n> > 0x0000000000773d31 <+114>: call 0xdc1a0 <abort@plt>\n> >\n> > I'll put that as for PG18 in CF, but maybe it could be backpatched too\n> > - no hard opinion on that (I don't see how that ERROR/FATAL path could\n> > cause any performance regressions)\n>\n> I'm missing a principled explanation of what this does. I just see that\n> it sprinkles some no-op code to make this particular thing work a bit\n> differently, but this might just behave differently with the next\n> compiler next year. I'd like to see a bit more of an abstract\n> explanation of what is happening here.\n\nOK I'll try to explain using assembly, but I'm not an expert on this.\nLet's go to the 1st post, assume we run with backtrace_function set:\n\nget_collation_isdeterministic() 0x5c7680 to 0x5c76c1:\n might jump(!) 
to 0x14c686 <get_collation_isdeterministic.cold>\n note the two completely different address ranges (hot and separate cold)\n it's because of 913ec71d682 that added the cold branches\noptimization via pg_attribute_cold to errstart_cold\n\nDump of assembler code for function get_collation_isdeterministic:\nAddress range 0x5c7680 to 0x5c76c1:\n 0x00000000005c7680 <+0>: push %rbp\n[..]\n 0x00000000005c7694 <+20>: call 0x5d63b0 <SearchSysCache1>\n 0x00000000005c7699 <+25>: test %rax,%rax\n 0x00000000005c769c <+28>: je 0x14c686\n<get_collation_isdeterministic.cold>\n[..]\n 0x00000000005c76b3 <+51>: call 0x5d6430 <ReleaseSysCache>\n 0x00000000005c76b8 <+56>: mov %r12d,%eax\n 0x00000000005c76bb <+59>: mov -0x8(%rbp),%r12\n 0x00000000005c76bf <+63>: leave\n 0x00000000005c76c0 <+64>: ret\nAddress range 0x14c686 to 0x14c6bb:\n // note here that the last\ninstruction is 0x14c6b6 not 0x14c6bb(!)\n // note this cold path also has no\nframe pointer setup\n 0x000000000014c686 <-4698106>: xor %esi,%esi\n 0x000000000014c688 <-4698104>: mov $0x15,%edi\n 0x000000000014c68d <-4698099>: call 0x14ec86 <errstart_cold>\n 0x000000000014c692 <-4698094>: mov %r12d,%esi\n 0x000000000014c695 <-4698091>: lea 0x5028dc(%rip),%rdi\n # 0x64ef78\n 0x000000000014c69c <-4698084>: xor %eax,%eax\n 0x000000000014c69e <-4698082>: call 0x5df320 <errmsg_internal>\n 0x000000000014c6a3 <-4698077>: lea 0x6311a6(%rip),%rdx\n # 0x77d850 <__func__.34>\n 0x000000000014c6aa <-4698070>: mov $0x440,%esi\n 0x000000000014c6af <-4698065>: lea 0x630c8a(%rip),%rdi\n # 0x77d340\n 0x000000000014c6b6 <-4698058>: call 0x5df100 <errfinish>\nEnd of assembler dump.\n\nso it's\n get_collation_isdeterministic() ->\n get_collation_isdeterministic.cold() [but not real function ->\nno proper fp/stack?] ->\n .. ->\n errfinish() ->\n set_backtrace() // just builds and appends string\nusing backtrace()/backtrace_functions()/backtrace_symbol_list().\n prints/logs, finishes\n\nIt seems that the thing is that the limited GNU libc\nbacktrace_symbol_list() won't resolve the 0x14c6b6..0x14c6bb to the\n\"get_collation_isdeterministic[.cold]\" symbol name and it will just\nsimply put 0x14c6bb in that the text asm. It's wrong and it is\nconfusing:\n\n2024-03-27 14:39:13.065 CET [12899] postgres(at)test1 ERROR: cache\nlookup failed for collation 0\n2024-03-27 14:39:13.065 CET [12899] postgres(at)test1 BACKTRACE:\n postgres: 16/main: postgres test1 [local] DELETE(+0x14c6bb)\n[0x5565c5a9c6bb]\n postgres: 16/main: postgres test1 [local]\nDELETE(RI_FKey_cascade_del+0x323) [0x5565c5ec0973]\n[..]\n\nif you now take addr2line for 0x14c6bb it will be resolved to the next\nassembly AFTER where it had happened: we did blow up in\n<0x14c686..0x14c6bb), but it put 0x14c6bb as the IP:\n\n$ addr2line -e /usr/lib/postgresql/16/bin/postgres -a -f 0x14c6bb\n0x000000000014c6bb\nget_language_name.cold\n\n^ it's wrong,! but GDB also gets the same (GIGO):\n\n(gdb) disassemble 0x14c6bb\nDump of assembler code for function get_language_name:\nAddress range 0x5c7780 to 0x5c77ee:\n[..]\n 0x00000000005c77da <+90>: je 0x14c6bb <get_language_name.cold>\n[..]\n 0x00000000005c77ed <+109>: ret\nAddress range 0x14c6bb to 0x14c6f0: // <<< HERE\n 0x000000000014c6bb <-4698309>: xor %esi,%esi\n[..]\n\nSo something was saved in the wrong way (fp?), however adding that 1\ntrivial NOP instruction between errfinish() and pg_unreachable() where\nafterwards somehow it returns, does make the returned address (using\naddr2line) resolve to the proper function. 
My limited understanding is\nthat we simply pushed the compiler into generating that cold path\n(that has no %ebp/%rbp frame pointer assembly) followed by no-exit in\nasm code that prevents backtrace() from working. So it looks like we\nare playing hard at micro-optimizations (cold branch optimizations,\nagain 913ec71d682), we provide backtrace_functions GUC and we provide\nvarious packaging ways, and that somehow bite me. The backtrace(3)\nsays `Omission of the frame pointers (as implied by any of gcc(1)'s\nnonzero optimization levels) may cause these assumptions to be\nviolated.`\n\n-J.\n\n\n", "msg_date": "Tue, 14 May 2024 12:12:06 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "On 2024-May-14, Jakub Wartak wrote:\n\n> On Sun, May 12, 2024 at 10:33 PM Peter Eisentraut <[email protected]> wrote:\n\n> > Don't production builds use debug\n> > symbols nowadays as well?\n> \n> Reality is apparently mixed,at least from what I have checked :\n> - all RHEL 7.x/8.x (PGDG and our forks) do NOT come with\n> --enable-debug according to pg_config.\n\nOoh, yeah, that's true according to \nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16.spec;h=ab2f6edc903f083e04b8f7a1d3bad8e1b7018791;hb=1a8b9fa7019d3f73322ca873b62dc0b33e73ed1d\n\n 507 %if %beta\n 508 --enable-debug \\\n 509 --enable-cassert \\\n 510 %endif\n\nMaybe a better approach for this whole thing is to change the specs so\nthat --enable-debug is always given, not just for %beta.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 14 May 2024 13:41:06 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-May-14, Jakub Wartak wrote:\n>> Reality is apparently mixed,at least from what I have checked :\n>> - all RHEL 7.x/8.x (PGDG and our forks) do NOT come with\n>> --enable-debug according to pg_config.\n\n> Ooh, yeah, that's true according to \n> https://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16.spec;h=ab2f6edc903f083e04b8f7a1d3bad8e1b7018791;hb=1a8b9fa7019d3f73322ca873b62dc0b33e73ed1d\n\n> 507 %if %beta\n> 508 --enable-debug \\\n> 509 --enable-cassert \\\n> 510 %endif\n\n> Maybe a better approach for this whole thing is to change the specs so\n> that --enable-debug is always given, not just for %beta.\n\nMy recollection from my time at Red Hat is that their standard policy\nis to build everything with debug symbols all the time; so this is\nviolating that policy, and we should change it just on those grounds.\nHowever, I'm not sure how much the change will help Joe Average User\nwith respect to the thread topic. RH actually has infrastructure that\nsplits the debug symbol tables out into separate \"debuginfo\" RPMs,\nwhich you have to install manually if you want to debug a particular\npackage. 
This is good for disk space consumption, but it means that\nmost users are still only going to see the same backtrace they see\ncurrently.\n\nI don't know how much of that applies to, eg, Debian.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 May 2024 11:05:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "On Tue, May 14, 2024 at 6:13 AM Jakub Wartak\n<[email protected]> wrote:\n> OK I'll try to explain using assembly, but I'm not an expert on this.\n> Let's go to the 1st post, assume we run with backtrace_function set:\n\nI feel like this explanation doesn't really explain very much. I mean,\nthe question is not \"how is it that adding a nop instruction fixes\nanything?\" but \"is adding a nop instruction a principled way of fixing\nthings, and if so, for what reason?\". And as far as I can see, you\nhaven't answered that question anywhere. Unless we really understand\nwhy the results are better with that nop instruction added, we can't\npossibly have any confidence that this is anything other than random\ngood fortune, which isn't a sufficiently good reason to make a change.\nAnd, while I'm no expert on this, I suspect that it is mostly just\nrandom good fortune -- the fact that inserting a volatile variable\ndeclaration here solved the problem seems like something that could\neasily fail to work out on another platform or compiler or compiler\nversion. I also think it's the wrong idea on principle to insert junk\ndeclarations into our code to try to get good backtraces.\n\nI think the right answer here is probably what Alvaro said, namely,\nthat people have to have the debug symbols installed if they want to\nget backtraces. Tom is probably correct when he says that there's\nnothing we can do to ensure that users end up with debug symbols in\nall cases, but that doesn't mean it's the wrong solution. It's at\nleast understandable why it works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 11:36:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" }, { "msg_contents": "Hi Robert,\n\nOn Tue, May 14, 2024 at 5:36 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 6:13 AM Jakub Wartak\n> <[email protected]> wrote:\n> > OK I'll try to explain using assembly, but I'm not an expert on this.\n> > Let's go to the 1st post, assume we run with backtrace_function set:\n>\n> I feel like this explanation doesn't really explain very much. I mean,\n> the question is not \"how is it that adding a nop instruction fixes\n> anything?\" but \"is adding a nop instruction a principled way of fixing\n> things, and if so, for what reason?\".\n\nShort: IMHO the backtrace() is reliable and cannot be blamed here,\nit's just that the __attribute__((cold)) with __builtin_unreachable\nprevents backtrace() from retrieving proper data from the stack.\n\nLong: using compiler cold branches with __builtin_unreachable() causes\nthe compiler to generate specific code that is misleading GNU libc's\nbacktrace() and tools like addr2line/gdb/objdump. The exact\ninstruction that causes it is that CALL assembly stores the saved RIP\nonto the stack frame. 
So, if you have something like this (that's from\nattached trimmed C semi-reproducer that tries to mimic what PG is\ndoing with ereport() macro and cold path -- NOTE: not exact match for\ninstructions, but good enough to see problems related with it):\n\n$ cat bt.c\n[..]\nint main() {\n int r;\n printf(\"starting f(), NOT yet in somemisleading_another_f() !\\n\");\n r = f();\n printf(\"%d\\n\", r*2);\n printf(\"starting somemisleading_another_f()\\n\");\n r = somemisleading_another_f();\n[..]\n\n$ gcc-12 -Wall -rdynamic -g -O2 bt.c -o bt\n$ ./bt\nstarting f(), NOT yet in somemisleading_another_f() !\nerr_cold(): 3\nerrstatic(): 3\n ./bt(dump_current_backtrace+0x35) [0x564fbb25d2d5]\n ./bt(err_cold+0x2f) [0x564fbb25d11f]\n ./bt(+0x1133) [0x564fbb25d133]\n ./bt(+0x1144) [0x564fbb25d144]\n ./bt(main+0x23) [0x564fbb25d173]\n /lib/x86_64-linux-gnu/libc.so.6(+0x2724a) [0x7fba4028624a]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7fba40286305]\n ./bt(_start+0x21) [0x564fbb25d1d1]\n\nfinishing demo() macro\n\n$ addr2line -e ./bt -f 0x1133\nf\n/root/bt.c:84\n\n$ addr2line -e ./bt -f 0x1144\nsomemisleading_another_f\n/root/bt.c:84\n\nNOTE here that the code has NOT called somemisleading_another_f() at\nall, but backtrace() is going to fetch 0x1144, because such RIP value\nwas stored by the CALL. In asm, it looks like this, in the brackets\nI've put sequence of instructions:\n\n0000000000001125 <f.part.0>:\n 1125: 55 push %rbp [4]\n 1126: bf 03 00 00 00 mov $0x3,%edi [5]\n 112b: 48 89 e5 mov %rsp,%rbp [6]\n 112e: e8 bd ff ff ff call 10f0 <err_cold> [7 -->\n.. backtrace()]\n 1133: bf 03 00 00 00 mov $0x3,%edi\n 1138: e8 73 02 00 00 call 13b0 <err_finish>\n\n000000000000113d <f.cold>:\n 113d: 31 c0 xor %eax,%eax [2]\n 113f: e8 e1 ff ff ff call 1125 <f.part.0> [3 -->\nthis is going to store 0x113f+4b = 0x1144 as saved RIP onto stack\nframe]\n\n0000000000001144 <somemisleading_another_f.cold>:\n 1144: 31 c0 xor %eax,%eax\n 1146: e8 da ff ff ff call 1125 <f.part.0>\n 114b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)\n\n[..]\n00000000000013e0 <f>:\n[..]\n 13ed: e8 7e fc ff ff call 1070 <getpid@plt>\n 13f2: a8 01 test $0x1,%al\n 13f4: 0f 85 43 fd ff ff jne 113d <f.cold> [1]\n\nSo here, just like in PostgreSQL, in the f.cold the \"CALL\"@0x113f is\ngoing to save RIP of the following instruction (0x1144), but that's\nfrom a different (next) function in address layout. So when\nbacktrace() is going to retrieve that stack frame, it's going to see\nsomething belonging to a function that could not even be called\nphysically. The thing is that path is never going to return (and\nthat's also caused by usage of __builtin_unreachable()), so for the\ncompiler there's no need to generate any other instruction after such\nCALL. There's no such need, of course unless you want to have valid\nbacktrace() output. 
Storing literally any instruction after such CALL,\ncauses the savedRIP in the stack frame to point back to the proper\norigin function.\n\nSo again, with on fresh PG18 today (compiled with: ./configure\n--prefix=/usr/pgsql18 ), reproducer gives shows location error as\nget_collation_isdeterministic() in psql:\n\npsql:ri-collation-bug-example.sql:42: ERROR: cache lookup failed for\ncollation 0\npsql:ri-collation-bug-example.sql:44: error: ERROR: XX000: cache\nlookup failed for collation 0\nLOCATION: get_collation_isdeterministic, lsyscache.c:1062 // \\errverbose\n\nit's evident that backtrace_functions produces garbage:\n\n2024-08-26 10:00:22.317 CEST [39311] STATEMENT: delete from revisions\nwhere id='5c617ce7-688d-4bea-9d66-c0f0ebc635da';\n2024-08-26 10:00:45.216 CEST [39325] ERROR: cache lookup failed for collation 0\n2024-08-26 10:00:45.216 CEST [39325] BACKTRACE:\n postgres: postgres test1 [local] DELETE(+0x155898) [0x55e9161be898]\n postgres: postgres test1 [local]\nDELETE(RI_FKey_cascade_del+0x2a9) [0x55e9165c2479]\n postgres: postgres test1 [local] DELETE(+0x2d7e1f) [0x55e916340e1f]\n[..]\n\nso what's the 0x155898 ? We get:\n$ addr2line -e /usr/pgsql18/bin/postgres -a -f 0x155898\n0x0000000000155898\nget_constraint_type.cold\nlsyscache.c:?\n\nThat's wrong, we should get \"get_collation_isdeterministic\". Now with\nthe patch backtrace_functions produces:\n\n2024-08-26 10:10:26.151 CEST [49283] ERROR: cache lookup failed for collation 0\n2024-08-26 10:10:26.151 CEST [49283] BACKTRACE:\n postgres: postgres test2 [local] DELETE(+0x16964f) [0x55dc64f4064f]\n postgres: postgres test2 [local]\nDELETE(RI_FKey_cascade_del+0x2a9) [0x55dc65343559]\n[..]\n2024-08-26 10:10:26.151 CEST [49283] STATEMENT: delete from revisions\nwhere id='5c617ce7-688d-4bea-9d66-c0f0ebc635da';\n$ addr2line -e /usr/pgsql18/bin/postgres -a -f 0x16964f\n0x000000000016964f\nget_collation_isdeterministic.cold\nlsyscache.c:?\n\n> I think the right answer here is probably what Alvaro said, namely, that people have to have the debug symbols installed if they want to get backtraces.\n\nYes, the binary cannot be strip(1)-ed (ELF symbols table '.symtab'\ncannot be emptied) for it to work OR the binary can be striped, but\nthen the debug symbols need to be installed as add-on in order to\ndecode the addresses, but that's standard thing today and not such\ninstallation is not a problem in general.\n\nFun fact: one thing worth mentioning here is that I was plain wrong:\nit is NOT --enable-debug that is causing - or not - generating .cold\nfunctions. Those seem to be almost always mostly generated on modern\ngcc compilers. E.g. Debian official PGDG packages come without \".cold\"\nfunction optimizations VISIBLE within the binary by default when using\nnormal objdump(1). objdump(1) was unable to resolve anything to have\n\".cold\", but following assembly, it looked like those stubs were\npresent. Now using e.g. objdump --dwarf\n--disassemble=get_collation_oid.cold (the key here is to use --dwarf),\nI was able to get those function names with .cold . The --dwarf switch\nactually starts reading the external (\"/usr/lib/debug/.build-id/\")\nsymbols and confirms they are there.\n\nAt first I thought that clang does not seem to be affected by this\nwhen dealing with my toy bt.c (actually clang fixes it!), it looks\nlike it is not generating those \".cold\" stubs at all. However the\nissue is still the wrong output in PG even with clang ! 
(I've got the\nwrong function output in log too, and again there was CALL as last\ninstruction, so it saved the wrong RIP of the follow-up instruction\ntoo!). My org. patch does not fix it there, as sadly clang optimized\nout my volatile char. To my disgust, what helped there was the below\nthing:\n\n--- a/src/include/utils/elog.h\n+++ b/src/include/utils/elog.h\n@@ -145,8 +145,11 @@ struct Node;\n- if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n+ if (__builtin_constant_p(elevel) && (elevel) >= ERROR) { \\\n+ volatile char fix_backtrace_addr = 0x42; \\\n+ fix_backtrace_addr = fix_backtrace_addr + 1; \\\n pg_unreachable(); \\\n\nIt's kind of ugly hack, maybe some other hackers have better ideas.\nAlso I have not checked different archs that the x86_64 (and part of\nthe problem is that it needs a CPU-agnostic operand).\n\n-J.", "msg_date": "Mon, 26 Aug 2024 13:09:58 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: elog/ereport VS misleading backtrace_function function address" } ]
[ { "msg_contents": "Hello hackers,\n\nWhen running multiple 027_stream_regress.pl test instances in parallel\n(and with aggressive autovacuum) on a rather slow machine, I encountered\ntest failures due to the subselect test instability just as the following\nfailures on buildfarm:\n1) https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-03-27%2010%3A16%3A12\n\n--- /home/bf/bf-build/grassquit/HEAD/pgsql/src/test/regress/expected/subselect.out 2024-03-19 22:20:34.435867114 +0000\n+++ /home/bf/bf-build/grassquit/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/subselect.out \n2024-03-27 10:28:38.185776605 +0000\n@@ -2067,16 +2067,16 @@\n                     QUERY PLAN\n  -------------------------------------------------\n   Hash Join\n-   Hash Cond: (c.odd = b.odd)\n+   Hash Cond: (c.hundred = a.hundred)\n     ->  Hash Join\n-         Hash Cond: (a.hundred = c.hundred)\n-         ->  Seq Scan on tenk1 a\n+         Hash Cond: (b.odd = c.odd)\n+         ->  Seq Scan on tenk2 b\n           ->  Hash\n                 ->  HashAggregate\n                       Group Key: c.odd, c.hundred\n                       ->  Seq Scan on tenk2 c\n     ->  Hash\n-         ->  Seq Scan on tenk2 b\n+         ->  Seq Scan on tenk1 a\n  (11 rows)\n\n2) https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-03-27%2009%3A49%3A38\n\n(That query was added recently (by 9f1337639 from 2023-02-15) and the\nfailure evidentially depends on timing, so the number of the failures I\ncould find on buildfarm is moderate for now.)\n\nWith the subselect test modified as in attached, I could see what makes\nthe plan change:\n-                     ->  Seq Scan on public.tenk2 c (cost=0.00..445.00 rows=10000 width=8)\n+                     ->  Seq Scan on public.tenk2 c (cost=0.00..444.95 rows=9995 width=8)\n\n   relname | relpages | reltuples | autovacuum_count | autoanalyze_count\n  ---------+----------+-----------+------------------+-------------------\n- tenk2   |      345 |     10000 |                0 |                 0\n+ tenk2   |      345 |      9995 |                0 |                 0\n\nUsing the trick Thomas proposed in [1] (see my modification attached), I\ncould reproduce the failure easily on my workstation with no specific\nconditions:\n2024-03-28 14:05:13.792 UTC client backend[2358012] pg_regress/test_setup LOG:  !!!ConditionalLockBufferForCleanup() \nreturning false\n2024-03-28 14:05:13.792 UTC client backend[2358012] pg_regress/test_setup CONTEXT:  while scanning block 29 of relation \n\"public.tenk2\"\n2024-03-28 14:05:13.792 UTC client backend[2358012] pg_regress/test_setup STATEMENT:  VACUUM ANALYZE tenk2;\n...\n   relname | relpages | reltuples | autovacuum_count | autoanalyze_count\n  ---------+----------+-----------+------------------+-------------------\n- tenk2   |      345 |     10000 |                0 |                 0\n+ tenk2   |      345 |      9996 |                0 |                 0\n  (1 row)\n\nSo it looks to me like a possible cause of the failure, and I wonder\nwhether checks for query plans should be immune to such changes or results\nof VACUUM ANALYZE should be 100% stable?\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKYNHmL_DhmVRiidHv6YLAL8jViifwwn2ABY__Y3BCphg%40mail.gmail.com\n\nBest regards,\nAlexander", "msg_date": "Thu, 28 Mar 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "To what extent should tests rely on VACUUM ANALYZE?" 
}, { "msg_contents": "On 3/28/24 16:00, Alexander Lakhin wrote:\n> ...\n>\n> Using the trick Thomas proposed in [1] (see my modification attached), I\n> could reproduce the failure easily on my workstation with no specific\n> conditions:\n> 2024-03-28 14:05:13.792 UTC client backend[2358012]\n> pg_regress/test_setup LOG:  !!!ConditionalLockBufferForCleanup()\n> returning false\n> 2024-03-28 14:05:13.792 UTC client backend[2358012]\n> pg_regress/test_setup CONTEXT:  while scanning block 29 of relation\n> \"public.tenk2\"\n> 2024-03-28 14:05:13.792 UTC client backend[2358012]\n> pg_regress/test_setup STATEMENT:  VACUUM ANALYZE tenk2;\n> ...\n>   relname | relpages | reltuples | autovacuum_count | autoanalyze_count\n>  ---------+----------+-----------+------------------+-------------------\n> - tenk2   |      345 |     10000 |                0 |                 0\n> + tenk2   |      345 |      9996 |                0 |                 0\n>  (1 row)\n> \n> So it looks to me like a possible cause of the failure, and I wonder\n> whether checks for query plans should be immune to such changes or results\n> of VACUUM ANALYZE should be 100% stable?\n> \n\nYeah. I think it's good to design the data/queries in such a way that\nthe behavior does not flip due to minor noise like in this case.\n\nBut I'm a bit confused - how come the estimates do change at all? The\nanalyze simply fetches 30k rows, and tenk only has 10k of them. So we\nshould have *exact* numbers, and it should be exactly the same for all\nthe analyze runs. So how come it changes like this?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 Mar 2024 18:04:54 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> Yeah. I think it's good to design the data/queries in such a way that\n> the behavior does not flip due to minor noise like in this case.\n\n+1\n\n> But I'm a bit confused - how come the estimates do change at all? The\n> analyze simply fetches 30k rows, and tenk only has 10k of them. So we\n> should have *exact* numbers, and it should be exactly the same for all\n> the analyze runs. So how come it changes like this?\n\nIt's plausible that the VACUUM ANALYZE done by test_setup fails\nConditionalLockBufferForCleanup() sometimes because of concurrent\nactivity like checkpointer writes. I'm not quite sure how we\nget from that to the observed symptom though. Maybe the\nVACUUM needs DISABLE_PAGE_SKIPPING?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 13:33:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "On Fri, Mar 29, 2024 at 1:33 AM Tom Lane <[email protected]> wrote:\n\n> Tomas Vondra <[email protected]> writes:\n> > Yeah. I think it's good to design the data/queries in such a way that\n> > the behavior does not flip due to minor noise like in this case.\n>\n> +1\n\n\nAgreed. The query in problem is:\n\n-- we can pull up the sublink into the inner JoinExpr.\nexplain (costs off)\nSELECT * FROM tenk1 A INNER JOIN tenk2 B\nON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n\nFor this query, the RHS of the semijoin can be unique-ified, allowing it\nto be joined to anything else by unique-ifying the RHS. 
Hence, both\njoin orders 'A/C/B' (as in the answer file) and 'B/C/A' (as in the\nreported plan diff) are legal.\n\nSo I'm wondering if we can make this test case more stable by using\n'c.odd > b.odd' instead of 'c.odd = b.odd' in the subquery, as attached.\nAs a result, the RHS of the semijoin cannot be unique-ified any more, so\nthat the only legal join order is 'A/B/C'. We would not have different\njoin orders due to noises in the estimates, while still testing what we\nintend to test.\n\nThanks\nRichard", "msg_date": "Fri, 29 Mar 2024 14:45:37 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "On Thu, Mar 28, 2024 at 11:00 PM Alexander Lakhin <[email protected]>\nwrote:\n\n> When running multiple 027_stream_regress.pl test instances in parallel\n> (and with aggressive autovacuum) on a rather slow machine, I encountered\n> test failures due to the subselect test instability just as the following\n> failures on buildfarm:\n> 1)\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-03-27%2010%3A16%3A12\n>\n> ---\n> /home/bf/bf-build/grassquit/HEAD/pgsql/src/test/regress/expected/subselect.out\n> 2024-03-19 22:20:34.435867114 +0000\n> +++\n> /home/bf/bf-build/grassquit/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/subselect.out\n>\n> 2024-03-27 10:28:38.185776605 +0000\n> @@ -2067,16 +2067,16 @@\n> QUERY PLAN\n> -------------------------------------------------\n> Hash Join\n> - Hash Cond: (c.odd = b.odd)\n> + Hash Cond: (c.hundred = a.hundred)\n> -> Hash Join\n> - Hash Cond: (a.hundred = c.hundred)\n> - -> Seq Scan on tenk1 a\n> + Hash Cond: (b.odd = c.odd)\n> + -> Seq Scan on tenk2 b\n> -> Hash\n> -> HashAggregate\n> Group Key: c.odd, c.hundred\n> -> Seq Scan on tenk2 c\n> -> Hash\n> - -> Seq Scan on tenk2 b\n> + -> Seq Scan on tenk1 a\n> (11 rows)\n\n\nFWIW, this issue is also being reproduced in Cirrus CI, as Matthias\nreported in another thread [1] days ago.\n\n[1]\nhttps://www.postgresql.org/message-id/CAEze2WiiE-iTKxgWQzcjyiiiA4q-zsdkkAdCaD_E83xA2g2BLA%40mail.gmail.com\n\nThanks\nRichard\n\nOn Thu, Mar 28, 2024 at 11:00 PM Alexander Lakhin <[email protected]> wrote:\nWhen running multiple 027_stream_regress.pl test instances in parallel\n(and with aggressive autovacuum) on a rather slow machine, I encountered\ntest failures due to the subselect test instability just as the following\nfailures on buildfarm:\n1) https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-03-27%2010%3A16%3A12\n\n--- /home/bf/bf-build/grassquit/HEAD/pgsql/src/test/regress/expected/subselect.out 2024-03-19 22:20:34.435867114 +0000\n+++ /home/bf/bf-build/grassquit/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/subselect.out \n2024-03-27 10:28:38.185776605 +0000\n@@ -2067,16 +2067,16 @@\n                     QUERY PLAN\n  -------------------------------------------------\n   Hash Join\n-   Hash Cond: (c.odd = b.odd)\n+   Hash Cond: (c.hundred = a.hundred)\n     ->  Hash Join\n-         Hash Cond: (a.hundred = c.hundred)\n-         ->  Seq Scan on tenk1 a\n+         Hash Cond: (b.odd = c.odd)\n+         ->  Seq Scan on tenk2 b\n           ->  Hash\n                 ->  HashAggregate\n                       Group Key: c.odd, c.hundred\n                       ->  Seq Scan on tenk2 c\n     ->  Hash\n-         ->  Seq Scan on tenk2 b\n+         ->  Seq Scan on tenk1 a\n  (11 rows)FWIW, this issue is also being 
reproduced in Cirrus CI, as Matthiasreported in another thread [1] days ago.[1] https://www.postgresql.org/message-id/CAEze2WiiE-iTKxgWQzcjyiiiA4q-zsdkkAdCaD_E83xA2g2BLA%40mail.gmail.comThanksRichard", "msg_date": "Fri, 29 Mar 2024 15:06:07 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "28.03.2024 20:33, Tom Lane wrote:\n>\n>> But I'm a bit confused - how come the estimates do change at all? The\n>> analyze simply fetches 30k rows, and tenk only has 10k of them. So we\n>> should have *exact* numbers, and it should be exactly the same for all\n>> the analyze runs. So how come it changes like this?\n> It's plausible that the VACUUM ANALYZE done by test_setup fails\n> ConditionalLockBufferForCleanup() sometimes because of concurrent\n> activity like checkpointer writes. I'm not quite sure how we\n> get from that to the observed symptom though. Maybe the\n> VACUUM needs DISABLE_PAGE_SKIPPING?\n\nYeah, the way from ConditionalLockBufferForCleanup() returning false to\nreltuples < 10000 is not one-step, as I thought initially. There is also\nsanity_check doing VACUUM in between. So, effectively the troublesome\nscenario is:\nVACUUM ANALYZE tenk2; -- with cleanup lock not granted for some blocks\nVACUUM tenk2;\n\nIn this scenario, lazy_scan_heap() -> vac_estimate_reltuples() called two\ntimes.\nFirst, with rel_pages: 384, vacrel->scanned_pages: 384,\nvacrel->live_tuples: 10000 and it results in\nvacrel->new_live_tuples = 10000,\n\nAnd second, with rel_pages: 345, vacrel->scanned_pages: 80,\nvacrel->live_tuples: 2315 (for instance), and we get\nvacrel->new_live_tuples = 9996,\n\nWith unmodified ConditionalLockBufferForCleanup() the second call is\nperformed with rel_pages: 345, vacrel->scanned_pages: 1,\nvacrel->live_tuples: 24 and it returns 10000.\n\nThis simple change fixes the issue for me:\n-VACUUM ANALYZE tenk2;\n+VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING) tenk2;\n\nBut it looks like subselect is not the only test that can fail due to\nvacuum instability. I see that create_index also suffers from cranky\nConditionalLockBufferForCleanup() (+if (rand() % 10 == 0)\nreturn false; ), although it placed in parallel_schedule before\nsanity_check, so this failure needs another explanation:\n-                      QUERY PLAN\n--------------------------------------------------------\n- Index Only Scan using tenk1_thous_tenthous on tenk1\n-   Index Cond: (thousand < 2)\n-   Filter: (tenthous = ANY ('{1001,3000}'::integer[]))\n-(3 rows)\n+                                      QUERY PLAN\n+--------------------------------------------------------------------------------------\n+ Sort\n+   Sort Key: thousand\n+   ->  Index Only Scan using tenk1_thous_tenthous on tenk1\n+         Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n+(4 rows)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 29 Mar 2024 11:59:59 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" 
}, { "msg_contents": "29.03.2024 11:59, Alexander Lakhin wrote:\n>\n> This simple change fixes the issue for me:\n> -VACUUM ANALYZE tenk2;\n> +VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING) tenk2;\n>\n\nI'm sorry, I wasn't persevering enough when testing that...\nAfter more test runs, I see that in fact it doesn't help.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 29 Mar 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "29.03.2024 11:59, Alexander Lakhin wrote:\n>\n> But it looks like subselect is not the only test that can fail due to\n> vacuum instability. I see that create_index also suffers from cranky\n> ConditionalLockBufferForCleanup() (+if (rand() % 10 == 0)\n> return false; ), although it placed in parallel_schedule before\n> sanity_check, so this failure needs another explanation:\n> -                      QUERY PLAN\n> --------------------------------------------------------\n> - Index Only Scan using tenk1_thous_tenthous on tenk1\n> -   Index Cond: (thousand < 2)\n> -   Filter: (tenthous = ANY ('{1001,3000}'::integer[]))\n> -(3 rows)\n> +                                      QUERY PLAN\n> +--------------------------------------------------------------------------------------\n> + Sort\n> +   Sort Key: thousand\n> +   ->  Index Only Scan using tenk1_thous_tenthous on tenk1\n> +         Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n> +(4 rows)\n\nI think that deviation can be explained by the fact that cost_index() takes\nbaserel->allvisfrac (derived from pg_class.relallvisible) into account for\nthe index-only-scan case, and I see the following difference when a test\nrun fails:\n         relname        | relpages | reltuples | relallvisible | indisvalid | autovacuum_count | autoanalyze_count\n  ----------------------+----------+-----------+---------------+------------+------------------+-------------------\n- tenk1                |      345 |     10000 |           345 |            |                0 |                 0\n+ tenk1                |      345 |     10000 |           305 |            |                0 |                 0\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 29 Mar 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Fri, Mar 29, 2024 at 1:33 AM Tom Lane <[email protected]> wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> Yeah. I think it's good to design the data/queries in such a way that\n>>> the behavior does not flip due to minor noise like in this case.\n\n>> +1\n\n> Agreed. The query in problem is:\n> -- we can pull up the sublink into the inner JoinExpr.\n> explain (costs off)\n> SELECT * FROM tenk1 A INNER JOIN tenk2 B\n> ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n\n> So I'm wondering if we can make this test case more stable by using\n> 'c.odd > b.odd' instead of 'c.odd = b.odd' in the subquery, as attached.\n\nI'm not sure that that is testing the same thing (since it's no longer\nan equijoin), or that it would fix the issue. 
The problem really is\nthat all three baserels have identical statistics so there's no\ndifference in cost between different join orders, and then it's mostly\na matter of unspecified implementation details which order we will\nchoose, and even the smallest change in one rel's statistics can\nflip it. The way we have fixed similar issues elsewhere is to add a\nscan-level WHERE restriction that makes one of the baserels smaller,\nbreaking the symmetry. So I'd try something like\n\n explain (costs off)\n SELECT * FROM tenk1 A INNER JOIN tenk2 B\n-ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n+ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd)\n+WHERE a.thousand < 750;\n\n(I first tried reducing the size of B, but that caused the join\norder to change; restricting A makes it keep the existing plan.)\n\nMight or might not need to mess with the size of C, but since that\none needs uniquification it's different from the others already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 09:49:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> I think that deviation can be explained by the fact that cost_index() takes\n> baserel->allvisfrac (derived from pg_class.relallvisible) into account for\n> the index-only-scan case, and I see the following difference when a test\n> run fails:\n>         relname        | relpages | reltuples | relallvisible | indisvalid | autovacuum_count | autoanalyze_count\n>  ----------------------+----------+-----------+---------------+------------+------------------+-------------------\n> - tenk1                |      345 |     10000 |           345 |            |                0 |                 0\n> + tenk1                |      345 |     10000 |           305 |            |                0 |                 0\n\nOuch. So what's triggering that? The intention of test_setup\nsurely is to provide a uniform starting point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 09:51:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "Hello Tom,\n\n29.03.2024 16:51, Tom Lane wrote:\n> Alexander Lakhin <[email protected]> writes:\n>> I think that deviation can be explained by the fact that cost_index() takes\n>> baserel->allvisfrac (derived from pg_class.relallvisible) into account for\n>> the index-only-scan case, and I see the following difference when a test\n>> run fails:\n>>         relname        | relpages | reltuples | relallvisible | indisvalid | autovacuum_count | autoanalyze_count\n>>  ----------------------+----------+-----------+---------------+------------+------------------+-------------------\n>> - tenk1                |      345 |     10000 |           345 |            |                0 |                 0\n>> + tenk1                |      345 |     10000 |           305 |            |                0 |                 0\n> Ouch. So what's triggering that? 
The intention of test_setup\n> surely is to provide a uniform starting point.\n\nThanks for your attention to the issue!\nPlease try the attached...\n\nBest regards,\nAlexander", "msg_date": "Fri, 29 Mar 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> 29.03.2024 16:51, Tom Lane wrote:\n>> Ouch. So what's triggering that? The intention of test_setup\n>> surely is to provide a uniform starting point.\n\n> Thanks for your attention to the issue!\n> Please try the attached...\n\nI experimented with the attached modified version of the patch,\nwhich probes just after the relevant VACUUMs and reduces the\ncrankiness of ConditionalLockBufferForCleanup a bit to more nearly\napproximate what we're likely to see in the buildfarm. There are\ntwo clearly-visible effects:\n\n1. The initial VACUUM fails to count some pages as all-visible,\npresumably exactly the same ones that ConditionalLockBufferForCleanup\nfails on. This feels like a bug. We still scan the pages (else\nreltuples would be wrong); why would we not recognize that they are\nall-visible?\n\n2. The re-VACUUM in sanity_check.sql corrects most of the\nrelallvisible discrepancy, presumably because it's preferentially\ngoing after the pages that didn't get marked the first time.\nHowever, it's distorting reltuples. Interestingly, the distortion\nis worse with less-cranky ConditionalLockBufferForCleanup.\n\nI believe the cause of the reltuples distortion is that the\nre-VACUUM will nearly always scan the last page of the relation,\nwhich would usually contain fewer tuples than the rest, but\nthen it counts that page equally with the rest to compute the\nnew tuple density. The fewer other pages are included in that\ncomputation, the worse the new density estimate is, accounting\nfor the effect that when ConditionalLockBufferForCleanup is more\nprone to failure the error gets smaller.\n\nThe comments in vac_estimate_reltuples already point out that\nvacuum tends to always hit the last page and claim that we\n\"handle that here\", but it's doing nothing about the likelihood\nthat the last page has fewer than the normal number of tuples.\nI wonder if we could adjust the density calculation to account\nfor that. I don't think it'd be unreasonable to just assume\nthat the last page is only half full. Or we could try to get\nthe vacuum logic to report the last-page count separately ...\n\nI tried the patch in v16 too and got similar results, so these\nare not new problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 14:31:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" }, { "msg_contents": "I wrote:\n> I experimented with the attached modified version of the patch,\n> which probes just after the relevant VACUUMs and reduces the\n> crankiness of ConditionalLockBufferForCleanup a bit to more nearly\n> approximate what we're likely to see in the buildfarm.\n\nSigh, forgot to attach the patch, not that you couldn't have\nguessed what's in it.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 29 Mar 2024 14:35:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" 
}, { "msg_contents": "29.03.2024 11:59, Alexander Lakhin wrote:\n> But it looks like subselect is not the only test that can fail due to\n> vacuum instability. I see that create_index also suffers from cranky\n> ConditionalLockBufferForCleanup() (+if (rand() % 10 == 0)  ...\n\nJust for the record, I think I've reproduced the same failure as:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-03-17%2003%3A03%3A57\nnot ok 66    + create_index                            27509 ms\n...\n\nand the similar occurrences:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-01-02%2007%3A09%3A09\nnot ok 66    + create_index                            25830 ms\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-11-15%2006%3A16%3A15\nnot ok 66    + create_index                            38508 ms\n\nby running 8 027_stream_regress instances in parallel on a slow ARMv7\ndevice like this:\nfor i in `seq 10`; do echo \"I $i\"; parallel --halt now,fail=1  -j8 \\\n  --linebuffer --tag NO_TEMP_INSTALL=1 make -s check -C \\\n  src/test/recovery_{}/ PROVE_TESTS=\"t/027*\" ::: `seq 8` || break; done\n5\n5       #   Failed test 'regression tests pass'\n5       #   at t/027_stream_regress.pl line 92.\n5       #          got: '256'\n5       #     expected: '0'\n5       t/027_stream_regress.pl ..\n5       Dubious, test returned 1 (wstat 256, 0x100)\n5       Failed 1/6 subtests\n\nnot ok 66    + create_index                           152995 ms\n...\n=== dumping .../src/test/recovery_5/tmp_check/regression.diffs ===\ndiff -U3 .../src/test/regress/expected/create_index.out .../src/test/recovery_5/tmp_check/results/create_index.out\n--- .../src/test/regress/expected/create_index.out  2024-05-30 15:30:34.523048633 +0000\n+++ .../src/test/recovery_5/tmp_check/results/create_index.out 2024-05-31 13:07:56.359001362 +0000\n@@ -1916,11 +1916,15 @@\n  SELECT unique1 FROM tenk1\n  WHERE unique1 IN (1,42,7)\n  ORDER BY unique1;\n-                      QUERY PLAN\n--------------------------------------------------------\n- Index Only Scan using tenk1_unique1 on tenk1\n-   Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n-(2 rows)\n+                            QUERY PLAN\n+-------------------------------------------------------------------\n+ Sort\n+   Sort Key: unique1\n+   ->  Bitmap Heap Scan on tenk1\n+         Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+         ->  Bitmap Index Scan on tenk1_unique1\n+               Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+(6 rows)\n\n  SELECT unique1 FROM tenk1\n  WHERE unique1 IN (1,42,7)\n@@ -1936,12 +1940,13 @@\n  SELECT thousand, tenthous FROM tenk1\n  WHERE thousand < 2 AND tenthous IN (1001,3000)\n  ORDER BY thousand;\n-                      QUERY PLAN\n--------------------------------------------------------\n- Index Only Scan using tenk1_thous_tenthous on tenk1\n-   Index Cond: (thousand < 2)\n-   Filter: (tenthous = ANY ('{1001,3000}'::integer[]))\n-(3 rows)\n+                                      QUERY PLAN\n+--------------------------------------------------------------------------------------\n+ Sort\n+   Sort Key: thousand\n+   ->  Index Only Scan using tenk1_thous_tenthous on tenk1\n+         Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n+(4 rows)\n\n  SELECT thousand, tenthous FROM tenk1\n  WHERE thousand < 2 AND tenthous IN (1001,3000)\n=== EOF ===\n\nI got failures on iteration 2, 3, 7, 1.\n\nBut with the repeated VACUUM ANALYZE:\n--- 
a/src/test/regress/sql/test_setup.sql\n+++ b/src/test/regress/sql/test_setup.sql\n@@ -163,6 +163,8 @@ CREATE TABLE tenk1 (\n  \\set filename :abs_srcdir '/data/tenk.data'\n  COPY tenk1 FROM :'filename';\n  VACUUM ANALYZE tenk1;\n+VACUUM ANALYZE tenk1;\n+VACUUM ANALYZE tenk1;\n\n20 iterations succeeded in the same environment.\n\nSo I think that that IOS plan change can be explained by the issue\ndiscussed here.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 31 May 2024 23:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: To what extent should tests rely on VACUUM ANALYZE?" } ]
[ { "msg_contents": "Noticed some newly introduced excessive trailing semicolons:\n\n$ git grep -E \";;$\" -- *.c *.h\nsrc/include/lib/radixtree.h: int deletepos =\nslot - n4->children;;\nsrc/test/modules/test_tidstore/test_tidstore.c: BlockNumber prevblkno = 0;;\n\nHere is a trivial patch to remove them.\n\nThanks\nRichard", "msg_date": "Fri, 29 Mar 2024 17:14:30 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Remove excessive trailing semicolons" }, { "msg_contents": "> On 29 Mar 2024, at 10:14, Richard Guo <[email protected]> wrote:\n\n> Noticed some newly introduced excessive trailing semicolons:\n> \n> $ git grep -E \";;$\" -- *.c *.h\n> src/include/lib/radixtree.h: int deletepos = slot - n4->children;;\n> src/test/modules/test_tidstore/test_tidstore.c: BlockNumber prevblkno = 0;;\n> \n> Here is a trivial patch to remove them.\n\nThanks, applied!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 30 Mar 2024 00:14:19 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove excessive trailing semicolons" } ]
[ { "msg_contents": "I am trying to discover the causes of occasional data loss in logical\nreplication; it is VERY rare and happens every few week/months. \n\nOur setup is a source DB running in docker on AWS cloud server. The\nsource database is stored in on local disks on the cloud server. \n\nThe replication target is a K8 POD running in an AWS instance with an\nattached persistent AWS disk. The disk mounting is managed by K8.\nPeriodically this POD is deleted and restarted in an orderly way, and\nthe persistent disk stores the database. \n\nWhat we are seeing is *very* occasional records not being replicated in\nthe more active tables. \n\nSometimes we have a backlog of several GB of data due to missing fields\nin the target or network outages etc. \n\nI am also seeing signs that some triggers are not being applied (at the\nsame time frame): ie. data *is* inserted but triggers that summarize\nthat data is not summarizing some rows and the dates on those\nnon-summarized rows corresponds to dates on unrelated missing rows in\nother tables.\n\nThis all leads me to conclude that there might be missing transactions?\nOr non-applied transactions etc. But it is further complicated by the\nfact that there is a second target database that *does* have all the\nmissing records. \n\nAny insights or avenues of exploration would be very welcome!\n\nI am trying to discover the causes of occasional data loss in logical replication; it is VERY rare and happens every few week/months.\nOur setup is a source DB running in docker on AWS cloud server. The source database is stored in on local disks on the cloud server.\nThe replication target is a K8 POD running in an AWS instance with an attached persistent AWS disk. The disk mounting is managed by K8. Periodically this POD is deleted and restarted in an orderly way, and the persistent disk stores the database.\nWhat we are seeing is *very* occasional records not being replicated in the more active tables.\nSometimes we have a backlog of several GB of data due to missing fields in the target or network outages etc.\nI am also seeing signs that some triggers are not being applied (at the same time frame): ie. data *is* inserted but triggers that summarize that data is not summarizing some rows and the dates on those non-summarized rows corresponds to dates on unrelated missing rows in other tables.This all leads me to conclude that there might be missing transactions? Or non-applied transactions etc. But it is further complicated by the fact that there is a second target database that *does* have all the missing records.\nAny insights or avenues of exploration would be very welcome!", "msg_date": "Fri, 29 Mar 2024 22:43:26 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Logical replication failure modes" }, { "msg_contents": "I should have added that the source DB is 16.1 and the target is 16.0\n\nI should have added that the source DB is 16.1 and the target is 16.0", "msg_date": "Fri, 29 Mar 2024 22:47:24 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logical replication failure modes" }, { "msg_contents": "On Fri, 29 Mar 2024 at 17:14, Philip Warner <[email protected]> wrote:\n>\n> I am trying to discover the causes of occasional data loss in logical replication; it is VERY rare and happens every few week/months.\n>\n> Our setup is a source DB running in docker on AWS cloud server. 
The source database is stored in on local disks on the cloud server.\n>\n> The replication target is a K8 POD running in an AWS instance with an attached persistent AWS disk. The disk mounting is managed by K8. Periodically this POD is deleted and restarted in an orderly way, and the persistent disk stores the database.\n>\n> What we are seeing is *very* occasional records not being replicated in the more active tables.\n>\n> Sometimes we have a backlog of several GB of data due to missing fields in the target or network outages etc.\n>\n> I am also seeing signs that some triggers are not being applied (at the same time frame): ie. data *is* inserted but triggers that summarize that data is not summarizing some rows and the dates on those non-summarized rows corresponds to dates on unrelated missing rows in other tables.\n>\n> This all leads me to conclude that there might be missing transactions? Or non-applied transactions etc. But it is further complicated by the fact that there is a second target database that *does* have all the missing records.\n>\n> Any insights or avenues of exploration would be very welcome!\n\nCan you check the following a) if there is any error in the log files\n(both publisher and subscriber), b) Is the apply worker process\nrunning for that particular subscription in the subscriber c) Check\npg_stat_subscription if last_msg_send_time, last_msg_receipt_time,\nlatest_end_lsn and latest_end_time are getting advanced in the\nsubscriber, e) Check pg_stat_replication for the lsn values and\nreply_time getting updated in the publisher.\n\nSince you have the second target database up to date,comparing the\npg_stat_subscription and pg_stat_replication for both of them will\ngive what is happening.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 1 Apr 2024 11:47:30 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logical replication failure modes" } ]
[ { "msg_contents": "Hi!\n\nI have the same problem as [1]. I have table something like:\n\nCREATE TABLE values (\n id int NOT NULL,\n revision int NOT NULL,\n data jsonb NOT NULL,\n PRIMARY KEY (id, revision)\n)\n\nAnd I would like to be able to specify PRIMARY KEY (id, revision DESC)\nbecause the most common query I am making is:\n\nSELECT data FROM values WHERE id=123 ORDER BY revision DESC LIMIT 1\n\nMy understanding, based on [2], is that the primary key index cannot\nhelp here, unless it is defined with DESC on revision. But this does\nnot seem to be possible. Would you entertain a patch adding this\nfeature? It seems pretty straightforward?\n\n\nMitar\n\n[1] https://stackoverflow.com/questions/45597101/primary-key-with-asc-or-desc-ordering\n[2] https://www.postgresql.org/docs/16/indexes-ordering.html\n\n-- \nhttps://mitar.tnode.com/\nhttps://twitter.com/mitar_m\nhttps://noc.social/@mitar\n\n\n", "msg_date": "Fri, 29 Mar 2024 21:34:27 +0100", "msg_from": "Mitar <[email protected]>", "msg_from_op": true, "msg_subject": "Allowing DESC for a PRIMARY KEY column" }, { "msg_contents": "Mitar <[email protected]> writes:\n> And I would like to be able to specify PRIMARY KEY (id, revision DESC)\n> because the most common query I am making is:\n> SELECT data FROM values WHERE id=123 ORDER BY revision DESC LIMIT 1\n\nDid you experiment with whether that actually needs a special index?\nI get\n\nregression=# create table t(id int, revision int, primary key(id, revision));\nCREATE TABLE\nregression=# explain select * from t WHERE id=123 ORDER BY revision DESC LIMIT 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------\n Limit (cost=0.15..3.45 rows=1 width=8)\n -> Index Only Scan Backward using t_pkey on t (cost=0.15..36.35 rows=11 width=8)\n Index Cond: (id = 123)\n(3 rows)\n\n> My understanding, based on [2], is that the primary key index cannot\n> help here, unless it is defined with DESC on revision. But this does\n> not seem to be possible. Would you entertain a patch adding this\n> feature? It seems pretty straightforward?\n\nYou would need a lot stronger case than \"I didn't bother checking\nwhether I really need this\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 16:41:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing DESC for a PRIMARY KEY column" }, { "msg_contents": "Hi!\n\nOn Fri, Mar 29, 2024 at 9:41 PM Tom Lane <[email protected]> wrote:\n> You would need a lot stronger case than \"I didn't bother checking\n> whether I really need this\".\n\nThanks! I have tested it this way (based on your example):\n\ncreate table t (id int not null, revision int not null);\ncreate unique index on t (id, revision desc);\nexplain select * from t where id=123 order by revision desc limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Limit (cost=0.15..3.45 rows=1 width=8)\n -> Index Only Scan using t_id_revision_idx on t (cost=0.15..36.35\nrows=11 width=8)\n Index Cond: (id = 123)\n(3 rows)\n\nIt is very similar, with only the direction difference. Based on [1] I\nwas under the impression that \"Index Only Scan Backward\" is much\nslower than \"Index Only Scan\", but based on your answer it seems I\nmisunderstood and backwards scanning is comparable with forward\nscanning? 
Especially this section:\n\n\"Consider a two-column index on (x, y): this can satisfy ORDER BY x, y\nif we scan forward, or ORDER BY x DESC, y DESC if we scan backward.\nBut it might be that the application frequently needs to use ORDER BY\nx ASC, y DESC. There is no way to get that ordering from a plain\nindex, but it is possible if the index is defined as (x ASC, y DESC)\nor (x DESC, y ASC).\"\n\nI am curious, what is then an example where the quote from [1]\napplies? Really just if I would be doing ORDER BY id, revision DESC on\nthe whole table? Because one future query I am working on is where I\nselect all rows but for only the latest (highest) revision. Curious if\nthat will have an effect there.\n\n\nMitar\n\n[1] https://www.postgresql.org/docs/16/indexes-ordering.html\n\n-- \nhttps://mitar.tnode.com/\nhttps://twitter.com/mitar_m\nhttps://noc.social/@mitar\n\n\n", "msg_date": "Fri, 29 Mar 2024 22:50:42 +0100", "msg_from": "Mitar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allowing DESC for a PRIMARY KEY column" } ]
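To make the quoted paragraph concrete with the thread's own schema: one case where the mixed-order definition matters is exactly the follow-up query mentioned at the end, fetching the latest revision for every id in a single pass. A sketch, with illustrative names (the table is renamed here since values is a reserved word):

CREATE TABLE vals (
    id int NOT NULL,
    revision int NOT NULL,
    data jsonb NOT NULL
);
CREATE UNIQUE INDEX vals_id_rev ON vals (id, revision DESC);

-- Latest revision per id; the required ordering is (id ASC, revision DESC).
-- A forward scan of the mixed-order index supplies that ordering directly,
-- while a plain (id, revision) index matches it in neither scan direction.
SELECT DISTINCT ON (id) id, revision, data
FROM vals
ORDER BY id, revision DESC;

Whether the planner actually chooses an index scan here still depends on costs and statistics as usual; the point is only that the mixed-order index can provide the (id ASC, revision DESC) sort order. If the output order of the ids does not matter, the same rows can also be obtained from the plain primary-key index by writing ORDER BY id DESC, revision DESC and letting it scan backward.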
[ { "msg_contents": "You might have seen reports today about a very complex exploit added to\nrecent versions of liblzma. Fortunately, it was only enabled two months\nago and has not been pushed to most stable operating systems like Debian\nand Ubuntu. The original detection report is:\n\n https://www.openwall.com/lists/oss-security/2024/03/29/4\n\nAnd this ycombinator discussion has details:\n\n https://news.ycombinator.com/item?id=39865810\n\n It looks like an earlier commit with a binary blob \"test data\"\n contained the bulk of the backdoor, then the configure script\n enabled it, and then later commits patched up valgrind errors\n caused by the backdoor. See the commit links in the \"Compromised\n Repository\" section.\n\nand I think the configure came in through the autoconf output file\n'configure', not configure.ac:\n\n This is my main take-away from this. We must stop using upstream\n configure and other \"binary\" scripts. Delete them all and run\n \"autoreconf -fi\" to recreate them. (Debian already does something\n like this I think.)\n\nNow, we don't take pull requests, and all our committers are known\nindividuals, but this might have cautionary lessons for us.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 29 Mar 2024 18:37:24 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Security lessons from liblzma" }, { "msg_contents": "On Sat, Mar 30, 2024 at 11:37 AM Bruce Momjian <[email protected]> wrote:\n> You might have seen reports today about a very complex exploit added to\n> recent versions of liblzma. Fortunately, it was only enabled two months\n> ago and has not been pushed to most stable operating systems like Debian\n> and Ubuntu. The original detection report is:\n>\n> https://www.openwall.com/lists/oss-security/2024/03/29/4\n\nIncredible work from Andres. The attackers made a serious strategic\nmistake: they made PostgreSQL slightly slower.\n\n\n", "msg_date": "Sat, 30 Mar 2024 11:48:35 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn 2024-03-29 18:37:24 -0400, Bruce Momjian wrote:\n> You might have seen reports today about a very complex exploit added to\n> recent versions of liblzma. Fortunately, it was only enabled two months\n> ago and has not been pushed to most stable operating systems like Debian\n> and Ubuntu. The original detection report is:\n> \n> https://www.openwall.com/lists/oss-security/2024/03/29/4\n> \n> And this ycombinator discussion has details:\n> \n> https://news.ycombinator.com/item?id=39865810\n> \n> It looks like an earlier commit with a binary blob \"test data\"\n> contained the bulk of the backdoor, then the configure script\n> enabled it, and then later commits patched up valgrind errors\n> caused by the backdoor. See the commit links in the \"Compromised\n> Repository\" section.\n> \n> and I think the configure came in through the autoconf output file\n> 'configure', not configure.ac:\n>\n> This is my main take-away from this. We must stop using upstream\n> configure and other \"binary\" scripts. Delete them all and run\n> \"autoreconf -fi\" to recreate them. (Debian already does something\n> like this I think.)\n\nI don't think that's a useful lesson here, actually. 
In this case, if you ran\nautoreconf -fi in a released tarball, it'd just recreate precisely what\nthe tarball already contained, including the exploit.\n\nI think the issue in this case rather was that the tarball contains files that\nare not in the release - a lot of them. The attackers injected the\n'activating' part of the backdoor into the release tarball, while it was not\npresent in the git tree. They did that because they knew that distributions\noften build from release tarballs.\n\nIf the release pre-backdoor release tarball had been identical to the git\nrepository, this would likely have been noticed by packagers - but it was\nnormal for there to be a lot of new files.\n\nWe traditionally had also a lot of generated files in the tarball that weren't\nin our git tree - but we removed a lot of that a few months ago, when we\nstopped including bison/flex/other generated code.\n\nWe still include generated docs, but that's much harder to exploit at\nscale. However, they still make it harder to verify that the release is\nexactly the same as the git tree.\n\n\n> Now, we don't take pull requests, and all our committers are known\n> individuals, but this might have cautionary lessons for us.\n\nI am doubtful that every committer would find something sneaky hidden in\ne.g. one of the test changes in a large commit. It's not too hard to hide\nsomething sneaky. I comparison to that hiding something in configure.ac seems\nless likely to succeed IMO, that imo tends to be more scrutinized. And hiding\njust in configure directly wouldn't get you far, it'd just get removed when\nthe committer or some other committer at a later time, regenerates configure.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Mar 2024 15:59:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 29 Mar 2024, at 23:59, Andres Freund <[email protected]> wrote:\n> On 2024-03-29 18:37:24 -0400, Bruce Momjian wrote:\n\n>> Now, we don't take pull requests, and all our committers are known\n>> individuals, but this might have cautionary lessons for us.\n> \n> I am doubtful that every committer would find something sneaky hidden in\n> e.g. one of the test changes in a large commit. It's not too hard to hide\n> something sneaky.\n\nOne take-away for me is how important it is to ship recipes for regenerating\nany testdata which is included in generated/compiled/binary format. Kind of\nhow we in our tree ship the config for test TLS certificates and keys which can\nbe manually inspected, and used to rebuild the testdata (although the risk for\ninjections in this particular case seems low). 
Bad things can still be\ninjected, but formats which allow manual review at least goes some way towards\nlowering risk.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 30 Mar 2024 00:14:11 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> One take-away for me is how important it is to ship recipes for regenerating\n> any testdata which is included in generated/compiled/binary format.\n\nIMO that's a hard, no-exceptions requirement just for\nmaintainability's sake, never mind security risks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Mar 2024 19:20:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Fri, Mar 29, 2024 at 7:00 PM Andres Freund <[email protected]> wrote:\n> I am doubtful that every committer would find something sneaky hidden in\n> e.g. one of the test changes in a large commit. It's not too hard to hide\n> something sneaky. I comparison to that hiding something in configure.ac seems\n> less likely to succeed IMO, that imo tends to be more scrutinized. And hiding\n> just in configure directly wouldn't get you far, it'd just get removed when\n> the committer or some other committer at a later time, regenerates configure.\n\nI agree with this. If I were trying to get away with a malicious\ncommit, I'd look for files that other people would be unlikely to\nexamine closely, or would have difficulty examining closely. Test data\nor test scripts seem like great possibilities. And I also would like\nit to be part of some relatively large commit that is annoying to read\nthrough visually. We don't have a lot of binary format files in the\ntree, which is good, but there's probably some things like Unicode\ntables and ECPG expected output files that very, very few people ever\nactually examine. If we had a file in the tree that looked based on\nthe name like an expected output file for a test, but there was no\ncorresponding test, how many of us would notice that? How many of us\nwould scrutinize it? Imagine hiding something bad in the middle of\nthat file somewhere.\n\nMaybe we need some kind of tool that scores files for risk. Longer\nfiles are riskier. Binary files are riskier, as are text files that\nare something other than plain English/C code/SGML. Files that haven't\nchanged in a long time are not risky, but files with few recent\nchanges are riskier than files with many recent changes, especially if\nonly 1 or 2 committers made all of those changes, and especially if\nthose commits also touched a lot of other files. Of course, if we had\na tool like this that were public, I suppose anyone targeting PG would\nlook at the tool and try to find ways around its heuristics. But maybe\nwe should have something and not disclose the whole algorithm\npublicly, or even if we do disclose it all, having something is\nprobably better than having nothing. It might force a hypothetical bad\nactor to do things that would be more likely to be noticed by the\nhumans.\n\nWe might also want to move toward signing commits and tags. One of the\nmeson maintainers was recommending that on-list not long ago.\n\nWe should think about weaknesses that might occur during the packaging\nprocess, too. 
If someone who alleges that their packaging PG is really\npackaging PG w/badstuff123.patch, how would we catch that?\n\nAn awful lot of what we do operates on the principle that we know the\npeople who are involved and trust them, and I'm glad we do trust them,\nbut the world is full of people who trusted somebody too much and\nregretted it afterwards. The fact that we have many committers rather\nthan a single maintainer probably reduces risk at least as far as the\nsource repository is concerned, because there are more people paying\nattention to potentially notice something that isn't as it should be.\nBut it's also more potential points of compromise, and a lot of things\noutside of that repository are not easy to audit. I can't for example\nverify what the infrastructure team is doing, or what Tom does when he\nbuilds the release tarballs. It seems like a stretch to imagine\nsomeone taking over Tom's online identity while simultaneously\nrendering him incommunicado ... but at the same time, the people\nbehind this attack obviously put a lot of work into it and had a lot\nof resources available to craft the attack. We shouldn't make the\nmistake of assuming that bad things can't happen to us.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 30 Mar 2024 16:50:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn 2024-03-30 16:50:26 -0400, Robert Haas wrote:\n> We might also want to move toward signing commits and tags. One of the\n> meson maintainers was recommending that on-list not long ago.\n\nI don't know how valuable the commit signing really is, but I strongly agree\nthat we should sign both tags and tarballs.\n\n\n> We should think about weaknesses that might occur during the packaging\n> process, too. If someone who alleges that their packaging PG is really\n> packaging PG w/badstuff123.patch, how would we catch that?\n\nI don't think we realistically can. The environment, configure and compiler\noptions will influence things too much to do any sort of automatic\nverification.\n\nBut that shouldn't stop us from ensuring that at least the packages\ndistributed via *.postgresql.org are reproducibly built.\n\nAnother good avenue for introducing an attack would be to propose some distro\nspecific changes to the packaging teams - there's a lot fewer eyes there. I\nthink it might be worth working with some of our packagers to integrate more\nof their changes into our tree.\n\n\n> I can't for example verify what the infrastructure team is doing, or what\n> Tom does when he builds the release tarballs.\n\nThis one however, I think we could improve upon. Making sure the tarball\ngeneration is completely binary reproducible and providing means of checking\nthat would surely help. This will be a lot easier if we, as dicussed\nelsewhere, I believe, split out the generated docs into a separately\ndownloadable archive. 
We already stopped including other generated files\nrecently.\n\n\n> We shouldn't make the mistake of assuming that bad things can't happen to\n> us.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Mar 2024 14:12:44 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Sat, Mar 30, 2024 at 04:50:26PM -0400, Robert Haas wrote:\n> On Fri, Mar 29, 2024 at 7:00 PM Andres Freund <[email protected]> wrote:\n> > I am doubtful that every committer would find something sneaky hidden in\n> > e.g. one of the test changes in a large commit. It's not too hard to hide\n> > something sneaky. I comparison to that hiding something in configure.ac seems\n> > less likely to succeed IMO, that imo tends to be more scrutinized. And hiding\n> > just in configure directly wouldn't get you far, it'd just get removed when\n> > the committer or some other committer at a later time, regenerates configure.\n> \n> I agree with this. If I were trying to get away with a malicious\n> commit, I'd look for files that other people would be unlikely to\n> examine closely, or would have difficulty examining closely. Test data\n> or test scripts seem like great possibilities. And I also would like\n> it to be part of some relatively large commit that is annoying to read\n> through visually. We don't have a lot of binary format files in the\n> tree, which is good, but there's probably some things like Unicode\n> tables and ECPG expected output files that very, very few people ever\n> actually examine. If we had a file in the tree that looked based on\n> the name like an expected output file for a test, but there was no\n> corresponding test, how many of us would notice that? How many of us\n> would scrutinize it? Imagine hiding something bad in the middle of\n> that file somewhere.\n\nSo, in this case, the hooks were in 'configure', but not configure.ac,\nand the exploit was in a test file which was in the tarball but _not_ in\nthe git tree. So, they used the obfuscation of 'configure's syntax, and\nthe lack of git oversight by not putting the test files in the git tree.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 30 Mar 2024 19:22:28 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On 3/30/24 17:12, Andres Freund wrote:\n> Hi,\n> \n> On 2024-03-30 16:50:26 -0400, Robert Haas wrote:\n>> We might also want to move toward signing commits and tags. One of the\n>> meson maintainers was recommending that on-list not long ago.\n> \n> I don't know how valuable the commit signing really is, but I strongly agree\n> that we should sign both tags and tarballs.\n\n+1\n\n\n>> We should think about weaknesses that might occur during the packaging\n>> process, too. If someone who alleges that their packaging PG is really\n>> packaging PG w/badstuff123.patch, how would we catch that?\n> \n> I don't think we realistically can. 
The environment, configure and compiler\n> options will influence things too much to do any sort of automatic\n> verification.\n> \n> But that shouldn't stop us from ensuring that at least the packages\n> distributed via *.postgresql.org are reproducibly built.\n> \n> Another good avenue for introducing an attack would be to propose some distro\n> specific changes to the packaging teams - there's a lot fewer eyes there. I\n> think it might be worth working with some of our packagers to integrate more\n> of their changes into our tree.\n\nHuge +1 to that. I have thought many times, and even said to Devrim, a \nhuge number of people trust him to not be evil.\n\nVirtually every RPM source, including ours, contains out of tree patches \nthat get applied on top of the release tarball. At least for the PGDG \npackages, it would be nice to integrate them into our git repo as build \noptions or whatever so that the packages could be built without any \npatches applied to it. Add a tarball that is signed and traceable back \nto the git tag, and we would be in a much better place than we are now.\n\n>> I can't for example verify what the infrastructure team is doing,\n\nNot sure what you feel like you should be able to follow -- anything \nspecific?\n\n>> or what Tom does when he builds the release tarballs.\n\nTom follows this, at least last time I checked:\n\nhttps://wiki.postgresql.org/wiki/Release_process\n\n> This one however, I think we could improve upon. Making sure the tarball\n> generation is completely binary reproducible and providing means of checking\n> that would surely help. This will be a lot easier if we, as dicussed\n> elsewhere, I believe, split out the generated docs into a separately\n> downloadable archive. We already stopped including other generated files\n> recently.\n\nagain, big +1\n\n>> We shouldn't make the mistake of assuming that bad things can't happen to\n>> us.\n> \n> +1\n\nand again with the +1 ;-)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 30 Mar 2024 19:54:00 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On 3/30/24 19:54, Joe Conway wrote:\n>> On 2024-03-30 16:50:26 -0400, Robert Haas wrote:\n>>> or what Tom does when he builds the release tarballs.\n> \n> Tom follows this, at least last time I checked:\n> \n> https://wiki.postgresql.org/wiki/Release_process\n\nReading through that, I wonder if this part is true anymore:\n\n In principle this could be done anywhere, but again there's a concern\n about reproducibility, since the results may vary depending on\n installed bison, flex, docbook, etc versions. Current practice is to\n always do this as pgsql on borka.postgresql.org, so it can only be\n done by people who have a login there. In detail:\n\nMaybe if we split out the docs from the release tarball, we could also \nadd the script (mk-release) to our git repo?\n\nSome other aspects of that wiki page look out of date too. Perhaps it \nneeds an overall update? 
Maybe Tom and/or Magnus could weigh in here.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 30 Mar 2024 20:05:59 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> On 3/30/24 19:54, Joe Conway wrote:\n>> Tom follows this, at least last time I checked:\n>> https://wiki.postgresql.org/wiki/Release_process\n\n> Reading through that, I wonder if this part is true anymore:\n\n> In principle this could be done anywhere, but again there's a concern\n> about reproducibility, since the results may vary depending on\n> installed bison, flex, docbook, etc versions. Current practice is to\n> always do this as pgsql on borka.postgresql.org, so it can only be\n> done by people who have a login there. In detail:\n\nThe reproducibility argument would still apply to the docs (in\nwhatever form we're packaging them), but hopefully not to the\nbasic source tarball.\n\n> Maybe if we split out the docs from the release tarball, we could also \n> add the script (mk-release) to our git repo?\n\nIf memory serves, the critical steps are already in our source tree,\nas \"make dist\" (but I'm not sure how that's going to work if we want\nto get away from using autoconf/make). It's not clear to me how much\nof the rest of mk-release is relevant to people who might be trying to\ngenerate things elsewhere. I'd like mk-release to continue to work\nfor older branches, too, so it's going to be some sort of hybrid mess\nfor a few years here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Mar 2024 20:15:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Sat, Mar 30, 2024 at 07:54:00PM -0400, Joe Conway wrote:\n> Virtually every RPM source, including ours, contains out of tree patches\n> that get applied on top of the release tarball. At least for the PGDG\n> packages, it would be nice to integrate them into our git repo as build\n> options or whatever so that the packages could be built without any patches\n> applied to it. Add a tarball that is signed and traceable back to the git\n> tag, and we would be in a much better place than we are now.\n\nHow would someone access the out-of-tree patches? I think Debian\nincludes the patches in its source tarball.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 30 Mar 2024 21:52:47 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Sat, Mar 30, 2024 at 09:52:47PM -0400, Bruce Momjian wrote:\n> On Sat, Mar 30, 2024 at 07:54:00PM -0400, Joe Conway wrote:\n> > Virtually every RPM source, including ours, contains out of tree patches\n> > that get applied on top of the release tarball. At least for the PGDG\n> > packages, it would be nice to integrate them into our git repo as build\n> > options or whatever so that the packages could be built without any patches\n> > applied to it. Add a tarball that is signed and traceable back to the git\n> > tag, and we would be in a much better place than we are now.\n> \n> How would someone access the out-of-tree patches? 
I think Debian\n> includes the patches in its source tarball.\n\nIf you ask where they are maintained, the answer is here:\n\nhttps://salsa.debian.org/postgresql/postgresql/-/tree/17/debian/patches?ref_type=heads\n\nthe other major versions have their own branch.\n\n\nMichael\n\n\n", "msg_date": "Sun, 31 Mar 2024 12:18:29 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On 3/30/24 21:52, Bruce Momjian wrote:\n> On Sat, Mar 30, 2024 at 07:54:00PM -0400, Joe Conway wrote:\n>> Virtually every RPM source, including ours, contains out of tree patches\n>> that get applied on top of the release tarball. At least for the PGDG\n>> packages, it would be nice to integrate them into our git repo as build\n>> options or whatever so that the packages could be built without any patches\n>> applied to it. Add a tarball that is signed and traceable back to the git\n>> tag, and we would be in a much better place than we are now.\n> \n> How would someone access the out-of-tree patches? I think Debian\n> includes the patches in its source tarball.\n\nI am saying maybe those patches should be eliminated in favor of our \ntree including build options that would produce the same result.\n\nFor example, these patches are applied to our release tarball files when \nthe RPM is being built for pg16 on RHEL 9:\n\n-----\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16-rpm-pgsql.patch;h=d9b6d12b7517407ac81352fa325ec91b05587641;hb=HEAD\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16-var-run-socket.patch;h=f2528efaf8f4681754b20283463eff3e14eedd39;hb=HEAD\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16-conf.patch;h=da28ed793232316dd81fdcbbe59a6479b054a364;hb=HEAD\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/main/non-common/postgresql-16/main/postgresql-16-perl-rpath.patch;h=748c42f0ec2c9730af3143e90e5b205c136f40d9;hb=HEAD\n-----\n\nNothing too crazy, but wouldn't it be better if no patches were required \nat all?\n\nIdeally we should have reproducible builds so that starting with our \ntarball (which is traceable back to the git release tag) one can easily \nobtain the same binary as what the RPMs/DEBs deliver.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 31 Mar 2024 08:15:59 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn Sun, 2024-03-31 at 08:15 -0400, Joe Conway wrote:\n> \n> I am saying maybe those patches should be eliminated in favor of our \n> tree including build options that would produce the same result.\n\nWorks for me, as a long as I can commit them and upcoming potential\npatches to PostgreSQL git repo.\n\nOTOH, we also carry non-patches like README files, systemd unit files,\npam files, setup script, etc., which are very RPM specific.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Sun, 31 Mar 2024 16:40:20 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" 
}, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I am saying maybe those patches should be eliminated in favor of our \n> tree including build options that would produce the same result.\n\nI don't really see how that can be expected to work sanely.\nIt turns the responsibility for platform-specific build issues\non its head, and it doesn't work at all for issues discovered\nafter we make a release. The normal understanding of how you\ncan vet a distro's package is that you look at the package\ncontents (the SRPM in Red Hat world and whatever the equivalent\nconcept is elsewhere), check that the contained tarball\nmatches upstream and that the patches and build instructions\nlook sane, and then build it locally and check for a match to\nthe distro's binary package. Even if we could overcome the\nobstacles to putting the patch files into the upstream tarball,\nwe're surely not going to include the build instructions, so\nwe'd not have moved the needle very far in terms of whether the\npackager could do something malicious.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Mar 2024 11:49:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn Sat, 2024-03-30 at 21:52 -0400, Bruce Momjian wrote:\n> How would someone access the out-of-tree patches? \n\nHere are the v17 patches:\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=tree;f=rpm/redhat/main/non-common/postgresql-17/main\n\nYou can replace -17 with -16 (and etc) for the other major releases.\n\nPlease note that both Debian folks and me build about 300 other packages\nto support the ecosystem. Just saying.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Sun, 31 Mar 2024 16:55:26 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On 3/31/24 11:49, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> I am saying maybe those patches should be eliminated in favor of our \n>> tree including build options that would produce the same result.\n> \n> I don't really see how that can be expected to work sanely.\n> It turns the responsibility for platform-specific build issues\n> on its head, and it doesn't work at all for issues discovered\n> after we make a release. The normal understanding of how you\n> can vet a distro's package is that you look at the package\n> contents (the SRPM in Red Hat world and whatever the equivalent\n> concept is elsewhere), check that the contained tarball\n> matches upstream and that the patches and build instructions\n> look sane, and then build it locally and check for a match to\n> the distro's binary package. Even if we could overcome the\n> obstacles to putting the patch files into the upstream tarball,\n> we're surely not going to include the build instructions, so\n> we'd not have moved the needle very far in terms of whether the\n> packager could do something malicious.\n\nTrue enough I guess.\n\nBut it has always bothered me how many patches get applied to the \nupstream tarballs by the OS package builders. Some of them, e.g. glibc \non RHEL 7, include more than 1000 patches that you would have to \nmanually vet if you cared enough and had the skills. Last time I looked \nat the openssl package sources it was similar in volume and complexity. 
\nThey might as well be called forks if everyone were being honest about it...\n\nI know our PGDG packages are no big deal compared to that, fortunately.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 31 Mar 2024 13:05:40 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 31, 2024 at 01:05:40PM -0400, Joe Conway wrote:\n> But it has always bothered me how many patches get applied to the upstream\n> tarballs by the OS package builders. Some of them, e.g. glibc on RHEL 7,\n> include more than 1000 patches that you would have to manually vet if you\n> cared enough and had the skills. Last time I looked at the openssl package\n> sources it was similar in volume and complexity. They might as well be\n> called forks if everyone were being honest about it...\n\nI think this more an artifact of how RHEL development works, i.e. trying\nto ship the same major version of glibc for 10 years, but still fix lots\nof bugs and possibly some performance improvements your larger customers\nask for. So I guess a lot of those 1000 patches are just cherry-picks /\nbackports of upstream commits from newer releases.\n\nI guess it would be useful to maybe have another look at the patches\nthat are being applied for apt/yum.postgresql.org for the 18 release\ncycle, but I do not think those are a security problem. Not sure about\nRPM builds, but at least in theory the APT builds should be\nreproducible.\n\nWhat would be a significant gain in security/trust was an easy\nservice/recipe on how to verify the reproducibility (i) by independently\nbuilding packages (and maybe the more popular extensions) and comparing\nthem to the {apt,yum}.postgresql.org repository packages (ii) by being\nable to build the release tarballs reproducibly.\n\n\nMichael\n\n\n", "msg_date": "Sun, 31 Mar 2024 21:09:51 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Michael Banck <[email protected]> writes:\n> On Sun, Mar 31, 2024 at 01:05:40PM -0400, Joe Conway wrote:\n>> But it has always bothered me how many patches get applied to the upstream\n>> tarballs by the OS package builders.\n\n> I think this more an artifact of how RHEL development works, i.e. trying\n> to ship the same major version of glibc for 10 years, but still fix lots\n> of bugs and possibly some performance improvements your larger customers\n> ask for. So I guess a lot of those 1000 patches are just cherry-picks /\n> backports of upstream commits from newer releases.\n\nYeah. Also, precisely because they keep supporting versions that are\nout-of-support according to upstream, the idea that all the patches\ncan be moved upstream isn't going to work for them, and they're\nunlikely to be excited about partial solutions.\n\nThe bigger problem though is: if we do this, are we going to take\npatches that we fundamentally don't agree with? For instance,\nif a packager chooses to rip out the don't-run-server-as-root check.\n(Pretty sure I've heard of people doing that.) That would look like\nblessing things we don't think are good ideas, and it would inevitably\nlead to long arguments with packagers about why-dont-you-do-this-some-\nother-way. 
I'm not excited about that prospect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Mar 2024 15:27:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nOn 2024-03-31 12:18:29 +0200, Michael Banck wrote:\n> If you ask where they are maintained, the answer is here:\n>\n> https://salsa.debian.org/postgresql/postgresql/-/tree/17/debian/patches?ref_type=heads\n>\n> the other major versions have their own branch.\n\nLuckily these are all quite small, leaving little space to hide stuff. I'd\nstill like to get rid of at least some of them.\n\nI've previously proposed a patch to make pkglibdir configurable, I think we\nshould just go for that.\n\nFor the various defines, ISTM we should just make them #ifndef guarded, then\nthey could be overridden by defining them at configure time. Some of them,\nlike DEFAULT_PGSOCKET_DIR seem to be overridden by just about every\ndistro. And others would be nice to easily override anyway, I e.g. dislike the\ndefault DEFAULT_PAGER value.\n\n\nOn 2024-03-31 16:55:26 +0100, Devrim G�nd�z wrote:\n> Here are the v17 patches:\n>\n> https://git.postgresql.org/gitweb/?p=pgrpms.git;a=tree;f=rpm/redhat/main/non-common/postgresql-17/main\n\nA bit bigger/more patches, but still not too bad.\n\npostgresql-17-conf.patch\n\nUncomments a few values to their default, that's a bit odd.\n\n\npostgresql-17-ecpg_config.h:\npostgresql-17-pg_config.h:\n\nHm, wonder if we should make this easier somehow. Perhaps we ought to support\ninstalling the various *config.h headers into a different directory than the\narchitecture independent stuff? At least on debian based systems it seems we\nought to support installing pg_config.h etc into /usr/include/<tripplet> or\nsomething along those lines.\n\n\npostgresql-17-rpm-pgsql.patch:\n\nWe should probably make this stuff properly configurable. The current logic\nwith inferring whether to add /postgresql is just weird. Perhaps a configure\noption that defaults to the current logic when set to an empty string but can\nbe overridden?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 31 Mar 2024 14:12:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "I looked through the repositories of 19 linux distros [1] to see what \nkind of patches are applied often. Many of them share the same package \nmanagers / repositories and thus the same patches. I made sure to look \nat some smaller, \"other\" distros as well, to see what kind of problems \nappear outside the mainstream distros.\n\nAndres Freund:\n> I've previously proposed a patch to make pkglibdir configurable, I think we\n> should just go for that.\n\n+1. Other paths, which some distros need to configure are pkgincludedir \nand pgxs (independently of pkglibdir).\n\nAlso a configurable directoy to look up extensions, possibly even to be \nchanged at run-time like [2]. The patch says this:\n\n> This directory is prepended to paths when loading extensions (control and SQL files), and to the '$libdir' directive when loading modules that back functions. The location is made configurable to allow build-time testing of extensions that do not have been installed to their proper location yet.\n\nThis seems like a great thing to have. 
This might also be relevant in \nlight of recent discussions in the ecosystem around extension management.\n\nAll the path-related issues have in common, that while it's easy to move \nfiles around to their proper locations later, they all need to adjust \npg_config's output.\n\n\nAndres Freund:\n> For the various defines, ISTM we should just make them #ifndef guarded, then\n> they could be overridden by defining them at configure time. Some of them,\n> like DEFAULT_PGSOCKET_DIR seem to be overridden by just about every\n> distro. And others would be nice to easily override anyway, I e.g. dislike the\n> default DEFAULT_PAGER value.\n\nDEFAULT_PAGER is also overriden by a few distros. DEFAULT_EDITOR by one \nas well. As you said DEFAULT_PGSOCKET_DIR in **a lot** of them.\n\n\nAndres Freund:\n> postgresql-17-rpm-pgsql.patch:\n> \n> We should probably make this stuff properly configurable. The current logic\n> with inferring whether to add /postgresql is just weird. Perhaps a configure\n> option that defaults to the current logic when set to an empty string but can\n> be overridden?\n\n+1 for that option to force the suffix, no matter whether postgres/pgsql \nare in the path already.\n\n\nSome other commonly patched issues are:\n- Building only libpq, only libecpg - or disabling both and building \nonly the server code. This comes up often, it's not supported nicely in \nour build system, yet. I think meson already has some build targets for \nparts of that, but it's very hard / impossible to then install only this \nsubset of the build. It would be great to be able to build and install \nonly the frontend code (including only the frontend headers) or only the \nbackend code (including headers) etc.\n- Related to the above is pg_config and how to deal with it when \ninstalling separate client and server packages, in different locations, \ntoo. Some distros provide a second version of pg_config as pg_server_config.\n\nThose two issues and the path-related issues above make it harder than \nit should be to provide separate packages for each major version, which \ncan be installed at the same time (versioned prefixes, multiple server \npackages, but maybe only a single libpq package etc.).\n\n\nSome small patches that might not be widespread, but could possibly \nstill be upstreamed:\n- jit-s390x [3] (Alpine, Debian, Fedora, openSUSE)\n- pg_config --major-version option for extension builds [4] (Alpine)\n- Some fixes for man pages [5] (AlmaLinux, CentOS, Fedora)\n\n\nTLDR: I think it should be possible to make the build system more \nflexible in some areas without introducing distro specific things in \ncore. 
This should eliminate the need for many of the same patches across \nthe board for a lot of distros.\n\nBest,\n\nWolfgang\n\n[1]: ALT Linux, Adélie Linux, AlmaLinux, Alpine Linux, Arch Linux, \nCentOS, Crux, Debian, Fedora, Gentoo, GoboLinux, Guix, Mandriva, NixOS, \nOpenWRT, Rocky Linux, Ubuntu, Void Linux, openSUSE\n\n[2]: \nhttps://salsa.debian.org/postgresql/postgresql/-/blob/17/debian/patches/extension_destdir?ref_type=heads\n\n[3]: \nhttps://salsa.debian.org/postgresql/postgresql/-/blob/17/debian/patches/jit-s390x?ref_type=heads\n\n[4]: \nhttps://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/postgresql16/pg_config-add-major-version.patch?ref_type=heads\n\n[5]: \nhttps://gitlab.com/redhat/centos-stream/rpms/postgresql/-/blob/c9s/postgresql-man.patch\n\n\n", "msg_date": "Mon, 1 Apr 2024 12:55:37 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Apr 1, 2024, at 06:55, [email protected] wrote:\n\n> Also a configurable directoy to look up extensions, possibly even to be changed at run-time like [2]. The patch says this:\n> \n>> This directory is prepended to paths when loading extensions (control and SQL files), and to the '$libdir' directive when loading modules that back functions. The location is made configurable to allow build-time testing of extensions that do not have been installed to their proper location yet.\n> \n> This seems like a great thing to have. This might also be relevant in light of recent discussions in the ecosystem around extension management.\n> \n> All the path-related issues have in common, that while it's easy to move files around to their proper locations later, they all need to adjust pg_config's output.\n\nFunny timing, I was planning to resurrect this old patch[1] and propose that patch this week. One of motivators is the increasing use of Docker images in Kubernetes to run Postgres, where there’s a desire to keep the core service and extensions immutable, and to have a second directory mounted to a persistent volume into which other extensions can be installed and preserved independently of the Docker image.\n\nThe current approach involves symlinking shenanigans[2] that complicate things pretty substantially, making it more difficult to administer. A second directory fit for purpose would be far better.\n\nThere are some other motivators, so I’ll do some additional diligence and start a separate thread (or reply to the original[3]).\n\nBest,\n\nDavid\n\n[1] https://commitfest.postgresql.org/5/170/\n[2] https://speakerdeck.com/ongres/postgres-extensions-in-kubernetes?slide=14\n[3] https://www.postgresql.org/message-id/flat/51AE0845.8010600%40ocharles.org.uk\n\n\n\n", "msg_date": "Mon, 1 Apr 2024 10:15:44 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Sun, Mar 31, 2024 at 02:12:57PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-03-31 12:18:29 +0200, Michael Banck wrote:\n> > If you ask where they are maintained, the answer is here:\n> >\n> > https://salsa.debian.org/postgresql/postgresql/-/tree/17/debian/patches?ref_type=heads\n> >\n> > the other major versions have their own branch.\n> \n> Luckily these are all quite small, leaving little space to hide stuff. 
I'd\n> still like to get rid of at least some of them.\n> \n> I've previously proposed a patch to make pkglibdir configurable, I think we\n> should just go for that.\n> \n> For the various defines, ISTM we should just make them #ifndef guarded, then\n> they could be overridden by defining them at configure time. Some of them,\n> like DEFAULT_PGSOCKET_DIR seem to be overridden by just about every\n> distro. And others would be nice to easily override anyway, I e.g. dislike the\n> default DEFAULT_PAGER value.\n\nI realize we can move some changes into our code, but packagers are\nstill going to need a way to do immediate adjustments to match their OS\nin time frames that don't match the Postgres release schedule.\n\nI was more asking if users have access to patches so they could recreate\nthe build by using the Postgres git tree and supplied OS-specific\npatches.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 1 Apr 2024 13:19:06 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I was more asking if users have access to patches so they could recreate\n> the build by using the Postgres git tree and supplied OS-specific\n> patches.\n\nAFAIK, every open-source distro makes all the pieces needed to\nrebuild their packages available to users. It wouldn't be much\nof an open-source situation otherwise. You do have to learn\ntheir package build process.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 15:17:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Fri, Mar 29, 2024 at 06:37:24PM -0400, Bruce Momjian wrote:\n> You might have seen reports today about a very complex exploit added to\n> recent versions of liblzma. Fortunately, it was only enabled two months\n> ago and has not been pushed to most stable operating systems like Debian\n> and Ubuntu. The original detection report is:\n> \n> https://www.openwall.com/lists/oss-security/2024/03/29/4\n\nI was watching this video about the exploit:\n\n\thttps://www.youtube.com/watch?v=bS9em7Bg0iU\n\nand at 2:29, they mention \"hero software developer\", our own Andres\nFreund as the person who discovered the exploit. I noticed the author's\nname at the openwall email link above, but I assumed it was someone else\nwith the same name. They mentioned it was found while researching\nPostgres performance, and then I noticed the email address matched!\n\nI thought the analogy he uses at the end of the video is very clear.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 1 Apr 2024 16:58:07 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Mon, Apr 1, 2024 at 03:17:55PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I was more asking if users have access to patches so they could recreate\n> > the build by using the Postgres git tree and supplied OS-specific\n> > patches.\n> \n> AFAIK, every open-source distro makes all the pieces needed to\n> rebuild their packages available to users. 
It wouldn't be much\n> of an open-source situation otherwise. You do have to learn\n> their package build process.\n\nI wasn't clear if all the projects provide a source tree that can be\nverified against the project's source tree, and then independent\npatches, or if the patches were integrated and therefore harder to\nverify against the project source tree.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 1 Apr 2024 16:59:51 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Mon, Apr 1, 2024 at 03:17:55PM -0400, Tom Lane wrote:\n>> AFAIK, every open-source distro makes all the pieces needed to\n>> rebuild their packages available to users. It wouldn't be much\n>> of an open-source situation otherwise. You do have to learn\n>> their package build process.\n\n> I wasn't clear if all the projects provide a source tree that can be\n> verified against the project's source tree, and then independent\n> patches, or if the patches were integrated and therefore harder to\n> verify against the project source tree.\n\nIn the systems I'm familiar with, an SRPM-or-equivalent includes the\npristine upstream tarball and then some patch files to apply to it.\nThe patch files have to be maintained anyway, and if you don't ship\nthem then you're not shipping \"source\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 17:03:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "\nOn 2024-03-31 Su 17:12, Andres Freund wrote:\n> Hi,\n>\n> On 2024-03-31 12:18:29 +0200, Michael Banck wrote:\n>> If you ask where they are maintained, the answer is here:\n>>\n>> https://salsa.debian.org/postgresql/postgresql/-/tree/17/debian/patches?ref_type=heads\n>>\n>> the other major versions have their own branch.\n> Luckily these are all quite small, leaving little space to hide stuff. I'd\n> still like to get rid of at least some of them.\n>\n> I've previously proposed a patch to make pkglibdir configurable, I think we\n> should just go for that.\n>\n> For the various defines, ISTM we should just make them #ifndef guarded, then\n> they could be overridden by defining them at configure time. Some of them,\n> like DEFAULT_PGSOCKET_DIR seem to be overridden by just about every\n> distro. And others would be nice to easily override anyway, I e.g. dislike the\n> default DEFAULT_PAGER value.\n>\n\n+1 to this proposal.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Apr 2024 18:06:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\nAs most will know by now, the way xz debacle was able to make sshd vulnerable\nwas through a dependency from sshd to libsystemd and then from libsystemd to\nliblzma. One lesson from this is that unnecessary dependencies can still\nincrease risk.\n\nIt's worth noting that we have an optional dependency on libsystemd as well.\n\nOpenssh has now integrated [1] a patch to remove the dependency on libsystemd\nfor triggering service manager readyness notifications, by inlining the\nnecessary function. 
That's not hard, the protocol is pretty simple.\n\nI suspect we should do the same. We're not even close to being a target as\nattractive as openssh, but still, it seems unnecessary.\n\nIntro into the protocol is at [2], with real content and outline of the\nrelevant code at [3].\n\n\nAn argument could be made to instead just remove support, but I think it's\nquite valuable to have intra service dependencies that can rely on the server\nactually having started up.\n\nGreetings,\n\nAndres Freund\n\n[1] https://bugzilla.mindrot.org/show_bug.cgi?id=2641\n[2] https://www.freedesktop.org/software/systemd/man/devel/systemd.html#Readiness%20Protocol\n[3] https://www.freedesktop.org/software/systemd/man/devel/sd_notify.html#Notes\n\n\n", "msg_date": "Wed, 3 Apr 2024 10:57:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "On 30.03.24 00:14, Daniel Gustafsson wrote:\n> One take-away for me is how important it is to ship recipes for regenerating\n> any testdata which is included in generated/compiled/binary format. Kind of\n> how we in our tree ship the config for test TLS certificates and keys which can\n> be manually inspected, and used to rebuild the testdata (although the risk for\n> injections in this particular case seems low). Bad things can still be\n> injected, but formats which allow manual review at least goes some way towards\n> lowering risk.\n\nI actually find the situation with the test TLS files quite \nunsatisfactory. While we have build rules, the output files are not \nreproducible, both because of some inherent randomness in the \ngeneration, and because different OpenSSL versions produce different \ndetails. So you just have to \"trust\" that what's there now makes sense. \n Of course, you can use openssl tools to unpack these files, but that \nis difficult and unreliable unless you know exactly what you are looking \nfor. Also, for example, do we even know whether all the files that are \nthere now are even used by any tests?\n\nA few years ago, some guy on the internet sent in a purported update to \none of the files [0], which I ended up committing, but I remember that \nthat situation gave me quite some pause at the time.\n\nIt would be better if we created the required test files as part of the \ntest run. (Why not? Too slow?) Alternatively, I have been thinking \nthat maybe we could make the output more reproducible by messing with \nwhatever random seed OpenSSL uses. Or maybe use a Python library to \ncreate the files. Some things to think about.\n\n[0]: \nhttps://www.postgresql.org/message-id/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 20:09:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 3 Apr 2024, at 20:09, Peter Eisentraut <[email protected]> wrote:\n> \n> On 30.03.24 00:14, Daniel Gustafsson wrote:\n>> One take-away for me is how important it is to ship recipes for regenerating\n>> any testdata which is included in generated/compiled/binary format. Kind of\n>> how we in our tree ship the config for test TLS certificates and keys which can\n>> be manually inspected, and used to rebuild the testdata (although the risk for\n>> injections in this particular case seems low). 
Bad things can still be\n>> injected, but formats which allow manual review at least goes some way towards\n>> lowering risk.\n> \n> I actually find the situation with the test TLS files quite unsatisfactory. While we have build rules, the output files are not reproducible, both because of some inherent randomness in the generation, and because different OpenSSL versions produce different details.\n\nThis testdata is by nature not reproducible, and twisting arms to make it so\nwill only result in testing against synthetic data which risk hiding bugs IMO.\n\n> So you just have to \"trust\" that what's there now makes sense.\n\nNot entirely, you can review the input files which are used to generate the\ntest data and verify that those make sense..\n\n> Of course, you can use openssl tools to unpack these files, but that is difficult and unreliable unless you know exactly what you are looking for.\n\n..and check like you mention above, but it's as you say not entirely trivial. It\nwould be nice to improve this but I'm not sure how. Document how to inspect\nthe data in src/test/ssl/README perhaps?\n\n> It would be better if we created the required test files as part of the test run. (Why not? Too slow?)\n\nThe make sslfiles step requires OpenSSL 1.1.1, which is higher than what we\nrequire to be installed to build postgres. The higher-than-base requirement is\ndue to it building test data only used when running 1.1.1 or higher, so\ntechnically it could be made to work if anyone is interesting in investing time\nin 1.0.2.\n\nTime is one aspect, on my crusty old laptop it takes ~2 seconds to regenerate\nthe files. That in itself isn't that much, but we've rejected test-time\nadditions far less than that. We could however make CI and the Buildfarm run\nthe regeneration and leave it up to each developer to decide locally? Or\nremove them and regenerate on the first SSL test run and then use the cached\nones after that?\n\nOn top of time I have a feeling the regeneration won't run on Windows. When\nit's converted to use Meson then that might be fixed though.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 21:42:36 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Wed, Apr 3, 2024 at 7:57 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> As most will know by now, the way xz debacle was able to make sshd\n> vulnerable\n> was through a dependency from sshd to libsystemd and then from libsystemd\n> to\n> liblzma. One lesson from this is that unnecessary dependencies can still\n> increase risk.\n>\n\nYeah, I think that's something to consider for every dependency added. I\nthink we're fairly often protected against \"adding too many libraries\"\nbecause many libraries simply don't exist for all the platforms we want to\nbuild on. But it's nevertheless something to think about each time.\n\n\nIt's worth noting that we have an optional dependency on libsystemd as well.\n>\n> Openssh has now integrated [1] a patch to remove the dependency on\n> libsystemd\n> for triggering service manager readyness notifications, by inlining the\n> necessary function. That's not hard, the protocol is pretty simple.\n>\n> I suspect we should do the same. We're not even close to being a target as\n> attractive as openssh, but still, it seems unnecessary.\n>\n\n+1.\n\nWhen the code is this simple, we should definitely consider carrying it\nourselves. 
At least if we don't expect to need *other* functionality from\nthe same library in the future, which I doubt we will from libsystemd.\n\n\nAn argument could be made to instead just remove support, but I think it's\n> quite valuable to have intra service dependencies that can rely on the\n> server\n> actually having started up.\n>\n>\nIf we remove support we're basically just asking most of our linux\npackagers to add it back in, and they will add it back in the same way we\ndid it. I think we do everybody a disservice if we do that. It's useful\nfunctionality.\n\n//Magnus\n\nOn Wed, Apr 3, 2024 at 7:57 PM Andres Freund <[email protected]> wrote:Hi,\n\nAs most will know by now, the way xz debacle was able to make sshd vulnerable\nwas through a dependency from sshd to libsystemd and then from libsystemd to\nliblzma. One lesson from this is that unnecessary dependencies can still\nincrease risk.Yeah, I think that's something to consider for every dependency added. I think we're fairly often protected against \"adding too many libraries\" because many libraries simply don't exist for all the platforms we want to build on. But it's nevertheless something to think about each time.\nIt's worth noting that we have an optional dependency on libsystemd as well.\n\nOpenssh has now integrated [1] a patch to remove the dependency on libsystemd\nfor triggering service manager readyness notifications, by inlining the\nnecessary function. That's not hard, the protocol is pretty simple.\n\nI suspect we should do the same. We're not even close to being a target as\nattractive as openssh, but still, it seems unnecessary.+1.When the code is this simple, we should definitely consider carrying it ourselves. At least if we don't expect to need *other* functionality from the same library in the future, which I doubt we will from libsystemd.An argument could be made to instead just remove support, but I think it's\nquite valuable to have intra service dependencies that can rely on the server\nactually having started up.If we remove support we're basically just asking most of our linux packagers to add it back in, and they will add it back in the same way we did it. I think we do everybody a disservice if we do that. It's useful functionality.//Magnus", "msg_date": "Wed, 3 Apr 2024 23:19:54 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Wed, Apr 3, 2024 at 7:57 PM Andres Freund <[email protected]> wrote:\n>> Openssh has now integrated [1] a patch to remove the dependency on\n>> libsystemd\n>> for triggering service manager readyness notifications, by inlining the\n>> necessary function. That's not hard, the protocol is pretty simple.\n>> I suspect we should do the same. We're not even close to being a target as\n>> attractive as openssh, but still, it seems unnecessary.\n\n> +1.\n\nI didn't read the patch, but if it's short and stable enough then this\nseems like a good idea. 
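For a sense of just how short it is: the whole readiness notification boils down to roughly the following (an untested sketch of the protocol described in the sd_notify(3) notes linked upthread, not the actual openssh or PostgreSQL code; the function name is made up and error reporting is elided):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Tell the service manager, if any, that we are ready. */
    static void
    notify_ready(void)
    {
        const char *path = getenv("NOTIFY_SOCKET");
        struct sockaddr_un sa;
        socklen_t   addrlen;
        int         fd;

        if (path == NULL || path[0] == '\0' ||
            strlen(path) >= sizeof(sa.sun_path))
            return;             /* not started by systemd, or bogus value */

        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strcpy(sa.sun_path, path);
        if (sa.sun_path[0] == '@')
            sa.sun_path[0] = '\0';      /* abstract-namespace socket */
        addrlen = offsetof(struct sockaddr_un, sun_path) + strlen(path);

        fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd < 0)
            return;
        (void) sendto(fd, "READY=1", strlen("READY=1"), 0,
                      (struct sockaddr *) &sa, addrlen);
        close(fd);
    }

It's a single datagram with no handshake to get wrong, which is presumably part of why the protocol has stayed stable.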
(If openssh and we are using such a patch,\nthat will probably be a big enough stake in the ground to prevent\nsomebody deciding to change the protocol ...)\n\n>> An argument could be made to instead just remove support, but I think it's\n>> quite valuable to have intra service dependencies that can rely on the\n>> server actually having started up.\n\n> If we remove support we're basically just asking most of our linux\n> packagers to add it back in, and they will add it back in the same way we\n> did it. I think we do everybody a disservice if we do that. It's useful\n> functionality.\n\nYeah, that idea seems particularly silly in view of the desire\nexpressed earlier in this thread to reduce the number of patches\ncarried by packagers. People packaging for systemd-using distros\nwill not consider that this functionality is optional.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Apr 2024 17:58:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "Hi,\n\nOn 2024-04-03 17:58:55 -0400, Tom Lane wrote:\n> Magnus Hagander <[email protected]> writes:\n> > On Wed, Apr 3, 2024 at 7:57 PM Andres Freund <[email protected]> wrote:\n> >> Openssh has now integrated [1] a patch to remove the dependency on\n> >> libsystemd\n> >> for triggering service manager readyness notifications, by inlining the\n> >> necessary function. That's not hard, the protocol is pretty simple.\n> >> I suspect we should do the same. We're not even close to being a target as\n> >> attractive as openssh, but still, it seems unnecessary.\n> \n> > +1.\n> \n> I didn't read the patch, but if it's short and stable enough then this\n> seems like a good idea.\n\nIt's basically just checking for an env var, opening the unix socket indicated\nby that, writing a string to it and closing the socket again.\n\n\n> (If openssh and we are using such a patch, that will probably be a big\n> enough stake in the ground to prevent somebody deciding to change the\n> protocol ...)\n\nOne version of the openssh patch to remove liblzma was submitted by one of the\ncore systemd devs, so I think they agree that it's a stable API. The current\nprotocol supports adding more information by adding attributes, so it should\nbe extensible enough anyway.\n\n\n> >> An argument could be made to instead just remove support, but I think it's\n> >> quite valuable to have intra service dependencies that can rely on the\n> >> server actually having started up.\n> \n> > If we remove support we're basically just asking most of our linux\n> > packagers to add it back in, and they will add it back in the same way we\n> > did it. I think we do everybody a disservice if we do that. It's useful\n> > functionality.\n> \n> Yeah, that idea seems particularly silly in view of the desire\n> expressed earlier in this thread to reduce the number of patches\n> carried by packagers. People packaging for systemd-using distros\n> will not consider that this functionality is optional.\n\nYep.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Apr 2024 15:16:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "On 03.04.24 23:19, Magnus Hagander wrote:\n> When the code is this simple, we should definitely consider carrying it \n> ourselves. 
At least if we don't expect to need *other* functionality \n> from the same library in the future, which I doubt we will from libsystemd.\n\nWell, I've long had it on my list to do some integration to log directly \nto the journal, so you can preserve metadata better. I'm not sure right \nnow whether this would use libsystemd, but it's not like there is \nabsolutely no other systemd-related functionality that could be added.\n\nPersonally, I think this proposed change is trying to close a barndoor \nafter a horse has bolted. There are many more interesting and scary \nlibraries in the dependency tree of \"postgres\", so just picking off one \nright now doesn't really accomplish anything. The next release of \nlibsystemd will drop all the compression libraries as hard dependencies, \nso the issue in that sense is gone anyway. Also, fun fact: liblzma is \nalso a dependency via libxml2.\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 01:10:20 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "On Wed, Apr 3, 2024 at 3:42 PM Daniel Gustafsson <[email protected]> wrote:\n> > So you just have to \"trust\" that what's there now makes sense.\n>\n> Not entirely, you can review the input files which are used to generate the\n> test data and verify that those make sense..\n\nYeah, I mean, in theory I suppose that's true, but look at this commit:\n\nhttps://git.tukaani.org/?p=xz.git;a=commitdiff;h=6e636819e8f070330d835fce46289a3ff72a7b89\n\nIn case the link stops working for some reason, the commit message is\nas follows:\n\nJT> Tests: Update two test files.\nJT>\nJT> The original files were generated with random local to my machine.\nJT> To better reproduce these files in the future, a constant seed was used\nJT> to recreate these files.\n\nEssentially, your argument is the same as his, namely: hey, don't\nworry, you could totally verify these test files if you wanted to! But\nof course, nobody did, because it was hard, and everybody had better\nthings to do with their time. And I think the same thing is probably\ntrue here: nobody really is going to verify much about these files.\n\nI just went and ran openssl x509 -in $f -text on each .crt file and it\nworked, so all of those files look basically like certificates. But\nlike, hypothetically, how do I know that the modulus chosen for a\nparticular certificate was chosen at random, rather than maliciously?\nSimilarly for the key files. Are there padding bytes in any of these\nfiles that could be used to store evil information? I don't know that,\neither. I'm not sure how far it's worth continuing down this path of\nparanoia; I doubt that Daniel Gustafsson is a clever alias for a\nnefarious cabal of state-sponsored hackers, and the hypotheses that\nsupposedly-random values were chosen non-randomly borders on\nunfalsifiable. 
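(For the record, the sort of inspection that is practical today is roughly limited to things along these lines -- the file names here are placeholders, not necessarily the exact names in src/test/ssl:

    openssl x509   -in server.crt -noout -text          # dump a test certificate
    openssl rsa    -in server.key -noout -check         # sanity-check a private key
    openssl verify -CAfile root_ca.crt server.crt       # confirm it chains to the test CA

none of which tells you anything about whether the underlying randomness was honest.)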
Nonetheless, I think Peter is correct to point out that\nthese are the kinds of files about which it is reasonable to be\nconcerned, because they seem to have properties quite similar to those\nof the files used in an actual attack.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 15:38:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 4 Apr 2024, at 21:38, Robert Haas <[email protected]> wrote:\n\n> Essentially, your argument is the same as his, namely: hey, don't\n> worry, you could totally verify these test files if you wanted to! But\n> of course, nobody did, because it was hard, and everybody had better\n> things to do with their time. And I think the same thing is probably\n> true here: nobody really is going to verify much about these files.\n\nI don't disagree, like I said that very email: it's non-trivial and I wish we\ncould make it better somehow, but I don't hav an abundance of good ideas.\n\nRemoving the generated versions and creating them when running tests makes\nsneaking in malicious content harder since it then has to be submitted in\nclear-text *only*. The emphasis added since it's like that today as well: *I*\nfully trust our team of committers to not accept a binary file in a patch\nwithout replacing with a regenerated version, but enforcing it might make it\neasier for a wider community to share that level of trust?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 22:25:56 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n> I don't disagree, like I said that very email: it's non-trivial and I wish we\n> could make it better somehow, but I don't hav an abundance of good ideas.\n\nIs the basic issue that we can't rely on the necessary toolchain to be\npresent on every machine where someone might try to build PostgreSQL?\n\n> Removing the generated versions and creating them when running tests makes\n> sneaking in malicious content harder since it then has to be submitted in\n> clear-text *only*. The emphasis added since it's like that today as well: *I*\n> fully trust our team of committers to not accept a binary file in a patch\n> without replacing with a regenerated version, but enforcing it might make it\n> easier for a wider community to share that level of trust?\n\nTo be honest, I'm not at all sure that I would have considered\nregenerating a binary file to be a must-do kind of a thing, so I guess\nthat's a lesson learned for me. Trust is a really tricky thing in\ncases like this. It's not just about whether some committer is\nsecretly a malicious actor; it's also about whether everyone\nunderstands the best practices and follows them consistently. In that\nregard, I don't even trust myself. 
I hope that it's unlikely that I\nwould mistakenly commit something malicious, but I think it could\nhappen, and I think it could happen to anyone else, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 16:40:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n>> I don't disagree, like I said that very email: it's non-trivial and I wish we\n>> could make it better somehow, but I don't hav an abundance of good ideas.\n\n> Is the basic issue that we can't rely on the necessary toolchain to be\n> present on every machine where someone might try to build PostgreSQL?\n\nIIUC, it's not really that, but that regenerating these files is\nexpensive; multiple seconds even on fast machines. Putting that\ninto tests that are run many times a day is unappetizing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 16:47:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 4 Apr 2024, at 22:40, Robert Haas <[email protected]> wrote:\n> \n> On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n>> I don't disagree, like I said that very email: it's non-trivial and I wish we\n>> could make it better somehow, but I don't hav an abundance of good ideas.\n> \n> Is the basic issue that we can't rely on the necessary toolchain to be\n> present on every machine where someone might try to build PostgreSQL?\n\nAFAIK we haven't historically enforced that installations have the openssl\nbinary in PATH, but it would be a pretty low bar to add. The bigger issue is\nlikely to find someone to port this to Windows, it probably won't be too hard\nbut as with all things building on Windows, we need someone skilled in that\narea to do it.\n\n>> Removing the generated versions and creating them when running tests makes\n>> sneaking in malicious content harder since it then has to be submitted in\n>> clear-text *only*. The emphasis added since it's like that today as well: *I*\n>> fully trust our team of committers to not accept a binary file in a patch\n>> without replacing with a regenerated version, but enforcing it might make it\n>> easier for a wider community to share that level of trust?\n> \n> To be honest, I'm not at all sure that I would have considered\n> regenerating a binary file to be a must-do kind of a thing, so I guess\n> that's a lesson learned for me. Trust is a really tricky thing in\n> cases like this. It's not just about whether some committer is\n> secretly a malicious actor; it's also about whether everyone\n> understands the best practices and follows them consistently. In that\n> regard, I don't even trust myself. I hope that it's unlikely that I\n> would mistakenly commit something malicious, but I think it could\n> happen, and I think it could happen to anyone else, too.\n\nIt absolutelty could. 
Re-reading Ken Thompsons Turing Lecture \"Reflections on\nTrusting Trust\" at periodic intervals is a good reminder to self just how\ncomplicated this is.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 22:48:27 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 4 Apr 2024, at 22:47, Tom Lane <[email protected]> wrote:\n> \n> Robert Haas <[email protected]> writes:\n>> On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n>>> I don't disagree, like I said that very email: it's non-trivial and I wish we\n>>> could make it better somehow, but I don't hav an abundance of good ideas.\n> \n>> Is the basic issue that we can't rely on the necessary toolchain to be\n>> present on every machine where someone might try to build PostgreSQL?\n> \n> IIUC, it's not really that, but that regenerating these files is\n> expensive; multiple seconds even on fast machines. Putting that\n> into tests that are run many times a day is unappetizing.\n\nThat's one aspect of it. We could cache the results of course to amortize the\ncost over multiple test-runs but at the end of the day it will add time to\ntest-runs regardless of what we do.\n\nOne thing to consider would be to try and rearrange/refactor the tests to\nrequire a smaller set of keys and certificates. I haven't looked into what\nsort of savings that could yield (if any) but if we go the route of\nregeneration at test-time we shouldn't leave potential savings on the table.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 22:56:01 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": ">\n> It would be better if we created the required test files as part of the\n> test run. (Why not? Too slow?) Alternatively, I have been thinking\n> that maybe we could make the output more reproducible by messing with\n> whatever random seed OpenSSL uses. Or maybe use a Python library to\n> create the files. Some things to think about.\n>\n\nI think this last idea is the way to go. I've hand-crafted GIF images and\nPGP messages in the past; surely we have enough combined brain power around\nhere to craft our own SSL files? It may even be a wheel that someone has\ninvented already.\n\nCheers,\nGreg\n\nIt would be better if we created the required test files as part of the \ntest run.  (Why not?  Too slow?)  Alternatively, I have been thinking \nthat maybe we could make the output more reproducible by messing with \nwhatever random seed OpenSSL uses.  Or maybe use a Python library to \ncreate the files.  Some things to think about. I think this last idea is the way to go. I've hand-crafted GIF images and PGP messages in the past; surely we have enough combined brain power around here to craft our own SSL files? 
It may even be a wheel that someone has invented already.Cheers,Greg", "msg_date": "Thu, 4 Apr 2024 17:01:12 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, Apr 4, 2024 at 10:56:01PM +0200, Daniel Gustafsson wrote:\n> > On 4 Apr 2024, at 22:47, Tom Lane <[email protected]> wrote:\n> > \n> > Robert Haas <[email protected]> writes:\n> >> On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n> >>> I don't disagree, like I said that very email: it's non-trivial and I wish we\n> >>> could make it better somehow, but I don't hav an abundance of good ideas.\n> > \n> >> Is the basic issue that we can't rely on the necessary toolchain to be\n> >> present on every machine where someone might try to build PostgreSQL?\n> > \n> > IIUC, it's not really that, but that regenerating these files is\n> > expensive; multiple seconds even on fast machines. Putting that\n> > into tests that are run many times a day is unappetizing.\n> \n> That's one aspect of it. We could cache the results of course to amortize the\n> cost over multiple test-runs but at the end of the day it will add time to\n> test-runs regardless of what we do.\n> \n> One thing to consider would be to try and rearrange/refactor the tests to\n> require a smaller set of keys and certificates. I haven't looked into what\n> sort of savings that could yield (if any) but if we go the route of\n> regeneration at test-time we shouldn't leave potential savings on the table.\n\nRather then everyone testing it on every build, couldn't we have an\nautomated test every night that checked binary files.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 4 Apr 2024 17:01:32 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, 4 Apr 2024 at 22:56, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 4 Apr 2024, at 22:47, Tom Lane <[email protected]> wrote:\n> >\n> > Robert Haas <[email protected]> writes:\n> >> On Thu, Apr 4, 2024 at 4:25 PM Daniel Gustafsson <[email protected]> wrote:\n> >>> I don't disagree, like I said that very email: it's non-trivial and I wish we\n> >>> could make it better somehow, but I don't hav an abundance of good ideas.\n> >\n> >> Is the basic issue that we can't rely on the necessary toolchain to be\n> >> present on every machine where someone might try to build PostgreSQL?\n> >\n> > IIUC, it's not really that, but that regenerating these files is\n> > expensive; multiple seconds even on fast machines. Putting that\n> > into tests that are run many times a day is unappetizing.\n>\n> That's one aspect of it. We could cache the results of course to amortize the\n> cost over multiple test-runs but at the end of the day it will add time to\n> test-runs regardless of what we do.\n\nHow about we make it meson/make targets, so they are simply cached\njust like any of our other build artefacts are cached. 
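Roughly something like this untested make sketch -- the stamp target and the conf/*.config pattern are made up for illustration, the real rules live in src/test/ssl's makefiles, and recipe lines must be indented with a tab:

    # regenerate the SSL test files only when their inputs change
    sslfiles.stamp: conf/*.config
    	$(MAKE) sslfiles
    	touch $@

    check: sslfiles.stamp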
Then only clean\nbuilds are impacted, not every test run.\n\n\n", "msg_date": "Thu, 4 Apr 2024 23:02:38 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "> On 4 Apr 2024, at 23:02, Jelte Fennema-Nio <[email protected]> wrote:\n\n> How about we make it meson/make targets, so they are simply cached\n> just like any of our other build artefacts are cached. Then only clean\n> builds are impacted, not every test run.\n\nThey already are (well, make not meson yet), they're just not hooked up to any\ntop-level commands. Running \"make ssfiles-clean ssfiles\" in src/test/ssl\nregenerates all the files from the base config files that define their\ncontents.\n \n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 23:06:29 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> How about we make it meson/make targets, so they are simply cached\n> just like any of our other build artefacts are cached. Then only clean\n> builds are impacted, not every test run.\n\nEvery buildfarm and CI run is \"clean\" in those terms.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 17:25:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, 4 Apr 2024 at 23:06, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 4 Apr 2024, at 23:02, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> > How about we make it meson/make targets, so they are simply cached\n> > just like any of our other build artefacts are cached. Then only clean\n> > builds are impacted, not every test run.\n>\n> They already are (well, make not meson yet), they're just not hooked up to any\n> top-level commands. Running \"make ssfiles-clean ssfiles\" in src/test/ssl\n> regenerates all the files from the base config files that define their\n> contents.\n\nOkay turns out even generating them in parallel isn't any faster than\nrunning that sequentially. I guess it's because of the strong random\ngeneration being the slow part. Command I used was the following and\ntook ~5 seconds on my machine:\n\nmake -C src/test/ssl sslfiles-clean && make -C src/test/ssl sslfiles -j20\n\nAnd I think that's actually a good thing, because that would mean\ntotal build time is pretty much not impacted if we'd include it as\npart of the regular clean build. Since building these certs are\nbottlenecked on randomness, not on CPU (as pretty much all of our\nother build artifacts are). So they should pipeline pretty very well\nwith the other items, assuming build concurrency is set slightly\nhigher than the number of cores.\n\nAs a proof of concept I ran the above command in a simple bash loop constantly:\n\nwhile true; do make -C src/test/ssl sslfiles-clean && make -C\nsrc/test/ssl sslfiles -j20; done\n\nAnd then ran a clean (parallel) build in another shell:\n\nninja -C build clean && ninja -C build\n\nAnd total build time went from 41 to 43 seconds. To be clear, that's\nwhile constantly running the ssl file creation. 
If I only run the\ncreation once, there's no noticeable increase in build time at all.\n\n\n", "msg_date": "Thu, 4 Apr 2024 23:40:35 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> Okay turns out even generating them in parallel isn't any faster than\n> running that sequentially. I guess it's because of the strong random\n> generation being the slow part. Command I used was the following and\n> took ~5 seconds on my machine:\n\n> make -C src/test/ssl sslfiles-clean && make -C src/test/ssl sslfiles -j20\n\nJust for comparison's sake, this takes about 2 minutes on mamba's\nhost. Now that's certainly museum-grade hardware, but I don't\nthink it's even the slowest machine in the buildfarm. On a\nRaspberry Pi 4B, it was about 25 seconds.\n\n(I concur with your result that parallelism helps little.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 18:34:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, Apr 4, 2024 at 4:48 PM Daniel Gustafsson <[email protected]> wrote:\n> AFAIK we haven't historically enforced that installations have the openssl\n> binary in PATH, but it would be a pretty low bar to add. The bigger issue is\n> likely to find someone to port this to Windows, it probably won't be too hard\n> but as with all things building on Windows, we need someone skilled in that\n> area to do it.\n\nI wonder how hard it would be to just code up our own binary to do\nthis. If it'd be a pain to do that, or to maintain it across SSL\nversions, then it's a bad plan and we shouldn't do it. But if it's not\nthat much code, maybe it'd be worth considering.\n\nI'm also sort of afraid that we're getting sucked into thinking real\nhard about this SSL certificate issue rather than trying to brainstorm\nall the other places that might be problematic. The latter might be a\nmore fruitful exercise (or maybe not, what do I know?).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 09:23:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Fri, Apr 5, 2024 at 6:24 AM Robert Haas <[email protected]> wrote:\n> I wonder how hard it would be to just code up our own binary to do\n> this. If it'd be a pain to do that, or to maintain it across SSL\n> versions, then it's a bad plan and we shouldn't do it. But if it's not\n> that much code, maybe it'd be worth considering.\n\nI think my biggest concern, other than the maintenance costs, would be\nthe statement \"we know SSL works on Windows because we test it against\nsome certificates we hand-rolled ourselves.\" We can become experts in\ncertificate formats, of course, but... does it buy us much? If someone\ncomes and complains that a certificate doesn't work correctly (as they\nhave *very* recently [3]), I would like to be able to write a test\nthat uses what OpenSSL actually generates as opposed to learning how\nto make it myself first.\n\n> I'm also sort of afraid that we're getting sucked into thinking real\n> hard about this SSL certificate issue rather than trying to brainstorm\n> all the other places that might be problematic. The latter might be a\n> more fruitful exercise (or maybe not, what do I know?).\n\n+1. 
Don't get me wrong: I spent a lot of time refactoring the sslfiles\nmachinery a while back, and I would love for it to all go away. I\ndon't really want to interrupt any lines of thought that are moving in\nthat direction. Please continue.\n\n_And also:_ the xz attack relied on a long chain of injections, both\ntechnical and social. I'm still wrapping my head around Russ Cox's\nwriteup [1, 2], but the \"hidden blob of junk\" is only a single part of\nall that. I'm not even sure it was a necessary part; it just happened\nto work well for this particular project and line of attack.\n\nI've linked Russ Cox in particular because Golang has made a bunch of\nlanguage-level decisions with the supply chain in mind, including the\nphilosophy that a build should ideally not be able to run arbitrary\ncode at all, and therefore generated files _must_ be checked in. I\nremember $OLDJOB having buildbots that would complain if the contents\nof the file you checked in didn't match what was (reproducibly!)\ngenerated. I think there's a lot more to think about here.\n\n--Jacob\n\n[1] https://research.swtch.com/xz-timeline\n[2] https://research.swtch.com/xz-script\n[3] https://www.postgresql.org/message-id/flat/17760-b6c61e752ec07060%40postgresql.org\n\n\n", "msg_date": "Fri, 5 Apr 2024 11:40:46 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Fri, Apr 05, 2024 at 11:40:46AM -0700, Jacob Champion wrote:\n> On Fri, Apr 5, 2024 at 6:24 AM Robert Haas <[email protected]> wrote:\n>> I'm also sort of afraid that we're getting sucked into thinking real\n>> hard about this SSL certificate issue rather than trying to brainstorm\n>> all the other places that might be problematic. The latter might be a\n>> more fruitful exercise (or maybe not, what do I know?).\n> \n> +1. Don't get me wrong: I spent a lot of time refactoring the sslfiles\n> machinery a while back, and I would love for it to all go away. I\n> don't really want to interrupt any lines of thought that are moving in\n> that direction. Please continue.\n\nThere are a few things that I've not seen mentioned on this thread.\n\nAny random byte sequences included in the tree should have ways to be\nregenerated. One problem with xz was that the binary blobs were\ndirectly part of the tree, with the input file and the test crafted so\nas the test would skip portions of the input, one line in ./configure\nbeing enough to switch the backdoor to evil mode (correct me if I read\nthat wrong). There could be other things that one would be tempted to\nintroduce for Postgres as test data to introduce a backdoor with a\nslight tweak of the source tarball. To name a few:\n- Binary WAL sequences, arguing that this WAL data is useful even with\nalignment restrictions.\n- Data for COPY.\n- Physical dumps, in raw or just SQL data.\n\nAnything like that can also be used in some test data provided by\nsomebody in a proposed patch or a bug report, for the sake of\nreproducing an issue. If there is an attack on this data, another\ncontributor could run it and get his/her own host powned. One thing\nthat would be easy to hide is something that reads on-disk file in a\nlarge dump file, with something edited in its inner parts.\n\nAn extra thing is if we should expand the use of signed commits and\npotentially physical keys for committers, provided by pg-infra which\nwould be the source of trust? 
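(The mechanics of that part are cheap -- for illustration, something like:

    git config commit.gpgsign true     # sign every commit with the registered key
    git log --show-signature -1        # let anyone verify signatures afterwards
    git verify-commit HEAD

the hard part is deciding whose keys are trusted and how they get distributed.)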
Some have mentioned that in the past,\nand this could reduce the risk of committing problematic code if a\ncommitter's host is powned because the physical key would be required.\n\nSaying that, my spidey sense tingles at the recent commit\n3311ea86edc7, that had the idea to introduce a 20k line output file\nbased on a 378 line input file full of random URLs. In my experience,\ntests don't require to be that large to be useful, and the input data\nis very hard to parse.\n--\nMichael", "msg_date": "Sat, 6 Apr 2024 09:13:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "Hi,\n\n> There are many more interesting and scary libraries in the dependency\n> tree of \"postgres\", so just picking off one right now doesn't really\n> accomplish anything.  The next release of libsystemd will drop all\n> the compression libraries as hard dependencies, so the issue in that\n> sense is gone anyway.  Also, fun fact: liblzma is also a dependency\n> via libxml2.\n\nHaving an audit of all libraries linked to postgres and their level of\ntrust should help to point the next weak point. I'm pretty sure we have\nseveral of these tiny libraries maintained by a lone out of time hacker\nlinked somewhere. What is the next xz ?\n\nRegards,\nÉtienne \n-- \nDALIBO\n\n\n", "msg_date": "Mon, 08 Apr 2024 12:05:18 +0200", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "On Fri, Apr 5, 2024 at 5:14 PM Michael Paquier <[email protected]> wrote:\n> Saying that, my spidey sense tingles at the recent commit\n> 3311ea86edc7, that had the idea to introduce a 20k line output file\n> based on a 378 line input file full of random URLs. In my experience,\n> tests don't require to be that large to be useful, and the input data\n> is very hard to parse.\n\nThat's a good point. I've proposed a patch over at [1] to shrink it\nsubstantially.\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2B%3Dkx14ui_A__4L_XcFePSuUuR1kwJfUKxphuZU_i6%3DWpA%40mail.gmail.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 11:34:39 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Sat, Mar 30, 2024 at 04:50:26PM -0400, Robert Haas wrote:\n> An awful lot of what we do operates on the principle that we know the\n> people who are involved and trust them, and I'm glad we do trust them,\n> but the world is full of people who trusted somebody too much and\n> regretted it afterwards. The fact that we have many committers rather\n> than a single maintainer probably reduces risk at least as far as the\n> source repository is concerned, because there are more people paying\n> attention to potentially notice something that isn't as it should be.\n\nOne unwritten requirement for committers is that we are able to\ncommunicate with them securely. If we cannot do that, they potentially\ncould be forced by others, e.g., governments, to add code to our\nrepositories.\n\nUnfortunately, there is on good way for them to communicate with us\nsecurely once they are unable to communicate with us securely. 
I\nsuppose some special word could be used to communicate that status ---\nthat is how it was done in non-electronic communication in the past.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 8 Apr 2024 22:57:27 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On 4/8/24 22:57, Bruce Momjian wrote:\n> On Sat, Mar 30, 2024 at 04:50:26PM -0400, Robert Haas wrote:\n>> An awful lot of what we do operates on the principle that we know the\n>> people who are involved and trust them, and I'm glad we do trust them,\n>> but the world is full of people who trusted somebody too much and\n>> regretted it afterwards. The fact that we have many committers rather\n>> than a single maintainer probably reduces risk at least as far as the\n>> source repository is concerned, because there are more people paying\n>> attention to potentially notice something that isn't as it should be.\n> \n> One unwritten requirement for committers is that we are able to\n> communicate with them securely. If we cannot do that, they potentially\n> could be forced by others, e.g., governments, to add code to our\n> repositories.\n> \n> Unfortunately, there is on good way for them to communicate with us\n> securely once they are unable to communicate with us securely. I\n> suppose some special word could be used to communicate that status ---\n> that is how it was done in non-electronic communication in the past.\n\nI don't know how that really helps. If one of our committers is under \nduress, they probably cannot risk outing themselves anyway.\n\nThe best defense, IMHO, is the fact that our source code is open and can \nbe reviewed freely.\n\nThe trick is to get folks to do the review.\n\nI know, for example, at $past_employer we had a requirement to get \nsomeone on our staff to review every single commit in order to maintain \ncertain certifications. Of course there is no guarantee that such \nreviews would catch everything, but maybe we could establish post commit \nreviews by contributors in a more rigorous way? Granted, getting more \nqualified volunteers is not a trivial problem...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 08:43:57 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma" }, { "msg_contents": "On Thu, Apr 4, 2024 at 1:10 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 03.04.24 23:19, Magnus Hagander wrote:\n> > When the code is this simple, we should definitely consider carrying it\n> > ourselves. At least if we don't expect to need *other* functionality\n> > from the same library in the future, which I doubt we will from\n> libsystemd.\n>\n> Well, I've long had it on my list to do some integration to log directly\n> to the journal, so you can preserve metadata better. I'm not sure right\n> now whether this would use libsystemd, but it's not like there is\n> absolutely no other systemd-related functionality that could be added.\n>\n\nAh interesting. I hadn't thought of that use-case.\n\n\n\n\n> Personally, I think this proposed change is trying to close a barndoor\n> after a horse has bolted. 
There are many more interesting and scary\n> libraries in the dependency tree of \"postgres\", so just picking off one\n> right now doesn't really accomplish anything. The next release of\n> libsystemd will drop all the compression libraries as hard dependencies,\n> so the issue in that sense is gone anyway. Also, fun fact: liblzma is\n> also a dependency via libxml2.\n>\n\n\nTo be clear, I didn't mean to single out this one, just saying that it's\nsomething we should keep in consideration in general when adding library\ndependencies. Every new dependency, no matter how small, increases the\nmanagement and risks for it. And we should just be aware of that and weigh\nthem against each other.\n\nAs in we should *consider* it, that doesn't' mean we should necessarily\n*do* it.\n\n(And yes, there are many scary dependencies down the tree)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2024 at 1:10 AM Peter Eisentraut <[email protected]> wrote:On 03.04.24 23:19, Magnus Hagander wrote:\n> When the code is this simple, we should definitely consider carrying it \n> ourselves. At least if we don't expect to need *other* functionality \n> from the same library in the future, which I doubt we will from libsystemd.\n\nWell, I've long had it on my list to do some integration to log directly \nto the journal, so you can preserve metadata better.  I'm not sure right \nnow whether this would use libsystemd, but it's not like there is \nabsolutely no other systemd-related functionality that could be added.Ah interesting. I hadn't thought of that use-case. \nPersonally, I think this proposed change is trying to close a barndoor \nafter a horse has bolted.  There are many more interesting and scary \nlibraries in the dependency tree of \"postgres\", so just picking off one \nright now doesn't really accomplish anything.  The next release of \nlibsystemd will drop all the compression libraries as hard dependencies, \nso the issue in that sense is gone anyway.  Also, fun fact: liblzma is \nalso a dependency via libxml2.\nTo be clear, I didn't mean to single out this one, just saying that it's something we should keep in consideration in general when adding library dependencies. Every new dependency, no matter how small, increases the management and risks for it. And we should just be aware of that and weigh them against each other.As in we should *consider* it, that doesn't' mean we should necessarily *do* it.(And yes, there are many scary dependencies down the tree)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Apr 2024 16:46:15 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "Hi,\n\nOn 2024-04-04 01:10:20 +0200, Peter Eisentraut wrote:\n> On 03.04.24 23:19, Magnus Hagander wrote:\n> > When the code is this simple, we should definitely consider carrying it\n> > ourselves. At least if we don't expect to need *other* functionality\n> > from the same library in the future, which I doubt we will from\n> > libsystemd.\n> \n> Well, I've long had it on my list to do some integration to log directly to\n> the journal, so you can preserve metadata better. 
I'm not sure right now\n> whether this would use libsystemd, but it's not like there is absolutely no\n> other systemd-related functionality that could be added.\n\nInteresting. I think that'd in all likelihood end up using libsystemd.\n\n\n> Personally, I think this proposed change is trying to close a barndoor after\n> a horse has bolted. There are many more interesting and scary libraries in\n> the dependency tree of \"postgres\", so just picking off one right now doesn't\n> really accomplish anything. The next release of libsystemd will drop all\n> the compression libraries as hard dependencies, so the issue in that sense\n> is gone anyway. Also, fun fact: liblzma is also a dependency via libxml2.\n\nI agree that doing this just because of future risk in liblzma is probably not\nworth it. Despite soon not being an unconditional dependency of libsystemd\nanymore, I'd guess that in a few months it's going to be one of the better\nvetted libraries out there. But I don't think that means it's not worth\nreducing the dependencies that we have unconditionally loaded.\n\nHaving a dependency to a fairly large library (~900k), which could be avoided\nwith ~30 LOC, is just not worth it. Independent of liblzma. Not from a\nperformance POV, nor from a risk POV.\n\nI'm actually fairly bothered by us linking to libxml2. It was effectively\nunmaintained for most of the last decade, with just very occasional drive-by\ncommits. And it's not that there weren't significant bugs or such. Maintenance\nhas picked up some, but it's still not well maintained, I'd say. If I wanted\nto attack postgres, it's where I'd start.\n\nMy worry level, in decreasing order, about postmaster-level dependencies:\n- libxml2 - effectively unmaintained, just about maintained today\n- gssapi - heavily undermaintained from what I can see, lots of complicated code\n- libldap - undermaintained, lots of creaky old code\n- libpam - undermaintained, lots of creaky old code\n- the rest\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 12 Apr 2024 09:00:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" }, { "msg_contents": "On Fri, Apr 12, 2024 at 09:00:11AM -0700, Andres Freund wrote:\n> I'm actually fairly bothered by us linking to libxml2. It was effectively\n> unmaintained for most of the last decade, with just very occasional drive-by\n> commits. And it's not that there weren't significant bugs or such. Maintenance\n> has picked up some, but it's still not well maintained, I'd say. If I wanted\n> to attack postgres, it's where I'd start.\n\nIndeed, libxml2 worries me to, as much as out-of-core extensions.\nThere are a bunch of these out there, some of them not that\nmaintained, and they could face similar attacks.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 09:35:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Security lessons from liblzma - libsystemd" } ]
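(For what it's worth, the kind of audit Étienne suggests above -- what a given postgres binary actually links against -- is a one-liner along these lines, assuming pg_config is in PATH:

    ldd "$(pg_config --bindir)/postgres" | awk '{print $1}' | sort

which is where names like libxml2, libldap and libpam show up, depending on the build options.)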
[ { "msg_contents": "Hi,\n\nWhile I working in [1], Coverity reported some errors:\n\nsrc/bin/pg_basebackup/pg_createsubscriber.c\nCID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\nalloc_strlen: Allocating insufficient memory for the terminating null of\nthe string. [Note: The source code implementation of the function has been\noverridden by a builtin model.]\nCID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\nalloc_strlen: Allocating insufficient memory for the terminating null of\nthe string. [Note: The source code implementation of the function has been\noverridden by a builtin model.]\n\nI think that is right.\n\nThe source of errors is the function PQescapeInternal.\nThe slow path has bugs when num_quotes or num_backslashes are greater than\nzero.\nFor each num_quotes or num_backslahes we need to allocate two more.\n\nCode were out-of-bounds it happens:\nfor (s = str; s - str < input_len; ++s)\n{\nif (*s == quote_char || (!as_ident && *s == '\\\\'))\n{\n*rp++ = *s;\n*rp++ = *s;\n}\n\nPatch attached.\n\nBest regards,\nRanier Vilela\n\n[1] Re: Fix some resources leaks\n(src/bin/pg_basebackup/pg_createsubscriber.c)\n<https://www.postgresql.org/message-id/CAEudQAqQHGrhmY3%2BrgdqJLM-76sozLm__0_NSJetuQHsa%2Bd41Q%40mail.gmail.com>", "msg_date": "Sat, 30 Mar 2024 08:36:55 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix out-of-bounds in the function PQescapeinternal\n (src/interfaces/libpq/fe-exec.c)" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> While I working in [1], Coverity reported some errors:\n> src/bin/pg_basebackup/pg_createsubscriber.c\n> CID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n> CID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n\nYeah, we saw that in the community run too. I'm tempted to call it\nan AI hallucination. The \"Note\" seems to mean that they're not\nactually analyzing our code but some made-up substitute.\n\n> The source of errors is the function PQescapeInternal.\n> The slow path has bugs when num_quotes or num_backslashes are greater than\n> zero.\n> For each num_quotes or num_backslahes we need to allocate two more.\n\nNonsense. The quote or backslash is already counted in input_len,\nso we need to add just one more.\n\nIf there were anything wrong here, I'm quite sure our testing under\ne.g. valgrind would have caught it years ago. However, just to be\nsure, I tried adding an Assert that the allocated space is filled\nexactly, as attached. It gets through check-world just fine.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Apr 2024 17:13:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function PQescapeinternal\n (src/interfaces/libpq/fe-exec.c)" }, { "msg_contents": "Em ter., 2 de abr. 
de 2024 às 18:13, Tom Lane <[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n> > While I working in [1], Coverity reported some errors:\n> > src/bin/pg_basebackup/pg_createsubscriber.c\n> > CID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\n> > alloc_strlen: Allocating insufficient memory for the terminating null of\n> > the string. [Note: The source code implementation of the function has\n> been\n> > overridden by a builtin model.]\n> > CID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\n> > alloc_strlen: Allocating insufficient memory for the terminating null of\n> > the string. [Note: The source code implementation of the function has\n> been\n> > overridden by a builtin model.]\n>\n> Yeah, we saw that in the community run too. I'm tempted to call it\n> an AI hallucination. The \"Note\" seems to mean that they're not\n> actually analyzing our code but some made-up substitute.\n>\nYeah, the message is a little confusing.\nIt seems to me that it is a replacement of the malloc function with its\nown, with some type of security cookie,\nlike /GS (Buffer Security Check)\n<https://learn.microsoft.com/en-us/cpp/build/reference/gs-buffer-security-check?view=msvc-170>\n\n\n> > The source of errors is the function PQescapeInternal.\n> > The slow path has bugs when num_quotes or num_backslashes are greater\n> than\n> > zero.\n> > For each num_quotes or num_backslahes we need to allocate two more.\n>\n> Nonsense. The quote or backslash is already counted in input_len,\n> so we need to add just one more.\n>\nRight, the quote or backslash is counted in input_len.\nIn the test I did, the string had 10 quotes, so\ninput_len had at least 10 bytes for quotes.\nBut we write 20 quotes, in the slow path.\n\nif (*s == quote_char || (!as_ident && *s == '\\\\'))\n*rp++ = *s;\n*rp++ = *s;\n\n|0| = quote_char\n|1| = quote_char\n|2| = quote_char\n|3| = quote_char\n|4| = quote_char\n|5| = quote_char\n|6| = quote_char\n|7| = quote_char\n|8| = quote_char\n|9| = quote_char\n|10| = quote_char <--memory overwrite\n|11| = quote_char\n|12| = quote_char\n|13| = quote_char\n|14| = quote_char\n|15| = quote_char\n|16| = quote_char\n|17| = quote_char\n|18| = quote_char\n|19| = quote_char\n\n\n\n> If there were anything wrong here, I'm quite sure our testing under\n> e.g. valgrind would have caught it years ago. However, just to be\n> sure, I tried adding an Assert that the allocated space is filled\n> exactly, as attached. It gets through check-world just fine.\n>\nIt seems to me that only the fast path is being validated by the assert.\n\nif (num_quotes == 0 && (num_backslashes == 0 || as_ident))\n{\nmemcpy(rp, str, input_len);\nrp += input_len;\n}\n\nbest regards,\nRanier Vilela\n\nEm ter., 2 de abr. de 2024 às 18:13, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> While I working in [1], Coverity reported some errors:\n> src/bin/pg_basebackup/pg_createsubscriber.c\n> CID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n> CID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n\nYeah, we saw that in the community run too.  I'm tempted to call it\nan AI hallucination.  
The \"Note\" seems to mean that they're not\nactually analyzing our code but some made-up substitute.Yeah, the message is a little confusing.It seems to me that it is a replacement of the malloc function with its own, with some type of security cookie,like /GS (Buffer Security Check)  \n> The source of errors is the function PQescapeInternal.\n> The slow path has bugs when num_quotes or num_backslashes are greater than\n> zero.\n> For each num_quotes or num_backslahes we need to allocate two more.\n\nNonsense.  The quote or backslash is already counted in input_len,\nso we need to add just one more.Right, the quote or backslash is counted in input_len.In the test I did, the string had 10 quotes, soinput_len had at least 10 bytes for quotes.But we write 20 quotes, in the slow path.\t\t\tif (*s == quote_char || (!as_ident && *s == '\\\\'))\t\t\t\t*rp++ = *s;\t\t\t\t*rp++ = *s;|0| = quote_char\n|1| = quote_char\n\n|2| = quote_char\n\n\n\n\n|3| = quote_char\n\n\n|4| = quote_char\n\n\n|5| = quote_char\n\n\n|6| = quote_char\n\n\n|7| = quote_char\n\n\n|8| = quote_char\n\n\n|9| = quote_char\n\n\n|10| = quote_char\n <--memory overwrite\n\n\n\n\n|11| = quote_char\n\n\n|12| = quote_char\n\n\n|13| = quote_char\n\n\n|14| = quote_char\n\n\n|15| = quote_char\n\n\n|16| = quote_char\n\n\n|17| = quote_char\n\n\n|18| = quote_char\n\n\n|19| = quote_char\n\n\nIf there were anything wrong here, I'm quite sure our testing under\ne.g. valgrind would have caught it years ago.  However, just to be\nsure, I tried adding an Assert that the allocated space is filled\nexactly, as attached.  It gets through check-world just fine.It seems to me that only the fast path is being validated by the assert.\tif (num_quotes == 0 && (num_backslashes == 0 || as_ident))\t{\t\tmemcpy(rp, str, input_len);\t\trp += input_len;\t}best regards,Ranier Vilela", "msg_date": "Wed, 3 Apr 2024 08:36:47 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix out-of-bounds in the function PQescapeinternal\n (src/interfaces/libpq/fe-exec.c)" }, { "msg_contents": "Em qua., 3 de abr. de 2024 às 08:36, Ranier Vilela <[email protected]>\nescreveu:\n\n>\n> Em ter., 2 de abr. de 2024 às 18:13, Tom Lane <[email protected]>\n> escreveu:\n>\n>> Ranier Vilela <[email protected]> writes:\n>> > While I working in [1], Coverity reported some errors:\n>> > src/bin/pg_basebackup/pg_createsubscriber.c\n>> > CID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\n>> > alloc_strlen: Allocating insufficient memory for the terminating null of\n>> > the string. [Note: The source code implementation of the function has\n>> been\n>> > overridden by a builtin model.]\n>> > CID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\n>> > alloc_strlen: Allocating insufficient memory for the terminating null of\n>> > the string. [Note: The source code implementation of the function has\n>> been\n>> > overridden by a builtin model.]\n>>\n>> Yeah, we saw that in the community run too. I'm tempted to call it\n>> an AI hallucination. 
The \"Note\" seems to mean that they're not\n>> actually analyzing our code but some made-up substitute.\n>>\n> Yeah, the message is a little confusing.\n> It seems to me that it is a replacement of the malloc function with its\n> own, with some type of security cookie,\n> like /GS (Buffer Security Check)\n> <https://learn.microsoft.com/en-us/cpp/build/reference/gs-buffer-security-check?view=msvc-170>\n>\n>\n>> > The source of errors is the function PQescapeInternal.\n>> > The slow path has bugs when num_quotes or num_backslashes are greater\n>> than\n>> > zero.\n>> > For each num_quotes or num_backslahes we need to allocate two more.\n>>\n>> Nonsense. The quote or backslash is already counted in input_len,\n>> so we need to add just one more.\n>>\n> Right, the quote or backslash is counted in input_len.\n> In the test I did, the string had 10 quotes, so\n> input_len had at least 10 bytes for quotes.\n> But we write 20 quotes, in the slow path.\n>\nSorry, some kind of brain damage.\nI ran the test with a debugger, and step by step, the defect does not occur\nin the section I indicated.\nOnly the exact bytes counted from quote_char and num_backslashes are\nactually written.\n\n\n>\n> if (*s == quote_char || (!as_ident && *s == '\\\\'))\n> *rp++ = *s;\n> *rp++ = *s;\n>\n> |0| = quote_char\n> |1| = quote_char\n> |2| = quote_char\n> |3| = quote_char\n> |4| = quote_char\n> |5| = quote_char\n> |6| = quote_char\n> |7| = quote_char\n> |8| = quote_char\n> |9| = quote_char\n> |10| = quote_char <--memory overwrite\n> |11| = quote_char\n> |12| = quote_char\n> |13| = quote_char\n> |14| = quote_char\n> |15| = quote_char\n> |16| = quote_char\n> |17| = quote_char\n> |18| = quote_char\n> |19| = quote_char\n>\n>\n>\n>> If there were anything wrong here, I'm quite sure our testing under\n>> e.g. valgrind would have caught it years ago. However, just to be\n>> sure, I tried adding an Assert that the allocated space is filled\n>> exactly, as attached. It gets through check-world just fine.\n>>\n> It seems to me that only the fast path is being validated by the assert.\n>\nThe assert works fine.\n\nThe only catch is Coverity will continue to report the error.\nBut in this case, the error does not exist and the warning is wrong.\n\nI will remove the patch.\n\nbest regards,\nRanier Vilela\n\nEm qua., 3 de abr. de 2024 às 08:36, Ranier Vilela <[email protected]> escreveu:Em ter., 2 de abr. de 2024 às 18:13, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> While I working in [1], Coverity reported some errors:\n> src/bin/pg_basebackup/pg_createsubscriber.c\n> CID 1542690: (#1 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n> CID 1542690: (#2 of 2): Out-of-bounds access (OVERRUN)\n> alloc_strlen: Allocating insufficient memory for the terminating null of\n> the string. [Note: The source code implementation of the function has been\n> overridden by a builtin model.]\n\nYeah, we saw that in the community run too.  I'm tempted to call it\nan AI hallucination.  
The \"Note\" seems to mean that they're not\nactually analyzing our code but some made-up substitute.Yeah, the message is a little confusing.It seems to me that it is a replacement of the malloc function with its own, with some type of security cookie,like /GS (Buffer Security Check)  \n> The source of errors is the function PQescapeInternal.\n> The slow path has bugs when num_quotes or num_backslashes are greater than\n> zero.\n> For each num_quotes or num_backslahes we need to allocate two more.\n\nNonsense.  The quote or backslash is already counted in input_len,\nso we need to add just one more.Right, the quote or backslash is counted in input_len.In the test I did, the string had 10 quotes, soinput_len had at least 10 bytes for quotes.But we write 20 quotes, in the slow path.Sorry, some kind of brain damage.I ran the test with a debugger, and step by step, the defect does not occur in the section I indicated.Only the exact bytes counted from quote_char and num_backslashes are actually written. \t\t\tif (*s == quote_char || (!as_ident && *s == '\\\\'))\t\t\t\t*rp++ = *s;\t\t\t\t*rp++ = *s;|0| = quote_char\n|1| = quote_char\n\n|2| = quote_char\n\n\n\n\n|3| = quote_char\n\n\n|4| = quote_char\n\n\n|5| = quote_char\n\n\n|6| = quote_char\n\n\n|7| = quote_char\n\n\n|8| = quote_char\n\n\n|9| = quote_char\n\n\n|10| = quote_char\n <--memory overwrite\n\n\n\n\n|11| = quote_char\n\n\n|12| = quote_char\n\n\n|13| = quote_char\n\n\n|14| = quote_char\n\n\n|15| = quote_char\n\n\n|16| = quote_char\n\n\n|17| = quote_char\n\n\n|18| = quote_char\n\n\n|19| = quote_char\n\n\nIf there were anything wrong here, I'm quite sure our testing under\ne.g. valgrind would have caught it years ago.  However, just to be\nsure, I tried adding an Assert that the allocated space is filled\nexactly, as attached.  It gets through check-world just fine.It seems to me that only the fast path is being validated by the assert.The assert works fine.The only catch is Coverity will continue to report the error.But in this case, the error does not exist and the warning is wrong.I will remove the patch.best regards,Ranier Vilela", "msg_date": "Thu, 4 Apr 2024 14:31:59 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix out-of-bounds in the function PQescapeinternal\n (src/interfaces/libpq/fe-exec.c)" } ]
[ { "msg_contents": "This is my first patch, so sorry if I miss something.\n\nIf I have a function which returns lots of columns and any of these columns\nreturns a wrong type it's not easy to see which one is that column because\nit points me only to its position, not its name. So, this patch only adds\nthat column name, just that.\n\ncreate function my_f(a integer, b integer) returns table(first_col integer,\nlots_of_cols_later numeric) language plpgsql as $function$\nbegin\n return query select a, b;\nend;$function$;\n\nselect * from my_f(1,1);\n--ERROR: structure of query does not match function result type\n--Returned type integer does not match expected type numeric in column 2.\n\nFor a function which has just 2 columns is easy but if it returns a hundred\nof columns, which one is that 66th column ?\n\nMy patch just adds column name to that description message.\n--ERROR: structure of query does not match function result type\n--Returned type integer does not match expected type numeric in column 2-\nlots_of_cols_later.\n\nregards\nMarcos", "msg_date": "Sun, 31 Mar 2024 10:22:15 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": true, "msg_subject": "Add column name to error description" }, { "msg_contents": "On 2024-03-31 15:22 +0200, Marcos Pegoraro wrote:\n> This is my first patch, so sorry if I miss something.\n\nPlease make sure that tests are passing by running make check:\nhttps://www.postgresql.org/docs/current/regress-run.html#REGRESS-RUN-TEMP-INST\n\nThe patch breaks src/test/regress/sql/plpgsql.sql at:\n\n\t-- this does not work currently (no implicit casting)\n\tcreate or replace function compos() returns compostype as $$\n\tbegin\n\t return (1, 'hello');\n\tend;\n\t$$ language plpgsql;\n\tselect compos();\n\tserver closed the connection unexpectedly\n\t\tThis probably means the server terminated abnormally\n\t\tbefore or while processing the request.\n\tconnection to server was lost\n\n> If I have a function which returns lots of columns and any of these columns\n> returns a wrong type it's not easy to see which one is that column because\n> it points me only to its position, not its name. 
So, this patch only adds\n> that column name, just that.\n\n+1 for this improvement.\n\n> create function my_f(a integer, b integer) returns table(first_col integer,\n> lots_of_cols_later numeric) language plpgsql as $function$\n> begin\n> return query select a, b;\n> end;$function$;\n> \n> select * from my_f(1,1);\n> --ERROR: structure of query does not match function result type\n> --Returned type integer does not match expected type numeric in column 2.\n> \n> For a function which has just 2 columns is easy but if it returns a hundred\n> of columns, which one is that 66th column ?\n> \n> My patch just adds column name to that description message.\n> --ERROR: structure of query does not match function result type\n> --Returned type integer does not match expected type numeric in column 2-\n> lots_of_cols_later.\n\n> diff --git a/src/backend/access/common/attmap.c b/src/backend/access/common/attmap.c\n> index b0fe27ef57..85f7c0cb8c 100644\n> --- a/src/backend/access/common/attmap.c\n> +++ b/src/backend/access/common/attmap.c\n> @@ -118,12 +118,13 @@ build_attrmap_by_position(TupleDesc indesc,\n> ereport(ERROR,\n> (errcode(ERRCODE_DATATYPE_MISMATCH),\n> errmsg_internal(\"%s\", _(msg)),\n> - errdetail(\"Returned type %s does not match expected type %s in column %d.\",\n> + errdetail(\"Returned type %s does not match expected type %s in column %d-%s.\",\n\nThe format \"%d-%s\" is not ideal. I suggesst \"%d (%s)\".\n\n> format_type_with_typemod(att->atttypid,\n> att->atttypmod),\n> format_type_with_typemod(atttypid,\n> atttypmod),\n> - noutcols)));\n> + noutcols,\n> + att->attname)));\n\nMust be NameStr(att->attname) for use with printf's %s. GCC even prints\na warning:\n\n\tattmap.c:121:60: warning: format ‘%s’ expects argument of type ‘char *’, but argument 5 has type ‘NameData’ {aka ‘struct nameData’} [-Wformat=]\n\n-- \nErik\n\n\n", "msg_date": "Sun, 31 Mar 2024 20:57:57 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add column name to error description" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-03-31 15:22 +0200, Marcos Pegoraro wrote:\n>> This is my first patch, so sorry if I miss something.\n\n> Please make sure that tests are passing by running make check:\n\ncheck-world, in fact.\n\n> The format \"%d-%s\" is not ideal. I suggesst \"%d (%s)\".\n\nI didn't like that either, for two reasons: if we have a column name\nit ought to be the prominent label, but we might not have one if the\nTupleDesc came from some anonymous source (possibly that case explains\nthe test crash? Although I think the attname would be an empty string\nrather than missing entirely in such cases). I think it'd be worth\nproviding two distinct message strings:\n\n\"Returned type %s does not match expected type %s in column \\\"%s\\\" (position %d).\"\n\"Returned type %s does not match expected type %s in column position %d.\"\n\nI'd suggest dropping the column number entirely in the first case,\nwere it not that the attnames might well not be unique if we're\ndealing with an anonymous record type such as a SELECT result.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Mar 2024 15:15:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add column name to error description" }, { "msg_contents": "On Mon, Apr 1, 2024 at 3:15 AM Tom Lane <[email protected]> wrote:\n>\n>\n> > The format \"%d-%s\" is not ideal. 
I suggesst \"%d (%s)\".\n>\n> I didn't like that either, for two reasons: if we have a column name\n> it ought to be the prominent label, but we might not have one if the\n> TupleDesc came from some anonymous source (possibly that case explains\n> the test crash? Although I think the attname would be an empty string\n> rather than missing entirely in such cases). I think it'd be worth\n> providing two distinct message strings:\n>\n> \"Returned type %s does not match expected type %s in column \\\"%s\\\" (position %d).\"\n> \"Returned type %s does not match expected type %s in column position %d.\"\n>\n> I'd suggest dropping the column number entirely in the first case,\n> were it not that the attnames might well not be unique if we're\n> dealing with an anonymous record type such as a SELECT result.\n>\n\nplease check the attached POC, hope the output is what you expected.\nnow we can output these two message.\n> \"Returned type %s does not match expected type %s in column \\\"%s\\\" (position %d).\"\n> \"Returned type %s does not match expected type %s in column position %d.\"\n\n\ncreate type compostype as (x int, y varchar);\ncreate or replace function compos() returns compostype as $$\nbegin return (1, 'hello'); end;\n$$ language plpgsql;\n\nselect compos();\nHEAD error message is\nERROR: returned record type does not match expected record type\nDETAIL: Returned type unknown does not match expected type character\nvarying in column 2.\nCONTEXT: PL/pgSQL function compos() while casting return value to\nfunction's return type\n\nif we print out NameStr(att->attname) then error becomes:\n+DETAIL: Returned type unknown does not match expected type character\nvarying in column \"f2\" (position 2).\n\nIn this case, printing out {column \\\"%s\\\"} is not helpful at all.\n\n---------------case1\ncreate function my_f(a integer, b integer)\nreturns table(first_col integer, lots_of_cols_later numeric) language plpgsql as\n$function$\nbegin\nreturn query select a, b;\nend;\n$function$;\n\n-----------------case2\ncreate or replace function returnsrecord(int) returns record language plpgsql as\n$$ begin return row($1,$1+1); end $$;\n\nselect * from my_f(1,1);\nselect * from returnsrecord(42) as r(x int, y bigint);\n\nIn the first case, we want to print out the column \\\"%s\\\",\nbut in the second case, we don't.\n\nin plpgsql_exec_function\nfirst case, return first tuple values then check tuple attributes\nin the second case, check the tuple attribute error out immediately.\n\nbuild_attrmap_by_position both indesc->tdtypeid = 2249, outdesc->tdtypeid = 2249\nso build_attrmap_by_position itself cannot distinguish these two cases.\n\nTo solve this,\nwe can add a boolean argument to convert_tuples_by_position.\n\n\nAlso this error\nERROR: structure of query does not match function result type\noccurred quite often on the internet, see [1]\nbut there are no tests for it.\nso we can add a test in src/test/regress/sql/plpgsql.sql\n\n[1] https://stackoverflow.com/search?q=structure+of+query+does+not+match+function+result+type", "msg_date": "Fri, 13 Sep 2024 16:02:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add column name to error description" } ]
[ { "msg_contents": "i am getting linking issues when trying to link libpq with my pg extension\nand i am using pg's libpq ,so libpq is built along with pg,so i did this in\nmy extension's cmakelists.txt\n\nfile (GLOB storage_SRC CONFIGURE_DEPENDS \"*.cpp\" )\nadd_library(storage OBJECT ${storage_SRC})\ntarget_link_libraries(storage PRIVATE pq)\n\nbtw included all required include dirs in my toplevel cmakelists.txt\n\nthen i got undefined symbol: pqsecure_write but don't know why if i give\nthe pg_config --libdir/libpq.a path in target_link_libraries instead of lib\nname then it links but walreceiver process cant start and i get FATAL:\ncould not connect to the primary server: libpq is incorrectly linked to\nbackend functions\n\ni am getting linking issues when trying to link libpq with my pg extension and i am using pg's libpq ,so libpq is built along with pg,so i did this in my extension's cmakelists.txtfile (GLOB storage_SRC CONFIGURE_DEPENDS \"*.cpp\" ) add_library(storage OBJECT ${storage_SRC}) target_link_libraries(storage PRIVATE pq) btw included all required include dirs in my toplevel cmakelists.txtthen i got undefined symbol: pqsecure_write but don't know why if i give the pg_config --libdir/libpq.a path in target_link_libraries instead of lib name then it links but walreceiver process cant start and i get FATAL: could not connect to the primary server: libpq is incorrectly linked to backend functions", "msg_date": "Mon, 1 Apr 2024 06:17:33 +0530", "msg_from": "Tony Wayne <[email protected]>", "msg_from_op": true, "msg_subject": "Cant link libpq with a pg extension using cmake" } ]
[ { "msg_contents": "Hi,\n\nWhen attempting to implement a new table access method, I discovered that\nrelation_copy_for_cluster() has the following declaration:\n\n\n void (*relation_copy_for_cluster) (Relation NewTable,\n Relation OldTable,\n Relation OldIndex,\n bool use_sort,\n TransactionId OldestXmin,\n TransactionId *xid_cutoff,\n MultiXactId *multi_cutoff,\n double *num_tuples,\n double *tups_vacuumed,\n double *tups_recently_dead);\n\nIt claims that the first parameter is a new table, and the second one is an\nold table. However, the table_relation_copy_for_cluster() uses the first\nparameter as the old table, and the second as a new table, see below:\n\nstatic inline void\ntable_relation_copy_for_cluster(Relation OldTable, Relation NewTable,\n Relation OldIndex,\n bool use_sort,\n TransactionId OldestXmin,\n TransactionId *xid_cutoff,\n MultiXactId *multi_cutoff,\n double *num_tuples,\n double *tups_vacuumed,\n double *tups_recently_dead)\n{\n OldTable->rd_tableam->relation_copy_for_cluster(OldTable, NewTable, OldIndex,\n use_sort, OldestXmin,\n xid_cutoff, multi_cutoff,\n num_tuples, tups_vacuumed,\n tups_recently_dead);\n}\n\nIt's a bit confusing, so attach a patch to fix this.\n\n--\nRegards,\nJapin Li", "msg_date": "Mon, 01 Apr 2024 16:12:45 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Fix parameters order for relation_copy_for_cluster" }, { "msg_contents": "Hi, Japin!\n\nOn Mon, 1 Apr 2024 at 12:15, Japin Li <[email protected]> wrote:\n\n>\n> Hi,\n>\n> When attempting to implement a new table access method, I discovered that\n> relation_copy_for_cluster() has the following declaration:\n>\n>\n> void (*relation_copy_for_cluster) (Relation NewTable,\n> Relation OldTable,\n> Relation OldIndex,\n> bool use_sort,\n> TransactionId OldestXmin,\n> TransactionId *xid_cutoff,\n> MultiXactId *multi_cutoff,\n> double *num_tuples,\n> double *tups_vacuumed,\n> double *tups_recently_dead);\n>\n> It claims that the first parameter is a new table, and the second one is an\n> old table. However, the table_relation_copy_for_cluster() uses the first\n> parameter as the old table, and the second as a new table, see below:\n>\n> static inline void\n> table_relation_copy_for_cluster(Relation OldTable, Relation NewTable,\n> Relation OldIndex,\n> bool use_sort,\n> TransactionId OldestXmin,\n> TransactionId *xid_cutoff,\n> MultiXactId *multi_cutoff,\n> double *num_tuples,\n> double *tups_vacuumed,\n> double *tups_recently_dead)\n> {\n> OldTable->rd_tableam->relation_copy_for_cluster(OldTable, NewTable,\n> OldIndex,\n> use_sort, OldestXmin,\n> xid_cutoff,\n> multi_cutoff,\n> num_tuples,\n> tups_vacuumed,\n> tups_recently_dead);\n> }\n>\n> It's a bit confusing, so attach a patch to fix this.\n>\n\nI've looked into your patch. All callers of *_relation_copy_for_cluster\nnow use Relation OldTable, Relation NewTable order. It coincides to what is\nexpected by the function, no now code is not broken. The only wrong thing\nis naming of arguments in declaration of this function in tableam.h I think\nthis is a minor oversight from original commit d25f519107b\n\nProvided all the above I'd recommend committing this catch. 
This is for\nclarity only, no changes in code behavior.\n\nThank you for finding this!\n\nBest regards,\nPavel Borisov\nSupabase\n\nHi, Japin!On Mon, 1 Apr 2024 at 12:15, Japin Li <[email protected]> wrote:\r\nHi,\n\r\nWhen attempting to implement a new table access method, I discovered that\r\nrelation_copy_for_cluster() has the following declaration:\n\n\r\n    void        (*relation_copy_for_cluster) (Relation NewTable,\r\n                                              Relation OldTable,\r\n                                              Relation OldIndex,\r\n                                              bool use_sort,\r\n                                              TransactionId OldestXmin,\r\n                                              TransactionId *xid_cutoff,\r\n                                              MultiXactId *multi_cutoff,\r\n                                              double *num_tuples,\r\n                                              double *tups_vacuumed,\r\n                                              double *tups_recently_dead);\n\r\nIt claims that the first parameter is a new table, and the second one is an\r\nold table.  However, the table_relation_copy_for_cluster() uses the first\r\nparameter as the old table, and the second as a new table, see below:\n\r\nstatic inline void\r\ntable_relation_copy_for_cluster(Relation OldTable, Relation NewTable,\r\n                                Relation OldIndex,\r\n                                bool use_sort,\r\n                                TransactionId OldestXmin,\r\n                                TransactionId *xid_cutoff,\r\n                                MultiXactId *multi_cutoff,\r\n                                double *num_tuples,\r\n                                double *tups_vacuumed,\r\n                                double *tups_recently_dead)\r\n{\r\n    OldTable->rd_tableam->relation_copy_for_cluster(OldTable, NewTable, OldIndex,\r\n                                                    use_sort, OldestXmin,\r\n                                                    xid_cutoff, multi_cutoff,\r\n                                                    num_tuples, tups_vacuumed,\r\n                                                    tups_recently_dead);\r\n}\n\r\nIt's a bit confusing, so attach a patch to fix this.I've looked into your patch. All callers of  *_relation_copy_for_cluster now use Relation OldTable, Relation NewTable order. It coincides to what is expected by the function, no now code is not broken. The only wrong thing is naming of arguments in declaration of this function in tableam.h I think this is a minor oversight from original commit d25f519107b Provided all the above I'd recommend committing this catch. This is for clarity only, no changes in code behavior. Thank you for finding this!Best regards,Pavel BorisovSupabase", "msg_date": "Mon, 1 Apr 2024 15:13:05 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix parameters order for relation_copy_for_cluster" }, { "msg_contents": "correction: so now code is not broken\n\n>\n\ncorrection: so now code is not broken", "msg_date": "Mon, 1 Apr 2024 16:11:10 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix parameters order for relation_copy_for_cluster" } ]
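A hedged sketch of what an extension's callback could look like once the parameter naming is clarified (source relation first, destination second), as someone implementing a new table access method would write it. "mytam" and the error message are made-up placeholders, not PostgreSQL code; only the signature mirrors the declaration quoted in the thread.

    #include "postgres.h"

    #include "access/tableam.h"
    #include "utils/rel.h"

    /*
     * Hypothetical relation_copy_for_cluster callback for a custom table AM.
     * A real implementation would be referenced from the AM's TableAmRoutine.
     */
    static void
    mytam_relation_copy_for_cluster(Relation OldTable, Relation NewTable,
                                    Relation OldIndex, bool use_sort,
                                    TransactionId OldestXmin,
                                    TransactionId *xid_cutoff,
                                    MultiXactId *multi_cutoff,
                                    double *num_tuples,
                                    double *tups_vacuumed,
                                    double *tups_recently_dead)
    {
        /*
         * A real implementation would scan OldTable (optionally through
         * OldIndex, or via a sort when use_sort is true) and write the
         * surviving tuples into NewTable, updating the output counters.
         */
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("CLUSTER/VACUUM FULL is not supported by this toy AM")));
    }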
[ { "msg_contents": "Hi hackers,\n\nI'm playing a toy static analysis checker with PostgreSQL and found a\nvariable is missing volatile qualifier.\n\nBest Regards,\nXing", "msg_date": "Mon, 1 Apr 2024 21:44:31 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "[plpython] Add missing volatile qualifier." }, { "msg_contents": "Xing Guo <[email protected]> writes:\n> I'm playing a toy static analysis checker with PostgreSQL and found a\n> variable is missing volatile qualifier.\n\nGood catch! It looks like the consequences of a failure would be\npretty minimal --- AFAICS, no worse than a possible failure to remove\na refcount on Py_None --- but that's still a bug.\n\nI don't care for your proposed fix though. I think the real problem\nhere is schizophrenia about when to set up pltargs, and we could\nfix it more nicely as attached. (Perhaps the Asserts are overkill\nthough.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Apr 2024 11:57:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [plpython] Add missing volatile qualifier." }, { "msg_contents": "On Mon, Apr 01, 2024 at 11:57:07AM -0400, Tom Lane wrote:\n> Xing Guo <[email protected]> writes:\n>> I'm playing a toy static analysis checker with PostgreSQL and found a\n>> variable is missing volatile qualifier.\n> \n> Good catch! It looks like the consequences of a failure would be\n> pretty minimal --- AFAICS, no worse than a possible failure to remove\n> a refcount on Py_None --- but that's still a bug.\n\nHuh. I seem to have dropped that \"volatile\" shortly before committing for\nsome reason [0].\n\n> I don't care for your proposed fix though. I think the real problem\n> here is schizophrenia about when to set up pltargs, and we could\n> fix it more nicely as attached. (Perhaps the Asserts are overkill\n> though.)\n\nYour fix seems fine to me.\n\n[0] https://postgr.es/m/20230504234235.GA2419591%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 13:29:20 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [plpython] Add missing volatile qualifier." }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Apr 01, 2024 at 11:57:07AM -0400, Tom Lane wrote:\n>> Good catch! It looks like the consequences of a failure would be\n>> pretty minimal --- AFAICS, no worse than a possible failure to remove\n>> a refcount on Py_None --- but that's still a bug.\n\n> Huh. I seem to have dropped that \"volatile\" shortly before committing for\n> some reason [0].\n\nOh, I'd forgotten that discussion. Given that we were both confused\nabout the need for it, all the more reason to try to avoid using a\nwithin-PG_TRY assignment.\n\n> Your fix seems fine to me.\n\nThanks for looking, I'll push it shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 14:39:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [plpython] Add missing volatile qualifier." } ]
[ { "msg_contents": "A warning was recently[0] introduced into the Meson build:\n\nWARNING: Project targets '>=0.54' but uses feature introduced in '0.55.0': Passing executable/found program object to script parameter of add_dist_script\n\nThere are 3 way to solve the issue that I have laid out in 3 separate \npatches that you pick at your leisure:\n\n1. version-check.diff\n\nWrap the offending line in a Meson version check.\n\n2. perl-string.diff\n\nPass the perl program as a string via its .path() method.\n\n3. meson-bump.diff\n\nBump the minimum meson version from 0.54 to 0.55, at least.\n\nI chose to do some analysis on option 3. The other 2 options are \nperfectly valid. I looked at all the systems in the buildfarm, and found \nthe following results:\n\nFirst column is the list of systems we test HEAD against in the \nbuildfarm. Second column is the Meson version available to those \nsystems, based on what I could find. Third column is whether or not we \ntest Meson on that system.\n\nSystem Meson Version Tested \nAlmaLinux 8 0.58.2 \nAlmaLinux 9 0.63.3 \nAlpine Linux 3.19.1 1.3.0 \nAmazon Linux 2 0.55.1 \nAmazon Linux 2023 0.62.2 \nArchLinux 1.4.0 \nCentOS 7 0.55.1 \nCentOS 7.1 0.55.1 \nCentOS 7.4 0.55.1 \nCentOS 7.9.2009 0.55.1 \nCentOS Stream 8 0.58.2 \nDebian 10 0.56.1 \nDebian 11 0.56.2 \nDebian 11.5 0.56.2 \nDebian 12 1.0.1 \nDebian 7.0 \nDebian 8.11 \nDebian 9 \nDebian 9.3 \nDebian unstable 1.4.0 x \nDragonFly BSD 6.2.2 0.63.2 \nFedora 38 1.0.1 \nFedora 39 1.3.2 x \nFedora 40 1.3.2 \nFreeBSD 12.2 \nFreeBSD 13.1 0.57.1 \nFreeBSD 14 1.4.0 \nLoongnix-Server 8.4.0 \nmacOS 13.6 \nmacOS 14.0 \nmacOS 14.0 / MacPorts 1.4.0 \nmacOS 14.3 1.4.0 x \nNetBSD 10.0 1.2.3 \nNetBSD 9.2 1.2.3 \nOmniOS / illumos r151038 \nOpenBSD 6.8 \nOpenBSD 6.9 \nOpenBSD 7.3 \nOpenBSD 7.4 1.2.1 \nOpenIndiana/illumos hipster rolling release 1.4.0 \nopenSUSE 15.0 0.46.0 \nopenSUSE 15.3 0.54.2 \nOracle Linux 9 0.63.3 \nPhoton 2.0 \nPhoton 3.0 \nRaspbian 7.8 0.56.2 \nRaspbian 8.0 1.0.1 \nRed Hat Enterprise Linux 7 0.55.1 \nRed Hat Enterprise Linux 7.1 0.55.1 \nRed Hat Enterprise Linux 7.9 0.55.1 \nRed Hat Enterprise Linux 9.2 0.63.3 \nSolaris 11.4.42 CBE 0.59.2 \nSUSE Linux Enterprise Server 12 SP5 \nSUSE Linux Enterprise Server 15 SP2 \nUbuntu 16.04 0.40.1 \nUbuntu 16.04.3 0.40.1 \nUbuntu 18.04 0.45.1 \nUbuntu 18.04.5 0.45.1 \nUbuntu 20.04.6 0.53.2 \nUbuntu 22.04.1 0.61.2 \nUbuntu 22.04.3 0.61.2 \nWindows 10 / Cygwin64 3.4.9 1.2.3 \nWindows / Msys 12.2.0 1.4.0 x \nWindows Server 2016 1.3.1 x \nWindows Server 2019 1.0.1 x \n\nSome notes:\n\n- The minimum Meson version we test against is 1.0.1, on drongo\n- For any RHEL 7 derivatives, you see, I took the EPEL Meson version\n- Debian 10 requires the backports repository to be enabled\n- OmniOS / illumos r151038 has Python 3.9, so could fetch Meson over \n pypi since it isn't packaged\n- Missing information on OpenBSD 6.8, 6.9, and 7.3, but they should have \n at least 0.55.0 available based on release dates\n- The missing macOS versions could definitely run 0.55 either through \n Homebrew, Macports, or PyPI\n- Other systems didn't have easily publicly available information\n\nAt the top of the root meson.build file, there is this comment:\n\n> # We want < 0.56 for python 3.5 compatibility on old platforms. EPEL for\n> # RHEL 7 has 0.55. < 0.54 would require replacing some uses of the fs\n> # module, < 0.53 all uses of fs. 
So far there's no need to go to >=0.56.\n\nSeems like as good an opportunity to bump Meson to 0.56 for Postgres 17, \nwhich I have found to exist in the EPEL for RHEL 7. I don't know what \nversion exists in the base repo. Perhaps it is 0.55, which would still \nget rid of the aforementioned warning.\n\nCommitter, please pick your patch :).\n\n[0]: https://www.postgresql.org/message-id/[email protected]\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 01 Apr 2024 18:46:01 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Silence Meson warning on HEAD" }, { "msg_contents": "On Mon, Apr 01, 2024 at 06:46:01PM -0500, Tristan Partin wrote:\n> 1. version-check.diff\n> \n> Wrap the offending line in a Meson version check.\n> \n> 2. perl-string.diff\n> \n> Pass the perl program as a string via its .path() method.\n> \n> 3. meson-bump.diff\n> \n> Bump the minimum meson version from 0.54 to 0.55, at least.\n\nAmong these three options, 2) seems like the most appealing to me at\nthis stage of the development cycle. Potential breakages and more\nbackpatch fuzz is never really appealing even if meson is a rather new\nthing, particularly considering that there is a simple solution that\ndoes not require a minimal version bump.\n\n> Seems like as good an opportunity to bump Meson to 0.56 for Postgres 17,\n> which I have found to exist in the EPEL for RHEL 7. I don't know what\n> version exists in the base repo. Perhaps it is 0.55, which would still get\n> rid of the aforementioned warning.\n\nPerhaps we could shave even more versions for PG18 if it helps with\nfuture development?\n--\nMichael", "msg_date": "Thu, 4 Apr 2024 09:55:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Silence Meson warning on HEAD" }, { "msg_contents": "On 02.04.24 01:46, Tristan Partin wrote:\n> A warning was recently[0] introduced into the Meson build:\n> \n> WARNING: Project targets '>=0.54' but uses feature introduced in \n> '0.55.0': Passing executable/found program object to script parameter of \n> add_dist_script\n> \n> There are 3 way to solve the issue that I have laid out in 3 separate \n> patches that you pick at your leisure:\n> \n> 1. version-check.diff\n> \n> Wrap the offending line in a Meson version check.\n\nI committed this.\n\n> 2. perl-string.diff\n> \n> Pass the perl program as a string via its .path() method.\n\nThe problem with this is that it doesn't contain any information why the \nworkaround is used, and then if we increase the meson version and clean \nup all the version checks, we won't find this one.\n\n> 3. meson-bump.diff\n> \n> Bump the minimum meson version from 0.54 to 0.55, at least.\n\nI think the beginning of the next dev cycle would be a good time to bump \nthe meson version. For now, I'm content to leave it alone.\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 11:33:17 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Silence Meson warning on HEAD" } ]