[
{
"msg_contents": "Hello list,\n\nI have noticed that the performance during a SELECT COUNT(*) command is\nmuch slower than what the device can provide. Parallel workers improve the\nsituation but for simplicity's sake, I disable parallelism for my\nmeasurements here by setting max_parallel_workers_per_gather to 0.\n\nStrace'ing the postgresql process shows that all reads happen in offset'ed 8KB\nblocks using pread():\n\n pread64(172, ..., 8192, 437370880) = 8192\n\nThe read rate I see on the device is only 10-20 MB/s. My case is special\nthough, as this is on a zstd-compressed btrfs filesystem, on a very fast\n(1GB/s) direct attached storage system. Given the decompression ratio is around\n10x, the above rate corresponds to about 100 to 200 MB/s of data going into the\npostgres process.\n\nCan the 8K block size cause slowdown? Here are my observations:\n\n+ Reading a 1GB postgres file using dd (which uses read() internally) in\n 8K and 32K chunks:\n\n # dd if=4156889.4 of=/dev/null bs=8k\n 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.18829 s, 174 MB/s\n\n # dd if=4156889.4 of=/dev/null bs=8k # 2nd run, data is cached\n 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.287623 s, 3.7 GB/s\n\n # dd if=4156889.8 of=/dev/null bs=32k\n 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.02688 s, 1.0 GB/s\n\n # dd if=4156889.8 of=/dev/null bs=32k # 2nd run, data is cached\n 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.264049 s, 4.1 GB/s\n\n The rates displayed are after decompression (the fs does it\n transparently) and the results have been verified with multiple runs.\n\n Notice that the read rate with bs=8k is 174MB/s (I see ~20MB/s on the\n device), slow and similar to what Postgresql gave us above. 
With bs=32k\n the rate increases to 1GB/s (I see ~80MB/s on the device, but the time\n is very short to register properly).\n\n The cached reads are fast in both cases.\n\nNote that I suspect my setup being related, (btrfs compression behaving\nsuboptimally) since the raw device can give me up to 1GB/s rate. It is however\nevident that reading in bigger chunks would mitigate such setup inefficiencies.\nOn a system that reads are already optimal and the read rate remains the same,\nthen bigger block size would probably reduce the sys time postgresql consumes\nbecause of the fewer system calls.\n\nSo would it make sense for postgres to perform reads in bigger blocks? Is it\neasy-ish to implement (where would one look for that)? Or must the I/O unit be\ntied to postgres' page size?\n\nRegards,\nDimitris\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:28:51 +0200 (CEST)",
"msg_from": "Dimitrios Apostolou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance implications of 8K pread()s"
},
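The dd comparison above comes down to syscall count per byte. A small sketch (Python here purely for illustration; the file and sizes are invented, and the page cache hides the device-level effects, so this only models the syscall overhead the thread is discussing):

```python
import os
import tempfile

def read_all(path, block_size):
    """Read a whole file with offset-based pread() calls of a fixed size,
    the way postgres issues pread64() per 8K block."""
    fd = os.open(path, os.O_RDONLY)
    total = calls = 0
    try:
        while True:
            chunk = os.pread(fd, block_size, total)  # offset-based read
            if not chunk:
                break
            total += len(chunk)
            calls += 1
    finally:
        os.close(fd)
    return total, calls

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (1 << 20))  # 1 MiB stand-in for a table segment
    path = f.name

n8, c8 = read_all(path, 8 * 1024)     # 8 KiB chunks
n32, c32 = read_all(path, 32 * 1024)  # 32 KiB chunks
os.unlink(path)
print(c8, c32)  # 128 32 -- same bytes, 4x fewer system calls
```

Same data either way; the larger block size only cuts the number of kernel crossings, which is the cheap part of the win. The filesystem-side effects (compression extents, request merging) are what made the difference so large in the measurements above.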
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou <[email protected]> wrote:\n> Note that I suspect my setup being related, (btrfs compression behaving\n> suboptimally) since the raw device can give me up to 1GB/s rate. It is however\n> evident that reading in bigger chunks would mitigate such setup inefficiencies.\n> On a system that reads are already optimal and the read rate remains the same,\n> then bigger block size would probably reduce the sys time postgresql consumes\n> because of the fewer system calls.\n\nI don't know about btrfs but maybe it can be tuned to prefetch\nsequential reads better...\n\n> So would it make sense for postgres to perform reads in bigger blocks? Is it\n> easy-ish to implement (where would one look for that)? Or must the I/O unit be\n> tied to postgres' page size?\n\nIt is hard to implement. But people are working on it. One of the\nproblems is that the 8KB blocks that we want to read data into aren't\nnecessarily contiguous so you can't just do bigger pread() calls\nwithout solving a lot more problems first. The project at\nhttps://wiki.postgresql.org/wiki/AIO aims to deal with the\n\"clustering\" you seek plus the \"gathering\" required for non-contiguous\nbuffers by allowing multiple block-sized reads to be prepared and\ncollected on a pending list up to some size that triggers merging and\nsubmission to the operating system at a sensible rate, so we can build\nsomething like a single large preadv() call. In the current\nprototype, if io_method=worker then that becomes a literal preadv()\ncall running in a background \"io worker\" process, but it could also be\nOS-specific stuff (io_uring, ...) that starts an asynchronous IO\ndepending on settings. If you take that branch and run your test you\nshould see 128KB-sized preadv() calls.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 05:12:45 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
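The "scatter" half of what Thomas describes — one system call filling several separate block-sized buffers — can be sketched with `os.preadv` (POSIX/Linux only; data layout and sizes here are invented):

```python
import os
import tempfile

BLOCK = 8192                    # postgres' default block size
pattern = bytes(range(256))

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(pattern * (4 * BLOCK // 256))  # exactly 4 blocks of test data
    path = f.name

fd = os.open(path, os.O_RDONLY)
# Four discontiguous destination buffers, as shared_buffers pages would be:
buffers = [bytearray(BLOCK) for _ in range(4)]
nread = os.preadv(fd, buffers, 0)  # one syscall scatters into all four
os.close(fd)
os.unlink(path)
print(nread)  # 32768
```

This is only the mechanism; the hard part the thread discusses is deciding which block-sized reads to collect and when to submit them.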
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:12 AM Thomas Munro <[email protected]> wrote:\n> \"gathering\"\n\n(Oops, for reads, that's \"scattering\". As in scatter/gather I/O but I\npicked the wrong one...).\n\n\n",
"msg_date": "Wed, 12 Jul 2023 05:22:42 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
{
"msg_contents": "Hello and thanks for the feedback!\n\nOn Wed, 12 Jul 2023, Thomas Munro wrote:\n\n> On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou <[email protected]> wrote:\n>> Note that I suspect my setup being related, (btrfs compression behaving\n>> suboptimally) since the raw device can give me up to 1GB/s rate. It is however\n>> evident that reading in bigger chunks would mitigate such setup inefficiencies.\n>> On a system that reads are already optimal and the read rate remains the same,\n>> then bigger block size would probably reduce the sys time postgresql consumes\n>> because of the fewer system calls.\n>\n> I don't know about btrfs but maybe it can be tuned to prefetch\n> sequential reads better...\n\nI tried a lot to tweak the kernel's block layer read-ahead and to change\ndifferent I/O schedulers, but it made no difference. I'm now convinced\nthat the problem manifests specially on compressed btrfs: the filesystem\ndoesn't do any read-ahed (pre-fetch) so no I/O requests \"merge\" on the\nblock layer.\n\nIostat gives an interesting insight in the above measurements. For both\npostgres doing sequential scan and for dd with bs=8k, the kernel block\nlayer does not appear to merge the I/O requests. 
`iostat -x` shows 16\nsectors average read request size, 0 merged requests, and very high\nreads/s IOPS number.\n\nThe dd commands with bs=32k block size show fewer IOPS on `iostat -x` but\nhigher speed(!), larger average block size and high number of merged\nrequests.\n\nExample output for some random second out of dd bs=8k:\n\n Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz\n sdc 1313.00 20.93 2.00 0.15 0.53 16.32\n\nwith dd bs=32k:\n\n Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz\n sdc 290.00 76.44 4528.00 93.98 1.71 269.92\n\nOn the same filesystem, doing dd bs=8k reads from a file that has not been\ncompressed by the filesystem I get 1GB/s device read throughput!\n\nI sent this feedback to the btrfs list, but got no feedback yet:\n\nhttps://www.spinics.net/lists/linux-btrfs/msg137200.html\n\n>\n>> So would it make sense for postgres to perform reads in bigger blocks? Is it\n>> easy-ish to implement (where would one look for that)? Or must the I/O unit be\n>> tied to postgres' page size?\n>\n> It is hard to implement. But people are working on it. One of the\n> problems is that the 8KB blocks that we want to read data into aren't\n> necessarily contiguous so you can't just do bigger pread() calls\n> without solving a lot more problems first.\n\nThis kind of overhaul is good, but goes much deeper. Same with async I/O\nof course. But what I have in mind should be much simpler (add grains\nof salt since I don't know postgres internals :-)\n\n+ A process wants to read a block from a file\n+ Postgres' buffer cache layer (shared_buffers?) 
looks it up in the cache,\n if not found it passes the request down to\n+ postgres' block layer; it submits an I/O request for 32KB that include\n the 8K block requested; it returns the 32K block to\n+ postgres' buffer cache layer; it stores all 4 blocks read from the disk\n into the buffer cache, and returns only the 1 block requested.\n\nThe danger here is that in random non-contiguous 8K reads, the buffer\ncache gets satsurated by 4x the amount of data because of 32K reads, and\n75% of that data is useless, but may still evict useful data. The answer\nis that is should be marked as unused then (by putting it in front of the\ncache's LRU for example) so that those unused read-ahead pages are re-used\nfor upcoming read-ahead, without evicting too much useful pages.\n\n> The project at\n> https://wiki.postgresql.org/wiki/AIO aims to deal with the\n> \"clustering\" you seek plus the \"gathering\" required for non-contiguous\n> buffers by allowing multiple block-sized reads to be prepared and\n> collected on a pending list up to some size that triggers merging and\n> submission to the operating system at a sensible rate, so we can build\n> something like a single large preadv() call. In the current\n> prototype, if io_method=worker then that becomes a literal preadv()\n> call running in a background \"io worker\" process, but it could also be\n> OS-specific stuff (io_uring, ...) that starts an asynchronous IO\n> depending on settings. If you take that branch and run your test you\n> should see 128KB-sized preadv() calls.\n>\n\nInteresting and kind of sad that the last update on the wiki page is from\n2021. What is the latest prototype? I'm not sure I'm up to the task of\nputting my database to the test. ;-)\n\n\nThanks and regards,\nDimitris",
"msg_date": "Wed, 12 Jul 2023 20:50:20 +0200 (CEST)",
"msg_from": "Dimitrios Apostolou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
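The four-step cluster-read scheme sketched in the message above can be written out as a toy cache (Python, invented names and sizes; this is not how shared_buffers actually works): a miss pulls in a whole 4-block cluster, and the speculative extra blocks are inserted at the eviction end of the LRU so they go first if never touched.

```python
from collections import OrderedDict

CLUSTER = 4  # read 4 x 8K blocks per physical I/O, as proposed above

class ReadAheadCache:
    """Toy buffer cache: a miss reads a whole cluster, caches all of it, but
    places the speculative blocks at the LRU (eviction) end of the order."""
    def __init__(self, capacity, storage):
        self.cap = capacity
        self.store = storage          # block number -> block contents
        self.cache = OrderedDict()    # front = next eviction candidate
        self.ios = 0                  # physical cluster reads issued

    def read(self, blkno):
        if blkno in self.cache:
            self.cache.move_to_end(blkno)          # used: promote to MRU
            return self.cache[blkno]
        base = blkno - blkno % CLUSTER
        self.ios += 1                              # one larger I/O
        for b in range(base, base + CLUSTER):
            if b in self.store and b not in self.cache:
                self.cache[b] = self.store[b]
                self.cache.move_to_end(b, last=False)  # speculative: LRU end
        self.cache.move_to_end(blkno)              # requested block is "used"
        while len(self.cache) > self.cap:
            self.cache.popitem(last=False)         # evict from LRU end
        return self.cache[blkno]

store = {b: f"block{b}" for b in range(16)}
c = ReadAheadCache(capacity=8, storage=store)
for b in range(8):        # a sequential scan of blocks 0..7
    c.read(b)
print(c.ios)  # 2 cluster reads instead of 8 single-block reads
```

For a sequential scan the prefetched blocks are all consumed before eviction; for random reads they sit at the LRU end and are recycled first, which is the mitigation the message proposes.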
{
"msg_contents": "On Thu, Jul 13, 2023 at 6:50 AM Dimitrios Apostolou <[email protected]> wrote:\n> Interesting and kind of sad that the last update on the wiki page is from\n> 2021. What is the latest prototype? I'm not sure I'm up to the task of\n> putting my database to the test. ;-)\n\nIt works pretty well, certainly well enough to try out, and work is\nhappening. I'll try to update the wiki with some more up-to-date\ninformation soon. Basically, compare these two slides (you could also\nlook at slide 11, which is the most most people are probably\ninterested in, but then you can't really see what's going on with\nsystem call-level tools):\n\nhttps://speakerdeck.com/macdice/aio-and-dio-for-postgresql-on-freebsd?slide=7\nhttps://speakerdeck.com/macdice/aio-and-dio-for-postgresql-on-freebsd?slide=9\n\nNot only are the IOs converted into 128KB preadv() calls, they are\nissued concurrently and ahead of time while your backend is chewing on\nthe last lot of pages. So even if your file system completely fails\nat prefetching, we'd have a fighting chance at getting closer to\ndevice/line speed. That's basically what you have to do to support\ndirect I/O, where there is no system-provided prefetching.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 10:32:14 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
{
"msg_contents": "Thanks, it sounds promising! Are the changes in the 16 branch already,\ni.e. is it enough to fetch sources for 16-beta2? If\nso do I just configure --with-liburing (I'm on linux) and run with\nio_method=io_uring? Else, if I use the io_method=worker what is a sensible\namount of worker threads? Should I also set all the flags for direct I/O?\n(io_data_direct=on io_wal_direct=on).\n\n\n\n",
"msg_date": "Mon, 17 Jul 2023 16:42:31 +0200 (CEST)",
"msg_from": "Dimitrios Apostolou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-17 16:42:31 +0200, Dimitrios Apostolou wrote:\n> Thanks, it sounds promising! Are the changes in the 16 branch already,\n> i.e. is it enough to fetch sources for 16-beta2?\n\nNo, this is in a separate branch.\n\nhttps://github.com/anarazel/postgres/tree/aio\n\n\n> If so do I just configure --with-liburing (I'm on linux) and run with\n> io_method=io_uring?\n\nIt's probably worth trying out both io_uring and worker. I've not looked at\nperformance on btrfs. I know that some of the optimized paths for io_uring\n(being able to perform filesystem IO without doing so synchronously in an\nin-kernel thread) require filesystem cooperation, and I do not know how much\nattention btrfs has received for that.\n\n\n> Else, if I use the io_method=worker what is a sensible amount of worker\n> threads?\n\nDepends on your workload :/. If you just want to measure whether it fixes your\nsingle-threaded query execution issue, the default should be just fine.\n\n\n> Should I also set all the flags for direct I/O? (io_data_direct=on\n> io_wal_direct=on).\n\nFWIW, I just pushed a rebased version to the aio branch, and there the config\nfor direct io is\nio_direct = 'data, wal, wal_init'\n(or a subset thereof).\n\n From what I know of btrfs, I don't think you want direct IO though. Possibly\nfor WAL, but definitely not for data. IIRC it currently can cause corruption.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jul 2023 08:34:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
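For reference, the settings Andres names can be collected into a postgresql.conf sketch. These GUCs exist only on the experimental aio branch discussed here, and the names may change in later versions:

```
# postgresql.conf sketch for the experimental aio branch (settings as named
# in this thread only; not valid on released PostgreSQL versions)
io_method = 'worker'                 # or 'io_uring' if built --with-liburing
#io_direct = 'data, wal, wal_init'   # direct I/O; per Andres, avoid 'data'
                                     # on btrfs, where it can cause corruption
```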
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou <[email protected]> wrote:\n> So would it make sense for postgres to perform reads in bigger blocks? Is it\n> easy-ish to implement (where would one look for that)? Or must the I/O unit be\n> tied to postgres' page size?\n\nFYI as of last week we can do a little bit of that on the master branch:\n\npostgres=# select count(*) from t;\n\npreadv(46, ..., 8, 256237568) = 131072\npreadv(46, ..., 5, 256368640) = 131072\npreadv(46, ..., 8, 256499712) = 131072\npreadv(46, ..., 5, 256630784) = 131072\n\npostgres=# set io_combine_limit = '256k';\npostgres=# select count(*) from t;\n\npreadv(47, ..., 5, 613728256) = 262144\npreadv(47, ..., 5, 613990400) = 262144\npreadv(47, ..., 5, 614252544) = 262144\npreadv(47, ..., 5, 614514688) = 262144\n\nHere's hoping the commits implementing this stick, for the PostgreSQL\n17 release. It's just the beginning though, we can only do this for\nfull table scans so far (plus a couple of other obscure places).\nHopefully in the coming year we'll get the \"streaming I/O\" mechanism\nthat powers this hooked up to lots more places... index scans and\nother stuff. And writing. Then eventually pushing the I/O into the\nbackground. Your questions actually triggered us to talk about why we\ncouldn't switch a few things around in our project and get the I/O\ncombining piece done sooner. Thanks!\n\n\n",
"msg_date": "Fri, 12 Apr 2024 17:45:52 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance implications of 8K pread()s"
},
{
"msg_contents": "Exciting! Since I still have the same performance issues on compressed btrfs, I'm looking forward to testing the patches, probably when a 17 Beta is out and I can find binaries on my platform (OpenSUSE). It looks like it will make a huge difference.\n\nThank you for persisting and getting this through.\n\nDimitris\n\n\nOn 12 April 2024 07:45:52 CEST, Thomas Munro <[email protected]> wrote:\n>On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou <[email protected]> wrote:\n>> So would it make sense for postgres to perform reads in bigger blocks? Is it\n>> easy-ish to implement (where would one look for that)? Or must the I/O unit be\n>> tied to postgres' page size?\n>\n>FYI as of last week we can do a little bit of that on the master branch:\n>\n>postgres=# select count(*) from t;\n>\n>preadv(46, ..., 8, 256237568) = 131072\n>preadv(46, ..., 5, 256368640) = 131072\n>preadv(46, ..., 8, 256499712) = 131072\n>preadv(46, ..., 5, 256630784) = 131072\n>\n>postgres=# set io_combine_limit = '256k';\n>postgres=# select count(*) from t;\n>\n>preadv(47, ..., 5, 613728256) = 262144\n>preadv(47, ..., 5, 613990400) = 262144\n>preadv(47, ..., 5, 614252544) = 262144\n>preadv(47, ..., 5, 614514688) = 262144\n>\n>Here's hoping the commits implementing this stick, for the PostgreSQL\n>17 release. It's just the beginning though, we can only do this for\n>full table scans so far (plus a couple of other obscure places).\n>Hopefully in the coming year we'll get the \"streaming I/O\" mechanism\n>that powers this hooked up to lots more places... index scans and\n>other stuff. And writing. Then eventually pushing the I/O into the\n>background. Your questions actually triggered us to talk about why we\n>couldn't switch a few things around in our project and get the I/O\n>combining piece done sooner. Thanks!\n\n\n",
"msg_date": "Fri, 12 Apr 2024 13:12:55 +0200",
"msg_from": "Dimitrios Apostolou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance implications of 8K pread()s"
}
]
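The io_combine_limit behaviour Thomas demonstrates at the end of the thread — runs of adjacent 8K block requests merged into one larger preadv() up to a cap — can be sketched abstractly (Python, invented helper; not the actual PostgreSQL implementation):

```python
BLCKSZ = 8192  # postgres' block size in bytes

def combine(blocks, limit_blocks):
    """Merge sorted block numbers into (start, length) runs, capping each run
    at the combine limit -- each run stands for one large preadv() call."""
    runs = []
    for b in sorted(blocks):
        if runs and b == runs[-1][0] + runs[-1][1] and runs[-1][1] < limit_blocks:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)  # extend current run
        else:
            runs.append((b, 1))                        # start a new run
    return runs

# A sequential scan requesting blocks 0..31 with a 128 KiB combine limit:
reqs = combine(range(32), 128 * 1024 // BLCKSZ)
print(reqs)  # [(0, 16), (16, 16)] -- two 128 KiB reads instead of 32 small ones
```

Non-adjacent blocks break a run, which is why the scatter/gather machinery from the thread is needed to fill non-contiguous buffers from each merged read.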
[
{
"msg_contents": "Hi there,\nI’m on Postgres 13.11 and I'm seeing a situation where an INSERT...SELECT statement seq scans an index, but only when wrapped in a SQL function. When invoked directly (via psql) or when called via a PL/pgSQL function, it only reads the index tuples it needs, resulting in much better performance. I can solve my problem by writing the function in PL/pgSQL, but I'm curious why the pure SQL version behaves the way it does.\n\nHere's my table --\n\n\\d documents\n+-------------------+------------------+----------------------------------------+\n| Column | Type | Modifiers |\n|-------------------+------------------+----------------------------------------|\n| document_id | integer | not null generated always as identity |\n| product_id | integer | not null |\n| units_sold | integer | not null |\n| sale_date | date | not null |\n... some other columns ...\n+-------------------+------------------+----------------------------------------+\n\nCREATE INDEX idx_philip_tmp on documents (document_id, product_id);\n\nHere's the SQL function which will use that index --\n\nCREATE OR REPLACE FUNCTION fn_create_tasks(product_ids int[])\nRETURNS void\nAS $$\n -- Create processing tasks for documents related to these products\n INSERT INTO\n processing_queue (document_id)\n SELECT\n DISTINCT document_id\n FROM\n documents\n JOIN unnest(product_ids::int[]) AS product_id USING (product_id)\n ;\n\n$$ LANGUAGE sql VOLATILE PARALLEL SAFE;\n\n96498 is a product_id that has one associated document_id. When I copy/paste this statement into psql, it executes quickly, and pg_stat_user_indexes.idx_tup_read reports 2 tuples read for the index.\n\nINSERT INTO\n processing_queue (document_id)\nSELECT\n DISTINCT document_id\nFROM\n documents\nJOIN unnest(ARRAY[96498]::int[]) AS product_id USING (product_id)\n;\n\nWhen I copy/paste this into psql, I expect it to perform just as quickly but it does not. 
pg_stat_user_indexes.idx_tup_read reports 64313783 tuples read (which is the entire index).\n\nSELECT fn_create_tasks(ARRAY[96498]::int[])\n\nIf I rewrite fn_create_tasks() in PL/pgSQL, it behaves as I expect (executes quickly, pg_stat_user_indexes.idx_tup_read = 2).\n\nSELECT fn_create_tasks_plpgsql(ARRAY[96498]::int[])\n\nMy rule of thumb is that SQL functions always perform as well as or better than a PL/pgSQL equivalent, but this is a case where that's not true. If anyone can give me some clues as to what's happening here, I'd appreciate it.\n\nThanks\nPhilip\n\n",
"msg_date": "Tue, 11 Jul 2023 12:07:26 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Entire index scanned, but only when in SQL function? "
}
]
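One plausible explanation (not confirmed in the thread) is that inside the non-inlinable SQL function the array is an opaque parameter, so the planner works from a generic row estimate for unnest() rather than the actual one-element array it sees when the statement is typed into psql. A toy cost model with invented numbers shows how a larger row estimate can flip the plan; these are not PostgreSQL's real costs:

```python
# Invented costs, purely illustrative -- not PostgreSQL's actual cost model.
INDEX_PROBE_COST = 4.0   # cost to probe the index for one product_id
SEQ_SCAN_COST = 300.0    # cost to scan the whole index/table once
DEFAULT_SRF_ROWS = 100   # a typical default estimate for a set-returning
                         # function whose argument is an unknown parameter

def pick_plan(estimated_products):
    """Choose the cheaper of repeated index probes vs one full scan."""
    index_cost = estimated_products * INDEX_PROBE_COST
    return "index probes" if index_cost < SEQ_SCAN_COST else "full scan"

print(pick_plan(1))                 # known 1-element array -> "index probes"
print(pick_plan(DEFAULT_SRF_ROWS))  # generic estimate      -> "full scan"
```

PL/pgSQL executes the statement with the parameter value available for a custom plan each call, which would explain why that version probes only the tuples it needs.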
[
{
"msg_contents": "Hello Gents,\n\nI have a few queries regarding the TOAST Fields\nserialisation/deserialization performance.\n\nThe use case i am trying to solve here is to have millions of partitions\nand aggregate the data in array field.\n\nI wish to know if i declare certain column in table as \"array of UDT/JSONB\"\nand enable either lz4 or zstd compression on it, does appending or\nprepending to that array or even changing the intermediate fields of\nUDT/JSONB objects. in that array has a runtime cost of full array data\nde-serialization every single time. If i perform any UPDATE operation on\nits elements or add/remove new elements from any position, does PG rewrites\nthe new version of the column value regardless of its size.\n\nLet me know if more inputs are required\n\n-- \n*Thanks,*\n*Piyush Katariya*\n\nHello Gents,I have a few queries regarding the TOAST Fields serialisation/deserialization performance.The use case i am trying to solve here is to have millions of partitions and aggregate the data in array field.I wish to know if i declare certain column in table as \"array of UDT/JSONB\" and enable either lz4 or zstd compression on it, does appending or prepending to that array or even changing the intermediate fields of UDT/JSONB objects. in that array has a runtime cost of full array data de-serialization every single time. If i perform any UPDATE operation on its elements or add/remove new elements from any position, does PG rewrites the new version of the column value regardless of its size.Let me know if more inputs are required-- Thanks,Piyush Katariya",
"msg_date": "Wed, 26 Jul 2023 18:15:38 +0530",
"msg_from": "Piyush Katariya <[email protected]>",
"msg_from_op": true,
"msg_subject": "TOAST Fields serialisation/deserialization performance"
},
{
"msg_contents": "On Wed, 2023-07-26 at 18:15 +0530, Piyush Katariya wrote:\n> I have a few queries regarding the TOAST Fields serialisation/deserialization performance.\n> \n> The use case i am trying to solve here is to have millions of partitions and aggregate the data in array field.\n> \n> I wish to know if i declare certain column in table as \"array of UDT/JSONB\" and enable\n> either lz4 or zstd compression on it, does appending or prepending to that array or even\n> changing the intermediate fields of UDT/JSONB objects. in that array has a runtime cost\n> of full array data de-serialization every single time. If i perform any UPDATE operation\n> on its elements or add/remove new elements from any position, does PG rewrites the new\n> version of the column value regardless of its size.\n\nUpdating even a small part of a large JSONB value requires that the entire thing is\nread and written, causing a lot of data churn.\n\nThis is inefficient, and you shouldn't use large JSONB values if you plan to do that.\n\nIf the data have a regular structure, use a regular relational data model.\nOtherwise, one idea might be to split the JSONB in several parts and store each\nof those parts in a different table row. That would reduce the impact.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 26 Jul 2023 21:39:25 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST Fields serialisation/deserialization performance"
},
{
"msg_contents": "Thanks for the feedback. Appreciate it.\n\nOn Thu, 27 Jul, 2023, 01:09 Laurenz Albe, <[email protected]> wrote:\n\n> On Wed, 2023-07-26 at 18:15 +0530, Piyush Katariya wrote:\n> > I have a few queries regarding the TOAST Fields\n> serialisation/deserialization performance.\n> >\n> > The use case i am trying to solve here is to have millions of partitions\n> and aggregate the data in array field.\n> >\n> > I wish to know if i declare certain column in table as \"array of\n> UDT/JSONB\" and enable\n> > either lz4 or zstd compression on it, does appending or prepending to\n> that array or even\n> > changing the intermediate fields of UDT/JSONB objects. in that array has\n> a runtime cost\n> > of full array data de-serialization every single time. If i perform any\n> UPDATE operation\n> > on its elements or add/remove new elements from any position, does PG\n> rewrites the new\n> > version of the column value regardless of its size.\n>\n> Updating even a small part of a large JSONB value requires that the entire\n> thing is\n> read and written, causing a lot of data churn.\n>\n> This is inefficient, and you shouldn't use large JSONB values if you plan\n> to do that.\n>\n> If the data have a regular structure, use a regular relational data model.\n> Otherwise, one idea might be to split the JSONB in several parts and store\n> each\n> of those parts in a different table row. That would reduce the impact.\n>\n> Yours,\n> Laurenz Albe\n>\n\nThanks for the feedback. 
Appreciate it.On Thu, 27 Jul, 2023, 01:09 Laurenz Albe, <[email protected]> wrote:On Wed, 2023-07-26 at 18:15 +0530, Piyush Katariya wrote:\n> I have a few queries regarding the TOAST Fields serialisation/deserialization performance.\n> \n> The use case i am trying to solve here is to have millions of partitions and aggregate the data in array field.\n> \n> I wish to know if i declare certain column in table as \"array of UDT/JSONB\" and enable\n> either lz4 or zstd compression on it, does appending or prepending to that array or even\n> changing the intermediate fields of UDT/JSONB objects. in that array has a runtime cost\n> of full array data de-serialization every single time. If i perform any UPDATE operation\n> on its elements or add/remove new elements from any position, does PG rewrites the new\n> version of the column value regardless of its size.\n\nUpdating even a small part of a large JSONB value requires that the entire thing is\nread and written, causing a lot of data churn.\n\nThis is inefficient, and you shouldn't use large JSONB values if you plan to do that.\n\nIf the data have a regular structure, use a regular relational data model.\nOtherwise, one idea might be to split the JSONB in several parts and store each\nof those parts in a different table row. That would reduce the impact.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 27 Jul 2023 01:18:17 +0530",
"msg_from": "Piyush Katariya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TOAST Fields serialisation/deserialization performance"
}
]
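Laurenz's point about whole-value rewrites can be put into rough numbers (Python; the document shape and chunk count are invented for illustration): splitting one large value across several rows bounds how much must be rewritten per update.

```python
import json

doc = {f"key{i}": i for i in range(1000)}     # one large document
whole = len(json.dumps(doc).encode())

# Layout A: a single big JSONB value -- updating any one key rewrites
# (and re-TOASTs) the entire serialized value.
rewritten_single = whole

# Layout B: the same keys split across 10 rows -- only the chunk holding
# the updated key is rewritten.  Worst case is the largest chunk.
keys = list(doc)
chunks = [{k: doc[k] for k in keys[i::10]} for i in range(10)]
rewritten_chunked = max(len(json.dumps(c).encode()) for c in chunks)

print(rewritten_single, rewritten_chunked)
```

The exact savings depend on chunking granularity and TOAST compression, but the write amplification of layout A grows linearly with document size no matter how small the logical change is.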
[
{
"msg_contents": "Hi all\n\nMy colleague and I did some experiments to see what effect using UUIDs as\n2nd-ary indexes has on Index IO. The context is that by default ORM\nframeworks will use UUIDs as index keys which I found as a major factor to\nperformance issues at Celonis. I suspect this isn't specific to Celonis.\nThe secondary factor is that random IO on Azure Single Server can be slow\nas a dog -- thus for large enough indexes that aren't cached, and workloads\ndoing insert/delete at a high enough QPS, this really hurts.\n\nWe found that using UUID v7 (which has a longer time based prefix than v8)\ngave 30% in IO savings in index access and roughly the same in index size\nafter I/D workload. v8 was ~24%. We simulated slow, random IO by running\nthis on a USB key which seemed to match Azure performance pretty well. SSD\nwas maybe 2x better.\nThis is relative to UUID v3 which is essentially random (actually, pretty\ngood random distribution on a 500Gb table).\n\nThis isn't as much as I expected, but, again for large indexes, slow IO, it\nwas significant.\n\n peter\n\nHi allMy colleague and I did some experiments to see what effect using UUIDs as 2nd-ary indexes has on Index IO. The context is that by default ORM frameworks will use UUIDs as index keys which I found as a major factor to performance issues at Celonis. I suspect this isn't specific to Celonis.The secondary factor is that random IO on Azure Single Server can be slow as a dog -- thus for large enough indexes that aren't cached, and workloads doing insert/delete at a high enough QPS, this really hurts.We found that using UUID v7 (which has a longer time based prefix than v8) gave 30% in IO savings in index access and roughly the same in index size after I/D workload. v8 was ~24%. We simulated slow, random IO by running this on a USB key which seemed to match Azure performance pretty well. 
SSD was maybe 2x better.This is relative to UUID v3 which is essentially random (actually, pretty good random distribution on a 500Gb table).This isn't as much as I expected, but, again for large indexes, slow IO, it was significant. peter",
"msg_date": "Sun, 30 Jul 2023 22:48:06 -0600",
"msg_from": "peter plachta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Results of experiments with UUIDv7, UUIDv8"
}
]
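The locality effect behind these numbers can be sketched: a v7-style key puts the timestamp in the most significant bits, so keys generated over time arrive in index order and b-tree inserts cluster on the rightmost pages, while fully random keys scatter across the whole index. A toy generator (hypothetical layout, not RFC 9562's exact bit packing) illustrates the ordering property:

```python
import secrets

def uuid7_like(ts_ms):
    """Hypothetical v7-style 128-bit id: 48-bit millisecond timestamp in the
    high bits, 80 random bits below.  Illustrative only."""
    return ((ts_ms & ((1 << 48) - 1)) << 80) | secrets.randbits(80)

# Keys minted across successive milliseconds sort by creation time...
v7 = [uuid7_like(t) for t in range(1_000_000, 1_000_100)]
# ...while fully random 128-bit keys (v3/v4-like) have no such order.
v4 = [secrets.randbits(128) for _ in range(100)]

print(v7 == sorted(v7))  # True: time-prefixed ids arrive in index order
```

Random locality is why the savings grow with index size relative to cache: once the working set of leaf pages no longer fits, each random insert is a cold page read, while time-ordered inserts keep touching the same hot pages.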
[
{
"msg_contents": "Hi all\n\nBackground is we're trying a pg_repack-like functionality to compact a\n500Gb/145Gb index (x2) table from which we deleted 80% rows. Offline is not\nan option. The table has a moderate (let's say 100QPS) I/D workload running.\n\nThe typical procedure for this type of thing is basically CDC:\n\n1. create 'log' table/create trigger\n2. under SERIALIZABLE: select * from current_table insert into new_table\n\nWhat we're finding is that for the 1st 30 mins the rate is 10Gb/s, then it\ndrops to 1Mb/s and stays there.... and 22 hours later the copy is still\ngoing and now the log table is huge so we know the replay will also take a\nvery long time.\n\n===\n\nQ: what are some ways in which we could optimize the copy?\n\nBtw this is Postgres 9.6\n\n(we tried unlogged table (that did nothing), we tried creating indexes\nafter (that helped), we're experimenting with RRI)\n\nThanks!\n\nHi allBackground is we're trying a pg_repack-like functionality to compact a 500Gb/145Gb index (x2) table from which we deleted 80% rows. Offline is not an option. The table has a moderate (let's say 100QPS) I/D workload running.The typical procedure for this type of thing is basically CDC:1. create 'log' table/create trigger2. under SERIALIZABLE: select * from current_table insert into new_tableWhat we're finding is that for the 1st 30 mins the rate is 10Gb/s, then it drops to 1Mb/s and stays there.... and 22 hours later the copy is still going and now the log table is huge so we know the replay will also take a very long time.===Q: what are some ways in which we could optimize the copy?Btw this is Postgres 9.6(we tried unlogged table (that did nothing), we tried creating indexes after (that helped), we're experimenting with RRI)Thanks!",
"msg_date": "Sun, 30 Jul 2023 23:00:15 -0600",
"msg_from": "peter plachta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table copy with SERIALIZABLE is incredibly slow"
},
{
"msg_contents": "On Sun, 2023-07-30 at 23:00 -0600, peter plachta wrote:\n> Background is we're trying a pg_repack-like functionality to compact a 500Gb/145Gb\n> index (x2) table from which we deleted 80% rows. Offline is not an option. The table\n> has a moderate (let's say 100QPS) I/D workload running.\n> \n> The typical procedure for this type of thing is basically CDC:\n> \n> 1. create 'log' table/create trigger\n> 2. under SERIALIZABLE: select * from current_table insert into new_table\n> \n> What we're finding is that for the 1st 30 mins the rate is 10Gb/s, then it drops to\n> 1Mb/s and stays there.... and 22 hours later the copy is still going and now the log\n> table is huge so we know the replay will also take a very long time.\n> \n> ===\n> \n> Q: what are some ways in which we could optimize the copy?\n> \n> Btw this is Postgres 9.6\n> \n> (we tried unlogged table (that did nothing), we tried creating indexes after\n> (that helped), we're experimenting with RRI)\n\nWhy are you doing this the hard way, when pg_squeeze or pg_repack could do it?\n\nYou definitely should not be using PostgreSQL 9.6 at this time.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 31 Jul 2023 08:30:39 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table copy with SERIALIZABLE is incredibly slow"
}
]
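Besides switching to pg_squeeze/pg_repack as Laurenz suggests, the usual way to keep a long online copy from degrading is to drop the single long-lived snapshot and copy in keyset-paginated batches, each in its own short transaction, replaying the log table afterwards. A sketch with plain lists standing in for tables (batch size and names invented):

```python
def copy_in_batches(src_rows, insert_batch, batch_size=2):
    """Copy rows ordered by primary key, batch_size at a time, remembering
    the last key seen (keyset pagination) instead of holding one giant
    SERIALIZABLE snapshot for the whole copy."""
    ordered = sorted(src_rows)          # stands in for ORDER BY pk
    last_key = None
    copied = []
    while True:
        if last_key is None:
            batch = ordered[:batch_size]
        else:
            # stands in for WHERE pk > last_key ORDER BY pk LIMIT batch_size
            batch = [r for r in ordered if r > last_key][:batch_size]
        if not batch:
            break
        insert_batch(batch)             # one short transaction per batch
        copied.extend(batch)
        last_key = batch[-1]
    return copied

dest = []
out = copy_in_batches({5, 1, 4, 2, 3}, dest.extend)
print(out)  # [1, 2, 3, 4, 5]
```

Short transactions let vacuum keep up and avoid the snapshot-age slowdown; the trigger-maintained log table then reconciles rows changed between batches.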
[
{
"msg_contents": "Hello,\n\nI'm trying to understand a bit of weirdness in a plan output. There is a\nsort node above a sequential scan node where the scan node produces 26,026\nrows yet the sort node above it produces 42,995,408. How is it possible to\nsort more data than you received?\n\nhttps://explain.dalibo.com/plan/1ee665h69f92chc5\n\nThe PostgreSQL version is 14.2 running on Amazon's RDS. Thanks.\n\n\nDane\n\nHello,I'm trying to understand a bit of weirdness in a plan output. There is a sort node above a sequential scan node where the scan node produces 26,026 rows yet the sort node above it produces 42,995,408. How is it possible to sort more data than you received?https://explain.dalibo.com/plan/1ee665h69f92chc5The PostgreSQL version is 14.2 running on Amazon's RDS. Thanks.Dane",
"msg_date": "Fri, 4 Aug 2023 10:59:19 -0400",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plan weirdness. A sort produces more rows than the node beneath it"
},
{
"msg_contents": "On Fri, Aug 4, 2023 at 11:00 AM Dane Foster <[email protected]> wrote:\n\n> Hello,\n>\n> I'm trying to understand a bit of weirdness in a plan output. There is a\n> sort node above a sequential scan node where the scan node produces 26,026\n> rows yet the sort node above it produces 42,995,408. How is it possible\n> to sort more data than you received?\n>\n\nThis is normal for a merge join. For every tie in the first input, the\nqualifying part of the 2nd input must be rescanned, and the rows are\ntallied again (in the sort node) each time they are rescanned.\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Aug 4, 2023 at 11:00 AM Dane Foster <[email protected]> wrote:Hello,I'm trying to understand a bit of weirdness in a plan output. There is a sort node above a sequential scan node where the scan node produces 26,026 rows yet the sort node above it produces 42,995,408. How is it possible to sort more data than you received?This is normal for a merge join. For every tie in the first input, the qualifying part of the 2nd input must be rescanned, and the rows are tallied again (in the sort node) each time they are rescanned. Cheers,Jeff",
"msg_date": "Fri, 4 Aug 2023 11:07:19 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
},
{
"msg_contents": "Thanks for the explanation.\n\n\nDane\n\n\nOn Fri, Aug 4, 2023 at 11:07 AM Jeff Janes <[email protected]> wrote:\n\n> On Fri, Aug 4, 2023 at 11:00 AM Dane Foster <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I'm trying to understand a bit of weirdness in a plan output. There is a\n>> sort node above a sequential scan node where the scan node produces 26,026\n>> rows yet the sort node above it produces 42,995,408. How is it possible\n>> to sort more data than you received?\n>>\n>\n> This is normal for a merge join. For every tie in the first input, the\n> qualifying part of the 2nd input must be rescanned, and the rows are\n> tallied again (in the sort node) each time they are rescanned.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nThanks for the explanation. DaneOn Fri, Aug 4, 2023 at 11:07 AM Jeff Janes <[email protected]> wrote:On Fri, Aug 4, 2023 at 11:00 AM Dane Foster <[email protected]> wrote:Hello,I'm trying to understand a bit of weirdness in a plan output. There is a sort node above a sequential scan node where the scan node produces 26,026 rows yet the sort node above it produces 42,995,408. How is it possible to sort more data than you received?This is normal for a merge join. For every tie in the first input, the qualifying part of the 2nd input must be rescanned, and the rows are tallied again (in the sort node) each time they are rescanned. Cheers,Jeff",
"msg_date": "Fri, 4 Aug 2023 11:09:09 -0400",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
},
{
"msg_contents": "Dane Foster <[email protected]> writes:\n> I'm trying to understand a bit of weirdness in a plan output. There is a\n> sort node above a sequential scan node where the scan node produces 26,026\n> rows yet the sort node above it produces 42,995,408. How is it possible to\n> sort more data than you received?\n\nIf the sort is the inner input to a merge join, this could reflect\nmark-and-restore rescanning of the sort's output. Are there a\nwhole lot of duplicate keys on the merge's other side?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Aug 2023 11:10:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
},
{
"msg_contents": "> If the sort is the inner input to a merge join, this could reflect\n> mark-and-restore rescanning of the sort's output. Are there a\n> whole lot of duplicate keys on the merge's other side?\n\nYes. The course_id column's values repeat a LOT on the merge side.\n\nDane\n\n\nOn Fri, Aug 4, 2023 at 11:10 AM Tom Lane <[email protected]> wrote:\n\n> Dane Foster <[email protected]> writes:\n> > I'm trying to understand a bit of weirdness in a plan output. There is a\n> > sort node above a sequential scan node where the scan node produces\n> 26,026\n> > rows yet the sort node above it produces 42,995,408. How is it possible\n> to\n> > sort more data than you received?\n>\n> If the sort is the inner input to a merge join, this could reflect\n> mark-and-restore rescanning of the sort's output. Are there a\n> whole lot of duplicate keys on the merge's other side?\n>\n> regards, tom lane\n>\n\n> If the sort is the inner input to a merge join, this could reflect\n> mark-and-restore rescanning of the sort's output. Are there a\n> whole lot of duplicate keys on the merge's other side?Yes. The course_id column's values repeat a LOT on the merge side.DaneOn Fri, Aug 4, 2023 at 11:10 AM Tom Lane <[email protected]> wrote:Dane Foster <[email protected]> writes:\n> I'm trying to understand a bit of weirdness in a plan output. There is a\n> sort node above a sequential scan node where the scan node produces 26,026\n> rows yet the sort node above it produces 42,995,408. How is it possible to\n> sort more data than you received?\n\nIf the sort is the inner input to a merge join, this could reflect\nmark-and-restore rescanning of the sort's output. Are there a\nwhole lot of duplicate keys on the merge's other side?\n\n regards, tom lane",
"msg_date": "Fri, 4 Aug 2023 11:15:40 -0400",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
},
{
"msg_contents": "Dane Foster <[email protected]> writes:\n>> If the sort is the inner input to a merge join, this could reflect\n>> mark-and-restore rescanning of the sort's output. Are there a\n>> whole lot of duplicate keys on the merge's other side?\n\n> Yes. The course_id column's values repeat a LOT on the merge side.\n\nHmm. The planner should avoid using a merge join if it knows that\nto be true. Maybe analyze'ing that table would prompt it to use\nsome other join method?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Aug 2023 11:31:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
},
{
"msg_contents": ">\n> Hmm. The planner should avoid using a merge join if it knows that\n> to be true. Maybe analyze'ing that table would prompt it to use\n> some other join method?\n\n\n\nThe planner has updated stats on the table and wants to use a nested loop:\n\nhttps://explain.dalibo.com/plan/3814d5356cc82528\n\n\nBut the nested loop version is around 8 seconds slower so I forced the\nissue. But thanks to this conversation I now understand what's happening\nwith the row count. This understanding helped make the nested loops' plan\neasier to understand. Unfortunately, there doesn't seem to be any hope for\nthe merge join variant in terms of being easily understood. The uninitiated\nsees a scan node and its parent sort node and their brain defaults to\nthinking: the sort node will produce the same number of rows as the node\nfeeding it.\n\n\nCheers,\n\nDane\n\n\n\nHmm. The planner should avoid using a merge join if it knows that\nto be true. Maybe analyze'ing that table would prompt it to use\nsome other join method?The planner has updated stats on the table and wants to use a nested loop:https://explain.dalibo.com/plan/3814d5356cc82528But the nested loop version is around 8 seconds slower so I forced the issue. But thanks to this conversation I now understand what's happening with the row count. This understanding helped make the nested loops' plan easier to understand. Unfortunately, there doesn't seem to be any hope for the merge join variant in terms of being easily understood. The uninitiated sees a scan node and its parent sort node and their brain defaults to thinking: the sort node will produce the same number of rows as the node feeding it.Cheers,Dane",
"msg_date": "Fri, 4 Aug 2023 12:41:28 -0400",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan weirdness. A sort produces more rows than the node beneath\n it"
}
] |
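The mark-and-restore effect described in the thread above can be reproduced with a self-contained toy (the table and column names here are invented for illustration, not the poster's actual schema). With many duplicate join keys on the outer side, the merge join repeatedly restores its position in the sorted inner input, so the Sort node's actual row count exceeds the rows its child produced:

```sql
-- Toy reproduction of "Sort emits more rows than the node beneath it".
CREATE TEMP TABLE outer_side AS
  SELECT (i % 10) AS course_id FROM generate_series(1, 10000) AS i;
CREATE TEMP TABLE inner_side AS
  SELECT (i % 10) AS course_id FROM generate_series(1, 100) AS i;

ANALYZE outer_side;
ANALYZE inner_side;

SET enable_hashjoin = off;  -- leave merge join as the cheapest remaining option
SET enable_nestloop = off;

-- In the resulting plan, the Sort over inner_side reports far more actual
-- rows than the 100 rows the scan beneath it produced, because every tie on
-- the outer side rescans the qualifying run of the sorted inner side.
EXPLAIN ANALYZE
SELECT * FROM outer_side o JOIN inner_side i USING (course_id);
```

The `enable_*` settings only discourage a join method rather than forbid it, which is the same mechanism Dane presumably used to force the merge join variant.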
[
{
"msg_contents": "Hi:\nI have a function, if I call it from DBeaver, it returns within a minute.\n\n\ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP', 'G3XPM6YE2JHMSQA2');\n\n\nbut if I called it from spring jdbc template, it never comes back:\n\n public void runTransform(String proc, InitDataLoadEntity entity) {\n\n log.info(\"Initial data finished data migration for {}, starting transform for {}...\", entity.getOrganizationOid(), proc);\n\n var schema = clientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();\n\n var count = unifiedProfileJdbcTemplate.update(\"call commonhp.\" + proc + \"(?, ?)\", schema, entity.getOrganizationOid());\n\n log.info(\"Initial data finished data migration for {}, end transform for {}, result is {}\", entity.getOrganizationOid(), proc, count);\n\n }\n\n\n\nThe server does show high CPU, the function has mainly just one insert command (batch insert), the target table has 3 FKs.\n\nPlease help.\nThanks\nAndrew\n\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. 
If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.\n\n\n\n\n\n\n\n\n\nHi:\nI have a function, if I call it from DBeaver, it returns within a minute.\n \ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP',\n'G3XPM6YE2JHMSQA2');\n \n \nbut if I called it from spring jdbc template, it never comes back:\n \npublic\nvoid runTransform(String\nproc, InitDataLoadEntity\nentity) {\n \nlog.info(\"Initial data finished data\n migration for {}, starting transform for {}...\",\nentity.getOrganizationOid(),\nproc);\n \nvar\nschema =\nclientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();\n \nvar\ncount =\nunifiedProfileJdbcTemplate.update(\"call commonhp.\"\n + proc +\n\"(?, ?)\",\nschema,\nentity.getOrganizationOid());\n \nlog.info(\"Initial data finished data\n migration for {}, end transform for {}, result is {}\",\nentity.getOrganizationOid(),\nproc,\ncount);\n }\n \n \nThe server does show high CPU, the function has mainly just one insert command (batch insert), the target table has 3 FKs.\n \nPlease help.\nThanks \nAndrew\n\n\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient,\n you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.",
"msg_date": "Tue, 8 Aug 2023 23:07:04 +0000",
"msg_from": "\"An, Hongguo (CORP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Function call very slow from JDBC/java but super fast from DBear"
},
{
"msg_contents": "On Tue, 8 Aug 2023 at 17:07, An, Hongguo (CORP) <[email protected]> wrote:\n\n> Hi:\n>\n> I have a function, if I call it from DBeaver, it returns within a minute.\n>\n>\n>\n> *call* commonhp.run_unified_profile_load_script_work_assignment_details(\n> 'BACDHP', 'G3XPM6YE2JHMSQA2');\n>\n>\n>\n>\n>\n> but if I called it from spring jdbc template, it never comes back:\n>\n> *public* *void* runTransform(String proc, InitDataLoadEntity entity)\n> {\n>\n> *log*.info(\"Initial data finished data migration for {},\n> starting transform for {}...\", entity.getOrganizationOid(), proc);\n>\n> *var* schema = clientDbInfo.getSchema(entity\n> .getOrganizationOid())[1].toUpperCase();\n>\n> *var* count = unifiedProfileJdbcTemplate.update(\"call commonhp.\"\n> + proc + \"(?, ?)\", schema, entity.getOrganizationOid());\n>\n> *log*.info(\"Initial data finished data migration for {}, end\n> transform for {}, result is {}\", entity.getOrganizationOid(), proc, count\n> );\n>\n> }\n>\n>\n>\n>\n>\n> The server does show high CPU, the function has mainly just one insert\n> command (batch insert), the target table has 3 FKs.\n>\n\n\nThe main difference is that we are going to use an unnamed statement to run\nthis.\n\nDo you have server logs to see the statement being executed ?\n\nexplain plan(s)\n\nDave\n\n>\n>\n> Please help.\n>\n> Thanks\n>\n> Andrew\n>\n>\n> This message and any attachments are intended only for the use of the\n> addressee and may contain information that is privileged and confidential.\n> If the reader of the message is not the intended recipient or an authorized\n> representative of the intended recipient, you are hereby notified that any\n> dissemination of this communication is strictly prohibited. 
If you have\n> received this communication in error, notify the sender immediately by\n> return email and delete the message and any attachments from your system.\n>\n\nOn Tue, 8 Aug 2023 at 17:07, An, Hongguo (CORP) <[email protected]> wrote:\n\n\nHi:\nI have a function, if I call it from DBeaver, it returns within a minute.\n \ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP',\n'G3XPM6YE2JHMSQA2');\n \n \nbut if I called it from spring jdbc template, it never comes back:\n \npublic\nvoid runTransform(String\nproc, InitDataLoadEntity\nentity) {\n \nlog.info(\"Initial data finished data\n migration for {}, starting transform for {}...\",\nentity.getOrganizationOid(),\nproc);\n \nvar\nschema =\nclientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();\n \nvar\ncount =\nunifiedProfileJdbcTemplate.update(\"call commonhp.\"\n + proc +\n\"(?, ?)\",\nschema,\nentity.getOrganizationOid());\n \nlog.info(\"Initial data finished data\n migration for {}, end transform for {}, result is {}\",\nentity.getOrganizationOid(),\nproc,\ncount);\n }\n \n \nThe server does show high CPU, the function has mainly just one insert command (batch insert), the target table has 3 FKs.The main difference is that we are going to use an unnamed statement to run this. Do you have server logs to see the statement being executed ?explain plan(s)Dave \n \nPlease help.\nThanks \nAndrew\n\n\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient,\n you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.",
"msg_date": "Wed, 9 Aug 2023 07:29:42 -0600",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function call very slow from JDBC/java but super fast from DBear"
},
{
"msg_contents": "Hi Dave:\nThanks for helping me out.\nHowever, I am not sure what do you mean unnamed statement. Which one is using unnamed statement ?(Dbeaver or JDBC), and if the named statement is good, how can I do it from JDBC?\n\nThe server is in AWS RDS, I don’t see any log, should I reconfig the server to get logs?\nI tried to\nExplain call …, but it said syntax error.\n\nWhen the JDBC is running, I got the pid, is there any way for me to check what is it waiting for? There is no dead lock, but some relationship locks, all granted and no waiting, but why it never comes back?\n\nThanks\nAndrew\n\nFrom: Dave Cramer <[email protected]>\nDate: Wednesday, August 9, 2023 at 6:30 AM\nTo: An, Hongguo (CORP) <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Function call very slow from JDBC/java but super fast from DBear\nWARNING: Do not click links or open attachments unless you recognize the source of the email and know the contents are safe.\n\n________________________________\n\n\nOn Tue, 8 Aug 2023 at 17:07, An, Hongguo (CORP) <[email protected]<mailto:[email protected]>> wrote:\nHi:\nI have a function, if I call it from DBeaver, it returns within a minute.\n\n\ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP', 'G3XPM6YE2JHMSQA2');\n\n\nbut if I called it from spring jdbc template, it never comes back:\n\n public void runTransform(String proc, InitDataLoadEntity entity) {\n\n log.info(\"Initial data finished data migration for {}, starting transform for {}...\", entity.getOrganizationOid(), proc);\n\n var schema = clientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();\n\n var count = unifiedProfileJdbcTemplate.update(\"call commonhp.\" + proc + \"(?, ?)\", schema, entity.getOrganizationOid());\n\n log.info(\"Initial data finished data migration for {}, end transform for {}, result is {}\", entity.getOrganizationOid(), proc, count);\n\n }\n\n\n\nThe server does show high CPU, the function 
has mainly just one insert command (batch insert), the target table has 3 FKs.\n\n\nThe main difference is that we are going to use an unnamed statement to run this.\n\nDo you have server logs to see the statement being executed ?\n\nexplain plan(s)\n\nDave\n\nPlease help.\nThanks\nAndrew\n\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.\n\n\n\n\n\n\n\n\n\nHi Dave:\nThanks for helping me out.\nHowever, I am not sure what do you mean unnamed statement. Which one is using unnamed statement ?(Dbeaver or JDBC), and if the named statement is good, how can I do it from JDBC?\n \nThe server is in AWS RDS, I don’t see any log, should I reconfig the server to get logs?\nI tried to \nExplain call …, but it said syntax error.\n \nWhen the JDBC is running, I got the pid, is there any way for me to check what is it waiting for? 
There is no dead lock, but some relationship locks, all granted and no waiting, but why it never comes back?\n \nThanks\nAndrew\n \n\n\n\nFrom:\nDave Cramer <[email protected]>\nDate: Wednesday, August 9, 2023 at 6:30 AM\nTo: An, Hongguo (CORP) <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Function call very slow from JDBC/java but super fast from DBear\n\n\n\n\n\n\nWARNING: Do not click links or open attachments unless you recognize the source of the email and know the contents are safe.\n\n\n\n\n\n \n\n\n\n\n\n \n\n \n\n\nOn Tue, 8 Aug 2023 at 17:07, An, Hongguo (CORP) <[email protected]> wrote:\n\n\n\n\n\nHi:\nI have a function, if I call it from DBeaver, it returns within a minute.\n \ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP',\n'G3XPM6YE2JHMSQA2');\n \n \nbut if I called it from spring jdbc template, it never comes back:\n \npublic\nvoid runTransform(String\nproc, InitDataLoadEntity\nentity) {\n \nlog.info(\"Initial data finished data\n migration for {}, starting transform for {}...\",\nentity.getOrganizationOid(),\nproc);\n \nvar\nschema =\nclientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();\n \nvar\ncount =\nunifiedProfileJdbcTemplate.update(\"call commonhp.\"\n + proc +\n\"(?, ?)\",\nschema,\nentity.getOrganizationOid());\n \nlog.info(\"Initial data finished data\n migration for {}, end transform for {}, result is {}\",\nentity.getOrganizationOid(),\nproc,\ncount);\n }\n \n \nThe server does show high CPU, the function has mainly just one insert command (batch insert), the target table has 3 FKs.\n\n\n\n\n\n \n\n\n \n\n\nThe main difference is that we are going to use an unnamed statement to run this. 
\n\n\n \n\n\nDo you have server logs to see the statement being executed ?\n\n\n \n\n\nexplain plan(s)\n\n\n \n\n\nDave \n\n\n\n\n\n \nPlease help.\nThanks\n\nAndrew\n\n\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient,\n you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.",
"msg_date": "Wed, 9 Aug 2023 16:50:05 +0000",
"msg_from": "\"An, Hongguo (CORP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Function call very slow from JDBC/java but super fast from DBear"
},
{
"msg_contents": "On Tuesday, August 8, 2023, An, Hongguo (CORP) <[email protected]> wrote:\n\n> Hi:\n>\n> I have a function, if I call it from DBeaver, it returns within a minute.\n>\n> *call* commonhp.run_unified_profile_load_script_work_assignment_details(\n> 'BACDHP', 'G3XPM6YE2JHMSQA2');\n>\n> but if I called it from spring jdbc template, it never comes back:\n>\n\nIf you are passing in exactly the same text values and running this within\nthe same database then executing the call command via any method should\nresult in the same exact outcome in terms of execution. You can get shared\nbuffer cache variations but that should be it. I’d want to exhaust the\npossibility that those preconditions are not met before investigating this\nas some sort of bug. At least, a PostgreSQL bug. I suppose the relative\nnovelty of using call within the JBC driver leaves open the possibility of\nan issue there. Removing the driver by using PREPARE may help to move\nthings forward (as part of a well written bug report). In any case this is\nnot looking to be on-topic for the performance list. Some more research\nand proving out should be done then send it to either the jdbc list or\nmaybe -bugs, if not then -general.\n\nYou may want to install the auto-explain extension to get the inner query\nexplain plan(s) into the logs for examination.\n\nDavid J.\n\nOn Tuesday, August 8, 2023, An, Hongguo (CORP) <[email protected]> wrote:\n\n\nHi:\nI have a function, if I call it from DBeaver, it returns within a minute.\ncall commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP',\n'G3XPM6YE2JHMSQA2'); \nbut if I called it from spring jdbc template, it never comes back:If you are passing in exactly the same text values and running this within the same database then executing the call command via any method should result in the same exact outcome in terms of execution. You can get shared buffer cache variations but that should be it. 
I’d want to exhaust the possibility that those preconditions are not met before investigating this as some sort of bug. At least, a PostgreSQL bug. I suppose the relative novelty of using call within the JBC driver leaves open the possibility of an issue there. Removing the driver by using PREPARE may help to move things forward (as part of a well written bug report). In any case this is not looking to be on-topic for the performance list. Some more research and proving out should be done then send it to either the jdbc list or maybe -bugs, if not then -general.You may want to install the auto-explain extension to get the inner query explain plan(s) into the logs for examination.David J.",
"msg_date": "Wed, 9 Aug 2023 20:33:58 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function call very slow from JDBC/java but super fast from DBear"
}
] |
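The auto_explain suggestion from the last reply can be sketched as follows. This is an illustrative session only: it assumes superuser access for `LOAD` (on RDS, as used here, these settings are normally applied through the instance's parameter group instead), and reuses the procedure call from the opening post:

```sql
-- Log the plan of every statement, including SQL run inside the procedure.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- 0 = log a plan for everything
SET auto_explain.log_nested_statements = on;  -- statements inside functions/procedures
SET auto_explain.log_analyze = on;            -- actual row counts and timings

CALL commonhp.run_unified_profile_load_script_work_assignment_details('BACDHP', 'G3XPM6YE2JHMSQA2');
```

The plans land in the server log, where the inner INSERT's plan can be compared between the fast DBeaver session and the hanging JDBC session.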
[
{
"msg_contents": "Hi list,\n\nI have a queue table with the schema:\n\n```\ncreate table queue(id bigserial primary key, view_time timestamp with\ntimezone not null);\ncreate index queue_view_time ON queue(view_time ASC);\n```\n\nThe most concurrent operation is:\n```\nUPDATE queue SET view_time=view_time+INTERVAL '60 seconds' WHERE id=(\n SELECT id FROM queue WHERE view_time<=now() at time zone 'utc' ORDER BY\nview_time ASC LIMIT 1 FOR UPDATE SKIP LOCKED\n)\n```\nAs you can imagine, with increased concurrency, this query will have to\nread & skip a lot of locked+dead index entries, so taking a lot of cpu-time.\n\nI'm assuming 10K+ queries/second will do the update above and actually\nreturn a row.\nYou may think about how you'll maintain 10K connections, but you can\nincrease the limit, the queries being fast, use a connection pooler, use\nauto-commit, etc.\n\n--------------\n\nSince most of the overhead is in the `queue_view_time` index, I thought of\npartitioning just that with partial indexes and then querying the indexes\nrandomly. This is with 2 partitions:\n\n```\ncreate index queue_view_time_0 ON queue(view_time ASC) WHERE id%2=0;\ncreate index queue_view_time_0 ON queue(view_time ASC) WHERE id%2=1;\n```\nAdding `where id%2=0` to the select query above and trying the partitions\nrandomly until I get a row or searched all partitions.\n\n----------------\nBut looking at the docs\nhttps://www.postgresql.org/docs/current/indexes-partial.html, it says:\n\n> Do Not Use Partial Indexes as a Substitute for Partitioning\n> While a search in this larger index might have to descend through a\ncouple more tree levels than a search in a smaller index, that's almost\ncertainly going to be cheaper than the planner effort needed to select the\nappropriate one of the partial indexes. 
The core of the problem is that the\nsystem does not understand the relationship among the partial indexes, and\nwill laboriously test each one to see if it's applicable to the current\nquery.\n\nWould this be true in my case too?\n\nIs it faster for the planner to select a correct partition(hash\npartitioning on `id` column) instead of a correct partial index like in my\ncase? I don't think I'll need more than ~32 partitions/partial-indexes in\nan extreme scenario.\n\nRegards,\nDorian\n\nHi list,I have a queue table with the schema:```create table queue(id bigserial primary key, view_time timestamp with timezone not null);create index queue_view_time ON queue(view_time ASC);```The most concurrent operation is:```UPDATE queue SET view_time=view_time+INTERVAL '60 seconds' WHERE id=( SELECT id FROM queue WHERE view_time<=now() at time zone 'utc' ORDER BY view_time ASC LIMIT 1 FOR UPDATE SKIP LOCKED)```As you can imagine, with increased concurrency, this query will have to read & skip a lot of locked+dead index entries, so taking a lot of cpu-time.I'm assuming 10K+ queries/second will do the update above and actually return a row.You may think about how you'll maintain 10K connections, but you can increase the limit, the queries being fast, use a connection pooler, use auto-commit, etc.--------------Since most of the overhead is in the `queue_view_time` index, I thought of partitioning just that with partial indexes and then querying the indexes randomly. 
This is with 2 partitions:```create index queue_view_time_0 ON queue(view_time ASC) WHERE id%2=0;create index queue_view_time_0 ON queue(view_time ASC) WHERE id%2=1;```Adding `where id%2=0` to the select query above and trying the partitions randomly until I get a row or searched all partitions.----------------But looking at the docs https://www.postgresql.org/docs/current/indexes-partial.html, it says:> Do Not Use Partial Indexes as a Substitute for Partitioning> While a search in this larger index might have to descend through a couple more tree levels than a search in a smaller index, that's almost certainly going to be cheaper than the planner effort needed to select the appropriate one of the partial indexes. The core of the problem is that the system does not understand the relationship among the partial indexes, and will laboriously test each one to see if it's applicable to the current query.Would this be true in my case too? Is it faster for the planner to select a correct partition(hash partitioning on `id` column) instead of a correct partial index like in my case? I don't think I'll need more than ~32 partitions/partial-indexes in an extreme scenario.Regards,Dorian",
"msg_date": "Thu, 10 Aug 2023 10:36:12 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning update-heavy queue with hash partitions vs partial\n indexes"
},
{
"msg_contents": "On Thu, 10 Aug 2023 at 20:36, Dorian Hoxha <[email protected]> wrote:\n> > Do Not Use Partial Indexes as a Substitute for Partitioning\n> > While a search in this larger index might have to descend through a couple more tree levels than a search in a smaller index, that's almost certainly going to be cheaper than the planner effort needed to select the appropriate one of the partial indexes. The core of the problem is that the system does not understand the relationship among the partial indexes, and will laboriously test each one to see if it's applicable to the current query.\n>\n> Would this be true in my case too?\n\nYes. The process of determining which partial indexes are valid for\nthe given query must consider each index one at a time and validate\nthe index's WHERE clause against the query's WHERE clause to see if it\ncan be used. There is no shortcut that sees you have a series of\npartial indexes with WHERE id % 10 = N; which just picks 1 index\nwithout searching all of them.\n\n> Is it faster for the planner to select a correct partition(hash partitioning on `id` column) instead of a correct partial index like in my case? I don't think I'll need more than ~32 partitions/partial-indexes in an extreme scenario.\n\nI mean, test it and find out, but probably, yes, the partition pruning\ncode for hash partitioning is an O(1) operation and is very fast.\nOnce the given Constants have been hashed, finding the partition is\njust a single divide operation away.\n\nDavid\n\n\n",
"msg_date": "Fri, 11 Aug 2023 16:49:24 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning update-heavy queue with hash partitions vs partial\n indexes"
},
{
"msg_contents": "hi,\nConsider adding id%10 as a new column?you will have one time burden but after creating index on it, update perf will satisfy.\n\nBurcin 📱\n\n> On 11 Aug 2023, at 07:49, David Rowley <[email protected]> wrote:\n> \n> On Thu, 10 Aug 2023 at 20:36, Dorian Hoxha <[email protected]> wrote:\n>>> Do Not Use Partial Indexes as a Substitute for Partitioning\n>>> While a search in this larger index might have to descend through a couple more tree levels than a search in a smaller index, that's almost certainly going to be cheaper than the planner effort needed to select the appropriate one of the partial indexes. The core of the problem is that the system does not understand the relationship among the partial indexes, and will laboriously test each one to see if it's applicable to the current query.\n>> \n>> Would this be true in my case too?\n> \n> Yes. The process of determining which partial indexes are valid for\n> the given query must consider each index one at a time and validate\n> the index's WHERE clause against the query's WHERE clause to see if it\n> can be used. There is no shortcut that sees you have a series of\n> partial indexes with WHERE id % 10 = N; which just picks 1 index\n> without searching all of them.\n> \n>> Is it faster for the planner to select a correct partition(hash partitioning on `id` column) instead of a correct partial index like in my case? I don't think I'll need more than ~32 partitions/partial-indexes in an extreme scenario.\n> \n> I mean, test it and find out, but probably, yes, the partition pruning\n> code for hash partitioning is an O(1) operation and is very fast.\n> Once the given Constants have been hashed, finding the partition is\n> just a single divide operation away.\n> \n> David\n> \n> \n\n\n",
"msg_date": "Fri, 11 Aug 2023 08:13:55 +0300",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Partitioning update-heavy queue with hash partitions vs partial\n indexes"
},
{
"msg_contents": "@David Rowley\nThanks for explaining.\n\n@Burcin\nI'm not too fond of the new column because it introduces a new index that\nneeds to be maintained.\nI could change the index on `view_time` to `(shard_id, view_time)`, but I'm\nafraid that may increase the time to traverse the index (because all\nshard_id will be the same in a query).\nAnd maybe increase index size, but there is index compression for duplicate\nvalues on last versions.\n\nThank you\n\n\nOn Fri, Aug 11, 2023 at 7:14 AM <[email protected]> wrote:\n\n> hi,\n> Consider adding id%10 as a new column?you will have one time burden but\n> after creating index on it, update perf will satisfy.\n>\n> Burcin 📱\n>\n> > On 11 Aug 2023, at 07:49, David Rowley <[email protected]> wrote:\n> >\n> > On Thu, 10 Aug 2023 at 20:36, Dorian Hoxha <[email protected]>\n> wrote:\n> >>> Do Not Use Partial Indexes as a Substitute for Partitioning\n> >>> While a search in this larger index might have to descend through a\n> couple more tree levels than a search in a smaller index, that's almost\n> certainly going to be cheaper than the planner effort needed to select the\n> appropriate one of the partial indexes. The core of the problem is that the\n> system does not understand the relationship among the partial indexes, and\n> will laboriously test each one to see if it's applicable to the current\n> query.\n> >>\n> >> Would this be true in my case too?\n> >\n> > Yes. The process of determining which partial indexes are valid for\n> > the given query must consider each index one at a time and validate\n> > the index's WHERE clause against the query's WHERE clause to see if it\n> > can be used. There is no shortcut that sees you have a series of\n> > partial indexes with WHERE id % 10 = N; which just picks 1 index\n> > without searching all of them.\n> >\n> >> Is it faster for the planner to select a correct partition(hash\n> partitioning on `id` column) instead of a correct partial index like in my\n> case? 
I don't think I'll need more than ~32 partitions/partial-indexes in\n> an extreme scenario.\n> >\n> > I mean, test it and find out, but probably, yes, the partition pruning\n> > code for hash partitioning is an O(1) operation and is very fast.\n> > Once the given Constants have been hashed, finding the partition is\n> > just a single divide operation away.\n> >\n> > David\n> >\n> >\n>\n\n@David RowleyThanks for explaining.@BurcinI'm not too fond of the new column because it introduces a new index that needs to be maintained.I could change the index on `view_time` to `(shard_id, view_time)`, but I'm afraid that may increase the time to traverse the index (because all shard_id will be the same in a query).And maybe increase index size, but there is index compression for duplicate values on last versions.Thank youOn Fri, Aug 11, 2023 at 7:14 AM <[email protected]> wrote:hi,\nConsider adding id%10 as a new column?you will have one time burden but after creating index on it, update perf will satisfy.\n\nBurcin 📱\n\n> On 11 Aug 2023, at 07:49, David Rowley <[email protected]> wrote:\n> \n> On Thu, 10 Aug 2023 at 20:36, Dorian Hoxha <[email protected]> wrote:\n>>> Do Not Use Partial Indexes as a Substitute for Partitioning\n>>> While a search in this larger index might have to descend through a couple more tree levels than a search in a smaller index, that's almost certainly going to be cheaper than the planner effort needed to select the appropriate one of the partial indexes. The core of the problem is that the system does not understand the relationship among the partial indexes, and will laboriously test each one to see if it's applicable to the current query.\n>> \n>> Would this be true in my case too?\n> \n> Yes. The process of determining which partial indexes are valid for\n> the given query must consider each index one at a time and validate\n> the index's WHERE clause against the query's WHERE clause to see if it\n> can be used. 
There is no shortcut that sees you have a series of\n> partial indexes with WHERE id % 10 = N; which just picks 1 index\n> without searching all of them.\n> \n>> Is it faster for the planner to select a correct partition(hash partitioning on `id` column) instead of a correct partial index like in my case? I don't think I'll need more than ~32 partitions/partial-indexes in an extreme scenario.\n> \n> I mean, test it and find out, but probably, yes, the partition pruning\n> code for hash partitioning is an O(1) operation and is very fast.\n> Once the given Constants have been hashed, finding the partition is\n> just a single divide operation away.\n> \n> David\n> \n>",
"msg_date": "Fri, 11 Aug 2023 13:09:57 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning update-heavy queue with hash partitions vs partial\n indexes"
}
] |
[
{
"msg_contents": "I have created a table called _td with about 43 000 rows. I have tried to\nuse this as a primary key id list to delete records from my\nproduct.product_file table, but I could not do it. It uses 100% of one CPU\nand it takes forever. Then I changed the query to delete 100 records only,\nand measure the speed:\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)\n\ndelete from product.product_file where id in (\n\nselect pf2_id from _td limit 100\n\n)\n\nIt still takes 11 seconds. It means 110 msec / record, and that is\nunacceptable.\n\nI'm going to post the whole query plan at the end of this email, but I\nwould like to highlight the \"Triggers\" part:\n\n\n\"Triggers\": [\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_26535\",\n\n\"Constraint Name\": \"fk_pfft_product\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 4.600,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_26837\",\n\n\"Constraint Name\": \"fk_product_file_src\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 5.795,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_75463\",\n\n\"Constraint Name\": \"fk_pfq_src_product_file\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 11179.429,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"_trg_002_aiu_audit_row\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 49.410,\n\n\"Calls\": 90\n\n}\n\n]\n\nIt seems that two foreign key constraints use 10.395 seconds out of the\ntotal 11.24 seconds. 
But I don't see why it takes that much?\n\nThe product.product_file table has 477 000 rows:\n\nCREATE TABLE product.product_file (\n\nid uuid NOT NULL,\n\nc_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\nc_uid uuid NULL,\n\nc_sid uuid NULL,\n\nm_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\nm_uid uuid NULL,\n\nm_sid uuid NULL,\n\nproduct_id uuid NOT NULL,\n\nproduct_file_type_id uuid NOT NULL,\n\nfile_id uuid NOT NULL,\n\nproduct_file_status_id uuid NOT NULL,\n\ndl_url text NULL,\n\nsrc_product_file_id uuid NULL,\n\nCONSTRAINT product_file_pkey PRIMARY KEY (id),\n\nCONSTRAINT fk_pf_file FOREIGN KEY (file_id) REFERENCES media.file(id),\n\nCONSTRAINT fk_pf_file_type FOREIGN KEY (product_file_type_id) REFERENCES\nproduct.product_file_type(id),\n\nCONSTRAINT fk_pf_product FOREIGN KEY (product_id) REFERENCES\nproduct.product(id) ON DELETE CASCADE,\n\nCONSTRAINT fk_product_file_src FOREIGN KEY (src_product_file_id) REFERENCES\nproduct.product_file(id),\n\nCONSTRAINT fk_product_file_status FOREIGN KEY (product_file_status_id)\nREFERENCES product.product_file_status(id)\n\n);\n\nCREATE INDEX idx_product_file_dl_url ON product.product_file USING btree\n(dl_url) INCLUDE (product_id) WHERE (dl_url IS NOT NULL);\n\nCREATE INDEX idx_product_file_file_product ON product.product_file USING\nbtree (file_id, product_id);\n\nCREATE INDEX idx_product_file_product_file ON product.product_file USING\nbtree (product_id, file_id);\n\nCREATE INDEX idx_product_file_src ON product.product_file USING btree\n(src_product_file_id) WHERE (src_product_file_id IS NOT NULL);\n\n\nThe one with fk_pfft_product looks like this, it has about 5000 records in\nit:\n\nCREATE TABLE product.product_file_tag (\n\nid uuid NOT NULL,\n\nc_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\nc_uid uuid NULL,\n\nc_sid uuid NULL,\n\nm_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\nm_uid uuid NULL,\n\nm_sid uuid NULL,\n\nproduct_file_id uuid NOT NULL,\n\nfile_tag_id uuid NOT 
NULL,\n\nCONSTRAINT product_file_tag_pkey PRIMARY KEY (id),\n\nCONSTRAINT fk_pfft_file_tag FOREIGN KEY (file_tag_id) REFERENCES\nproduct.file_tag(id) ON DELETE CASCADE DEFERRABLE,\n\nCONSTRAINT fk_pfft_product FOREIGN KEY (product_file_id) REFERENCES\nproduct.product_file(id) ON DELETE CASCADE DEFERRABLE\n\n);\n\nCREATE UNIQUE INDEX uidx_product_file_file_tag ON product.product_file_tag\nUSING btree (product_file_id, file_tag_id);\n\n\nThe other constraint has zero actual references, this returns zero:\n\nselect count(*) from product.product_file where src_product_file_id in (\n\nselect pf2_id from _td\n\n); -- 0\n\nI was trying to figure out how a foreign key constraint with zero actual\nreferences can cost 100 msec / record, but I failed.\n\nCan somebody please explain what is wrong here?\n\nThe plan is also visualized here:\nhttp://tatiyants.com/pev/#/plans/plan_1692129126258\n\n[\n\n{\n\n\"Plan\": {\n\n\"Node Type\": \"ModifyTable\",\n\n\"Operation\": \"Delete\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Relation Name\": \"product_file\",\n\n\"Schema\": \"product\",\n\n\"Alias\": \"product_file\",\n\n\"Startup Cost\": 4.21,\n\n\"Total Cost\": 840.79,\n\n\"Plan Rows\": 0,\n\n\"Plan Width\": 0,\n\n\"Actual Startup Time\": 0.567,\n\n\"Actual Total Time\": 0.568,\n\n\"Actual Rows\": 0,\n\n\"Actual Loops\": 1,\n\n\"Shared Hit Blocks\": 582,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 10,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0,\n\n\"Plans\": [\n\n{\n\n\"Node Type\": \"Nested Loop\",\n\n\"Parent Relationship\": \"Outer\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Join Type\": \"Inner\",\n\n\"Startup Cost\": 4.21,\n\n\"Total Cost\": 840.79,\n\n\"Plan Rows\": 100,\n\n\"Plan Width\": 46,\n\n\"Actual Startup Time\": 0.161,\n\n\"Actual Total 
Time\": 0.451,\n\n\"Actual Rows\": 90,\n\n\"Actual Loops\": 1,\n\n\"Output\": [\"product_file.ctid\", \"\\\"ANY_subquery\\\".*\"],\n\n\"Inner Unique\": true,\n\n\"Shared Hit Blocks\": 402,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 10,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0,\n\n\"Plans\": [\n\n{\n\n\"Node Type\": \"Aggregate\",\n\n\"Strategy\": \"Hashed\",\n\n\"Partial Mode\": \"Simple\",\n\n\"Parent Relationship\": \"Outer\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Startup Cost\": 3.79,\n\n\"Total Cost\": 4.79,\n\n\"Plan Rows\": 100,\n\n\"Plan Width\": 56,\n\n\"Actual Startup Time\": 0.118,\n\n\"Actual Total Time\": 0.136,\n\n\"Actual Rows\": 100,\n\n\"Actual Loops\": 1,\n\n\"Output\": [\"\\\"ANY_subquery\\\".*\", \"\\\"ANY_subquery\\\".pf2_id\"],\n\n\"Group Key\": [\"\\\"ANY_subquery\\\".pf2_id\"],\n\n\"Planned Partitions\": 0,\n\n\"HashAgg Batches\": 1,\n\n\"Peak Memory Usage\": 32,\n\n\"Disk Usage\": 0,\n\n\"Shared Hit Blocks\": 2,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 0,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0,\n\n\"Plans\": [\n\n{\n\n\"Node Type\": \"Subquery Scan\",\n\n\"Parent Relationship\": \"Outer\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Alias\": \"ANY_subquery\",\n\n\"Startup Cost\": 0.00,\n\n\"Total Cost\": 3.54,\n\n\"Plan Rows\": 100,\n\n\"Plan Width\": 56,\n\n\"Actual Startup Time\": 0.030,\n\n\"Actual Total Time\": 0.083,\n\n\"Actual Rows\": 100,\n\n\"Actual Loops\": 1,\n\n\"Output\": [\"\\\"ANY_subquery\\\".*\", \"\\\"ANY_subquery\\\".pf2_id\"],\n\n\"Shared Hit Blocks\": 2,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied 
Blocks\": 0,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0,\n\n\"Plans\": [\n\n{\n\n\"Node Type\": \"Limit\",\n\n\"Parent Relationship\": \"Subquery\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Startup Cost\": 0.00,\n\n\"Total Cost\": 2.54,\n\n\"Plan Rows\": 100,\n\n\"Plan Width\": 16,\n\n\"Actual Startup Time\": 0.024,\n\n\"Actual Total Time\": 0.053,\n\n\"Actual Rows\": 100,\n\n\"Actual Loops\": 1,\n\n\"Output\": [\"_td.pf2_id\"],\n\n\"Shared Hit Blocks\": 2,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 0,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0,\n\n\"Plans\": [\n\n{\n\n\"Node Type\": \"Seq Scan\",\n\n\"Parent Relationship\": \"Outer\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Relation Name\": \"_td\",\n\n\"Schema\": \"public\",\n\n\"Alias\": \"_td\",\n\n\"Startup Cost\": 0.00,\n\n\"Total Cost\": 1100.07,\n\n\"Plan Rows\": 43307,\n\n\"Plan Width\": 16,\n\n\"Actual Startup Time\": 0.023,\n\n\"Actual Total Time\": 0.042,\n\n\"Actual Rows\": 100,\n\n\"Actual Loops\": 1,\n\n\"Output\": [\"_td.pf2_id\"],\n\n\"Shared Hit Blocks\": 2,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 0,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0\n\n}\n\n]\n\n}\n\n]\n\n}\n\n]\n\n},\n\n{\n\n\"Node Type\": \"Index Scan\",\n\n\"Parent Relationship\": \"Inner\",\n\n\"Parallel Aware\": false,\n\n\"Async Capable\": false,\n\n\"Scan Direction\": \"Forward\",\n\n\"Index Name\": \"product_file_pkey\",\n\n\"Relation Name\": 
\"product_file\",\n\n\"Schema\": \"product\",\n\n\"Alias\": \"product_file\",\n\n\"Startup Cost\": 0.42,\n\n\"Total Cost\": 8.36,\n\n\"Plan Rows\": 1,\n\n\"Plan Width\": 22,\n\n\"Actual Startup Time\": 0.003,\n\n\"Actual Total Time\": 0.003,\n\n\"Actual Rows\": 1,\n\n\"Actual Loops\": 100,\n\n\"Output\": [\"product_file.ctid\", \"product_file.id\"],\n\n\"Index Cond\": \"(product_file.id = \\\"ANY_subquery\\\".pf2_id)\",\n\n\"Rows Removed by Index Recheck\": 0,\n\n\"Shared Hit Blocks\": 400,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 10,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0\n\n}\n\n]\n\n}\n\n]\n\n},\n\n\"Planning\": {\n\n\"Shared Hit Blocks\": 0,\n\n\"Shared Read Blocks\": 0,\n\n\"Shared Dirtied Blocks\": 0,\n\n\"Shared Written Blocks\": 0,\n\n\"Local Hit Blocks\": 0,\n\n\"Local Read Blocks\": 0,\n\n\"Local Dirtied Blocks\": 0,\n\n\"Local Written Blocks\": 0,\n\n\"Temp Read Blocks\": 0,\n\n\"Temp Written Blocks\": 0\n\n},\n\n\"Planning Time\": 0.249,\n\n\"Triggers\": [\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_26535\",\n\n\"Constraint Name\": \"fk_pfft_product\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 4.600,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_26837\",\n\n\"Constraint Name\": \"fk_product_file_src\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 5.795,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"RI_ConstraintTrigger_a_75463\",\n\n\"Constraint Name\": \"fk_pfq_src_product_file\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 11179.429,\n\n\"Calls\": 90\n\n},\n\n{\n\n\"Trigger Name\": \"_trg_002_aiu_audit_row\",\n\n\"Relation\": \"product_file\",\n\n\"Time\": 49.410,\n\n\"Calls\": 90\n\n}\n\n],\n\n\"Execution Time\": 11240.265\n\n}\n\n]\n\nI have created a table called _td with about 43 000 rows. 
I have tried to use this as a primary key id list to delete records from my product.product_file table, but I could not do it. It uses 100% of one CPU and it takes forever. Then I changed the query to delete 100 records only, and measure the speed:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)delete from product.product_file where id in ( select pf2_id from _td limit 100)It still takes 11 seconds. It means 110 msec / record, and that is unacceptable. I'm going to post the whole query plan at the end of this email, but I would like to highlight the \"Triggers\" part: \"Triggers\": [ { \"Trigger Name\": \"RI_ConstraintTrigger_a_26535\", \"Constraint Name\": \"fk_pfft_product\", \"Relation\": \"product_file\", \"Time\": 4.600, \"Calls\": 90 }, { \"Trigger Name\": \"RI_ConstraintTrigger_a_26837\", \"Constraint Name\": \"fk_product_file_src\", \"Relation\": \"product_file\", \"Time\": 5.795, \"Calls\": 90 }, { \"Trigger Name\": \"RI_ConstraintTrigger_a_75463\", \"Constraint Name\": \"fk_pfq_src_product_file\", \"Relation\": \"product_file\", \"Time\": 11179.429, \"Calls\": 90 }, { \"Trigger Name\": \"_trg_002_aiu_audit_row\", \"Relation\": \"product_file\", \"Time\": 49.410, \"Calls\": 90 } ]It seems that two foreign key constraints use 10.395 seconds out of the total 11.24 seconds. 
But I don't see why it takes that much?The product.product_file table has 477 000 rows:CREATE TABLE product.product_file (\tid uuid NOT NULL,\tc_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\tc_uid uuid NULL,\tc_sid uuid NULL,\tm_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\tm_uid uuid NULL,\tm_sid uuid NULL,\tproduct_id uuid NOT NULL,\tproduct_file_type_id uuid NOT NULL,\tfile_id uuid NOT NULL,\tproduct_file_status_id uuid NOT NULL,\tdl_url text NULL,\tsrc_product_file_id uuid NULL, CONSTRAINT product_file_pkey PRIMARY KEY (id), CONSTRAINT fk_pf_file FOREIGN KEY (file_id) REFERENCES media.file(id), CONSTRAINT fk_pf_file_type FOREIGN KEY (product_file_type_id) REFERENCES product.product_file_type(id), CONSTRAINT fk_pf_product FOREIGN KEY (product_id) REFERENCES product.product(id) ON DELETE CASCADE, CONSTRAINT fk_product_file_src FOREIGN KEY (src_product_file_id) REFERENCES product.product_file(id), CONSTRAINT fk_product_file_status FOREIGN KEY (product_file_status_id) REFERENCES product.product_file_status(id));CREATE INDEX idx_product_file_dl_url ON product.product_file USING btree (dl_url) INCLUDE (product_id) WHERE (dl_url IS NOT NULL);CREATE INDEX idx_product_file_file_product ON product.product_file USING btree (file_id, product_id);CREATE INDEX idx_product_file_product_file ON product.product_file USING btree (product_id, file_id);CREATE INDEX idx_product_file_src ON product.product_file USING btree (src_product_file_id) WHERE (src_product_file_id IS NOT NULL);The one with fk_pfft_product looks like this, it has about 5000 records in it:CREATE TABLE product.product_file_tag (\tid uuid NOT NULL,\tc_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\tc_uid uuid NULL,\tc_sid uuid NULL,\tm_tim timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,\tm_uid uuid NULL,\tm_sid uuid NULL,\tproduct_file_id uuid NOT NULL,\tfile_tag_id uuid NOT NULL, CONSTRAINT product_file_tag_pkey PRIMARY KEY (id), CONSTRAINT fk_pfft_file_tag FOREIGN KEY (file_tag_id) REFERENCES 
product.file_tag(id) ON DELETE CASCADE DEFERRABLE, CONSTRAINT fk_pfft_product FOREIGN KEY (product_file_id) REFERENCES product.product_file(id) ON DELETE CASCADE DEFERRABLE);CREATE UNIQUE INDEX uidx_product_file_file_tag ON product.product_file_tag USING btree (product_file_id, file_tag_id);The other constraint has zero actual references, this returns zero:select count(*) from product.product_file where src_product_file_id in ( select pf2_id from _td ); -- 0 I was trying to figure out how a foreign key constraint with zero actual references can cost 100 msec / record, but I failed.Can somebody please explain what is wrong here?The plan is also visualized here: http://tatiyants.com/pev/#/plans/plan_1692129126258[ { \"Plan\": { \"Node Type\": \"ModifyTable\", \"Operation\": \"Delete\", \"Parallel Aware\": false, \"Async Capable\": false, \"Relation Name\": \"product_file\", \"Schema\": \"product\", \"Alias\": \"product_file\", \"Startup Cost\": 4.21, \"Total Cost\": 840.79, \"Plan Rows\": 0, \"Plan Width\": 0, \"Actual Startup Time\": 0.567, \"Actual Total Time\": 0.568, \"Actual Rows\": 0, \"Actual Loops\": 1, \"Shared Hit Blocks\": 582, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 10, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Plans\": [ { \"Node Type\": \"Nested Loop\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": false, \"Async Capable\": false, \"Join Type\": \"Inner\", \"Startup Cost\": 4.21, \"Total Cost\": 840.79, \"Plan Rows\": 100, \"Plan Width\": 46, \"Actual Startup Time\": 0.161, \"Actual Total Time\": 0.451, \"Actual Rows\": 90, \"Actual Loops\": 1, \"Output\": [\"product_file.ctid\", \"\\\"ANY_subquery\\\".*\"], \"Inner Unique\": true, \"Shared Hit Blocks\": 402, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 10, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local 
Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Plans\": [ { \"Node Type\": \"Aggregate\", \"Strategy\": \"Hashed\", \"Partial Mode\": \"Simple\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": false, \"Async Capable\": false, \"Startup Cost\": 3.79, \"Total Cost\": 4.79, \"Plan Rows\": 100, \"Plan Width\": 56, \"Actual Startup Time\": 0.118, \"Actual Total Time\": 0.136, \"Actual Rows\": 100, \"Actual Loops\": 1, \"Output\": [\"\\\"ANY_subquery\\\".*\", \"\\\"ANY_subquery\\\".pf2_id\"], \"Group Key\": [\"\\\"ANY_subquery\\\".pf2_id\"], \"Planned Partitions\": 0, \"HashAgg Batches\": 1, \"Peak Memory Usage\": 32, \"Disk Usage\": 0, \"Shared Hit Blocks\": 2, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Plans\": [ { \"Node Type\": \"Subquery Scan\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": false, \"Async Capable\": false, \"Alias\": \"ANY_subquery\", \"Startup Cost\": 0.00, \"Total Cost\": 3.54, \"Plan Rows\": 100, \"Plan Width\": 56, \"Actual Startup Time\": 0.030, \"Actual Total Time\": 0.083, \"Actual Rows\": 100, \"Actual Loops\": 1, \"Output\": [\"\\\"ANY_subquery\\\".*\", \"\\\"ANY_subquery\\\".pf2_id\"], \"Shared Hit Blocks\": 2, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Plans\": [ { \"Node Type\": \"Limit\", \"Parent Relationship\": \"Subquery\", \"Parallel Aware\": false, \"Async Capable\": false, \"Startup Cost\": 0.00, \"Total Cost\": 2.54, \"Plan Rows\": 100, \"Plan Width\": 16, \"Actual Startup Time\": 0.024, \"Actual Total Time\": 0.053, 
\"Actual Rows\": 100, \"Actual Loops\": 1, \"Output\": [\"_td.pf2_id\"], \"Shared Hit Blocks\": 2, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Plans\": [ { \"Node Type\": \"Seq Scan\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": false, \"Async Capable\": false, \"Relation Name\": \"_td\", \"Schema\": \"public\", \"Alias\": \"_td\", \"Startup Cost\": 0.00, \"Total Cost\": 1100.07, \"Plan Rows\": 43307, \"Plan Width\": 16, \"Actual Startup Time\": 0.023, \"Actual Total Time\": 0.042, \"Actual Rows\": 100, \"Actual Loops\": 1, \"Output\": [\"_td.pf2_id\"], \"Shared Hit Blocks\": 2, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0 } ] } ] } ] }, { \"Node Type\": \"Index Scan\", \"Parent Relationship\": \"Inner\", \"Parallel Aware\": false, \"Async Capable\": false, \"Scan Direction\": \"Forward\", \"Index Name\": \"product_file_pkey\", \"Relation Name\": \"product_file\", \"Schema\": \"product\", \"Alias\": \"product_file\", \"Startup Cost\": 0.42, \"Total Cost\": 8.36, \"Plan Rows\": 1, \"Plan Width\": 22, \"Actual Startup Time\": 0.003, \"Actual Total Time\": 0.003, \"Actual Rows\": 1, \"Actual Loops\": 100, \"Output\": [\"product_file.ctid\", \"product_file.id\"], \"Index Cond\": \"(product_file.id = \\\"ANY_subquery\\\".pf2_id)\", \"Rows Removed by Index Recheck\": 0, \"Shared Hit Blocks\": 400, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 10, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0 } ] } ] }, 
\"Planning\": { \"Shared Hit Blocks\": 0, \"Shared Read Blocks\": 0, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0 }, \"Planning Time\": 0.249, \"Triggers\": [ { \"Trigger Name\": \"RI_ConstraintTrigger_a_26535\", \"Constraint Name\": \"fk_pfft_product\", \"Relation\": \"product_file\", \"Time\": 4.600, \"Calls\": 90 }, { \"Trigger Name\": \"RI_ConstraintTrigger_a_26837\", \"Constraint Name\": \"fk_product_file_src\", \"Relation\": \"product_file\", \"Time\": 5.795, \"Calls\": 90 }, { \"Trigger Name\": \"RI_ConstraintTrigger_a_75463\", \"Constraint Name\": \"fk_pfq_src_product_file\", \"Relation\": \"product_file\", \"Time\": 11179.429, \"Calls\": 90 }, { \"Trigger Name\": \"_trg_002_aiu_audit_row\", \"Relation\": \"product_file\", \"Time\": 49.410, \"Calls\": 90 } ], \"Execution Time\": 11240.265 }]",
"msg_date": "Tue, 15 Aug 2023 22:23:26 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow delete"
},
{
"msg_contents": "Les <[email protected]> writes:\n> It seems that two foreign key constraints use 10.395 seconds out of the\n> total 11.24 seconds. But I don't see why it takes that much?\n\nProbably because you don't have an index on the referencing column.\nYou can get away with that, if you don't care about the speed of\ndeletes from the PK table ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Aug 2023 16:37:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow delete"
},
{
"msg_contents": "Tom Lane <[email protected]> ezt írta (időpont: 2023. aug. 15., K, 22:37):\n\n> Les <[email protected]> writes:\n> > It seems that two foreign key constraints use 10.395 seconds out of the\n> > total 11.24 seconds. But I don't see why it takes that much?\n>\n> Probably because you don't have an index on the referencing column.\n> You can get away with that, if you don't care about the speed of\n> deletes from the PK table ...\n>\n\nFor fk_pfft_product constraint this is true, but I always thought that\nPostgreSQL can use an index \"partially\". There is already an index:\n\nCREATE UNIQUE INDEX uidx_product_file_file_tag ON product.product_file_tag\nUSING btree (product_file_id, file_tag_id);\n\nIt has the same order, only it has one column more. Wouldn't it be possible\nto use it for the plan?\n\nAfter I created these two missing indices:\n\nCREATE INDEX idx_pft_pf ON product.product_file_tag USING btree\n(product_file_id);\n\nCREATE INDEX idx_pfq_src_pf ON product.product_file_queue USING btree\n(src_product_file_id);\n\n\nI could delete all 40 000 records in 10 seconds.\n\nThank you!\n\n Laszlo\n\n>\n>\n\nTom Lane <[email protected]> ezt írta (időpont: 2023. aug. 15., K, 22:37):Les <[email protected]> writes:\n> It seems that two foreign key constraints use 10.395 seconds out of the\n> total 11.24 seconds. But I don't see why it takes that much?\n\nProbably because you don't have an index on the referencing column.\nYou can get away with that, if you don't care about the speed of\ndeletes from the PK table ...For fk_pfft_product constraint this is true, but I always thought that PostgreSQL can use an index \"partially\". There is already an index:CREATE UNIQUE INDEX uidx_product_file_file_tag ON product.product_file_tag USING btree (product_file_id, file_tag_id);It has the same order, only it has one column more. 
Wouldn't it be possible to use it for the plan?After I created these two missing indices:CREATE INDEX idx_pft_pf ON product.product_file_tag USING btree (product_file_id); CREATE INDEX idx_pfq_src_pf ON product.product_file_queue USING btree (src_product_file_id);I could delete all 40 000 records in 10 seconds.Thank you! Laszlo",
"msg_date": "Wed, 16 Aug 2023 06:43:05 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow delete"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 4:23 PM Les <[email protected]> wrote:\n\n{\n>\n> \"Trigger Name\": \"RI_ConstraintTrigger_a_75463\",\n>\n> \"Constraint Name\": \"fk_pfq_src_product_file\",\n>\n> \"Relation\": \"product_file\",\n>\n> \"Time\": 11179.429,\n>\n> \"Calls\": 90\n>\n> },\n>\n...\n\n\n> The one with fk_pfft_product looks like this, it has about 5000 records in\n> it:\n>\n\nThat constraint took essentially no time. You need to look into the one\nthat took all of the time,\nwhich is fk_pfq_src_product_file.\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Aug 15, 2023 at 4:23 PM Les <[email protected]> wrote: { \"Trigger Name\": \"RI_ConstraintTrigger_a_75463\", \"Constraint Name\": \"fk_pfq_src_product_file\", \"Relation\": \"product_file\", \"Time\": 11179.429, \"Calls\": 90 }, ... The one with fk_pfft_product looks like this, it has about 5000 records in it:That constraint took essentially no time. You need to look into the one that took all of the time, which is fk_pfq_src_product_file. Cheers,Jeff",
"msg_date": "Wed, 16 Aug 2023 10:03:49 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow delete"
}
] |
[
{
"msg_contents": "Hello all\n\nWe are developping a software that has a lot of concurrent transactions so\nwe use a lot row locking (SELECT.. FOR SHARE/UPDATE) and we are\nexperiencing high disk write rates on large read queries.\n\nAs I understand the tuple is updated every time a lock is put on it (then\npage becomes dirty and at some point is written to the disk)\nIs there any way to prevent the writing to the disk since this information\nis temporary? Is there any system for caching the rows locks in RAM in\nPostgreSQL? Is increasing shared_buffers a solution since it contains the\ncached rows or is it only a read-purpose caching?\n\nThanks in advance for your knowledge on the subject\n\nHave a nice day\nMartin\n\nHello allWe are developping a software that has a lot of concurrent transactions so we use a lot row locking (SELECT.. FOR SHARE/UPDATE) and we are experiencing high disk write rates on large read queries.As I understand the tuple is updated every time a lock is put on it (then page becomes dirty and at some point is written to the disk)Is there any way to prevent the writing to the disk since this information is temporary? Is there any system for caching the rows locks in RAM in PostgreSQL? Is increasing \nshared_buffers a solution since it contains the cached rows or is it only a read-purpose caching?Thanks in advance for your knowledge on the subjectHave a nice dayMartin",
"msg_date": "Wed, 23 Aug 2023 11:19:02 +0200",
"msg_from": "Martin Querleu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question regarding writes when locking rows"
}
] |
[
{
"msg_contents": "I have this table:\n\nCREATE TABLE media.block (\n\nid uuid NOT NULL,\n\n\"size\" int8 NOT NULL,\n\nnrefs int8 NOT NULL DEFAULT 0,\n\nblock bytea NOT NULL,\n\nhs256 bytea NOT NULL,\n\nCONSTRAINT block_pkey PRIMARY KEY (id),\n\nCONSTRAINT chk_nrefs CHECK ((nrefs >= 0))\n\n)\n\nWITH (\n\ntoast_tuple_target=8160\n\n)\n\nTABLESPACE data_slow\n\n;\n\nalter table media.block alter column block set storage main;\n\nalter table media.block alter column hs256 set storage main;\n\nCREATE INDEX idx_block_unused ON media.block USING btree (id) WHERE (nrefs\n= 0);\n\nCREATE UNIQUE INDEX uidx_block_hs256 ON media.block USING btree (hs256);\n\n\nNumber of rows in this table is about 40M, and most of the rows occupy a\nfull 8K block (in most cases, the \"block\" field contains 7500 bytes).\n\nThe idx_block_unused index should be used to find blocks that are unused,\nso they can be deleted at some point.\n\nThe idx_block_unused index is less than 400MB:\n\n\nSELECT i.relname \"Table Name\",indexrelname \"Index Name\",\n\npg_size_pretty(pg_total_relation_size(relid)) As \"Total Size\",\n\npg_size_pretty(pg_indexes_size(relid)) as \"Total Size of all Indexes\",\n\npg_size_pretty(pg_relation_size(relid)) as \"Table Size\",\n\npg_size_pretty(pg_relation_size(indexrelid)) \"Index Size\",\n\nreltuples::bigint \"Estimated table row count\"\n\nFROM pg_stat_all_indexes i JOIN pg_class c ON i.relid=c.oid\n\nwhere i.relid ='media.block'::regclass\n\n\nTable Name|Index Name |Total Size|Total Size of all Indexes|Table\nSize|Index Size|Estimated table row count|\n----------+----------------+----------+-------------------------+----------+----------+-------------------------+\nblock |block_pkey |352 GB |5584 MB |347 GB\n |1986 MB | 38958848|\nblock |uidx_block_hs256|352 GB |5584 MB |347 GB\n |3226 MB | 38958848|\nblock |idx_block_unused|352 GB |5584 MB |347 GB\n |372 MB | 38958848|\n\nIf I try to select a single unused block this way:\n\nexplain analyze select id from media.block 
b where nrefs =0 limit 1\n\n\nthen it runs for more than 10 minutes (I'm not sure how long, I cancelled\nthe query after 10 minutes).\n\nIf I run this without analyze:\n\nexplain select id from media.block b where nrefs =0 limit 1\n\n\nQUERY PLAN\n |\n-----------------------------------------------------------------------------------------------+\nLimit (cost=0.38..0.76 rows=1 width=16)\n |\n -> Index Only Scan using idx_block_unused on block b (cost=0.38..869.83\nrows=2274 width=16)|\n\nI believe it is not actually using the index, because reading a single\n(random?) entry from an index should not run for >10 minutes.\n\nWhat am I doing wrong?\n\nThank you,\n\n Laszlo\n\nI have this table:CREATE TABLE media.block (\tid uuid NOT NULL, \"size\" int8 NOT NULL,\tnrefs int8 NOT NULL DEFAULT 0,\tblock bytea NOT NULL,\ths256 bytea NOT NULL, CONSTRAINT block_pkey PRIMARY KEY (id), CONSTRAINT chk_nrefs CHECK ((nrefs >= 0)))WITH (\ttoast_tuple_target=8160)TABLESPACE data_slow;alter table media.block alter column block set storage main;alter table media.block alter column hs256 set storage main;CREATE INDEX idx_block_unused ON media.block USING btree (id) WHERE (nrefs = 0);CREATE UNIQUE INDEX uidx_block_hs256 ON media.block USING btree (hs256);Number of rows in this table is about 40M, and most of the rows occupy a full 8K block (in most cases, the \"block\" field contains 7500 bytes).The idx_block_unused index should be used to find blocks that are unused, so they can be deleted at some point.The idx_block_unused index is less than 400MB:SELECT i.relname \"Table Name\",indexrelname \"Index Name\", pg_size_pretty(pg_total_relation_size(relid)) As \"Total Size\", pg_size_pretty(pg_indexes_size(relid)) as \"Total Size of all Indexes\", pg_size_pretty(pg_relation_size(relid)) as \"Table Size\", pg_size_pretty(pg_relation_size(indexrelid)) \"Index Size\", reltuples::bigint \"Estimated table row count\" FROM pg_stat_all_indexes i JOIN pg_class c ON i.relid=c.oid where i.relid 
='media.block'::regclassTable Name|Index Name |Total Size|Total Size of all Indexes|Table Size|Index Size|Estimated table row count|----------+----------------+----------+-------------------------+----------+----------+-------------------------+block |block_pkey |352 GB |5584 MB |347 GB |1986 MB | 38958848|block |uidx_block_hs256|352 GB |5584 MB |347 GB |3226 MB | 38958848|block |idx_block_unused|352 GB |5584 MB |347 GB |372 MB | 38958848|If I try to select a single unused block this way:explain analyze select id from media.block b where nrefs =0 limit 1then it runs for more than 10 minutes (I'm not sure how long, I cancelled the query after 10 minutes).If I run this without analyze:explain select id from media.block b where nrefs =0 limit 1QUERY PLAN |-----------------------------------------------------------------------------------------------+Limit (cost=0.38..0.76 rows=1 width=16) | -> Index Only Scan using idx_block_unused on block b (cost=0.38..869.83 rows=2274 width=16)|I believe it is not actually using the index, because reading a single (random?) entry from an index should not run for >10 minutes.What am I doing wrong?Thank you, Laszlo",
"msg_date": "Sun, 27 Aug 2023 13:58:19 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query, possibly not using index"
},
{
"msg_contents": "Les <[email protected]> writes:\n> If I try to select a single unused block this way:\n> explain analyze select id from media.block b where nrefs =0 limit 1\n> then it runs for more than 10 minutes (I'm not sure how long, I cancelled\n> the query after 10 minutes).\n\nAre you sure it isn't blocked on a lock?\n\nAnother theory is that the index contains many thousands of references\nto now-dead rows, and the query is vainly searching for a live entry.\nGiven that EXPLAIN thinks there are only about 2300 live entries,\nand yet you say the index is 400MB, this seems pretty plausible.\nHave you disabled autovacuum, or something like that? (REINDEX\ncould help here, at least till the index gets bloated again.)\n\nYou might think that even so, it shouldn't take that long ... but\nindexes on UUID columns are a well known performance antipattern.\nThe index entry order is likely to have precisely zip to do with\nthe table's physical order, resulting in exceedingly-random access\nto the table, which'll be horribly expensive when the table is so\nmuch bigger than RAM. Can you replace the UUID column with a simple\nserial (identity) column?\n\n> I believe it is not actually using the index, because reading a single\n> (random?) entry from an index should not run for >10 minutes.\n\nYou should believe what EXPLAIN tells you about the plan shape.\n(Its rowcount estimates are only estimates, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Aug 2023 09:27:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": "> > If I try to select a single unused block this way:\n> > explain analyze select id from media.block b where nrefs =0 limit 1\n> > then it runs for more than 10 minutes (I'm not sure how long, I cancelled\n> > the query after 10 minutes).\n>\n> Are you sure it isn't blocked on a lock?\n>\n\nYes, I'm sure. I have created a single database instance from a zfs\nsnapshot and tried the query on that database. It was the only client.\n\n\n> Another theory is that the index contains many thousands of references\n> to now-dead rows, and the query is vainly searching for a live entry.\n> Given that EXPLAIN thinks there are only about 2300 live entries,\n> and yet you say the index is 400MB, this seems pretty plausible.\n>\n\nNobody ever deleted anything from this table. Since it was created, this\nhas been a write-only table.\n\n\n> Have you disabled autovacuum, or something like that? (REINDEX\n> could help here, at least till the index gets bloated again.)\n>\nI did not disable autovacuum.\n\n>\n> You might think that even so, it shouldn't take that long ... but\n> indexes on UUID columns are a well known performance antipattern.\n> The index entry order is likely to have precisely zip to do with\n> the table's physical order, resulting in exceedingly-random access\n> to the table, which'll be horribly expensive when the table is so\n> much bigger than RAM. Can you replace the UUID column with a simple\n> serial (identity) column?\n>\n\nI'm aware of the problems with random UUID values. I was using this\nfunction to create ulids from the beginning:\n\nCREATE OR REPLACE FUNCTION public.gen_ulid()\n\nRETURNS uuid\n\nLANGUAGE sql\n\nAS $function$\n\nSELECT (lpad(to_hex(floor(extract(epoch FROM clock_timestamp()) * 1000)::\nbigint), 12, '0') || encode(gen_random_bytes(10), 'hex'))::uuid;\n\n$function$\n\n;\n\n\nIf I order some rows by id values, I can see that their creation times are\nstrictly ascending. 
I did not write this function, it was taken from this\nwebsite:\n\nhttps://blog.daveallie.com/ulid-primary-keys\n\nThey have a benchmark section where they show that these ULID values are\nslower to generate (at least with this implementation) but much faster to\ninsert.\n\nI might be able to replace these with int8 values, I need to check.\n\n\n>\n> > I believe it is not actually using the index, because reading a single\n> > (random?) entry from an index should not run for >10 minutes.\n>\n> You should believe what EXPLAIN tells you about the plan shape.\n> (Its rowcount estimates are only estimates, though.)\n>\n\nAll of the 40M rows in this table are live. I'm 100% sure about this,\nbecause nobody ever deleted rows from this table.\n\nI can try to do VACUUM on this table, but I'm limited on resources. I think\nit will take days to do this. Maybe I can try to dump the whole database\nand restore it on another machine. Would that eliminate dead rows? (Is\nthere a way to check the number of dead rows?)\n\nRegards,\n\n Laszlo\n\n> If I try to select a single unused block this way:\n> explain analyze select id from media.block b where nrefs =0 limit 1\n> then it runs for more than 10 minutes (I'm not sure how long, I cancelled\n> the query after 10 minutes).\n\nAre you sure it isn't blocked on a lock?Yes, I'm sure. I have created a single database instance from a zfs snapshot and tried the query on that database. It was the only client.\n\nAnother theory is that the index contains many thousands of references\nto now-dead rows, and the query is vainly searching for a live entry.\nGiven that EXPLAIN thinks there are only about 2300 live entries,\nand yet you say the index is 400MB, this seems pretty plausible.Nobody ever deleted anything from this table. Since it was created, this has been a write-only table. \nHave you disabled autovacuum, or something like that? (REINDEX\ncould help here, at least till the index gets bloated again.)I did not disable autovacuum. 
\n\nYou might think that even so, it shouldn't take that long ... but\nindexes on UUID columns are a well known performance antipattern.\nThe index entry order is likely to have precisely zip to do with\nthe table's physical order, resulting in exceedingly-random access\nto the table, which'll be horribly expensive when the table is so\nmuch bigger than RAM. Can you replace the UUID column with a simple\nserial (identity) column?I'm aware of the problems with random UUID values. I was using this function to create ulids from the beginning:CREATE OR REPLACE FUNCTION public.gen_ulid() RETURNS uuid LANGUAGE sqlAS $function$ SELECT (lpad(to_hex(floor(extract(epoch FROM clock_timestamp()) * 1000)::bigint), 12, '0') || encode(gen_random_bytes(10), 'hex'))::uuid;$function$;If I order some rows by id values, I can see that their creation times are strictly ascending. I did not write this function, it was taken from this website:https://blog.daveallie.com/ulid-primary-keysThey have a benchmark section where they show that these ULID values are slower to generate (at least with this implementation) but much faster to insert. I might be able to replace these with int8 values, I need to check. \n\n> I believe it is not actually using the index, because reading a single\n> (random?) entry from an index should not run for >10 minutes.\n\nYou should believe what EXPLAIN tells you about the plan shape.\n(Its rowcount estimates are only estimates, though.)All of the 40M rows in this table are live. I'm 100% sure about this, because nobody ever deleted rows from this table.I can try to do VACUUM on this table, but I'm limited on resources. I think it will take days to do this. Maybe I can try to dump the whole database and restore it on another machine. Would that eliminate dead rows? (Is there a way to check the number of dead rows?)Regards, Laszlo",
"msg_date": "Sun, 27 Aug 2023 19:39:40 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query, possibly not using index"
},
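The timestamp-prefixed UUID scheme described in the message above can be sketched in Python — a hypothetical re-implementation of the quoted SQL `gen_ulid()`, useful only for checking the ordering property it claims:

```python
import os
import time
import uuid

def gen_ulid_uuid() -> uuid.UUID:
    # 48-bit millisecond epoch timestamp, zero-padded to 12 hex chars,
    # followed by 80 random bits (20 hex chars) -> 32 hex chars total,
    # mirroring the SQL gen_ulid() quoted above.
    ts_hex = format(int(time.time() * 1000), "012x")
    return uuid.UUID(hex=ts_hex + os.urandom(10).hex())

a = gen_ulid_uuid()
time.sleep(0.002)          # force a later millisecond
b = gen_ulid_uuid()
assert a.hex < b.hex       # later ids sort later, so b-tree inserts are append-mostly
```

Because the high bits are a timestamp, index order correlates with creation time — which is exactly why the follow-up in this thread asks whether index order also correlates with nrefs.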
{
"msg_contents": "> Nobody ever deleted anything from this table. Since it was created, this\nhas been a write-only table.\n\ndoes write-only include updates? that would create the dead rows tom is\nreferring to.\n\n> I believe it is not actually using the index, because reading a single\n(random?) entry from an index should not run for >10 minutes.\n\nit is using the index. You can disable it, and see how long your query\ntakes (\"set enable_indexonlyscan = off\", optionally enable_indexscan too),\nchances are it will take even longer. If somehow it does not, by luck, then\nyou no vacuum has run for a while, for some reason (perhaps canceled due to\nfrequent table activity?). That too you can check.\n\n>\n\n> Nobody ever deleted anything from this table. Since it was created, this has been a write-only table.does write-only include updates? that would create the dead rows tom is referring to. > I believe it is not actually using the index, because reading a single (random?) entry from an index should not run for >10 minutes.it is using the index. You can disable it, and see how long your query takes (\"set enable_indexonlyscan = off\", optionally enable_indexscan too), chances are it will take even longer. If somehow it does not, by luck, then you no vacuum has run for a while, for some reason (perhaps canceled due to frequent table activity?). That too you can check.",
"msg_date": "Sun, 27 Aug 2023 14:54:02 -0400",
"msg_from": "Wael Khobalatte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": "Les <[email protected]> writes:\n>>> If I try to select a single unused block this way:\n>>> explain analyze select id from media.block b where nrefs =0 limit 1\n>>> then it runs for more than 10 minutes (I'm not sure how long, I cancelled\n>>> the query after 10 minutes).\n\n>> You might think that even so, it shouldn't take that long ... but\n>> indexes on UUID columns are a well known performance antipattern.\n\n> I'm aware of the problems with random UUID values. I was using this\n> function to create ulids from the beginning:\n\nOh, well that would have been useful information to provide at the\noutset. Now that we know the index order is correlated with creation\ntime, I wonder if it is also correlated with nrefs, in such a way that\nscanning in index order is disastrous because all the desired rows are\nat the end of the index.\n\nAlso, you deny deleting any rows, but that wasn't the point. Do you\never update nrefs from zero to nonzero? That'd also leave dead\nentries behind in this index. If that is a routine event that is\ncorrelated with creation time, it gets easier to believe that your\nindex could have lots of dead entries at the front.\n\nWe'd still have to figure out why autovacuum is failing to clean out\nthose entries in a timely fashion, but that seems like a plausible\nway for the performance problem to exist.\n\n> I can try to do VACUUM on this table, but I'm limited on resources. I think\n> it will take days to do this. Maybe I can try to dump the whole database\n> and restore it on another machine.\n\nPretty hard to believe that dump-and-restore would be faster than\nVACUUM.\n\n> (Is there a way to check the number of dead rows?)\n\nI think contrib/pgstattuple might help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Aug 2023 19:34:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": ">\n>\n>\n> > I'm aware of the problems with random UUID values. I was using this\n> > function to create ulids from the beginning:\n>\n> Oh, well that would have been useful information to provide at the\n> outset.\n\nI'm sorry, I left this out.\n\n> Now that we know the index order is correlated with creation\n> time, I wonder if it is also correlated with nrefs, in such a way that\n> scanning in index order is disastrous because all the desired rows are\n> at the end of the index.\n>\nPossibly, I have no idea.\n\n>\n> Also, you deny deleting any rows, but that wasn't the point. Do you\n> ever update nrefs from zero to nonzero? That'd also leave dead\n> entries behind in this index. If that is a routine event that is\n> correlated with creation time, it gets easier to believe that your\n> index could have lots of dead entries at the front.\n>\n\nI have checked the trigger that is maintaining the nrefs field. Blocks are\nreferenced from a \"file_block\" table. Every time a block is created, it\nfirst has an initial value of nrefs=0, then a file_block row (reference) is\ninserted, and nrefs is incremented to one. It means that every block has\nshown up once in the index, and then disappeared. 
If the index was never\nvacuumed, then it is very plausible that it is full of dead rows.\n\nCREATE OR REPLACE FUNCTION media.trg_aiud_file_block()\n\nRETURNS trigger\n\nLANGUAGE plpgsql\n\nAS $function$\n\nbegin\n\nif TG_OP='INSERT' then\n\nupdate media.block set nrefs = nrefs + 1 where id = new.block_id;\n\nreturn new;\n\nend if;\n\nif TG_OP='UPDATE' then\n\nif old.block_id is distinct from new.block_id then\n\nupdate media.block set nrefs = nrefs + 1 where id = new.block_id;\n\nupdate media.block set nrefs = nrefs - 1 where id = old.block_id;\n\nend if;\n\nreturn new;\n\nend if;\n\nif TG_OP='DELETE' then\n\nupdate media.block set nrefs = nrefs - 1 where id = old.block_id;\n\nreturn old;\n\nend if;\n\nend;\n\n$function$\n\n;\n\n\nThe idea was to create an index that can help in quickly removing unused\nblocks, to free up disk space. It would be much better to keep out the\ninitially inserted (not yet references) from the index, but I don't know\nhow to do this.\n\n\n> We'd still have to figure out why autovacuum is failing to clean out\n> those entries in a timely fashion, but that seems like a plausible\n> way for the performance problem to exist.\n>\nYes, that would be very good to know. 
I cloud drop and recreate the index\nnow, but after some time I would be facing the same situation again.\n\nI double checked, and the \"autovacuum launcher\" process is running.\n\nHere are the current settings:\n\n=# select name, setting, unit, min_val, max_val, boot_val, reset_val,\npending_restart from pg_settings where name like '%vacuum%';\n name | setting | unit | min_val |\n max_val | boot_val | reset_val | pending_restart\n---------------------------------------+------------+------+---------+------------+------------+------------+-----------------\n autovacuum | on | | |\n | on | on | f\n autovacuum_analyze_scale_factor | 0.1 | | 0 | 100\n | 0.1 | 0.1 | f\n autovacuum_analyze_threshold | 50 | | 0 |\n2147483647 | 50 | 50 | f\n autovacuum_freeze_max_age | 200000000 | | 100000 |\n2000000000 | 200000000 | 200000000 | f\n autovacuum_max_workers | 3 | | 1 |\n262143 | 3 | 3 | f\n autovacuum_multixact_freeze_max_age | 400000000 | | 10000 |\n2000000000 | 400000000 | 400000000 | f\n autovacuum_naptime | 60 | s | 1 |\n2147483 | 60 | 60 | f\n autovacuum_vacuum_cost_delay | 2 | ms | -1 | 100\n | 2 | 2 | f\n autovacuum_vacuum_cost_limit | -1 | | -1 |\n10000 | -1 | -1 | f\n autovacuum_vacuum_insert_scale_factor | 0.2 | | 0 | 100\n | 0.2 | 0.2 | f\n autovacuum_vacuum_insert_threshold | 1000 | | -1 |\n2147483647 | 1000 | 1000 | f\n autovacuum_vacuum_scale_factor | 0.2 | | 0 | 100\n | 0.2 | 0.2 | f\n autovacuum_vacuum_threshold | 50 | | 0 |\n2147483647 | 50 | 50 | f\n autovacuum_work_mem | -1 | kB | -1 |\n2147483647 | -1 | -1 | f\n log_autovacuum_min_duration | 600000 | ms | -1 |\n2147483647 | 600000 | 600000 | f\n vacuum_cost_delay | 0 | ms | 0 | 100\n | 0 | 0 | f\n vacuum_cost_limit | 200 | | 1 |\n10000 | 200 | 200 | f\n vacuum_cost_page_dirty | 20 | | 0 |\n10000 | 20 | 20 | f\n vacuum_cost_page_hit | 1 | | 0 |\n10000 | 1 | 1 | f\n vacuum_cost_page_miss | 2 | | 0 |\n10000 | 2 | 2 | f\n vacuum_defer_cleanup_age | 0 | | 0 |\n1000000 | 0 | 0 | f\n vacuum_failsafe_age | 
1600000000 | | 0 |\n2100000000 | 1600000000 | 1600000000 | f\n vacuum_freeze_min_age | 50000000 | | 0 |\n1000000000 | 50000000 | 50000000 | f\n vacuum_freeze_table_age | 150000000 | | 0 |\n2000000000 | 150000000 | 150000000 | f\n vacuum_multixact_failsafe_age | 1600000000 | | 0 |\n2100000000 | 1600000000 | 1600000000 | f\n vacuum_multixact_freeze_min_age | 5000000 | | 0 |\n1000000000 | 5000000 | 5000000 | f\n vacuum_multixact_freeze_table_age | 150000000 | | 0 |\n2000000000 | 150000000 | 150000000 | f\n(27 rows)\n\nI think I did not change the defaults.\n\n\n> > I can try to do VACUUM on this table, but I'm limited on resources. I\n> think\n> > it will take days to do this. Maybe I can try to dump the whole database\n> > and restore it on another machine.\n>\n> Pretty hard to believe that dump-and-restore would be faster than\n> VACUUM.\n>\n> > (Is there a way to check the number of dead rows?)\n>\n> I think contrib/pgstattuple might help.\n>\n>\nAll right, I started pgstattuple() and I'll also do pgstatindex(), but it\ntakes a while. I'll get back with the results.\n\nThank you for your help!\n\nRegards,\n\n Laszlo\n\n\n> I'm aware of the problems with random UUID values. I was using this\n> function to create ulids from the beginning:\n\nOh, well that would have been useful information to provide at the\noutset. I'm sorry, I left this out. Now that we know the index order is correlated with creation\ntime, I wonder if it is also correlated with nrefs, in such a way that\nscanning in index order is disastrous because all the desired rows are\nat the end of the index.Possibly, I have no idea.\n\nAlso, you deny deleting any rows, but that wasn't the point. Do you\never update nrefs from zero to nonzero? That'd also leave dead\nentries behind in this index. 
If that is a routine event that is\ncorrelated with creation time, it gets easier to believe that your\nindex could have lots of dead entries at the front.I have checked the trigger that is maintaining the nrefs field. Blocks are referenced from a \"file_block\" table. Every time a block is created, it first has an initial value of nrefs=0, then a file_block row (reference) is inserted, and nrefs is incremented to one. It means that every block has shown up once in the index, and then disappeared. If the index was never vacuumed, then it is very plausible that it is full of dead rows.CREATE OR REPLACE FUNCTION media.trg_aiud_file_block() RETURNS trigger LANGUAGE plpgsqlAS $function$begin if TG_OP='INSERT' then update media.block set nrefs = nrefs + 1 where id = new.block_id; return new; end if; if TG_OP='UPDATE' then if old.block_id is distinct from new.block_id then update media.block set nrefs = nrefs + 1 where id = new.block_id; update media.block set nrefs = nrefs - 1 where id = old.block_id; end if; return new; end if; if TG_OP='DELETE' then update media.block set nrefs = nrefs - 1 where id = old.block_id; return old; end if;end;$function$;The idea was to create an index that can help in quickly removing unused blocks, to free up disk space. It would be much better to keep out the initially inserted (not yet references) from the index, but I don't know how to do this.\n\nWe'd still have to figure out why autovacuum is failing to clean out\nthose entries in a timely fashion, but that seems like a plausible\nway for the performance problem to exist.Yes, that would be very good to know. 
I cloud drop and recreate the index now, but after some time I would be facing the same situation again.I double checked, and the \"autovacuum launcher\" process is running.Here are the current settings:=# select name, setting, unit, min_val, max_val, boot_val, reset_val, pending_restart from pg_settings where name like '%vacuum%'; name | setting | unit | min_val | max_val | boot_val | reset_val | pending_restart ---------------------------------------+------------+------+---------+------------+------------+------------+----------------- autovacuum | on | | | | on | on | f autovacuum_analyze_scale_factor | 0.1 | | 0 | 100 | 0.1 | 0.1 | f autovacuum_analyze_threshold | 50 | | 0 | 2147483647 | 50 | 50 | f autovacuum_freeze_max_age | 200000000 | | 100000 | 2000000000 | 200000000 | 200000000 | f autovacuum_max_workers | 3 | | 1 | 262143 | 3 | 3 | f autovacuum_multixact_freeze_max_age | 400000000 | | 10000 | 2000000000 | 400000000 | 400000000 | f autovacuum_naptime | 60 | s | 1 | 2147483 | 60 | 60 | f autovacuum_vacuum_cost_delay | 2 | ms | -1 | 100 | 2 | 2 | f autovacuum_vacuum_cost_limit | -1 | | -1 | 10000 | -1 | -1 | f autovacuum_vacuum_insert_scale_factor | 0.2 | | 0 | 100 | 0.2 | 0.2 | f autovacuum_vacuum_insert_threshold | 1000 | | -1 | 2147483647 | 1000 | 1000 | f autovacuum_vacuum_scale_factor | 0.2 | | 0 | 100 | 0.2 | 0.2 | f autovacuum_vacuum_threshold | 50 | | 0 | 2147483647 | 50 | 50 | f autovacuum_work_mem | -1 | kB | -1 | 2147483647 | -1 | -1 | f log_autovacuum_min_duration | 600000 | ms | -1 | 2147483647 | 600000 | 600000 | f vacuum_cost_delay | 0 | ms | 0 | 100 | 0 | 0 | f vacuum_cost_limit | 200 | | 1 | 10000 | 200 | 200 | f vacuum_cost_page_dirty | 20 | | 0 | 10000 | 20 | 20 | f vacuum_cost_page_hit | 1 | | 0 | 10000 | 1 | 1 | f vacuum_cost_page_miss | 2 | | 0 | 10000 | 2 | 2 | f vacuum_defer_cleanup_age | 0 | | 0 | 1000000 | 0 | 0 | f vacuum_failsafe_age | 1600000000 | | 0 | 2100000000 | 1600000000 | 1600000000 | f vacuum_freeze_min_age | 50000000 | 
| 0 | 1000000000 | 50000000 | 50000000 | f vacuum_freeze_table_age | 150000000 | | 0 | 2000000000 | 150000000 | 150000000 | f vacuum_multixact_failsafe_age | 1600000000 | | 0 | 2100000000 | 1600000000 | 1600000000 | f vacuum_multixact_freeze_min_age | 5000000 | | 0 | 1000000000 | 5000000 | 5000000 | f vacuum_multixact_freeze_table_age | 150000000 | | 0 | 2000000000 | 150000000 | 150000000 | f(27 rows)I think I did not change the defaults.\n\n> I can try to do VACUUM on this table, but I'm limited on resources. I think\n> it will take days to do this. Maybe I can try to dump the whole database\n> and restore it on another machine.\n\nPretty hard to believe that dump-and-restore would be faster than\nVACUUM.\n\n> (Is there a way to check the number of dead rows?)\n\nI think contrib/pgstattuple might help.\n All right, I started pgstattuple() and I'll also do pgstatindex(), but it takes a while. I'll get back with the results.Thank you for your help!Regards, Laszlo",
"msg_date": "Mon, 28 Aug 2023 08:04:28 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query, possibly not using index"
},
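The lifecycle described in the message above — every block enters the `nrefs = 0` partial index at creation and leaves it on its first reference — can be modelled with a small simulation (hypothetical Python mirroring the plpgsql trigger's arithmetic, not the trigger itself):

```python
def apply_trigger(refs: dict, op: str, new_block=None, old_block=None) -> None:
    # Mirrors media.trg_aiud_file_block firing on the file_block table.
    if op == "INSERT":
        refs[new_block] = refs.get(new_block, 0) + 1
    elif op == "UPDATE":
        if new_block != old_block:
            refs[new_block] = refs.get(new_block, 0) + 1
            refs[old_block] -= 1
    elif op == "DELETE":
        refs[old_block] -= 1

refs = {"b1": 0}                      # a freshly created block starts at
                                      # nrefs = 0, inside the partial index
apply_trigger(refs, "INSERT", new_block="b1")
assert refs["b1"] == 1                # first reference: the row leaves the
                                      # nrefs = 0 index, leaving one dead
                                      # index entry behind until vacuum
```

So with 40M blocks, each one has contributed one short-lived entry to `idx_block_unused`, which is consistent with an index made up almost entirely of dead or deleted pages.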
{
"msg_contents": ">\n>>\n> All right, I started pgstattuple() and I'll also do pgstatindex(), but it\n> takes a while. I'll get back with the results.\n>\n\n=# select * from pgstattuple('media.block');\n\n table_len | tuple_count | tuple_len | tuple_percent |\ndead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space |\nfree_percent\n--------------+-------------+--------------+---------------+------------------+----------------+--------------------+-------------+--------------\n 372521984000 | 39652836 | 299148572428 | 80.3 |\n 3578319 | 26977942540 | 7.24 | 44638265312 | 11.98\n(1 row)\n\n=# select * from pgstatindex('media.idx_block_unused');\n version | tree_level | index_size | root_block_no | internal_pages |\nleaf_pages | empty_pages | deleted_pages | avg_leaf_density |\nleaf_fragmentation\n---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------\n 4 | 2 | 389677056 | 546 | 114 |\n 23069 | 0 | 24384 | 90.03 | 0\n(1 row)\n\nAs far as I understand these numbers, the media.block table itself is in\ngood shape, but the index is not. Should I vacuum the whole table? Or would\nit be better to REINDEX INDEX media.idx_block_unused CONCURRENTLY ?\n\nMore important question is, how can I find out why the index was not auto\nvacuumed.\n\nThank you,\n\n Laszlo\n\n All right, I started pgstattuple() and I'll also do pgstatindex(), but it takes a while. 
I'll get back with the results.=# select * from pgstattuple('media.block'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent --------------+-------------+--------------+---------------+------------------+----------------+--------------------+-------------+-------------- 372521984000 | 39652836 | 299148572428 | 80.3 | 3578319 | 26977942540 | 7.24 | 44638265312 | 11.98(1 row)=# select * from pgstatindex('media.idx_block_unused'); version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+-------------------- 4 | 2 | 389677056 | 546 | 114 | 23069 | 0 | 24384 | 90.03 | 0(1 row)As far as I understand these numbers, the media.block table itself is in good shape, but the index is not. Should I vacuum the whole table? Or would it be better to REINDEX INDEX media.idx_block_unused CONCURRENTLY ?More important question is, how can I find out why the index was not auto vacuumed.Thank you, Laszlo",
"msg_date": "Mon, 28 Aug 2023 09:21:01 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query, possibly not using index"
},
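A quick cross-check of the pgstattuple/pgstatindex figures above (plain arithmetic on the reported values) supports the "table fine, index bloated" reading: dead tuples are about 7% of the table, while deleted pages account for roughly half of the partial index:

```python
# Values reported by pgstattuple('media.block') above.
table_len = 372_521_984_000
dead_tuple_len = 26_977_942_540
dead_pct = dead_tuple_len / table_len * 100
assert round(dead_pct, 2) == 7.24            # matches dead_tuple_percent

# Values reported by pgstatindex('media.idx_block_unused') above.
index_size = 389_677_056
deleted_pages = 24_384
page_size = 8192                             # default PostgreSQL block size
deleted_pct = deleted_pages * page_size / index_size * 100
print(f"deleted pages: {deleted_pct:.1f}% of the index")   # ~51.3%
```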
{
"msg_contents": ">\n>\n> =# select * from pgstatindex('media.idx_block_unused');\n> version | tree_level | index_size | root_block_no | internal_pages |\n> leaf_pages | empty_pages | deleted_pages | avg_leaf_density |\n> leaf_fragmentation\n>\n> ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------\n> 4 | 2 | 389677056 | 546 | 114 |\n> 23069 | 0 | 24384 | 90.03 | 0\n> (1 row)\n>\n> After reindex:\n\n=# select * from pgstatindex('media.idx_block_unused');\n version | tree_level | index_size | root_block_no | internal_pages |\nleaf_pages | empty_pages | deleted_pages | avg_leaf_density |\nleaf_fragmentation\n---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------\n 4 | 0 | 8192 | 0 | 0 |\n 0 | 0 | 0 | NaN | NaN\n(1 row)\n\n explain analyze select id from media.block b where nrefs =0 limit 1\n\nQUERY PLAN\n |\n-----------------------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=0.14..0.46 rows=1 width=16) (actual time=0.010..0.011 rows=0\nloops=1) |\n -> Index Only Scan using idx_block_unused on block b (cost=0.14..698.91\nrows=2231 width=16) (actual time=0.008..0.009 rows=0 loops=1)|\n Heap Fetches: 0\n |\nPlanning Time: 0.174 ms\n |\nExecution Time: 0.030 ms\n |\n\nIt is actually empty.\n\nNow I only need to figure out why autovacuum did not work on the index.\n\nThank you\n\n Laszlo",
"msg_date": "Mon, 28 Aug 2023 12:59:40 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 13:00, Les <[email protected]> wrote:\n\n>\n>\n>>\n>> =# select * from pgstatindex('media.idx_block_unused');\n>> version | tree_level | index_size | root_block_no | internal_pages |\n>> leaf_pages | empty_pages | deleted_pages | avg_leaf_density |\n>> leaf_fragmentation\n>>\n>> ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------\n>> 4 | 2 | 389677056 | 546 | 114 |\n>> 23069 | 0 | 24384 | 90.03 | 0\n>> (1 row)\n>>\n>> After reindex:\n>\n> =# select * from pgstatindex('media.idx_block_unused');\n> version | tree_level | index_size | root_block_no | internal_pages |\n> leaf_pages | empty_pages | deleted_pages | avg_leaf_density |\n> leaf_fragmentation\n>\n> ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------\n> 4 | 0 | 8192 | 0 | 0 |\n> 0 | 0 | 0 | NaN | NaN\n> (1 row)\n>\n> explain analyze select id from media.block b where nrefs =0 limit 1\n>\n> QUERY PLAN\n> |\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------+\n> Limit (cost=0.14..0.46 rows=1 width=16) (actual time=0.010..0.011 rows=0\n> loops=1) |\n> -> Index Only Scan using idx_block_unused on block b\n> (cost=0.14..698.91 rows=2231 width=16) (actual time=0.008..0.009 rows=0\n> loops=1)|\n> Heap Fetches: 0\n> |\n> Planning Time: 0.174 ms\n> |\n> Execution Time: 0.030 ms\n> |\n>\n> It is actually empty.\n>\n> Now I only need to figure out why autovacuum did not work on the index.\n>\n\nAutovacuum doesn't reindex.\n\nRegards\n\nPavel\n\n\n>\n> Thank you\n>\n> Laszlo\n>\n>",
"msg_date": "Mon, 28 Aug 2023 13:03:12 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 19:21, Les <[email protected]> wrote:\n> More important question is, how can I find out why the index was not auto vacuumed.\n\nYou should have a look at pg_stat_user_tables. It'll let you know if\nthe table is being autovacuumed and how often. If you're concerned\nabout autovacuum not running properly, then you might want to lower\nlog_autovacuum_min_duration. Generally, anything that takes a\nconflicting lock will cause autovacuum to cancel so that the\nconflicting locker can get through. Things like ALTER TABLE or even\nan ANALYZE running will cancel most autovacuum runs on tables.\n\nAlso, this is a fairly large table and you do have the standard\nautovacuum settings. Going by pgstattuple, the table has 39652836\ntuples. Autovacuum will trigger when the statistics indicate that 20%\nof tuples are dead, which is about 8 million tuples. Perhaps that's\nenough for the index scan to have to skip over a large enough number\nof dead tuples to make it slow. You might want to consider lowering\nthe autovacuum scale factor for this table.\n\nAlso, ensure you're not doing anything like calling pg_stat_reset();\n\nIt might be worth showing us the output of:\n\nselect * from pg_stat_user_tables where relid = 'media.block'::regclass;\n\nDavid\n\n\n",
"msg_date": "Mon, 28 Aug 2023 23:42:30 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, possibly not using index"
},
{
"msg_contents": ">\n>\n>\n> > More important question is, how can I find out why the index was not\n> auto vacuumed.\n>\n> You should have a look at pg_stat_user_tables. It'll let you know if\n> the table is being autovacuumed and how often. If you're concerned\n> about autovacuum not running properly, then you might want to lower\n> log_autovacuum_min_duration. Generally, anything that takes a\n> conflicting lock will cause autovacuum to cancel so that the\n> conflicting locker can get through. Things like ALTER TABLE or even\n> an ANALYZE running will cancel most autovacuum runs on tables.\n>\n> Also, this is a fairly large table and you do have the standard\n> autovacuum settings. Going by pgstattuple, the table has 39652836\n> tuples. Autovacuum will trigger when the statistics indicate that 20%\n> of tuples are dead, which is about 8 million tuples. Perhaps that's\n> enough for the index scan to have to skip over a large enough number\n> of dead tuples to make it slow. You might want to consider lowering\n> the autovacuum scale factor for this table.\n>\n> Also, ensure you're not doing anything like calling pg_stat_reset();\n>\n> It might be worth showing us the output of:\n>\n> select * from pg_stat_user_tables where relid = 'media.block'::regclass;\n>\nThank you for your suggestion, this is really very helpful.\n\nselect * from pg_stat_user_tables where relid = 'media.block'::regclass;\n\n\n\nName |Value |\n-------------------+-----------------------------+\nrelid |25872 |\nschemaname |media |\nrelname |block |\nseq_scan |8 |\nseq_tup_read |139018370 |\nidx_scan |45023556 |\nidx_tup_fetch |37461539 |\nn_tup_ins |7556051 |\nn_tup_upd |7577720 |\nn_tup_del |0 |\nn_tup_hot_upd |0 |\nn_live_tup |39782042 |\nn_dead_tup |5938057 |\nn_mod_since_analyze|1653427 |\nn_ins_since_vacuum |5736676 |\nlast_vacuum | |\nlast_autovacuum |2023-08-17 22:39:29.383 +0200|\nlast_analyze | |\nlast_autoanalyze |2023-08-22 16:02:56.093 +0200|\nvacuum_count |0 |\nautovacuum_count |1 |\nanalyze_count |0 |\nautoanalyze_count |4 |\n\nRegards,\n\n Laszlo",
"msg_date": "Mon, 28 Aug 2023 13:47:22 +0200",
"msg_from": "Les <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query, possibly not using index"
}
]
[
{
"msg_contents": "Hi,\n\nTL;DR:\nObservations:\n\n 1. REINDEX requires a full table scan\n - Roughly create a new index, rename index, drop old index.\n - REINDEX is not incremental. running reindex frequently does not\n reduce the future reindex time.\n 2. REINDEX does not use the index itself\n 3. VACUUM does not clean up the indices. (relpages >> reltuples) I\n understand, vacuum is supposed to remove pages only if there are no live\n tuples in the page, but somehow, even immediately after vacuum, I see\n relpages significantly greater than reltuples. I would have assumed,\n relpages <= reltuples\n 4. Query Planner does not consider index bloat, so uses highly bloated\n partial index that is terribly slow over other index\n\nQuestion: Is there a way to optimize postgres vacuum/reindex when using\npartial indexes?\n\nWe have a large table (tasks) that keep track of all the tasks that are\ncreated and their statuses. Around 1.4 million tasks per day are created\nevery day (~15 inserts per second).\n\nOne of the columns is int `status` that can be one of (1 - Init, 2 -\nInProgress, 3 - Success, 4 - Aborted, 5 - Failure) (Actually, there are\nmore statuses, but this would give the idea)\n\nOn average, a task completes in around a minute with some outliers that can\ngo as long as a few weeks. 
There is a periodic heartbeat that updates the\nlast updated time in the table.\n\nAt any moment, there are *around 1000-1500 tasks in pending statuses* (Init\n+ InProgress) out of around 500 million tasks.\n\nNow, we have a task monitoring query that will look for all pending tasks\nthat have not received any update in the last n minutes.\n\n```\nSELECT [columns list]\n FROM tasks\n WHERE status NOT IN (3,4,5) AND created > NOW() - INTERVAL '30 days' AND\nupdated < NOW() - interval '30 minutes'\n```\n\nSince we are only interested in the pending tasks, I created a partial index\n `*\"tasks_pending_status_created_type_idx\" btree (status, created,\ntask_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n\nThis worked great initially, however this started to get bloated very very\nquickly because, every task starts in pending state, gets multiple updates\n(and many of them are not HOT updates, working on optimizing fill factor\nnow), and eventually gets deleted from the index (as status changes to\nsuccess).\n\n\n```\n\n\\d+ tasks\n\nTable \"public.tasks\"\n Column | Type | Collation |\nNullable | Default | Storage | Compression |\nStats target | Description\n-------------------------------+----------------------------+-----------+----------+-----------------------------------+----------+-------------+--------------+-------------\n id | bigint | |\nnot null | nextval('tasks_id_seq'::regclass) | plain | |\n |\n client_id | bigint | |\nnot null | | plain | |\n |\n status | integer | |\nnot null | | plain | |\n |\n description | character varying(128) | |\nnot null | | extended | |\n |\n current_count | bigint | |\nnot null | | plain | |\n |\n target_count | bigint | |\nnot null | | plain | |\n |\n status_msg | character varying(4096) | |\n | | extended | |\n |\n blob_key | bigint | |\n | | plain | |\n |\n created | timestamp with time zone | |\nnot null | | plain | |\n |\n updated | timestamp with time zone | |\nnot null | | plain | |\n |\n idle_time | integer | |\nnot 
null | 0 | plain | |\n |\n started | timestamp with time zone | |\n | | plain | |\n |\nIndexes:\n \"tasks_pkey\" PRIMARY KEY, btree (id)\n \"tasks_created_idx\" btree (created)\n \"tasks_pending_status_created_idx\" btree (status, created) WHERE status\n<> ALL (ARRAY[3, 4, 5])\n\n \"tasks_client_id_status_created_idx\" btree (client_id, status, created\nDESC)\n \"tasks_status_idx\" btree (status)\nAccess method: heap\nOptions: autovacuum_vacuum_scale_factor=0.02,\nautovacuum_analyze_scale_factor=0.02, fillfactor=70\n```\n\nImmediately after REINDEX\n\n```\nSELECT relname,reltuples,relpages FROM pg_class WHERE relname like\n'tasks%idx%';\n\n relname | reltuples | relpages\n------------------------------------+----------------+----------\n tasks_pending_status_created_idx | 34175 | 171\n tasks_created_idx | 5.3920026e+08 | 11288121\n tasks_client_id_status_created_idx | 5.3920026e+08 | 7031615\n tasks_status_idx | 5.3920026e+08 | 2215403\n(9 rows)\n\n```\n\nA couple of days after manual full REINDEX.\n```\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE (relname\nlike 'tasks%idx%' OR relname='tasks');\n relname | relpages | reltuples |\nrelallvisible | relkind | relnatts | relhassubclass | reloptions |\npg_table_size\n------------------------------------+----------+----------------+---------------+---------+----------+----------------+------------+---------------\n tasks_pending_status_created_idx | 79664 | 201831 |\n 0 | i | 3 | f | | 652771328\n tasks_created_idx | 11384992 | 5.42238e+08 |\n 0 | i | 1 | f | | 93481443328\n tasks_client_id_status_created_idx | 7167147 | 5.42274e+08 |\n 0 | i | 5 | f | | 58727710720\n tasks_status_idx | 2258820 | 5.4223546e+08 |\n 0 | i | 1 | f | | 18508734464\n tasks | 71805187 | 5.171037e+08 |\n 71740571 | r | 30 | f | | 613282308096\n```",
"msg_date": "Mon, 28 Aug 2023 17:32:38 -0700",
"msg_from": "jayaprabhakar k <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index bloat and REINDEX/VACUUM optimization for partial index"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 5:33 PM jayaprabhakar k <[email protected]> wrote:\n> REINDEX requires a full table scan\n>\n> Roughly create a new index, rename index, drop old index.\n> REINDEX is not incremental. running reindex frequently does not reduce the future reindex time.\n\nYou didn't say which Postgres version you're on. Note that Postgres 14\ncan deal with index bloat a lot better than earlier versions could.\nThis is known to work well with partial indexes. See:\n\nhttps://www.postgresql.org/message-id/flat/CAL9smLAjt9mZC2%3DqBeJwuNPq7KMAYGTWWQw_hvA-Lfo0b3ycow%40mail.gmail.com\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 28 Aug 2023 18:49:13 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
{
"msg_contents": "Thanks Peter. It is *14.4*, but on an AWS RDS Aurora instance. I am trying to\nread the links you shared - B-Tree Deletion and deduplication, etc. I still\ndon't fully understand what I need to do. In the BTree documentation,\n\n> The average and worst-case number of versions per logical row can be kept\n> low purely through targeted incremental deletion passes. It's quite\n> possible that the on-disk size of certain indexes will never increase by\n> even one single page/block despite *constant* version churn from UPDATEs.\n\n\nIn our case, almost all the tuples stop being covered by the index as they\nfail the predicate, and only a tiny 1000s of rows pass the index predicate\nat any point in time. But, we still see the index size continue to\nincrease, index lookups become slow over time, and vacuum (non full)\ndoesn't reduce the index size much.\n\nDo we need to do anything specific to better utilize the targeted\nincremental deletion passes?\n\n\nSELECT VERSION();\n version\n\n-------------------------------------------------------------------------------------------------\n PostgreSQL 14.4 on x86_64-pc-linux-gnu, compiled by\nx86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit\n(1 row)\n\n\n\n\n\n\nOn Mon, 28 Aug 2023 at 18:49, Peter Geoghegan <[email protected]> wrote:\n\n> On Mon, Aug 28, 2023 at 5:33 PM jayaprabhakar k <[email protected]>\n> wrote:\n> > REINDEX requires a full table scan\n> >\n> > Roughly create a new index, rename index, drop old index.\n> > REINDEX is not incremental. running reindex frequently does not reduce\n> the future reindex time.\n>\n> You didn't say which Postgres version you're on. Note that Postgres 14\n> can deal with index bloat a lot better than earlier versions could.\n> This is known to work well with partial indexes. See:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAL9smLAjt9mZC2%3DqBeJwuNPq7KMAYGTWWQw_hvA-Lfo0b3ycow%40mail.gmail.com\n>\n> --\n> Peter Geoghegan\n>",
"msg_date": "Tue, 29 Aug 2023 09:47:18 -0700",
"msg_from": "jayaprabhakar k <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 8:33 PM jayaprabhakar k <[email protected]>\nwrote:\n\n> Hi,\n>\n> TL;DR:\n> Observations:\n>\n> 1. REINDEX requires a full table scan\n> - Roughly create a new index, rename index, drop old index.\n> - REINDEX is not incremental. running reindex frequently does not\n> reduce the future reindex time.\n> 2. REINDEX does not use the index itself\n> 3. VACUUM does not clean up the indices. (relpages >> reltuples) I\n> understand, vacuum is supposed to remove pages only if there are no live\n> tuples in the page, but somehow, even immediately after vacuum, I see\n> relpages significantly greater than reltuples. I would have assumed,\n> relpages <= reltuples\n> 4. Query Planner does not consider index bloat, so uses highly bloated\n> partial index that is terribly slow over other index\n>\n> Your points 3 and 4 are not correct. empty index pages are put on a\nfreelist for future reuse, they are not physically removed from the\nunderlying index files. Maybe they are not actually getting put on the\nfreelist or not being reused from the freelist for some reason, but that\nwould be a different issue. Use the extension pgstattuple to see what its\nfunction pgstatindex says about the index.\n\nThe planner does take index bloat into consideration, but its effect size\nis low. Which it should be, as empty or irrelevant pages should be\nefficiently skipped during the course of most index operations. To figure\nout what is going with your queries, you should do an EXPLAIN (ANALYZE,\nBUFFERS) of them, but with it being slow and with it being fast.\n\n\n> Question: Is there a way to optimize postgres vacuum/reindex when using\n> partial indexes?\n>\n\nWithout knowing what is actually going wrong, I can only offer\ngeneralities. Make sure you don't have long-lived transactions which\nprevent efficient clean up. Increase the frequency on which vacuum runs on\nthe table. 
It can't reduce the size of an already bloated index, but by\nkeeping the freelist stocked it should be able to prevent it from getting\nbloated in the first place. Also, it can remove empty pages from being\nlinked into the index tree structure, which means they won't need to be\nscanned even though they are still in the file. It can also free up space\ninside non-empty pages for future reuse within that same page, and so that\nindex tuples don't need to be chased down in the table only to be found to\nbe not visible.\n\n\n> ```\n> SELECT [columns list]\n> FROM tasks\n> WHERE status NOT IN (3,4,5) AND created > NOW() - INTERVAL '30 days' AND\n> updated < NOW() - interval '30 minutes'\n> ```\n>\n> Since we are only interested in the pending tasks, I created a partial\n> index\n> `*\"tasks_pending_status_created_type_idx\" btree (status, created,\n> task_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n>\n\nThis looks like a poorly designed index. Since the status condition\nexactly matches the index where clause, there is no residual point in\nhaving \"status\" be the first column in the index, it can only get in the\nway (for this particular query). Move it to the end, or remove it\naltogether.\n\nWithin the tuples which pass the status check, which inequality is more\nselective, the \"created\" one or \"updated\" one?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 29 Aug 2023 15:43:05 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
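[Editor's note] Jeff's two suggestions above — inspect the index with pgstatindex and drop "status" from the key columns since the partial-index WHERE clause already pins it down — can be sketched in SQL. The index and table names come from the thread; the replacement index name is invented for illustration:

```sql
-- Inspect bloat in the partial index (pgstatindex is in the pgstattuple extension).
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT index_size, avg_leaf_density, leaf_fragmentation, deleted_pages, empty_pages
FROM pgstatindex('tasks_pending_status_created_type_idx');

-- One way to follow the advice: move "status" out of the key columns entirely,
-- since "WHERE status <> ALL (...)" already restricts the index to pending rows.
CREATE INDEX CONCURRENTLY tasks_pending_created_type_idx
    ON tasks (created, task_type)
    WHERE status <> ALL (ARRAY[3, 4, 5]);
```

A low avg_leaf_density together with high deleted_pages/empty_pages would confirm the bloat the original poster describes.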
{
"msg_contents": "On Tue, Aug 29, 2023, 12:43 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, Aug 28, 2023 at 8:33 PM jayaprabhakar k <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> TL;DR:\n>> Observations:\n>>\n>> 1. REINDEX requires a full table scan\n>> - Roughly create a new index, rename index, drop old index.\n>> - REINDEX is not incremental. running reindex frequently does not\n>> reduce the future reindex time.\n>> 2. REINDEX does not use the index itself\n>> 3. VACUUM does not clean up the indices. (relpages >> reltuples) I\n>> understand, vacuum is supposed to remove pages only if there are no live\n>> tuples in the page, but somehow, even immediately after vacuum, I see\n>> relpages significantly greater than reltuples. I would have assumed,\n>> relpages <= reltuples\n>> 4. Query Planner does not consider index bloat, so uses highly\n>> bloated partial index that is terribly slow over other index\n>>\n>> Your points 3 and 4 are not correct. empty index pages are put on a\n> freelist for future reuse, they are not physically removed from the\n> underlying index files. Maybe they are not actually getting put on the\n> freelist or not being reused from the freelist for some reason, but that\n> would be a different issue. Use the extension pgstattuple to see what its\n> function pgstatindex says about the index.\n>\n> The planner does take index bloat into consideration, but its effect size\n> is low. Which it should be, as empty or irrelevant pages should be\n> efficiently skipped during the course of most index operations. To figure\n> out what is going with your queries, you should do an EXPLAIN (ANALYZE,\n> BUFFERS) of them, but with it being slow and with it being fast.\n>\n>\n>> Question: Is there a way to optimize postgres vacuum/reindex when using\n>> partial indexes?\n>>\n>\n> Without knowing what is actually going wrong, I can only offer\n> generalities. Make sure you don't have long-lived transactions which\n> prevent efficient clean up. 
Increase the frequency on which vacuum runs on\n> the table. It can't reduce the size of an already bloated index, but by\n> keeping the freelist stocked it should be able prevent it from getting\n> bloated in the first place. Also, it can remove empty pages from being\n> linked into the index tree structure, which means they won't need to be\n> scanned even though they are still in the file. It can also free up space\n> inside non-empty pages for future reuse within that same page, and so that\n> index tuples don't need to be chased down in the table only to be found to\n> be not visible.\n>\n>\n>> ```\n>> SELECT [columns list]\n>> FROM tasks\n>> WHERE status NOT IN (3,4,5) AND created > NOW() - INTERVAL '30 days'\n>> AND updated < NOW() - interval '30 minutes'\n>> ```\n>>\n>> Since we are only interested in the pending tasks, I created a partial\n>> index\n>> `*\"tasks_pending_status_created_type_idx\" btree (status, created,\n>> task_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n>>\n>\n> This looks like a poorly designed index. Since the status condition\n> exactly matches the index where clause, there is no residual point in\n> having \"status\" be the first column in the index, it can only get in the\n> way (for this particular query). Move it to the end, or remove it\n> altogether.\n>\nInteresting. I don't understand why it will get in the way. Unfortunately\nwe have a few other cases where status is used in filter. That said, I will\nconsider how to get this to work.\nWould removing status from the index column, improve HOT updates %?
For\nexample, changing status from 1->2, doesn't change anything on the index\n(assuming other criteria for HOT updates are met), but I am not sure how\nthe implementation is.\n\n\n> Within the tuples which pass the status check, which inequality is more\n> selective, the \"created\" one or \"updated\" one?\n>\nObviously updated time is more selective (after status), and the created\ntime is included only to exclude some bugs in our system that had left some\nold tasks stuck in progress (and for sorting). We do try to clean\nup occasionally, but not each time.\nHowever we cannot add an index on `updated` column because that timestamp\ngets updated over 10x on average for each task. Since if a single index use\na column, then the update will not be HOT, and every index needs to be\nupdated. That will clearly add a bloat to every index. Did I miss something?\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n",
"msg_date": "Wed, 30 Aug 2023 17:42:58 -0700",
"msg_from": "jayaprabhakar k <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
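[Editor's note] The HOT-update question raised above can be measured directly from the statistics views. A minimal sketch (the table name "tasks" is from the thread; the fillfactor value is illustrative, relevant since the poster mentions tuning it):

```sql
-- What fraction of updates on "tasks" are HOT (heap-only tuple) updates?
SELECT n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE relname = 'tasks';

-- Leaving free space in each heap page makes HOT updates more likely,
-- since the new tuple version must fit on the same page:
ALTER TABLE tasks SET (fillfactor = 80);
```

A low hot_pct would support the concern that indexing a frequently-updated column forces every index to be touched on each update.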
{
"msg_contents": "> At any moment, there are *around 1000-1500 tasks in pending statuses*\n> (Init + InProgress) out of around 500 million tasks.\n>\n> Now, we have a task monitoring query that will look for all pending tasks\n> that have not received any update in the last n minutes.\n>\n> ```\n> SELECT [columns list]\n> FROM tasks\n> WHERE status NOT IN (3,4,5) AND created > NOW() - INTERVAL '30 days' AND\n> updated < NOW() - interval '30 minutes'\n> ```\n>\n> Since we are only interested in the pending tasks, I created a partial\n> index\n> `*\"tasks_pending_status_created_type_idx\" btree (status, created,\n> task_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n>\n> This worked great initially, however this started to get bloated very very\n> quickly because, every task starts in pending state, gets multiple updates\n> (and many of them are not HOT updates, working on optimizing fill factor\n> now), and eventually gets deleted from the index (as status changes to\n> success).\n>\n\n From my experience I suspect that there is a problem with \"of around 500\nmillion tasks.\"\nAutovacuum indeed cleans old dead index entries, but how many such dead\nindex entries will be collected on the 500M table before autovacuum kicks\nin?\n\nWith the default value of autovacuum_vacuum_scale_factor (The default is\n0.2 (20% of table size).) 
index will collect like 100M outdated/dead index\nentries before autovacuum kicks in and cleans them all (in a worst case),\nand of course it will lead to huge index bloat and awful performance.\n\nEven if you scale down autovacuum_vacuum_scale_factor to some unreasonable\nlow value like 0.01, the index still bloats to the 5M dead entries before\nautovacuum run, and constant vacuuming of a huge 500M table will put a huge\nload on the database server.\n\nUnfortunately there is no easy way out of this situation from database\nside, in general I recommend not trying to implement a fast pacing queue\nlike load inside of a huge and constantly growing table, it never works\nwell because you cannot keep up partial efficient indexes for the queue in\na clean/non-bloated state.\n\nIn my opinion the best solution is to keep list of entries to process (\"*around\n1000-1500 tasks in pending statuses\")* duplicated in the separate tiny\ntable (via triggers or implement it on the application level), in that case\nautovacuum will be able quickly clean dead entries from the index.\n\nKind Regards,\nMaxim\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n",
"msg_date": "Thu, 31 Aug 2023 18:05:41 +0300",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
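[Editor's note] Maxim's arithmetic follows from autovacuum's trigger condition: a vacuum starts once dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples, i.e. 50 + 0.2 × 500M ≈ 100M with the defaults. A per-table override (the threshold value below is illustrative, not from the thread) sidesteps the scale factor on the big table without changing it system-wide:

```sql
-- Drive vacuuming of the queue table by an absolute dead-tuple count
-- instead of 20% of a 500M-row table:
ALTER TABLE tasks SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold    = 100000
);
```

Thanks to the visibility map, heap scanning cost stays bounded by the churned portion of the table, though each full vacuum cycle must still read the non-partial indexes in their entirety.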
{
"msg_contents": "Thanks Maxim, that's something we are considering now - keep the in\nprogress tasks in one table and periodically move the old and completed\ntasks to an archive table.\nWe could use a view that unions them for most queries.\n\nI'm not sure if that's the best alternative though, and we want to know if\nthere are any gotchas to worry about.\n\nOn Thu, Aug 31, 2023, 8:06 AM Maxim Boguk <[email protected]> wrote:\n\n>\n> At any moment, there are *around 1000-1500 tasks in pending statuses*\n>> (Init + InProgress) out of around 500 million tasks.\n>>\n>> Now, we have a task monitoring query that will look for all pending tasks\n>> that have not received any update in the last n minutes.\n>>\n>> ```\n>> SELECT [columns list]\n>> FROM tasks\n>> WHERE status NOT IN (3,4,5) AND created > NOW() - INTERVAL '30 days'\n>> AND updated < NOW() - interval '30 minutes'\n>> ```\n>>\n>> Since we are only interested in the pending tasks, I created a partial\n>> index\n>> `*\"tasks_pending_status_created_type_idx\" btree (status, created,\n>> task_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n>>\n>> This worked great initially, however this started to get bloated very\n>> very quickly because, every task starts in pending state, gets multiple\n>> updates (and many of them are not HOT updates, working on optimizing fill\n>> factor now), and eventually gets deleted from the index (as status changes\n>> to success).\n>>\n>\n> From my experience I suspect that there is a problem with \"of around 500\n> million tasks.\"\n> Autovacuum indeed cleans old dead index entries, but how many such dead\n> index entries will be collected on the 500M table before autovacuum kicks\n> in?\n>\n> With the default value of autovacuum_vacuum_scale_factor (The default is\n> 0.2 (20% of table size).) 
index will collect like 100M outdated/dead index\n> entries before autovacuum kicks in and cleans them all (in a worst case),\n> and of course it will lead to huge index bloat and awful performance.\n>\n> Even if you scale down autovacuum_vacuum_scale_factor to some\n> unreasonable low value like 0.01, the index still bloats to the 5M dead\n> entries before autovacuum run, and constant vacuuming of a huge 500M table\n> will put a huge load on the database server.\n>\n> Unfortunately there is no easy way out of this situation from database\n> side, in general I recommend not trying to implement a fast pacing queue\n> like load inside of a huge and constantly growing table, it never works\n> well because you cannot keep up partial efficient indexes for the queue in\n> a clean/non-bloated state.\n>\n> In my opinion the best solution is to keep list of entries to process (\"*around\n> 1000-1500 tasks in pending statuses\")* duplicated in the separate tiny\n> table (via triggers or implement it on the application level), in that case\n> autovacuum will be able quickly clean dead entries from the index.\n>\n> Kind Regards,\n> Maxim\n>\n>\n> --\n> Maxim Boguk\n> Senior Postgresql DBA\n>\n> Phone UA: +380 99 143 0000\n> Phone AU: +61 45 218 5678\n>\n>\n",
"msg_date": "Thu, 31 Aug 2023 18:05:59 -0700",
"msg_from": "jayaprabhakar k <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
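[Editor's note] The active/archive split plus union view considered above can be sketched as follows. All object names other than "tasks" are hypothetical, and the archival cutoff is an arbitrary choice:

```sql
-- Archive table with the same columns, indexes and constraints:
CREATE TABLE tasks_archive (LIKE tasks INCLUDING ALL);

-- Periodically move terminal-state tasks out of the hot table in one statement:
WITH moved AS (
    DELETE FROM tasks
    WHERE status IN (3, 4, 5)
      AND updated < NOW() - INTERVAL '1 day'
    RETURNING *
)
INSERT INTO tasks_archive SELECT * FROM moved;

-- A view that unions both tables, for queries needing the full history:
CREATE VIEW tasks_all AS
    SELECT * FROM tasks
    UNION ALL
    SELECT * FROM tasks_archive;
```

One gotcha worth noting: uniqueness and foreign keys cannot be enforced across the two tables, so any PRIMARY KEY guarantee holds only within each table after the split.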
{
"msg_contents": "On Wed, Aug 30, 2023 at 8:43 PM jayaprabhakar k <[email protected]>\nwrote:\n\n>\n>\n> On Tue, Aug 29, 2023, 12:43 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Mon, Aug 28, 2023 at 8:33 PM jayaprabhakar k <[email protected]>\n>> wrote:\n>>\n>>>\n>>> Since we are only interested in the pending tasks, I created a partial\n>>> index\n>>> `*\"tasks_pending_status_created_type_idx\" btree (status, created,\n>>> task_type) WHERE status <> ALL (ARRAY[3, 4, 5])*`.\n>>>\n>>\n>> This looks like a poorly designed index. Since the status condition\n>> exactly matches the index where clause, there is no residual point in\n>> having \"status\" be the first column in the index, it can only get in the\n>> way (for this particular query). Move it to the end, or remove it\n>> altogether.\n>>\n> Interesting. I don't understand why it will get in the way. Unfortunately\n> we have a few other cases where status is used in filter. That said, I will\n> consider how to get this to work.\n> Would removing status from the index column, improve HOT updates %? For\n> example, changing status from 1->2, doesn't change anything on the index\n> (assuming other criteria for HOT updates are met), but I am not sure how\n> the implementation is.\n>\n\nNo, changes to the status column will not qualify as HOT updates, even if\nstatus is only in the WHERE clause and not the index body. I don't know if\nthere is a fundamental reason that those can't be done as HOT, or if it is\njust an optimization that no one implemented.\n\n\n>\n>\n>> Within the tuples which pass the status check, which inequality is more\n>> selective, the \"created\" one or \"updated\" one?\n>>\n> Obviously updated time is more selective (after status), and the created\n> time is included only to exclude some bugs in our system that had left some\n> old tasks stuck in progress (and for sorting). 
We do try to clean\n> up occasionally, but not each time.\n>\n\nIf \"created\" were the leading column in the index, then it could jump\ndirectly to the part of the index which meets the `created > ...` without\nhaving to scroll through all of them and throw them out one by one. But it\nsounds like there are so few of them that being able to skip them wouldn't\nbe worth very much.\n\n\n>\n> However we cannot add an index on `updated` column because that timestamp\n> gets updated over 10x on average for each task. Since if a single index use\n> a column, then the update will not be HOT, and every index needs to be\n> updated. That will clearly add a bloat to every index. Did I miss something?\n>\n\nWhy does it get updated so much? It seems like status should go from 1 to\n2, then from 2 to 3,4,or 5, and then be done. So only 2 updates, not 10.\nMaybe the feature which needs this frequent update could be done in some\nother way which is less disruptive.\n\nBut anyway, PostgreSQL has features to prevent the index bloat from\nbecoming too severe of a problem, and you should figure out why they are\nnot working for you. The most common ones I know of are 1) long open\nsnapshots preventing clean up, 2) all index scans being bitmap index scans,\nwhich don't to micro-vacuuming/index hinting the way ordinary btree\nindex scans do, and 3) running the queries on a hot-standby, where index\nhint bits must be ignored. 
If you could identify and solve this issue,\nthen you wouldn't need to twist yourself into knots avoiding non-HOT\nupdates.\n\nCheers,\n\nJeff\n\n>\n",
"msg_date": "Thu, 31 Aug 2023 23:01:06 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
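[Editor's note] The first cause Jeff lists above — long-open snapshots preventing cleanup — is straightforward to check from pg_stat_activity. A minimal sketch (the 30-minute cutoff is an arbitrary choice):

```sql
-- Find transactions that have been open long enough to hold back
-- dead-tuple cleanup across the whole cluster:
SELECT pid, usename, state, xact_start, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start < NOW() - INTERVAL '30 minutes'
ORDER BY xact_start;
```

Any row here (including "idle in transaction" sessions) pins the oldest visible snapshot, so vacuum cannot reclaim index entries younger than it.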
{
"msg_contents": "On Thu, Aug 31, 2023 at 11:06 AM Maxim Boguk <[email protected]> wrote:\n\n\n> With the default value of autovacuum_vacuum_scale_factor (The default is\n> 0.2 (20% of table size).) index will collect like 100M outdated/dead index\n> entries before autovacuum kicks in and cleans them all (in a worst case),\n> and of course it will lead to huge index bloat and awful performance.\n>\n\nIndex bloat doesn't automatically lead to awful performance. There must be\nsome additional factor at play.\n\n\n> Even if you scale down autovacuum_vacuum_scale_factor to some\n> unreasonable low value like 0.01, the index still bloats to the 5M dead\n> entries before autovacuum run, and constant vacuuming of a huge 500M table\n> will put a huge load on the database server.\n>\n\nFor this type of situation, I would generally set\nautovacuum_vacuum_scale_factor to 0, and use autovacuum_vacuum_threshold to\ndrive the vacuuming instead. But I'd make those changes just on the queue\ntable(s), not system wide. Due to the visibility map, the load on the\nserver does not need to be huge just due to the table, as the stable part\nof the table can be ignored. The problem is that each index still needs to\nbe read entirely for each vacuum cycle, which would not be much of a\nproblem for the partial indexes, but certainly could be for the full\nindexes. 
There are some very recent improvements in this area, but I don't\nthink they can be applied selectively to specific indexes.\n\n\n\n>\n> Unfortunately there is no easy way out of this situation from database\n> side, in general I recommend not trying to implement a fast pacing queue\n> like load inside of a huge and constantly growing table, it never works\n> well because you cannot keep up partial efficient indexes for the queue in\n> a clean/non-bloated state.\n>\n> In my opinion the best solution is to keep list of entries to process (\"*around\n> 1000-1500 tasks in pending statuses\")* duplicated in the separate tiny\n> table (via triggers or implement it on the application level), in that case\n> autovacuum will be able quickly clean dead entries from the index.\n>\n\nYou should be able to use declarative partitioning to separate the \"final\"\ntuples from the \"active\" tuples, to get the same benefit but with less work.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 31 Aug 2023 23:18:12 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
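[Editor's note] Jeff's declarative-partitioning suggestion might look like the sketch below. The column list is invented for illustration (only "status", "created", "updated" and "task_type" appear in the thread), and the status values follow the thread's convention of 3/4/5 being terminal:

```sql
-- Split "active" from "final" tasks by list-partitioning on status:
CREATE TABLE tasks_part (
    id        bigint      NOT NULL,
    status    int         NOT NULL,
    created   timestamptz NOT NULL,
    updated   timestamptz NOT NULL,
    task_type text
) PARTITION BY LIST (status);

CREATE TABLE tasks_active PARTITION OF tasks_part
    FOR VALUES IN (1, 2);      -- Init, InProgress

CREATE TABLE tasks_final PARTITION OF tasks_part
    FOR VALUES IN (3, 4, 5);   -- Success, Failure, Aborted
```

Since PostgreSQL 11, an UPDATE that changes status from 2 to 3 automatically moves the row from tasks_active to tasks_final (as a delete plus insert, so not a HOT update), keeping the active partition and its indexes tiny and cheap to vacuum.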
{
"msg_contents": ">\n> But anyway, PostgreSQL has features to prevent the index bloat from\n> becoming too severe of a problem, and you should figure out why they are\n> not working for you. The most common ones I know of are 1) long open\n> snapshots preventing clean up, 2) all index scans being bitmap index scans,\n> which don't to micro-vacuuming/index hinting the way ordinary btree\n> index scans do, and 3) running the queries on a hot-standby, where index\n> hint bits must be ignored. If you could identify and solve this issue,\n> then you wouldn't need to twist yourself into knots avoiding non-HOT\n> updates.\n>\n\nI am not sure that kill bits could be a complete fix for indexes with tens\nof millions dead entries and only a handful of live entries. As I\nunderstand the mechanics of killbits - they help to avoid excessive heap\nvisibility checks for dead tuples, but tuples with killbit are still should\nbe read from the index first. And with many millions of dead entries it\nisn't free.\n\nPS: ignoring killbits on hot standby slaves is a source of endless pain in\nmany cases.\n\n--\nMaxim Boguk\nSenior Postgresql DBA\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nBut anyway, PostgreSQL has features to prevent the index bloat from becoming too severe of a problem, and you should figure out why they are not working for you. The most common ones I know of are 1) long open snapshots preventing clean up, 2) all index scans being bitmap index scans, which don't to micro-vacuuming/index hinting the way ordinary btree index scans do, and 3) running the queries on a hot-standby, where index hint bits must be ignored. If you could identify and solve this issue, then you wouldn't need to twist yourself into knots avoiding non-HOT updates. I am not sure that kill bits could be a complete fix for indexes with tens of millions dead entries and only a handful of live entries. 
As I understand the mechanics of killbits - they help to avoid excessive heap visibility checks for dead tuples, but tuples with killbit are still should be read from the index first. And with many millions of dead entries it isn't free.PS: ignoring killbits on hot standby slaves is a source of endless pain in many cases.--Maxim BogukSenior Postgresql DBAPhone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Fri, 1 Sep 2023 21:01:26 +0300",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
},
{
"msg_contents": "Thanks Maxim and Jeff.\n1. Do you have any pointers to the killbits issue on hot standby slaves? We\ndo use a hot standby instance for many queries. So I want to learn more\nabout it.\n2. I am now considering partitioning the table. I am curious if we can set\nup partitions by mutable columns. More specifically, <status, created>,\nwhere the status is mutable, and usually ends up in terminal states\n(success, failure or aborted).\n\nI could not find any documentation on the performance implication of\npartitioning by mutable column, any guidance would be helpful. I had\npreviously underestimated the impact of index on a mutable column, so I\nwant to be cautious this time.\n\n\n\n\n\nOn Fri, 1 Sept 2023 at 11:02, Maxim Boguk <[email protected]> wrote:\n\n> But anyway, PostgreSQL has features to prevent the index bloat from\n>> becoming too severe of a problem, and you should figure out why they are\n>> not working for you. The most common ones I know of are 1) long open\n>> snapshots preventing clean up, 2) all index scans being bitmap index scans,\n>> which don't to micro-vacuuming/index hinting the way ordinary btree\n>> index scans do, and 3) running the queries on a hot-standby, where index\n>> hint bits must be ignored. If you could identify and solve this issue,\n>> then you wouldn't need to twist yourself into knots avoiding non-HOT\n>> updates.\n>>\n>\n> I am not sure that kill bits could be a complete fix for indexes with tens\n> of millions dead entries and only a handful of live entries. As I\n> understand the mechanics of killbits - they help to avoid excessive heap\n> visibility checks for dead tuples, but tuples with killbit are still should\n> be read from the index first. 
And with many millions of dead entries it\n> isn't free.\n>\n> PS: ignoring killbits on hot standby slaves is a source of endless pain in\n> many cases.\n>\n> --\n> Maxim Boguk\n> Senior Postgresql DBA\n>\n> Phone UA: +380 99 143 0000\n> Phone AU: +61 45 218 5678\n>\n>\n\nThanks Maxim and Jeff. 1. Do you have any pointers to the killbits issue on hot standby slaves? We do use a hot standby instance for many queries. So I want to learn more about it.2. I am now considering partitioning the table. I am curious if we can set up partitions by mutable columns. More specifically, <status, created>, where the status is mutable, and usually ends up in terminal states (success, failure or aborted). I could not find any documentation on the performance implication of partitioning by mutable column, any guidance would be helpful. I had previously underestimated the impact of index on a mutable column, so I want to be cautious this time. On Fri, 1 Sept 2023 at 11:02, Maxim Boguk <[email protected]> wrote:But anyway, PostgreSQL has features to prevent the index bloat from becoming too severe of a problem, and you should figure out why they are not working for you. The most common ones I know of are 1) long open snapshots preventing clean up, 2) all index scans being bitmap index scans, which don't to micro-vacuuming/index hinting the way ordinary btree index scans do, and 3) running the queries on a hot-standby, where index hint bits must be ignored. If you could identify and solve this issue, then you wouldn't need to twist yourself into knots avoiding non-HOT updates. I am not sure that kill bits could be a complete fix for indexes with tens of millions dead entries and only a handful of live entries. As I understand the mechanics of killbits - they help to avoid excessive heap visibility checks for dead tuples, but tuples with killbit are still should be read from the index first. 
And with many millions of dead entries it isn't free.PS: ignoring killbits on hot standby slaves is a source of endless pain in many cases.--Maxim BogukSenior Postgresql DBAPhone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Tue, 5 Sep 2023 23:50:35 -0700",
"msg_from": "jayaprabhakar k <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat and REINDEX/VACUUM optimization for partial index"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm trying to implement some range partitioning on timeseries data. But it\nlooks some queries involving date_trunc() doesn't make use of partitioning.\n\nBEGIN;\nCREATE TABLE test (\n time TIMESTAMP WITHOUT TIME ZONE NOT NULL,\n value FLOAT NOT NULL\n) PARTITION BY RANGE (time);\nCREATE INDEX test_time_idx ON test(time DESC);\nCREATE TABLE test_y2010 PARTITION OF test FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');\nCREATE TABLE test_y2011 PARTITION OF test FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');\nCREATE VIEW vtest AS SELECT DATE_TRUNC('year', time) AS time, SUM(value) AS value FROM test GROUP BY 1;\nEXPLAIN (COSTS OFF) SELECT * FROM vtest WHERE time >= TIMESTAMP '2021-01-01';\nROLLBACK;\n\nThe plan query all partitions:\n\nHashAggregate\n Group Key: (date_trunc('year'::text, test.\"time\"))\n -> Append\n -> Seq Scan on test_y2010 test_1\n Filter: (date_trunc('year'::text, \"time\") >= '2021-01-01 00:00:00'::timestamp without time zone)\n -> Seq Scan on test_y2011 test_2\n Filter: (date_trunc('year'::text, \"time\") >= '2021-01-01 00:00:00'::timestamp without time zone)\n\n\nThe view is there so show the use case, but we get almost similar plan with SELECT * FROM test WHERE DATE_TRUNC('year', time) >= TIMESTAMP '2021-01-01';\n\n\nI tested a variation with timescaledb which seem using trigger based\npartitioning:\n\nBEGIN;\nCREATE EXTENSION IF NOT EXISTS timescaledb;\nCREATE TABLE test (\n time TIMESTAMP WITHOUT TIME ZONE NOT NULL,\n value FLOAT NOT NULL\n);\nSELECT create_hypertable('test', 'time', chunk_time_interval => INTERVAL '1 year');\nCREATE VIEW vtest AS SELECT time_bucket('1 year', time) AS time, SUM(value) AS value FROM test GROUP BY 1;\n-- insert some data as partitions are created on the fly\nINSERT INTO test VALUES (TIMESTAMP '2020-01-15', 1.0), (TIMESTAMP '2021-12-15', 2.0);\n\\d+ test\nEXPLAIN (COSTS OFF) SELECT * FROM vtest WHERE time >= TIMESTAMP '2021-01-01';\nROLLBACK;\n\n\nThe plan query a single 
partition:\n\nGroupAggregate\n Group Key: (time_bucket('1 year'::interval, _hyper_1_2_chunk.\"time\"))\n -> Result\n -> Index Scan Backward using _hyper_1_2_chunk_test_time_idx on _hyper_1_2_chunk\n Index Cond: (\"time\" >= '2021-01-01 00:00:00'::timestamp without time zone)\n Filter: (time_bucket('1 year'::interval, \"time\") >= '2021-01-01 00:00:00'::timestamp without time zone)\n\n\nNote single partition query only works with time_bucket(), not with date_trunc(), I guess\nthere is some magic regarding this in time_bucket() implementation.\n\n\nI wonder if there is a way with a reasonable amount of SQL code to achieve this\nwith vanilla postgres ?\n\nMaybe by taking assumption that DATE_TRUNC(..., time) <= time ?\n\nThanks!\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:40:06 +0200",
"msg_from": "Philippe Pepiot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Range partitioning query performance with date_trunc (vs timescaledb)"
},
{
"msg_contents": "On Tue, 29 Aug 2023 at 19:40, Philippe Pepiot <[email protected]> wrote:\n> I'm trying to implement some range partitioning on timeseries data. But it\n> looks some queries involving date_trunc() doesn't make use of partitioning.\n>\n> BEGIN;\n> CREATE TABLE test (\n> time TIMESTAMP WITHOUT TIME ZONE NOT NULL,\n> value FLOAT NOT NULL\n> ) PARTITION BY RANGE (time);\n> CREATE INDEX test_time_idx ON test(time DESC);\n> CREATE TABLE test_y2010 PARTITION OF test FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');\n> CREATE TABLE test_y2011 PARTITION OF test FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');\n> CREATE VIEW vtest AS SELECT DATE_TRUNC('year', time) AS time, SUM(value) AS value FROM test GROUP BY 1;\n> EXPLAIN (COSTS OFF) SELECT * FROM vtest WHERE time >= TIMESTAMP '2021-01-01';\n> ROLLBACK;\n>\n> The plan query all partitions:\n\n> I wonder if there is a way with a reasonable amount of SQL code to achieve this\n> with vanilla postgres ?\n\nThe only options I see for you are\n\n1) partition by LIST(date_Trunc('year', time)), or;\n2) use a set-returning function instead of a view and pass the date\nrange you want to select from the underlying table via parameters.\n\nI imagine you won't want to do #1. However, it would at least also\nallow the aggregation to be performed before the Append if you SET\nenable_partitionwise_aggregate=1.\n\n#2 isn't as flexible as a view as you'd have to create another\nfunction or expand the parameters of the existing one if you want to\nadd items to the WHERE clause.\n\nUnfortunately, date_trunc is just a black box to partition pruning, so\nit's not able to determine that DATE_TRUNC('year', time) >=\n'2021-01-01' is the same as time >= '2021-01-01'. It would be\npossible to make PostgreSQL do that, but that's a core code change,\nnot something that you can do from SQL.\n\nDavid\n\n\n",
"msg_date": "Tue, 29 Aug 2023 21:38:05 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Range partitioning query performance with date_trunc (vs\n timescaledb)"
},
{
"msg_contents": "On 29/08/2023, David Rowley wrote:\n> On Tue, 29 Aug 2023 at 19:40, Philippe Pepiot <[email protected]> wrote:\n> > I'm trying to implement some range partitioning on timeseries data. But it\n> > looks some queries involving date_trunc() doesn't make use of partitioning.\n> >\n> > BEGIN;\n> > CREATE TABLE test (\n> > time TIMESTAMP WITHOUT TIME ZONE NOT NULL,\n> > value FLOAT NOT NULL\n> > ) PARTITION BY RANGE (time);\n> > CREATE INDEX test_time_idx ON test(time DESC);\n> > CREATE TABLE test_y2010 PARTITION OF test FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');\n> > CREATE TABLE test_y2011 PARTITION OF test FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');\n> > CREATE VIEW vtest AS SELECT DATE_TRUNC('year', time) AS time, SUM(value) AS value FROM test GROUP BY 1;\n> > EXPLAIN (COSTS OFF) SELECT * FROM vtest WHERE time >= TIMESTAMP '2021-01-01';\n> > ROLLBACK;\n> >\n> > The plan query all partitions:\n> \n> > I wonder if there is a way with a reasonable amount of SQL code to achieve this\n> > with vanilla postgres ?\n> \n> The only options I see for you are\n> \n> 1) partition by LIST(date_Trunc('year', time)), or;\n> 2) use a set-returning function instead of a view and pass the date\n> range you want to select from the underlying table via parameters.\n> \n> I imagine you won't want to do #1. However, it would at least also\n> allow the aggregation to be performed before the Append if you SET\n> enable_partitionwise_aggregate=1.\n> \n> #2 isn't as flexible as a view as you'd have to create another\n> function or expand the parameters of the existing one if you want to\n> add items to the WHERE clause.\n> \n> Unfortunately, date_trunc is just a black box to partition pruning, so\n> it's not able to determine that DATE_TRUNC('year', time) >=\n> '2021-01-01' is the same as time >= '2021-01-01'. 
It would be\n> possible to make PostgreSQL do that, but that's a core code change,\n> not something that you can do from SQL.\n\nOk, I think I'll go for the set-returning function, since\nLIST or RANGE on (date_trunc('year', time)) would break the advantage of\npartitioning when querying with \"time between x and y\".\n\nThanks!\n\n\n",
"msg_date": "Mon, 11 Sep 2023 14:21:30 +0200",
"msg_from": "Philippe Pepiot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Range partitioning query performance with date_trunc (vs\n timescaledb)"
}
] |
[
{
"msg_contents": "I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`. Everything was fine several days ago even with standard Postgresql settings. I dumped a database with the compression option (maximum compression level -Z 9) in order to have a smaller size (`pg_dump --compress=9 database_name > database_name.sql`). After that I got a lot of problems. Some queries for certain tables started to be executed very slow. Queries for other tables work fine. Here are the tables that I have issues with. asins: id (integer) value (string), index b-tree type (string) books: id (integer) asin (string), index b-tree ... (total 32 columns) asins_statistics: id (integer) average_price (float) average_rating (integer) asin_id (foreign key) ... (total 17 columns) These tables contain 1 400 000 rows each. Detailed info in attachments. Basically I used the following query and it worked well: (1) SELECT * FROM ISBNS JOIN BOOKS ON BOOKS.ISBN = ISBN.VALUE JOIN ISBNS_STATISTICS ON ISBNS_STATISTICS.ISBN_ID = ISBNS.ID ORDER BY ISBNS.VALUE LIMIT 100; But after I made the dump it started to be executed extremely slow. I'm not sure whether it's because of the dump, but before the dump everything worked well. This query also works well: SELECT * FROM ISBNS JOIN BOOKS ON BOOKS.ISBN = ISBN.VALUE JOIN ISBNS_STATISTICS ON ISBNS_STATISTICS.ISBN_ID = ISBNS.ID LIMIT 100; This query is executed quickly too: SELECT * FROM ISBNS JOIN BOOKS ON BOOKS.ISBN = ISBN.VALUE ORDER BY ISBNS.VALUE LIMIT 100; I changed performance settings (for instance, increased `shared_buffer`), but it didn't increase speed too much. I've read that queries containing LIMIT and ORDER BY work very slow, but if I make such queries to other tables it works fine. The query plan for query (1) is in attachment. So, the questions are:1. Why everything worked well and started to work slowly?2. Why similar queries to other tables are still executed quickly? Thank you in advance. Cheers,Serg",
"msg_date": "Tue, 29 Aug 2023 20:47:19 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]> wrote:\n\n> I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`.\n> Everything was fine several days ago even with standard Postgresql\n> settings. I dumped a database with the compression option (maximum\n> compression level -Z 9) in order to have a smaller size (`pg_dump\n> --compress=9 database_name > database_name.sql`). After that I got a lot of\n> problems.\n>\n\nYou describe taking a dump of the database, but don't describe doing\nanything with it. Did you replace your system with one restored from that\ndump? If so, did vacuum and analyze afterwards?\n\nCheers,\n\nJeff\n\nOn Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]> wrote:I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`. Everything was fine several days ago even with standard Postgresql settings. I dumped a database with the compression option (maximum compression level -Z 9) in order to have a smaller size (`pg_dump --compress=9 database_name > database_name.sql`). After that I got a lot of problems. You describe taking a dump of the database, but don't describe doing anything with it. Did you replace your system with one restored from that dump? If so, did vacuum and analyze afterwards?Cheers,Jeff",
"msg_date": "Tue, 29 Aug 2023 14:42:27 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "I took the dump just to store it on another storage (external HDD). I didn't do anything with it. 29.08.2023, 21:42, \"Jeff Janes\" <[email protected]>: On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]> wrote:I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`. Everything was fine several days ago even with standard Postgresql settings. I dumped a database with the compression option (maximum compression level -Z 9) in order to have a smaller size (`pg_dump --compress=9 database_name > database_name.sql`). After that I got a lot of problems. You describe taking a dump of the database, but don't describe doing anything with it. Did you replace your system with one restored from that dump? If so, did vacuum and analyze afterwards? Cheers, Jeff",
"msg_date": "Tue, 29 Aug 2023 21:55:56 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 2:55 PM Rondat Flyag <[email protected]> wrote:\n\n> I took the dump just to store it on another storage (external HDD). I\n> didn't do anything with it.\n>\n\nI don't see how that could cause the problem, it is probably just a\ncoincidence. Maybe taking the dump held a long-lived snapshot open which\ncaused some bloat. But if that was enough to push your system over the\nedge, it was probably too close to the edge to start with.\n\nDo you have a plan for the query while it was fast? If not, maybe you can\nforce it back to the old plan by setting enable_seqscan=off or perhaps\nenable_sort=off, to let you capture the old plan for comparison.\n\nThe estimate for the seq scan of isbns_statistics is off by almost a\nfactor of 2. A seq scan with no filters and which can not stop early\nshould not be hard to estimate accurately, so this suggests autovac is not\nkeeping up. VACUUM ANALYZE all of the involved tables and see if that\nfixes things.\n\nCheers,\n\nJeff\n\nOn Tue, Aug 29, 2023 at 2:55 PM Rondat Flyag <[email protected]> wrote:I took the dump just to store it on another storage (external HDD). I didn't do anything with it.I don't see how that could cause the problem, it is probably just a coincidence. Maybe taking the dump held a long-lived snapshot open which caused some bloat. But if that was enough to push your system over the edge, it was probably too close to the edge to start with.Do you have a plan for the query while it was fast? If not, maybe you can force it back to the old plan by setting enable_seqscan=off or perhaps enable_sort=off, to let you capture the old plan for comparison.The estimate for the seq scan of isbns_statistics is off by almost a factor of 2. A seq scan with no filters and which can not stop early should not be hard to estimate accurately, so this suggests autovac is not keeping up. VACUUM ANALYZE all of the involved tables and see if that fixes things.Cheers,Jeff",
"msg_date": "Tue, 29 Aug 2023 16:11:39 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 3:57 PM Rondat Flyag <[email protected]> wrote:\n\n> I took the dump just to store it on another storage (external HDD). I\n> didn't do anything with it.\n>\n> 29.08.2023, 21:42, \"Jeff Janes\" <[email protected]>:\n>\n>\n>\n> On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]>\n> wrote:\n>\n> I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`.\n> Everything was fine several days ago even with standard Postgresql\n> settings. I dumped a database with the compression option (maximum\n> compression level -Z 9) in order to have a smaller size (`pg_dump\n> --compress=9 database_name > database_name.sql`). After that I got a lot of\n> problems.\n>\n>\n> You describe taking a dump of the database, but don't describe doing\n> anything with it. Did you replace your system with one restored from that\n> dump? If so, did vacuum and analyze afterwards?\n>\n> Cheers,\n>\n> Jeff\n>\n>\nSince this is a very old system and backups are fairly I/O intensive, it is\npossible you have a disk going bad? Sometimes after doing a bunch of I/O\non an old disk, it will accelerate its decline. You could be about to lose\nit altogether.\n\nOn Tue, Aug 29, 2023 at 3:57 PM Rondat Flyag <[email protected]> wrote:I took the dump just to store it on another storage (external HDD). I didn't do anything with it. 29.08.2023, 21:42, \"Jeff Janes\" <[email protected]>: On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]> wrote:I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`. Everything was fine several days ago even with standard Postgresql settings. I dumped a database with the compression option (maximum compression level -Z 9) in order to have a smaller size (`pg_dump --compress=9 database_name > database_name.sql`). After that I got a lot of problems. You describe taking a dump of the database, but don't describe doing anything with it. 
Did you replace your system with one restored from that dump? If so, did vacuum and analyze afterwards? Cheers, JeffSince this is a very old system and backups are fairly I/O intensive, it is possible you have a disk going bad? Sometimes after doing a bunch of I/O on an old disk, it will accelerate its decline. You could be about to lose it altogether.",
"msg_date": "Tue, 29 Aug 2023 17:06:54 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "Hi and thank you for the response. I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. If I set enable_sort=off then the query takes a lot of time and I cancel it. Please see the attached query plans. Cheers,Serg 29.08.2023, 23:11, \"Jeff Janes\" <[email protected]>:On Tue, Aug 29, 2023 at 2:55 PM Rondat Flyag <[email protected]> wrote:I took the dump just to store it on another storage (external HDD). I didn't do anything with it. I don't see how that could cause the problem, it is probably just a coincidence. Maybe taking the dump held a long-lived snapshot open which caused some bloat. But if that was enough to push your system over the edge, it was probably too close to the edge to start with. Do you have a plan for the query while it was fast? If not, maybe you can force it back to the old plan by setting enable_seqscan=off or perhaps enable_sort=off, to let you capture the old plan for comparison. The estimate for the seq scan of isbns_statistics is off by almost a factor of 2. A seq scan with no filters and which can not stop early should not be hard to estimate accurately, so this suggests autovac is not keeping up. VACUUM ANALYZE all of the involved tables and see if that fixes things. Cheers, Jeff",
"msg_date": "Wed, 30 Aug 2023 20:31:05 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "Thanks for the response.Sure, I thought about it and even bought another drive. The current drive is SSD, as far as I'm concerned write operations degrade SSDs. Even so, why other queries work fine? Why the query joining two tables instead of three works fine? Cheers,Serg 30.08.2023, 00:07, \"Rick Otten\" <[email protected]>: On Tue, Aug 29, 2023 at 3:57 PM Rondat Flyag <[email protected]> wrote:I took the dump just to store it on another storage (external HDD). I didn't do anything with it. 29.08.2023, 21:42, \"Jeff Janes\" <[email protected]>: On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag <[email protected]> wrote:I have a legacy system that uses `Posgresql 9.6` and `Ubuntu 16.04`. Everything was fine several days ago even with standard Postgresql settings. I dumped a database with the compression option (maximum compression level -Z 9) in order to have a smaller size (`pg_dump --compress=9 database_name > database_name.sql`). After that I got a lot of problems. You describe taking a dump of the database, but don't describe doing anything with it. Did you replace your system with one restored from that dump? If so, did vacuum and analyze afterwards? Cheers, Jeff Since this is a very old system and backups are fairly I/O intensive, it is possible you have a disk going bad? Sometimes after doing a bunch of I/O on an old disk, it will accelerate its decline. You could be about to lose it altogether. ",
"msg_date": "Wed, 30 Aug 2023 20:46:46 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "On Thu, 31 Aug 2023 at 06:32, Rondat Flyag <[email protected]> wrote:\n> I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. If I set enable_sort=off then the query takes a lot of time and I cancel it.\n>\n> Please see the attached query plans.\n\nIt's a little hard to comment here as I don't see what the plan was\nbefore when you were happy with the performance. I also see the\nqueries you mentioned in the initial email don't match the plans.\nThere's no table called \"isbns\" in the query. I guess this is \"asins\"?\n\nLikely you could get a faster plan if there was an index on\nasins_statistics (asin_id). That would allow a query plan that scans\nthe isbns_value_key index and performs a parameterised nested loop on\nasins_statistics using the asins_statistics (asin_id) index. Looking\nat your schema, I don't see that index, so it's pretty hard to guess\nwhy the plan used to be faster. Even if the books/asins merge join\nused to take place first, there'd have been no efficient way to join\nto the asins_statistics table and preserve the Merge Join's order (I'm\nassuming non-parameterized nested loops would be inefficient in this\ncase). Doing that would have also required the asins_statistics\n(asin_id) index. Are you sure that index wasn't dropped?\n\nHowever, likely it's a waste of time to try to figure out what the\nplan used to be. Better to focus on trying to make it faster. I\nsuggest you create the asins_statistics (asin_id) index. However, I\ncan't say with any level of confidence that the planner would opt to\nuse that index if it did exist. Lowering random_page_cost or\nincreasing effective_cache_size would increase the chances of that.\n\nDavid\n\n\n",
"msg_date": "Thu, 31 Aug 2023 09:43:00 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 1:31 PM Rondat Flyag <[email protected]> wrote:\n\n> Hi and thank you for the response.\n>\n> I tried VACUUM ANALYZE for three tables, but without success. I also tried\n> to set enable_seqscan=off and the query took even more time. If I set\n> enable_sort=off then the query takes a lot of time and I cancel it.\n>\n\nMaybe you could restore (to a temp server, not the production) a physical\nbackup taken from before the change happened, and get an old plan that\nway. I'm guessing that somehow an index got dropped around the same time\nyou took the dump. That might be a lot of work, and maybe it would just be\neasier to optimize the current query while ignoring the past. But you\nseem to be interested in a root-cause analysis, and I don't see any other\nway to do one of those.\n\nWhat I would expect to be the winning plan would be something sort-free\nlike:\n\nLimit\n merge join\n index scan yielding books in asin order (already being done)\n nested loop\n index scan yielding asins in value order\n index scan probing asins_statistics driven\nby asins_statistics.asin_id = asins.id\n\nOr possibly a 2nd nested loop rather than the merge join just below the\nlimit, but with the rest the same\n\nIn addition to the \"books\" index already evident in your current plan, you\nwould also need an index leading with asins_statistics.asin_id, and one\nleading with asins.value. But if all those indexes exists, it is hard to\nsee why setting enable_seqscan=off wouldn't have forced them to be used.\n\n Cheers,\n\nJeff\n\nOn Wed, Aug 30, 2023 at 1:31 PM Rondat Flyag <[email protected]> wrote:Hi and thank you for the response. I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. 
If I set enable_sort=off then the query takes a lot of time and I cancel it.Maybe you could restore (to a temp server, not the production) a physical backup taken from before the change happened, and get an old plan that way. I'm guessing that somehow an index got dropped around the same time you took the dump. That might be a lot of work, and maybe it would just be easier to optimize the current query while ignoring the past. But you seem to be interested in a root-cause analysis, and I don't see any other way to do one of those.What I would expect to be the winning plan would be something sort-free like:Limit merge join index scan yielding books in asin order (already being done) nested loop index scan yielding asins in value order index scan probing asins_statistics driven by asins_statistics.asin_id = asins.idOr possibly a 2nd nested loop rather than the merge join just below the limit, but with the rest the sameIn addition to the \"books\" index already evident in your current plan, you would also need an index leading with asins_statistics.asin_id, and one leading with asins.value. But if all those indexes exists, it is hard to see why setting enable_seqscan=off wouldn't have forced them to be used. Cheers,Jeff",
"msg_date": "Thu, 31 Aug 2023 12:52:31 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "Hi David. Thank you so much for your help. The problem was in the dropped asins_statistics(asin_id) index. I had set it, but it was dropped somehow during the dump. I set it again andeverything works fine now.Thank you again. P.S. There are two close terms: ASIN and ISBN. I use ASIN in my tables, but ISBN is well-known to people. I changed ASIN to ISBN in the text files, but forgot to replace the last time.This is why the names didn't correspond. Cheers,Serg 31.08.2023, 00:43, \"David Rowley\" <[email protected]>:On Thu, 31 Aug 2023 at 06:32, Rondat Flyag <[email protected]> wrote: I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. If I set enable_sort=off then the query takes a lot of time and I cancel it. Please see the attached query plans.It's a little hard to comment here as I don't see what the plan wasbefore when you were happy with the performance. I also see thequeries you mentioned in the initial email don't match the plans.There's no table called \"isbns\" in the query. I guess this is \"asins\"?Likely you could get a faster plan if there was an index onasins_statistics (asin_id). That would allow a query plan that scansthe isbns_value_key index and performs a parameterised nested loop onasins_statistics using the asins_statistics (asin_id) index. Lookingat your schema, I don't see that index, so it's pretty hard to guesswhy the plan used to be faster. Even if the books/asins merge joinused to take place first, there'd have been no efficient way to jointo the asins_statistics table and preserve the Merge Join's order (I'massuming non-parameterized nested loops would be inefficient in thiscase). Doing that would have also required the asins_statistics(asin_id) index. Are you sure that index wasn't dropped?However, likely it's a waste of time to try to figure out what theplan used to be. Better to focus on trying to make it faster. 
Isuggest you create the asins_statistics (asin_id) index. However, Ican't say with any level of confidence that the planner would opt touse that index if it did exist. Lowering random_page_cost orincreasing effective_cache_size would increase the chances of that.David",
"msg_date": "Fri, 01 Sep 2023 16:41:13 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
},
{
"msg_contents": "Hello Jeff. Thank you too for your efforts and help. The problem was in the dropped index for asins_statistics(asin_id). It existed, but was dropped during the dump I suppose. I created it again and everything is fine now. Cheers, Serg 31.08.2023, 19:52, \"Jeff Janes\" <[email protected]>:On Wed, Aug 30, 2023 at 1:31 PM Rondat Flyag <[email protected]> wrote:Hi and thank you for the response. I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. If I set enable_sort=off then the query takes a lot of time and I cancel it. Maybe you could restore (to a temp server, not the production) a physical backup taken from before the change happened, and get an old plan that way. I'm guessing that somehow an index got dropped around the same time you took the dump. That might be a lot of work, and maybe it would just be easier to optimize the current query while ignoring the past. But you seem to be interested in a root-cause analysis, and I don't see any other way to do one of those. What I would expect to be the winning plan would be something sort-free like: Limit merge join index scan yielding books in asin order (already being done) nested loop index scan yielding asins in value order index scan probing asins_statistics driven by asins_statistics.asin_id = asins.id Or possibly a 2nd nested loop rather than the merge join just below the limit, but with the rest the same. In addition to the \"books\" index already evident in your current plan, you would also need an index leading with asins_statistics.asin_id, and one leading with asins.value. But if all those indexes exist, it is hard to see why setting enable_seqscan=off wouldn't have forced them to be used. Cheers, Jeff",
"msg_date": "Fri, 01 Sep 2023 16:45:06 +0300",
"msg_from": "Rondat Flyag <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries containing ORDER BY and LIMIT started to work slowly"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm from the Hibernate team (Java ORM) and a user recently reported that \na change in our SQL rendering affected his query plans in a bad way.\n\nIn short, we decided to model certain constructs in our ORM with \"nested \njoins\" i.e. using parenthesis to express the join order. This is what we \nwant to model semantically, though there are cases when we could detect \nthat we don't need the explicit join ordering to get the same semantics. \nI expected that the PostgreSQL optimizer can do the same reasoning and \ndo further join re-ordering to produce an optimal plan, but to my \nsurprise it seems it isn't capable to do that.\n\nThe query we generate right now is of the following structure:\n\nfrom tbl1 t1\njoin (tbl2 t2\n left join tbl3 t3 on t3.pkAndFk = t2.pk\n left join tbl4 t4 on t4.pkAndFk = t2.pk\n ...\n left join tbl9 t9 on t9.pkAndFk = t2.pk\n) on t2.fk = t1.pk\nwhere t1.indexedColumn = ...\n\nwhereas the query we generated before, which is semantically equivalent, \nis the following:\n\nfrom tbl1 t1\njoin tbl2 t2 on t2.fk = t1.pk\nleft join tbl3 t3 on t3.pkAndFk = t2.pk\nleft join tbl4 t4 on t4.pkAndFk = t2.pk\n...\nleft join tbl9 t9 on t9.pkAndFk = t2.pk\nwhere t1.indexedColumn = ...\n\n\nYou can find the full queries in the attachments section of the issue \nreport from the user: https://hibernate.atlassian.net/browse/HHH-16595\n\nQuery_Hibernate5.txt shows the old style query without parenthesis and \nQuery_Hibernate6.txt shows the new style. You will also find the query \nplans for the two queries attached as CSV files.\n\nIt almost seems like the PostgreSQL optimizer sees the parenthesis for \njoin ordering as an optimization fence!?\n\nThe user reported that the behavior is reproducible in PostgreSQL \nversions 11 and 15. 
He promised to provide a full reproducer for this \nwhich I am still waiting for, but I'll share it with you as soon as that \nwas provided if needed.\n\n\nI think that we can detect that the parenthesis is unnecessary in this \nparticular case, but ideally PostgreSQL would be able to detect this as \nwell to plan the optimal join order. Any ideas what is going on here? Is \nthis a bug or missed optimization in the query optimizer?\n\nI'm a bit worried about what PostgreSQL will produce for queries that \nreally need the parenthesis for join ordering e.g.\n\nfrom tbl1 t1\nleft join (tbl2 t2\n join tbl3 t3 on t3.pkAndFk = t2.pk\n join tbl4 t4 on t4.pkAndFk = t2.pk\n ...\n join tbl9 t9 on t9.pkAndFk = t2.pk\n) on t2.fk = t1.pk\nwhere t1.indexedColumn = ...\n\nThanks for any help.\n\nChristian\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 10:19:10 +0200",
"msg_from": "Christian Beikov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join order optimization"
}
] |
[
{
"msg_contents": "Hi,\n PGv14.8, OS RHEL8, no SSL enabled in this database, we have a lot of client sessions who check it's ssl state by query, all other sessions got done very quickly, but only 1 session hang there in 100% cpu. It looks like abnormal.\n\n select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | 
query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nMaj nMin WCHAN COMMAND\n1245344 postgres 20 0 32.5g 12468 12164 R 99.5 0.0 4219:12 0 1343 - postgres: test testdb 10.250.193.40(48282) BIND\n\n#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\n#8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\n at pquery.c:514\n#10 
0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n#14 ServerLoop () at postmaster.c:1745\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\n#16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThanks,\n\nJames",
"msg_date": "Wed, 6 Sep 2023 01:34:05 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query pg_stat_ssl hang 100%cpu "
}
] |
[
{
"msg_contents": "Hi,\n PGv14.8, OS RHEL8, no SSL enabled in this database, we have a lot of client sessions who check it's ssl state by query, all other sessions got done very quickly, but only 1 session hang there in 100% cpu tens of hours, even pg_terminate_backend does not make it stopped either. It looks like abnormal.\n\n select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | 
xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nMaj nMin WCHAN COMMAND\n1245344 postgres 20 0 32.5g 12468 12164 R 99.5 0.0 4219:12 0 1343 - postgres: test testdb 10.250.193.40(48282) BIND\n\n#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\n#8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, 
snapshot=snapshot@entry=0x0)\n at pquery.c:514\n#10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n#14 ServerLoop () at postmaster.c:1745\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\n#16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThanks,\n\nJames",
"msg_date": "Wed, 6 Sep 2023 01:40:56 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query pg_stat_ssl hang 100%cpu "
},
{
"msg_contents": "Looks like an abnormal case.\n\nFrom: James Pang (chaolpan)\nSent: Wednesday, September 6, 2023 9:41 AM\nTo: [email protected]\nSubject: query pg_stat_ssl hang 100%cpu\n\nHi,\n PGv14.8, OS RHEL8, no SSL enabled in this database, we have a lot of client sessions who check it's ssl state by query, all other sessions got done very quickly, but only 1 session hang there in 100% cpu tens of hours, even pg_terminate_backend does not make it stopped either. It looks like abnormal.\n\n select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\ntestdb=# select 
pid,usename,application_name,query_start,xact_start,state_change,wait_event_type,state,query from pg_stat_activity where pid=1245344;\n pid | usename | application_name | query_start | xact_start | state_change | wait_event_type |\nstate | query\n---------+---------+------------------------+------------------------------+-------------------------------+------------------------------+-----------------+\n--------+--------------------------------------------------------\n1245344 | test | PostgreSQL JDBC Driver | 2023-09-03 02:36:23.40238+00 | 2023-09-03 02:36:23.402331+00 | 2023-09-03 02:36:23.40238+00 | |\nactive | select ssl from pg_stat_ssl where pid=pg_backend_pid()\n(1 row)\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nMaj nMin WCHAN COMMAND\n1245344 postgres 20 0 32.5g 12468 12164 R 99.5 0.0 4219:12 0 1343 - postgres: test testdb 10.250.193.40(48282) BIND\n\n#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\n#8 0x00007f67c2821d5d in pgss_ExecutorStart 
(queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\n at pquery.c:514\n#10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n#14 ServerLoop () at postmaster.c:1745\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\n#16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThanks,\n\nJames",
"msg_date": "Thu, 7 Sep 2023 01:35:00 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: query pg_stat_ssl hang 100%cpu "
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 01:35:00AM +0000, James Pang (chaolpan) wrote:\n> PGv14.8, OS RHEL8, no SSL enabled in this database, we have a\n> lot of client sessions who check it's ssl state by query, all\n> other sessions got done very quickly, but only 1 session hang\n> there in 100% cpu tens of hours, even pg_terminate_backend does\n> not make it stopped either. It looks like abnormal. \n> \n> select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\nThis is hard to act on without more details or even a reproducible and\nself-contained test case. Even a java script based on the JDBC driver\nwould be OK for me, for example, if it helps digging into what you are\nseeing.\n\n> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\n> #1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n> #2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n> resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\n> #3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n> resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\n> #4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\n> #5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\n> #6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\n> #7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\n> #8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\n> #9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, 
eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\n> at pquery.c:514\n> #10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\n> #11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n> #12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\n> #13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n> #14 ServerLoop () at postmaster.c:1745\n> #15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\n> #16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThis stack is referring to a code path where we are checking that some\nof the type-related data associated to a record is around, but this\ndoes not say exactly where the loop happens, so... Are we looking on\na loop in the function execution itself from which the information of\npg_stat_ssl is retrieved (aka pg_stat_get_activity())? Or is the\ntype cache somewhat broken because of the extended query protocol?\nThat's not really possible to see any evidence based on the\ninformation provided, though it provides a few hits that can help.\nFWIW, I've not heard about an issue like that in the field.\n\nThe first thing I would do is update to 14.9, which is the latest\nversion of Postgres available for this major version.\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 13:04:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "Yes, this backend has been always on same call stack hours , no changed, and 100% cpu. \n\n-----Original Message-----\nFrom: Michael Paquier <[email protected]> \nSent: Thursday, September 7, 2023 12:05 PM\nTo: James Pang (chaolpan) <[email protected]>\nCc: PostgreSQL mailing lists <[email protected]>\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\n\nOn Thu, Sep 07, 2023 at 01:35:00AM +0000, James Pang (chaolpan) wrote:\n> PGv14.8, OS RHEL8, no SSL enabled in this database, we have a\n> lot of client sessions who check it's ssl state by query, all\n> other sessions got done very quickly, but only 1 session hang\n> there in 100% cpu tens of hours, even pg_terminate_backend does\n> not make it stopped either. It looks like abnormal. \n> \n> select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\nThis is hard to act on without more details or even a reproducible and self-contained test case. Even a java script based on the JDBC driver would be OK for me, for example, if it helps digging into what you are seeing.\n\n> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at \n> typcache.c:1714\n> #1 0x000000000091185b in assign_record_type_typmod \n> (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n> #2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n> resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at \n> funcapi.c:393\n> #3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n> resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at \n> funcapi.c:230\n> #4 0x00000000006a2fa5 in ExecInitFunctionScan \n> (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, \n> eflags=eflags@entry=16) at nodeFunctionscan.c:370\n> #5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, \n> estate=estate@entry=0x269e948, eflags=eflags@entry=16) at \n> execProcnode.c:255\n> #6 
0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at \n> execMain.c:936\n> #7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at \n> execMain.c:263\n> #8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, \n> eflags=<optimized out>) at pg_stat_statements.c:965\n> #9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\n> at pquery.c:514\n> #10 0x00000000007fa27f in exec_bind_message \n> (input_message=0x7ffc9dff90d0) at postgres.c:1995\n> #11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, \n> dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n> #12 0x000000000077a4ea in BackendRun (port=<optimized out>, \n> port=<optimized out>) at postmaster.c:4537\n> #13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n> #14 ServerLoop () at postmaster.c:1745\n> #15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, \n> argv=argv@entry=0x256abc0) at postmaster.c:1417\n> #16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThis stack is referring to a code path where we are checking that some of the type-related data associated to a record is around, but this does not say exactly where the loop happens, so... Are we looking on a loop in the function execution itself from which the information of pg_stat_ssl is retrieved (aka pg_stat_get_activity())? Or is the type cache somewhat broken because of the extended query protocol?\nThat's not really possible to see any evidence based on the information provided, though it provides a few hits that can help.\nFWIW, I've not heard about an issue like that in the field.\n\nThe first thing I would do is update to 14.9, which is the latest version of Postgres available for this major version.\n--\nMichael\n\n\n",
"msg_date": "Thu, 7 Sep 2023 08:46:29 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "\n\n Yes, this backend has been always on same call stack tens of hours and 100% cpu there. It's still hang there now, but I can not reproduce that in other similar environment. I found this query start a transaction \"xact_start\" from \" 2023-09-03 02:36:23\" from pg_stat_activity, no idea why. \n\n-----Original Message-----\nFrom: Michael Paquier <[email protected]>\nSent: Thursday, September 7, 2023 12:05 PM\nTo: James Pang (chaolpan) <[email protected]>\nCc: PostgreSQL mailing lists <[email protected]>\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\n\nOn Thu, Sep 07, 2023 at 01:35:00AM +0000, James Pang (chaolpan) wrote:\n> PGv14.8, OS RHEL8, no SSL enabled in this database, we have a\n> lot of client sessions who check it's ssl state by query, all\n> other sessions got done very quickly, but only 1 session hang\n> there in 100% cpu tens of hours, even pg_terminate_backend does\n> not make it stopped either. It looks like abnormal. \n> \n> select ssl from pg_stat_ssl where pid=pg_backend_pid();\n\nThis is hard to act on without more details or even a reproducible and self-contained test case. 
Even a java script based on the JDBC driver would be OK for me, for example, if it helps digging into what you are seeing.\n\n> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at\n> typcache.c:1714\n> #1 0x000000000091185b in assign_record_type_typmod \n> (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\n> #2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\n> resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at\n> funcapi.c:393\n> #3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\n> resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at\n> funcapi.c:230\n> #4 0x00000000006a2fa5 in ExecInitFunctionScan \n> (node=node@entry=0x273afa8, estate=estate@entry=0x269e948,\n> eflags=eflags@entry=16) at nodeFunctionscan.c:370\n> #5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, \n> estate=estate@entry=0x269e948, eflags=eflags@entry=16) at\n> execProcnode.c:255\n> #6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at\n> execMain.c:936\n> #7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at\n> execMain.c:263\n> #8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, \n> eflags=<optimized out>) at pg_stat_statements.c:965\n> #9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\n> at pquery.c:514\n> #10 0x00000000007fa27f in exec_bind_message\n> (input_message=0x7ffc9dff90d0) at postgres.c:1995\n> #11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, \n> dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\n> #12 0x000000000077a4ea in BackendRun (port=<optimized out>, \n> port=<optimized out>) at postmaster.c:4537\n> #13 BackendStartup (port=<optimized out>) at postmaster.c:4259\n> #14 ServerLoop () at postmaster.c:1745\n> 
#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5,\n> argv=argv@entry=0x256abc0) at postmaster.c:1417\n> #16 0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\n\nThis stack is referring to a code path where we are checking that some of the type-related data associated to a record is around, but this does not say exactly where the loop happens, so... Are we looking on a loop in the function execution itself from which the information of pg_stat_ssl is retrieved (aka pg_stat_get_activity())? Or is the type cache somewhat broken because of the extended query protocol?\nThat's not really possible to see any evidence based on the information provided, though it provides a few hits that can help.\nFWIW, I've not heard about an issue like that in the field.\n\nThe first thing I would do is update to 14.9, which is the latest version of Postgres available for this major version.\n--\nMichael\n\n\n",
"msg_date": "Thu, 7 Sep 2023 08:54:09 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\n\nAre you able to print out the value of global variable\nRecordCacheArrayLen? I wonder if this loop in\nensure_record_cache_typmod_slot_exists() is not terminating:\n\n int32 newlen = RecordCacheArrayLen * 2;\n\n while (typmod >= newlen)\n newlen *= 2;\n\n\n",
"msg_date": "Thu, 7 Sep 2023 21:31:00 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "(gdb) bt\r\n#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\r\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\r\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\r\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\r\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\r\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\r\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16)\r\n at nodeFunctionscan.c:370\r\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16)\r\n at execProcnode.c:255\r\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\r\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\r\n#8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\r\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0,\r\n snapshot=snapshot@entry=0x0) at pquery.c:514\r\n#10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\r\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>)\r\n at postgres.c:4552\r\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\r\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\r\n#14 ServerLoop () at postmaster.c:1745\r\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\r\n#16 
0x00000000004fec63 in main (argc=5, argv=0x256abc0) at main.c:209\r\n(gdb) p RecordCacheArrayLen\r\n$3 = 0\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <[email protected]> \r\nSent: Thursday, September 7, 2023 5:31 PM\r\nTo: James Pang (chaolpan) <[email protected]>\r\nCc: Michael Paquier <[email protected]>; PostgreSQL mailing lists <[email protected]>\r\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\r\n\r\n> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at \r\n> typcache.c:1714\r\n\r\nAre you able to print out the value of global variable RecordCacheArrayLen? I wonder if this loop in\r\nensure_record_cache_typmod_slot_exists() is not terminating:\r\n\r\n int32 newlen = RecordCacheArrayLen * 2;\r\n\r\n while (typmod >= newlen)\r\n newlen *= 2;\r\n",
"msg_date": "Thu, 7 Sep 2023 09:38:15 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "\r\n#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\r\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\r\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\r\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\r\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\r\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\r\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\r\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\r\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\r\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\r\n#8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\r\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\r\n at pquery.c:514\r\n#10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\r\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\r\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\r\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\r\n#14 ServerLoop () at postmaster.c:1745\r\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\r\n#16 0x00000000004fec63 in main (argc=5, 
argv=0x256abc0) at main.c:209\r\n(gdb) p RecordCacheArrayLen\r\n$1 = 0\r\n(gdb) p RecordCacheArrayLen\r\n$2 = 0\r\n(gdb) p RecordCacheArrayLen\r\n$3 = 0\r\n-----Original Message-----\r\nFrom: Thomas Munro <[email protected]> \r\nSent: Thursday, September 7, 2023 5:31 PM\r\nTo: James Pang (chaolpan) <[email protected]>\r\nCc: Michael Paquier <[email protected]>; PostgreSQL mailing lists <[email protected]>\r\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\r\n\r\n> #0 ensure_record_cache_typmod_slot_exists (typmod=0) at \r\n> typcache.c:1714\r\n\r\nAre you able to print out the value of global variable RecordCacheArrayLen? I wonder if this loop in\r\nensure_record_cache_typmod_slot_exists() is not terminating:\r\n\r\n int32 newlen = RecordCacheArrayLen * 2;\r\n\r\n while (typmod >= newlen)\r\n newlen *= 2;\r\n",
"msg_date": "Thu, 7 Sep 2023 09:54:47 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 9:38 PM James Pang (chaolpan) <[email protected]> wrote:\n> (gdb) p RecordCacheArrayLen\n> $3 = 0\n\nClearly this system lacks a check against wrapping around, but it must\nbe hard work to allocate billions of typmods...\n\nOr maybe if in an earlier call we assigned RecordCacheArray but the\nallocation of RecordIdentifierArray failed (a clue would be that it is\nstill NULL), we never manage to assign RecordCacheArrayLen a non-zero\nvalue? But it must be unlikely for a small allocation to fail...\n\n\n",
"msg_date": "Thu, 7 Sep 2023 22:01:12 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": " Yes, we checked the server history logs and found that when the backend started hanging there, the operating system had run out of memory, which may have caused the allocation of RecordIdentifierArray to fail. \r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <[email protected]> \r\nSent: Thursday, September 7, 2023 6:01 PM\r\nTo: James Pang (chaolpan) <[email protected]>\r\nCc: Michael Paquier <[email protected]>; PostgreSQL mailing lists <[email protected]>\r\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\r\n\r\nOn Thu, Sep 7, 2023 at 9:38 PM James Pang (chaolpan) <[email protected]> wrote:\r\n> (gdb) p RecordCacheArrayLen\r\n> $3 = 0\r\n\r\nClearly this system lacks check against wrapping around, but it must be hard work to allocate billions of typmods...\r\n\r\nOr maybe if in an earlier call we assigned RecordCacheArray but the allocation of RecordIdentifierArray failed (a clue would be that it is still NULL), we never manage to assign RecordCacheArrayLen a non-zero value? But it must be unlikely for a small allocation to fail...\r\n",
"msg_date": "Thu, 7 Sep 2023 10:31:25 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 10:31 PM James Pang (chaolpan)\n<[email protected]> wrote:\n> Yes, checked the server history logs, we found when the backend starting hang there, operating system has out of memory that may lead to the allocation of RecordIdentifierArray failed.\n\nCan you please print out RecordCacheArray and RecordIdentifierArray?\nFor that theory, the first one must be non-NULL and the second one\nNULL. That would lead to the infinite loop, I think.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 22:36:07 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "#0 ensure_record_cache_typmod_slot_exists (typmod=0) at typcache.c:1714\r\n#1 0x000000000091185b in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x27bc738) at typcache.c:2001\r\n#2 0x000000000091df03 in internal_get_result_type (funcid=<optimized out>, call_expr=<optimized out>, rsinfo=<optimized out>,\r\n resultTypeId=<optimized out>, resultTupleDesc=0x7ffc9dff8cd0) at funcapi.c:393\r\n#3 0x000000000091e263 in get_expr_result_type (expr=expr@entry=0x2792798, resultTypeId=resultTypeId@entry=0x7ffc9dff8ccc,\r\n resultTupleDesc=resultTupleDesc@entry=0x7ffc9dff8cd0) at funcapi.c:230\r\n#4 0x00000000006a2fa5 in ExecInitFunctionScan (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at nodeFunctionscan.c:370\r\n#5 0x000000000069084e in ExecInitNode (node=node@entry=0x273afa8, estate=estate@entry=0x269e948, eflags=eflags@entry=16) at execProcnode.c:255\r\n#6 0x000000000068a96d in InitPlan (eflags=16, queryDesc=0x273b2d8) at execMain.c:936\r\n#7 standard_ExecutorStart (queryDesc=0x273b2d8, eflags=16) at execMain.c:263\r\n#8 0x00007f67c2821d5d in pgss_ExecutorStart (queryDesc=0x273b2d8, eflags=<optimized out>) at pg_stat_statements.c:965\r\n#9 0x00000000007fc226 in PortalStart (portal=portal@entry=0x26848b8, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0)\r\n at pquery.c:514\r\n#10 0x00000000007fa27f in exec_bind_message (input_message=0x7ffc9dff90d0) at postgres.c:1995\r\n#11 PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffc9dff9370, dbname=<optimized out>, username=<optimized out>) at postgres.c:4552\r\n#12 0x000000000077a4ea in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4537\r\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4259\r\n#14 ServerLoop () at postmaster.c:1745\r\n#15 0x000000000077b363 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x256abc0) at postmaster.c:1417\r\n#16 0x00000000004fec63 in main (argc=5, 
argv=0x256abc0) at main.c:209\r\n(gdb) p RecordCacheArray\r\n$1 = (TupleDesc *) 0x7f5fac365d90\r\n(gdb) p RecordIdentifierArray\r\n$2 = (uint64 *) 0x0\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <[email protected]> \r\nSent: Thursday, September 7, 2023 6:36 PM\r\nTo: James Pang (chaolpan) <[email protected]>\r\nCc: Michael Paquier <[email protected]>; PostgreSQL mailing lists <[email protected]>\r\nSubject: Re: FW: query pg_stat_ssl hang 100%cpu\r\n\r\nOn Thu, Sep 7, 2023 at 10:31 PM James Pang (chaolpan) <[email protected]> wrote:\r\n> Yes, checked the server history logs, we found when the backend starting hang there, operating system has out of memory that may lead to the allocation of RecordIdentierArray failed.\r\n\r\nCan you please print out RecordCacheArray and RecordIdentifierArray?\r\nFor that theory, the first one must be non-NULL and the second one NULL. That would lead to the infinite loop, I think.\r\n",
"msg_date": "Thu, 7 Sep 2023 10:39:09 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 10:39 PM James Pang (chaolpan)\n<[email protected]> wrote:\n> (gdb) p RecordCacheArray\n> $1 = (TupleDesc *) 0x7f5fac365d90\n> (gdb) p RecordIdentifierArray\n> $2 = (uint64 *) 0x0\n\nHah, yeah that's it, and you've been extremely unlucky to hit it.\nensure_record_cache_typmod_slot_exists() should be more careful about\ncleaning up on allocation failure, to avoid this state.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 22:59:14 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 10:59 PM Thomas Munro <[email protected]> wrote:\n> On Thu, Sep 7, 2023 at 10:39 PM James Pang (chaolpan)\n> <[email protected]> wrote:\n> > (gdb) p RecordCacheArray\n> > $1 = (TupleDesc *) 0x7f5fac365d90\n> > (gdb) p RecordIdentifierArray\n> > $2 = (uint64 *) 0x0\n>\n> Hah, yeah that's it, and you've been extremely unlucky to hit it.\n> ensure_record_cache_typmod_slot_exists() should be more careful about\n> cleaning up on allocation failure, to avoid this state.\n\nI think the lazy fix would be to re-order those allocations. A\nmarginally more elegant fix would be to merge the arrays, as in the\nattached. Thoughts?",
"msg_date": "Fri, 8 Sep 2023 11:45:51 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 10:39 PM James Pang (chaolpan)\n<chaolpan(at)cisco(dot)com> wrote:\n> (gdb) p RecordCacheArray\n> $1 = (TupleDesc *) 0x7f5fac365d90\n> (gdb) p RecordIdentifierArray\n> $2 = (uint64 *) 0x0\n\nOh, yeah, this stack is broken. You have been really unlucky to hit\nthat. This can randomly cause any session to get stuck, and no need\nfor the extended query protocol here.\n\n(I am not sure how, but my email server has somewhat not been able to\nget the previous messages from James. Anyway.)\n\nOn Fri, Sep 08, 2023 at 11:45:51AM +1200, Thomas Munro wrote:\n> I think the lazy fix would be to re-order those allocations. A\n> marginally more elegant fix would be to merge the arrays, as in the\n> attached. Thoughts?\n\nSo, ensure_record_cache_typmod_slot_exists() would allocate the\ninitial RecordCacheArray and if it fails on the second one it keeps\nRecordCacheArrayLen at 0. When coming back again to this code path,\nthe second part of the routine causes an infinite loop because the\nallocation has been done, but the tracked length is 0. Fun.\n\nThis is broken since 4b93f57 where the second array has been\nintroduced. Getting that fixed before 11 is EOL is nice as it was\nintroduced there, so good timing.\n\nThere is a repalloc_extended(), but I cannot get excited to use\nMCXT_ALLOC_NO_OOM in this code path if there is a simpler method to\navoid this issue with a single allocation for the all information set.\n\n+static RecordCacheArrayEntry * RecordCacheArray = NULL; \n\npgindent is annoyed by that.. typedefs.list has been updated in your\npatch, so I guess that you missed one extra indentation after this is\nrefreshed.\n\nNote that RememberToFreeTupleDescAtEOX() does something similar to the\ntype cache, and it uses the same approach as your patch.\n\n+1 to your proposal of using a struct for the entries in the cache.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 11:48:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 2:48 PM Michael Paquier <[email protected]> wrote:\n> +1 to your proposal of using a struct for the entries in the cache.\n\nCool, I'll push/back-patch after 16.0. Even though this seems\nsimple enough, it's an extremely low probability failure and I'd\nrather keep out of REL_16_STABLE's way...\n\n\n",
"msg_date": "Mon, 11 Sep 2023 10:37:34 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 10:37:34AM +1200, Thomas Munro wrote:\n> Cool, I'll push/back-patch after 16.0. Even though this seems\n> simple enough, it's an extremely low probability failure and I'd\n> rather keep out of REL_16_STABLE's way...\n\n+1.\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 07:47:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 10:47 AM Michael Paquier <[email protected]> wrote:\n> On Mon, Sep 11, 2023 at 10:37:34AM +1200, Thomas Munro wrote:\n> > Cool, I'll push/back-patch after 16.0. Even though this seems\n> > simple enough, it's an extremely low probability failure and I'd\n> > rather keep out of REL_16_STABLE's way...\n>\n> +1.\n\nDone.\n\n\n",
"msg_date": "Wed, 13 Sep 2023 15:11:13 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: query pg_stat_ssl hang 100%cpu"
}
] |
[
{
"msg_contents": "Hello,\nI have three tables:\n - test_db_bench_1\n - test_db_bench_tenants\n - test_db_bench_tenant_closure\n\nAnd the query to join them:\nSELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n FROM \"test_db_bench_1\"\n JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON\n((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\")\n AND\n(\"tenants_child\".\"is_deleted\" != true))\n JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON\n((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n AND\n(\"tenants_closure\".\"barrier\" <= 0))\n JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON\n((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n AND\n(\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n AND\n(\"tenants_parent\".\"is_deleted\" != true))\n LIMIT 1\n\n\nWith following execution plan:\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011 rows=0\nloops=1)\n -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual\ntime=0.009..0.009 rows=0 loops=1)\n -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual\ntime=0.008..0.009 rows=0 loops=1)\n -> Nested Loop (cost=0.84..48.09 rows=7 width=8) (actual\ntime=0.008..0.009 rows=0 loops=1)\n -> Index Scan using test_db_bench_tenants_uuid on\ntest_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8)\n(actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: ((uuid)::text =\n'4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_tenant_closure_pkey\non test_db_bench_tenant_closure tenants_closure (cost=0.42..45.06 rows=40\nwidth=16) (never executed)\n Index Cond: (parent_id = tenants_parent.id)\n Filter: 
(barrier <= 0)\n -> Index Scan using test_db_bench_tenants_pkey on\ntest_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44)\n(never executed)\n Index Cond: (id = tenants_closure.child_id)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\nacronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never\nexecuted)\n Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n Planning Time: 0.732 ms\n Execution Time: 0.039 ms\n\n\nWhere the planning time gets in the way as it takes an order of magnitude\nmore time than the actual execution.\n\nIs there a possibility to reduce this time? And, in general, to understand\nwhy planning takes so much time.\n\nWhat I have tried:\n- disabled JIT, which resulted in a minor improvement, around 5\nmicroseconds.\n- disabled constraint_exclusion, which also didn't have a significant\nimpact.\n\nSizes of tables and indexes:\n-- test_db_bench_1\n List of relations\n Schema | Name | Type | Owner | Persistence | Access method |\n Size | Description\n--------+-----------------+-------+--------+-------------+---------------+---------+-------------\n public | test_db_bench_1 | table | dbuser | permanent | heap |\n5351 MB |\n\n Column | Type | Collation | Nullable\n| Default\n\n---------------------------+------------------------+-----------+----------+----------------------------------------------\n------\n id | bigint | | not null\n| nextval('test_db_bench_1_id_seq'::regclass)\n uuid | uuid | | not null |\n checksum | character varying(64) | | not null |\n tenant_id | character varying(36) | | not null |\n cti_entity_uuid | character varying(36) | | |\n euc_id | character varying(64) | | not null |\n workflow_id | bigint | | |\n state | integer | | not null |\n type | character varying(64) | | not null |\n queue | character varying(64) | | not null |\n priority | integer | | not null |\n issuer_id | character varying(64) | | not null |\n issuer_cluster_id | character varying(64) | | 
|\n heartbeat_ivl_str | character varying(64) | | |\n heartbeat_ivl_ns | bigint | | |\n queue_timeout_str | character varying(64) | | |\n queue_timeout_ns | bigint | | |\n ack_timeout_str | character varying(64) | | |\n ack_timeout_ns | bigint | | |\n exec_timeout_str | character varying(64) | | |\n exec_timeout_ns | bigint | | |\n life_time_str | character varying(64) | | |\n life_time_ns | bigint | | |\n max_assign_count | integer | | not null |\n assign_count | integer | | not null |\n max_fail_count | integer | | not null |\n fail_count | integer | | not null |\n cancellable | boolean | | not null |\n cancel_requested | boolean | | not null |\n blocker_count | integer | | not null |\n started_by_user | character varying(256) | | |\n policy_id | character varying(64) | | |\n policy_type | character varying(64) | | |\n policy_name | character varying(256) | | |\n resource_id | character varying(64) | | |\n resource_type | character varying(64) | | |\n resource_name | character varying(256) | | |\n tags | text | | |\n affinity_agent_id | character varying(64) | | not null |\n affinity_cluster_id | character varying(64) | | not null |\n argument | bytea | | |\n context | bytea | | |\n progress | integer | | |\n progress_total | integer | | |\n assigned_agent_id | character varying(64) | | |\n assigned_agent_cluster_id | character varying(64) | | |\n enqueue_time_str | character varying(64) | | |\n enqueue_time_ns | bigint | | not null |\n assign_time_str | character varying(64) | | |\n assign_time_ns | bigint | | |\n start_time_str | character varying(64) | | |\n start_time_ns | bigint | | |\n update_time_str | character varying(64) | | not null |\n update_time_ns | bigint | | not null |\n completion_time_str | character varying(64) | | |\n completion_time_ns | bigint | | |\n result_code | integer | | |\n result_error | bytea | | |\n result_warnings | bytea | | |\n result_payload | bytea | | |\n const_val | integer | | |\nIndexes:\n \"test_db_bench_1_pkey\" PRIMARY 
KEY, btree (id)\n \"test_db_bench_1_idx_completion_time_ns_1\" btree (completion_time_ns)\n \"test_db_bench_1_idx_cti_entity_uuid_2\" btree (cti_entity_uuid)\n \"test_db_bench_1_idx_enqueue_time_ns_10\" btree (enqueue_time_ns)\n \"test_db_bench_1_idx_euc_id_4\" btree (euc_id)\n \"test_db_bench_1_idx_policy_id_12\" btree (policy_id)\n \"test_db_bench_1_idx_queue_18\" btree (queue, type, tenant_id)\n \"test_db_bench_1_idx_queue_19\" btree (queue, type, euc_id)\n \"test_db_bench_1_idx_queue_5\" btree (queue, state, affinity_agent_id,\naffinity_cluster_id, tenant_id, priority)\n \"test_db_bench_1_idx_queue_6\" btree (queue, state, affinity_agent_id,\naffinity_cluster_id, euc_id, priority)\n \"test_db_bench_1_idx_resource_id_11\" btree (resource_id)\n \"test_db_bench_1_idx_resource_id_14\" btree (resource_id,\nenqueue_time_ns)\n \"test_db_bench_1_idx_result_code_13\" btree (result_code)\n \"test_db_bench_1_idx_start_time_ns_9\" btree (start_time_ns)\n \"test_db_bench_1_idx_state_8\" btree (state, completion_time_ns)\n \"test_db_bench_1_idx_tenant_id_3\" btree (tenant_id)\n \"test_db_bench_1_idx_type_15\" btree (type)\n \"test_db_bench_1_idx_type_16\" btree (type, tenant_id, enqueue_time_ns)\n \"test_db_bench_1_idx_type_17\" btree (type, euc_id, enqueue_time_ns)\n \"test_db_bench_1_idx_update_time_ns_7\" btree (update_time_ns)\n \"test_db_bench_1_idx_uuid_0\" btree (uuid)\n \"test_db_bench_1_uuid_key\" UNIQUE CONSTRAINT, btree (uuid)\n\n\n\n-- test_db_bench_tenants\n Schema | Name | Type | Owner | Persistence | Access\nmethod | Size | Description\n--------+-----------------------+-------+--------+-------------+---------------+---------+-------------\n public | test_db_bench_tenants | table | dbuser | permanent | heap\n | 8432 kB |\n\n Column | Type | Collation | Nullable | Default\n-------------------+------------------------+-----------+----------+---------\n id | bigint | | not null |\n uuid | character varying(36) | | not null |\n name | character varying(255) | | not 
null |\n kind | character(1) | | not null |\n is_deleted | boolean | | not null | false\n parent_id | bigint | | not null |\n parent_has_access | boolean | | not null | true\n nesting_level | smallint | | not null |\nIndexes:\n \"test_db_bench_tenants_pkey\" PRIMARY KEY, btree (id)\n \"test_db_bench_tenants_uuid\" UNIQUE CONSTRAINT, btree (uuid)\n\n-- test_db_bench_tenant_closure\n Schema | Name | Type | Owner | Persistence |\nAccess method | Size | Description\n--------+------------------------------+-------+--------+-------------+---------------+-------+-------------\n public | test_db_bench_tenant_closure | table | dbuser | permanent |\nheap | 22 MB |\n\n Column | Type | Collation | Nullable | Default\n-------------+--------------+-----------+----------+---------\n parent_id | bigint | | not null |\n child_id | bigint | | not null |\n parent_kind | character(1) | | not null |\n barrier | smallint | | not null | 0\nIndexes:\n \"test_db_bench_tenant_closure_pkey\" PRIMARY KEY, btree (parent_id,\nchild_id)\n \"cybercache_tenants_closure_child_id_idx\" btree (child_id)\n\n\nPostgresql version: 15.3 (Debian 15.3-1.pgdg110+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\nAnd just in case it matters, this is an experimental setup, so Postgresql\nrunning in Docker.\n\nThank you.\n\n--\nMikhail",
"msg_date": "Fri, 8 Sep 2023 18:51:16 +0800",
"msg_from": "Mikhail Balayan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planning time is time-consuming"
},
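[Editor's note: not mentioned in the thread, but a useful way to confirm whether planning time dominates across a whole workload, rather than in a single EXPLAIN, is `pg_stat_statements` with plan-time tracking. This is a sketch assuming PostgreSQL 13+ with the extension installed:]

```sql
-- postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'
--   pg_stat_statements.track_planning = on
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Statements where planning, not execution, is the dominant cost.
SELECT query,
       calls,
       total_plan_time / calls AS avg_plan_ms,
       total_exec_time / calls AS avg_exec_ms
FROM pg_stat_statements
WHERE calls > 0
ORDER BY total_plan_time DESC
LIMIT 10;
```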
{
"msg_contents": "On Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:\n> I have three tables:\n> - test_db_bench_1\n> - test_db_bench_tenants\n> - test_db_bench_tenant_closure\n> \n> And the query to join them:\n> SELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n> FROM \"test_db_bench_1\"\n> JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON ((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\") \n> AND (\"tenants_child\".\"is_deleted\" != true))\n> JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON ((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n> AND (\"tenants_closure\".\"barrier\" <= 0))\n> JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON ((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n> AND (\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n> AND (\"tenants_parent\".\"is_deleted\" != true))\n> LIMIT 1\n> \n> \n> With following execution plan:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ---------------\n> Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011 rows=0 loops=1)\n> -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual time=0.009..0.009 rows=0 loops=1)\n> -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual time=0.008..0.009 rows=0 loops=1)\n> -> Nested Loop (cost=0.84..48.09 rows=7 width=8) (actual time=0.008..0.009 rows=0 loops=1)\n> -> Index Scan using test_db_bench_tenants_uuid on test_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8) (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: ((uuid)::text = '4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n> Filter: (NOT is_deleted)\n> -> Index Scan using test_db_bench_tenant_closure_pkey on test_db_bench_tenant_closure tenants_closure (cost=0.42..45.06 rows=40 
width=16) (never executed)\n> Index Cond: (parent_id = tenants_parent.id)\n> Filter: (barrier <= 0)\n> -> Index Scan using test_db_bench_tenants_pkey on test_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44) (never executed)\n> Index Cond: (id = tenants_closure.child_id)\n> Filter: (NOT is_deleted)\n> -> Index Scan using test_db_bench_1_idx_tenant_id_3 on acronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never executed)\n> Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n> Planning Time: 0.732 ms\n> Execution Time: 0.039 ms\n> \n> \n> Where the planning time gets in the way as it takes an order of magnitude more time than the actual execution.\n> \n> Is there a possibility to reduce this time? And, in general, to understand why planning takes so much time.\n\nYou could try to VACUUM the involved tables; indexes with many entries pointing to dead tuples\ncan cause a long planning time.\n\nAlso, there are quite a lot of indexes on \"test_db_bench_1\". On a test database, drop some\nindexes and see if that makes a difference.\n\nFinally, check if \"default_statistics_target\" is set to a high value, or if the \"Stats target\"\nfor some column in the \"\\d+ tablename\" output is set higher than 100.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 11 Sep 2023 03:15:43 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
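[Editor's note: the checks Laurenz suggests can be run with a few catalog queries. This is a sketch using table names from the thread; it is not part of the original mails:]

```sql
-- Planner sampling target (default 100; higher values mean more
-- statistics for the planner to read at plan time).
SHOW default_statistics_target;

-- Per-column overrides: attstattarget > 0 means the column was raised
-- explicitly with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS.
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'test_db_bench_1'::regclass
  AND attstattarget > 0;

-- Dead tuples that a VACUUM would clean up, and when it last ran.
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'test_db_bench_1';
```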
{
"msg_contents": "Also, if you write sql with bind params, planning time should be once for the sql. Subsequent sql will use cached stmt.\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Laurenz Albe <[email protected]>\nSent: Sunday, September 10, 2023 6:15:43 PM\nTo: Mikhail Balayan <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Planning time is time-consuming\n\nOn Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:\n> I have three tables:\n> - test_db_bench_1\n> - test_db_bench_tenants\n> - test_db_bench_tenant_closure\n>\n> And the query to join them:\n> SELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n> FROM \"test_db_bench_1\"\n> JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON ((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\")\n> AND (\"tenants_child\".\"is_deleted\" != true))\n> JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON ((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n> AND (\"tenants_closure\".\"barrier\" <= 0))\n> JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON ((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n> AND (\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n> AND (\"tenants_parent\".\"is_deleted\" != true))\n> LIMIT 1\n>\n>\n> With following execution plan:\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ---------------\n> Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011 rows=0 loops=1)\n> -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual time=0.009..0.009 rows=0 loops=1)\n> -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual time=0.008..0.009 rows=0 loops=1)\n> -> Nested Loop (cost=0.84..48.09 rows=7 width=8) (actual time=0.008..0.009 rows=0 
loops=1)\n> -> Index Scan using test_db_bench_tenants_uuid on test_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8) (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: ((uuid)::text = '4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n> Filter: (NOT is_deleted)\n> -> Index Scan using test_db_bench_tenant_closure_pkey on test_db_bench_tenant_closure tenants_closure (cost=0.42..45.06 rows=40 width=16) (never executed)\n> Index Cond: (parent_id = tenants_parent.id)\n> Filter: (barrier <= 0)\n> -> Index Scan using test_db_bench_tenants_pkey on test_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44) (never executed)\n> Index Cond: (id = tenants_closure.child_id)\n> Filter: (NOT is_deleted)\n> -> Index Scan using test_db_bench_1_idx_tenant_id_3 on acronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never executed)\n> Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n> Planning Time: 0.732 ms\n> Execution Time: 0.039 ms\n>\n>\n> Where the planning time gets in the way as it takes an order of magnitude more time than the actual execution.\n>\n> Is there a possibility to reduce this time? And, in general, to understand why planning takes so much time.\n\nYou could try to VACUUM the involved tables; indexes with many entries pointing to dead tuples\ncan cause a long planing time.\n\nAlso, there are quite a lot of indexes on \"test_db_bench_1\". On a test database, drop some\nindexes and see if that makes a difference.\n\nFinally, check if \"default_statistics_target\" is set to a high value, or if the \"Stats target\"\nfor some column in the \"\\d+ tablename\" output is set higher than 100.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 11 Sep 2023 01:23:46 +0000",
"msg_from": "Anupam b <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
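[Editor's note: a minimal sketch of the bind-parameter approach suggested above, using the query and column names from the thread; the statement name is made up. Note that PostgreSQL plans the first five executions of a prepared statement with custom plans before it will consider caching a generic plan:]

```sql
PREPARE find_tenant_task (varchar) AS
SELECT t.id, t.tenant_id
FROM test_db_bench_1 t
JOIN test_db_bench_tenants c
  ON c.uuid = t.tenant_id AND NOT c.is_deleted
JOIN test_db_bench_tenant_closure cl
  ON cl.child_id = c.id AND cl.barrier <= 0
JOIN test_db_bench_tenants p
  ON p.id = cl.parent_id AND p.uuid = $1 AND NOT p.is_deleted
LIMIT 1;

-- Planning cost is paid on preparation / early executions only.
EXECUTE find_tenant_task('4c79c1c5-21ae-45a0-8734-75d67abd0330');

-- PostgreSQL 12+: skip the custom-plan trial phase entirely.
SET plan_cache_mode = force_generic_plan;
```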
{
"msg_contents": "\n\nOn 11 September 2023 03:15:43 CEST, Laurenz Albe <[email protected]> wrote:\n>On Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:\n>> I have three tables:\n>> - test_db_bench_1\n>> - test_db_bench_tenants\n>> - test_db_bench_tenant_closure\n>> \n>> And the query to join them:\n>> SELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n>> FROM \"test_db_bench_1\"\n>> JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON ((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\") \n>> AND (\"tenants_child\".\"is_deleted\" != true))\n>> JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON ((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n>> AND (\"tenants_closure\".\"barrier\" <= 0))\n>> JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON ((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n>> AND (\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n>> AND (\"tenants_parent\".\"is_deleted\" != true))\n>> LIMIT 1\n>> \n>> \n>> With following execution plan:\n>> \n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> ---------------\n>> Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011 rows=0 loops=1)\n>> -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual time=0.009..0.009 rows=0 loops=1)\n>> -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual time=0.008..0.009 rows=0 loops=1)\n>> -> Nested Loop (cost=0.84..48.09 rows=7 width=8) (actual time=0.008..0.009 rows=0 loops=1)\n>> -> Index Scan using test_db_bench_tenants_uuid on test_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8) (actual time=0.008..0.008 rows=0 loops=1)\n>> Index Cond: ((uuid)::text = '4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n>> Filter: (NOT is_deleted)\n>> -> Index Scan using 
test_db_bench_tenant_closure_pkey on test_db_bench_tenant_closure tenants_closure (cost=0.42..45.06 rows=40 width=16) (never executed)\n>> Index Cond: (parent_id = tenants_parent.id)\n>> Filter: (barrier <= 0)\n>> -> Index Scan using test_db_bench_tenants_pkey on test_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44) (never executed)\n>> Index Cond: (id = tenants_closure.child_id)\n>> Filter: (NOT is_deleted)\n>> -> Index Scan using test_db_bench_1_idx_tenant_id_3 on acronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never executed)\n>> Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n>> Planning Time: 0.732 ms\n>> Execution Time: 0.039 ms\n>> \n>> \n>> Where the planning time gets in the way as it takes an order of magnitude more time than the actual execution.\n>> \n>> Is there a possibility to reduce this time? And, in general, to understand why planning takes so much time.\n>\n>You could try to VACUUM the involved tables; indexes with many entries pointing to dead tuples\n>can cause a long planing time.\n>\n>Also, there are quite a lot of indexes on \"test_db_bench_1\". On a test database, drop some\n>indexes and see if that makes a difference.\n\nYou can use pg_stat_user_indexes to check if those indexes are in use or not.\n\n\n\n>\n>Finally, check if \"default_statistics_target\" is set to a high value, or if the \"Stats target\"\n>for some column in the \"\\d+ tablename\" output is set higher than 100.\n>\n>Yours,\n>Laurenz Albe\n>\n>\n\n\n",
"msg_date": "Mon, 11 Sep 2023 06:45:54 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
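[Editor's note: a sketch of the `pg_stat_user_indexes` check Andreas mentions; counters accumulate since the last statistics reset, so a long-running instance gives the most reliable picture:]

```sql
-- Indexes on the big table that have never (or rarely) been scanned;
-- candidates for dropping on a test system.
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'test_db_bench_1'
ORDER BY idx_scan ASC;
```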
{
"msg_contents": "Hi Laurenz,\n\nMy bad, I forgot to write that I tried vacuum too, but it didn't help. To\ndemonstrate the result, I did it again:\n\n# vacuum (analyze, verbose) test_db_bench_1;\nINFO: vacuuming \"perfkit.public.test_db_bench_1\"\nINFO: launched 2 parallel vacuum workers for index cleanup (planned: 2)\nINFO: finished vacuuming \"perfkit.public.test_db_bench_1\": index scans: 0\npages: 0 removed, 684731 remain, 17510 scanned (2.56% of total)\ntuples: 0 removed, 3999770 remain, 0 are dead but not yet removable\nremovable cutoff: 27200203, which was 0 XIDs old when operation ended\nindex scan bypassed: 7477 pages from table (1.09% of total) have 20072 dead\nitem identifiers\navg read rate: 0.099 MB/s, avg write rate: 0.009 MB/s\nbuffer usage: 27770 hits, 11 misses, 1 dirtied\nWAL usage: 1 records, 1 full page images, 1762 bytes\nsystem usage: CPU: user: 0.15 s, system: 0.71 s, elapsed: 0.87 s\nINFO: vacuuming \"perfkit.pg_toast.pg_toast_16554\"\nINFO: finished vacuuming \"perfkit.pg_toast.pg_toast_16554\": index scans: 0\npages: 0 removed, 0 remain, 0 scanned (100.00% of total)\ntuples: 0 removed, 0 remain, 0 are dead but not yet removable\nremovable cutoff: 27200203, which was 0 XIDs old when operation ended\nnew relfrozenxid: 27200203, which is 4000060 XIDs ahead of previous value\nindex scan not needed: 0 pages from table (100.00% of total) had 0 dead\nitem identifiers removed\navg read rate: 113.225 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 3 hits, 1 misses, 0 dirtied\nWAL usage: 1 records, 0 full page images, 188 bytes\nsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: analyzing \"public.test_db_bench_1\"\nINFO: \"test_db_bench_1\": scanned 30000 of 684731 pages, containing 175085\nlive rows and 897 dead rows; 30000 rows in sample, 3996204 estimated total\nrows\nVACUUM\n\n\n\n# vacuum (analyze, verbose) test_db_bench_tenants;\nINFO: vacuuming \"perfkit.public.test_db_bench_tenants\"\nINFO: launched 2 parallel vacuum 
workers for index cleanup (planned: 2)\nINFO: finished vacuuming \"perfkit.public.test_db_bench_tenants\": index\nscans: 0\npages: 0 removed, 78154 remain, 1 scanned (0.00% of total)\ntuples: 0 removed, 4064008 remain, 0 are dead but not yet removable\nremovable cutoff: 27200204, which was 0 XIDs old when operation ended\nnew relfrozenxid: 27200204, which is 2 XIDs ahead of previous value\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead item\nidentifiers removed\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 34 hits, 0 misses, 0 dirtied\nWAL usage: 1 records, 0 full page images, 188 bytes\nsystem usage: CPU: user: 0.01 s, system: 0.08 s, elapsed: 0.10 s\nINFO: analyzing \"public.test_db_bench_tenants\"\nINFO: \"test_db_bench_tenants\": scanned 30000 of 78154 pages, containing\n1560000 live rows and 0 dead rows; 30000 rows in sample, 4064008 estimated\ntotal rows\nVACUUM\n\n\n\n# vacuum (analyze, verbose) test_db_bench_tenant_closure;\nINFO: vacuuming \"perfkit.public.test_db_bench_tenant_closure\"\nINFO: launched 1 parallel vacuum worker for index cleanup (planned: 1)\nINFO: finished vacuuming \"perfkit.public.test_db_bench_tenant_closure\":\nindex scans: 0\npages: 0 removed, 181573 remain, 3808 scanned (2.10% of total)\ntuples: 0 removed, 28505125 remain, 0 are dead but not yet removable\nremovable cutoff: 27200205, which was 0 XIDs old when operation ended\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead item\nidentifiers removed\navg read rate: 0.000 MB/s, avg write rate: 97.907 MB/s\nbuffer usage: 7680 hits, 0 misses, 3803 dirtied\nWAL usage: 3800 records, 2 full page images, 224601 bytes\nsystem usage: CPU: user: 0.08 s, system: 0.21 s, elapsed: 0.30 s\nINFO: analyzing \"public.test_db_bench_tenant_closure\"\nINFO: \"test_db_bench_tenant_closure\": scanned 30000 of 181573 pages,\ncontaining 4709835 live rows and 0 dead rows; 30000 rows in sample,\n28505962 estimated total rows\nVACUUM\n\n\n\n 
Limit (cost=1.98..152.05 rows=1 width=44) (actual time=0.012..0.013\nrows=0 loops=1)\n -> Nested Loop (cost=1.98..1052.49 rows=7 width=44) (actual\ntime=0.011..0.012 rows=0 loops=1)\n -> Nested Loop (cost=1.55..1022.18 rows=7 width=37) (actual\ntime=0.011..0.011 rows=0 loops=1)\n -> Nested Loop (cost=1.12..1019.03 rows=7 width=8) (actual\ntime=0.011..0.011 rows=0 loops=1)\n -> Index Scan using test_db_bench_tenants_uuid on\ntest_db_bench_tenants tenants_parent (cost=0.56..2.77 rows=1 width=8)\n(actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: ((uuid)::text =\n'4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_tenant_closure_pkey\non test_db_bench_tenant_closure tenants_closure (cost=0.56..1006.97\nrows=929 width=16) (never executed)\n Index Cond: (parent_id = tenants_parent.id)\n Filter: (barrier <= 0)\n -> Index Scan using test_db_bench_tenants_pkey on\ntest_db_bench_tenants tenants_child (cost=0.43..0.45 rows=1 width=45)\n(never executed)\n Index Cond: (id = tenants_closure.child_id)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\ntest_db_bench_1 (cost=0.43..2.98 rows=135 width=44) (never executed)\n Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n Planning Time: 0.874 ms\n Execution Time: 0.053 ms\n(17 rows)\n\nThe planning time even increased :)\n\n\n\nPlayed around with the indexes:\nFirstly I dropped all the indexes that contained tenant_id field, except\nthe one that is used in the execution plan:\nDROP INDEX test_db_bench_1_idx_type_16;\nDROP INDEX test_db_bench_1_idx_queue_18;\nDROP INDEX test_db_bench_1_idx_queue_5;\n\nAfter that:\n Planning Time: 0.889 ms\n Execution Time: 0.047 ms\n\n\nDROP INDEX test_db_bench_1_idx_uuid_0;\n\n Planning Time: 0.841 ms\n Execution Time: 0.047 ms\n\n\nDROP INDEX test_db_bench_1_idx_completion_time_ns_1;\nDROP INDEX test_db_bench_1_idx_cti_entity_uuid_2;\nDROP INDEX test_db_bench_1_idx_enqueue_time_ns_10;\n\n 
Planning Time: 0.830 ms\n Execution Time: 0.048 ms\n\n\nDROP INDEX test_db_bench_1_idx_euc_id_4;\nDROP INDEX test_db_bench_1_idx_policy_id_12;\nDROP INDEX test_db_bench_1_idx_queue_19;\n\n Planning Time: 0.826 ms\n Execution Time: 0.044 ms\n\nDROP INDEX test_db_bench_1_idx_queue_6;\nDROP INDEX test_db_bench_1_idx_resource_id_11;\nDROP INDEX test_db_bench_1_idx_resource_id_14;\n\n Planning Time: 0.821 ms\n Execution Time: 0.048 ms\n\n\nDROP INDEX test_db_bench_1_idx_result_code_13;\nDROP INDEX test_db_bench_1_idx_start_time_ns_9;\nDROP INDEX test_db_bench_1_idx_state_8;\n\n Planning Time: 0.803 ms\n Execution Time: 0.044 ms\n\n\nDROP INDEX test_db_bench_1_idx_type_15;\nDROP INDEX test_db_bench_1_idx_type_17;\nDROP INDEX test_db_bench_1_idx_update_time_ns_7;\n\nAt that point, only 3 indexes were left on the table, with only a slight improvement\nin Planning Time:\nIndexes:\n \"test_db_bench_1_pkey\" PRIMARY KEY, btree (id)\n \"test_db_bench_1_idx_tenant_id_3\" btree (tenant_id)\n \"test_db_bench_1_uuid_key\" UNIQUE CONSTRAINT, btree (uuid)\n\n Planning Time: 0.799 ms\n Execution Time: 0.044 ms\n\n\nI.e. the situation is still not good - almost all indexes have been\nremoved, yet the planning time has been reduced only insignificantly, and it still\nremains much longer than the query execution time.\n\n\nAs for the stats - default_statistics_target has not been changed and has a\nvalue of 100, and no explicit settings for the columns have been applied\n(\"Stats target\" is empty).\n\nCould it be a regression? 
I'll check it on PG14 when I get a chance.\n\n\n--\nMikhail\n\nOn Mon, 11 Sept 2023 at 09:15, Laurenz Albe <[email protected]>\nwrote:\n\n> On Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:\n> > I have three tables:\n> > - test_db_bench_1\n> > - test_db_bench_tenants\n> > - test_db_bench_tenant_closure\n> >\n> > And the query to join them:\n> > SELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n> > FROM \"test_db_bench_1\"\n> > JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON\n> ((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\")\n> > AND\n> (\"tenants_child\".\"is_deleted\" != true))\n> > JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON\n> ((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n> > AND\n> (\"tenants_closure\".\"barrier\" <= 0))\n> > JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON\n> ((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n> > AND\n> (\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n> > AND\n> (\"tenants_parent\".\"is_deleted\" != true))\n> > LIMIT 1\n> >\n> >\n> > With following execution plan:\n> >\n> >\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > ---------------\n> > Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011\n> rows=0 loops=1)\n> > -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual\n> time=0.009..0.009 rows=0 loops=1)\n> > -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual\n> time=0.008..0.009 rows=0 loops=1)\n> > -> Nested Loop (cost=0.84..48.09 rows=7 width=8)\n> (actual time=0.008..0.009 rows=0 loops=1)\n> > -> Index Scan using test_db_bench_tenants_uuid on\n> test_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8)\n> (actual time=0.008..0.008 rows=0 loops=1)\n> > Index Cond: 
((uuid)::text =\n> '4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n> > Filter: (NOT is_deleted)\n> > -> Index Scan using\n> test_db_bench_tenant_closure_pkey on test_db_bench_tenant_closure\n> tenants_closure (cost=0.42..45.06 rows=40 width=16) (never executed)\n> > Index Cond: (parent_id = tenants_parent.id)\n> > Filter: (barrier <= 0)\n> > -> Index Scan using test_db_bench_tenants_pkey on\n> test_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44)\n> (never executed)\n> > Index Cond: (id = tenants_closure.child_id)\n> > Filter: (NOT is_deleted)\n> > -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\n> acronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never\n> executed)\n> > Index Cond: ((tenant_id)::text =\n> (tenants_child.uuid)::text)\n> > Planning Time: 0.732 ms\n> > Execution Time: 0.039 ms\n> >\n> >\n> > Where the planning time gets in the way as it takes an order of\n> magnitude more time than the actual execution.\n> >\n> > Is there a possibility to reduce this time? And, in general, to\n> understand why planning takes so much time.\n>\n> You could try to VACUUM the involved tables; indexes with many entries\n> pointing to dead tuples\n> can cause a long planing time.\n>\n> Also, there are quite a lot of indexes on \"test_db_bench_1\". On a test\n> database, drop some\n> indexes and see if that makes a difference.\n>\n> Finally, check if \"default_statistics_target\" is set to a high value, or\n> if the \"Stats target\"\n> for some column in the \"\\d+ tablename\" output is set higher than 100.\n>\n> Yours,\n> Laurenz Albe\n>",
"msg_date": "Mon, 11 Sep 2023 12:55:36 +0800",
"msg_from": "Mikhail Balayan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Planning time is time-consuming"
},
{
"msg_contents": "Thanks for the idea. I was surprised to find that this is not the way it\nworks and the planning time remains the same. To keep the experiment clean,\nI ran it several times, first a couple of times explain analyze, then a\ncouple of times the query itself:\n\n# PREPARE the_query (varchar) AS\nSELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n FROM \"test_db_bench_1\"\n JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON\n((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\")\n AND\n(\"tenants_child\".\"is_deleted\" != true))\n JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON\n((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n AND\n(\"tenants_closure\".\"barrier\" <= 0))\n JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON\n((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n AND\n(\"tenants_parent\".\"uuid\" IN ($1))\n AND\n(\"tenants_parent\".\"is_deleted\" != true))\n LIMIT 1;\n\n# explain analyze EXECUTE the_query('4c79c1c5-21ae-45a0-8734-75d67abd0330');\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1.98..152.05 rows=1 width=152) (actual time=0.014..0.015\nrows=0 loops=1)\n -> Nested Loop (cost=1.98..1052.49 rows=7 width=152) (actual\ntime=0.013..0.013 rows=0 loops=1)\n -> Nested Loop (cost=1.55..1022.18 rows=7 width=108) (actual\ntime=0.013..0.013 rows=0 loops=1)\n -> Nested Loop (cost=1.12..1019.03 rows=7 width=63)\n(actual time=0.012..0.013 rows=0 loops=1)\n -> Index Scan using test_db_bench_tenants_uuid on\ntest_db_bench_tenants tenants_parent (cost=0.56..2.77 rows=1 width=45)\n(actual time=0.012..0.012 rows=0 loops=1)\n Index Cond: ((uuid)::text =\n'4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n Filter: (NOT is_deleted)\n -> Index Scan using 
test_db_bench_tenant_closure_pkey\non test_db_bench_tenant_closure tenants_closure (cost=0.56..1006.97\nrows=929 width=18) (never executed)\n Index Cond: (parent_id = tenants_parent.id)\n Filter: (barrier <= 0)\n -> Index Scan using test_db_bench_tenants_pkey on\ntest_db_bench_tenants tenants_child (cost=0.43..0.45 rows=1 width=45)\n(never executed)\n Index Cond: (id = tenants_closure.child_id)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\ntest_db_bench_1 (cost=0.43..2.98 rows=135 width=44) (never executed)\n Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n Planning Time: 0.982 ms\n Execution Time: 0.059 ms\n(17 rows)\n\n# explain analyze EXECUTE the_query('4c79c1c5-21ae-45a0-8734-75d67abd0330');\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1.98..152.05 rows=1 width=152) (actual time=0.011..0.012\nrows=0 loops=1)\n -> Nested Loop (cost=1.98..1052.49 rows=7 width=152) (actual\ntime=0.010..0.011 rows=0 loops=1)\n -> Nested Loop (cost=1.55..1022.18 rows=7 width=108) (actual\ntime=0.010..0.011 rows=0 loops=1)\n -> Nested Loop (cost=1.12..1019.03 rows=7 width=63)\n(actual time=0.010..0.010 rows=0 loops=1)\n -> Index Scan using test_db_bench_tenants_uuid on\ntest_db_bench_tenants tenants_parent (cost=0.56..2.77 rows=1 width=45)\n(actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: ((uuid)::text =\n'4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_tenant_closure_pkey\non test_db_bench_tenant_closure tenants_closure (cost=0.56..1006.97\nrows=929 width=18) (never executed)\n Index Cond: (parent_id = tenants_parent.id)\n Filter: (barrier <= 0)\n -> Index Scan using test_db_bench_tenants_pkey on\ntest_db_bench_tenants tenants_child (cost=0.43..0.45 rows=1 
width=45)\n(never executed)\n Index Cond: (id = tenants_closure.child_id)\n Filter: (NOT is_deleted)\n -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\ntest_db_bench_1 (cost=0.43..2.98 rows=135 width=44) (never executed)\n Index Cond: ((tenant_id)::text = (tenants_child.uuid)::text)\n Planning Time: 0.843 ms\n Execution Time: 0.046 ms\n(17 rows)\n\n# EXECUTE the_query('4c79c1c5-21ae-45a0-8734-75d67abd0330');\n id | tenant_id\n----+-----------\n(0 rows)\n\nTime: 1.311 ms\n\n# EXECUTE the_query('4c79c1c5-21ae-45a0-8734-75d67abd0330');\n id | tenant_id\n----+-----------\n(0 rows)\n\nTime: 1.230 ms\n\n--\nMikhail\n\n\nOn Mon, 11 Sept 2023 at 09:23, Anupam b <[email protected]> wrote:\n\n> Also, if you write sql with bind params, planning time should be once for\n> the sql. Subsequent sql will use cached stmt.\n>\n> Get Outlook for Android <https://aka.ms/AAb9ysg>\n> ------------------------------\n> *From:* Laurenz Albe <[email protected]>\n> *Sent:* Sunday, September 10, 2023 6:15:43 PM\n> *To:* Mikhail Balayan <[email protected]>;\n> [email protected] <[email protected]>\n> *Subject:* Re: Planning time is time-consuming\n>\n> On Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:\n> > I have three tables:\n> > - test_db_bench_1\n> > - test_db_bench_tenants\n> > - test_db_bench_tenant_closure\n> >\n> > And the query to join them:\n> > SELECT \"test_db_bench_1\".\"id\" id, \"test_db_bench_1\".\"tenant_id\"\n> > FROM \"test_db_bench_1\"\n> > JOIN \"test_db_bench_tenants\" AS \"tenants_child\" ON\n> ((\"tenants_child\".\"uuid\" = \"test_db_bench_1\".\"tenant_id\")\n> > AND\n> (\"tenants_child\".\"is_deleted\" != true))\n> > JOIN \"test_db_bench_tenant_closure\" AS \"tenants_closure\" ON\n> ((\"tenants_closure\".\"child_id\" = \"tenants_child\".\"id\")\n> > AND\n> (\"tenants_closure\".\"barrier\" <= 0))\n> > JOIN \"test_db_bench_tenants\" AS \"tenants_parent\" ON\n> ((\"tenants_parent\".\"id\" = \"tenants_closure\".\"parent_id\")\n> > AND\n> 
(\"tenants_parent\".\"uuid\" IN ('4c79c1c5-21ae-45a0-8734-75d67abd0330'))\n> > AND\n> (\"tenants_parent\".\"is_deleted\" != true))\n> > LIMIT 1\n> >\n> >\n> > With following execution plan:\n> >\n> >\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > ---------------\n> > Limit (cost=1.56..1.92 rows=1 width=44) (actual time=0.010..0.011\n> rows=0 loops=1)\n> > -> Nested Loop (cost=1.56..162.42 rows=438 width=44) (actual\n> time=0.009..0.009 rows=0 loops=1)\n> > -> Nested Loop (cost=1.13..50.27 rows=7 width=36) (actual\n> time=0.008..0.009 rows=0 loops=1)\n> > -> Nested Loop (cost=0.84..48.09 rows=7 width=8)\n> (actual time=0.008..0.009 rows=0 loops=1)\n> > -> Index Scan using test_db_bench_tenants_uuid on\n> test_db_bench_tenants tenants_parent (cost=0.41..2.63 rows=1 width=8)\n> (actual time=0.008..0.008 rows=0 loops=1)\n> > Index Cond: ((uuid)::text =\n> '4c79c1c5-21ae-45a0-8734-75d67abd0330'::text)\n> > Filter: (NOT is_deleted)\n> > -> Index Scan using\n> test_db_bench_tenant_closure_pkey on test_db_bench_tenant_closure\n> tenants_closure (cost=0.42..45.06 rows=40 width=16) (never executed)\n> > Index Cond: (parent_id = tenants_parent.id)\n> > Filter: (barrier <= 0)\n> > -> Index Scan using test_db_bench_tenants_pkey on\n> test_db_bench_tenants tenants_child (cost=0.29..0.31 rows=1 width=44)\n> (never executed)\n> > Index Cond: (id = tenants_closure.child_id)\n> > Filter: (NOT is_deleted)\n> > -> Index Scan using test_db_bench_1_idx_tenant_id_3 on\n> acronis_db_bench_heavy (cost=0.43..14.66 rows=136 width=44) (never\n> executed)\n> > Index Cond: ((tenant_id)::text =\n> (tenants_child.uuid)::text)\n> > Planning Time: 0.732 ms\n> > Execution Time: 0.039 ms\n> >\n> >\n> > Where the planning time gets in the way as it takes an order of\n> magnitude more time than the actual execution.\n> >\n> > 
Is there a possibility to reduce this time? And, in general, to\n> understand why planning takes so much time.\n>\n> You could try to VACUUM the involved tables; indexes with many entries\n> pointing to dead tuples\n> can cause a long planing time.\n>\n> Also, there are quite a lot of indexes on \"test_db_bench_1\". On a test\n> database, drop some\n> indexes and see if that makes a difference.\n>\n> Finally, check if \"default_statistics_target\" is set to a high value, or\n> if the \"Stats target\"\n> for some column in the \"\\d+ tablename\" output is set higher than 100.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>",
"msg_date": "Mon, 11 Sep 2023 12:57:20 +0800",
"msg_from": "Mikhail Balayan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Planning time is time-consuming"
},
{
"msg_contents": "On Mon, 2023-09-11 at 12:57 +0800, Mikhail Balayan wrote:\n> Thanks for the idea. I was surprised to find that this is not the way it works and the planning time remains the same.\n\nTo benefit from the speed gains of a prepared statement, you'd have to execute it\nat least seven times. If a generic plan is used (which should happen), you will\nsee $1 instead of the literal argument in the execution plan.\n\nPrepared statements are probably your best bet.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 11 Sep 2023 10:13:26 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Planning time is time-consuming"
},
{
"msg_contents": "On Mon, 11 Sept 2023 at 18:16, Laurenz Albe <[email protected]> wrote:\n> Also, there are quite a lot of indexes on \"test_db_bench_1\". On a test database, drop some\n> indexes and see if that makes a difference.\n\nYeah, I count 3 that either have the key columns as some prefix of\nanother index or are just a duplicate of some other index.\n\nGetting rid of those 3 will save some time in create_index_paths().\n\nDavid\n\n\n",
"msg_date": "Mon, 11 Sep 2023 20:24:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "On Mon, 11 Sept 2023 at 21:54, Mikhail Balayan <[email protected]> wrote:\n> Could it be a regression? I'll check it on PG14 when I get a chance.\n\nI'm not sure if you're asking for help here because you need planning\nto be faster than it currently is, or if it's because you believe that\nplanning should always be faster than execution. If you think the\nlatter, then you're mistaken. It seems to me that the complexity of\nplanning this query is much more complex than executing it. The outer\nside of the inner-most nested loop finds 0 rows, so it need not scan\nthe inner side, which results in that nested loop producing 0 rows,\ntherefore the outer side of none of the subsequent nested loops find\nany rows. This is why you see \"(never executed)\" in the EXPLAIN\nANALYZE.\n\nYou could use perf record or perf top to dig into what's slow.\n\nOn the other hand, please report back if you find PG14 to be much faster here.\n\nYou could also experiment with a set of tables which are empty. It's\npossible getting the relation sizes are a factor to consider here.\nmdnblocks() needs to do a bit more work when the relation has multiple\nsegments.\n\nDavid\n\n\n",
"msg_date": "Mon, 11 Sep 2023 22:17:09 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "Any statement that is executed has to go through the 4 stages of query execution:\n- parse\n- rewrite\n- plan\n- execute\n\nThe execute phase is the phase that mostly is the focus on, and is the phase in which normally is spent the most time.\n\nIn the postgres backend main loop, there are multiple ways of getting a statement to go through these stages.\nThe simple query execution is a single call that performs going through all these stages and the other common method is to use the client parse (which includes the server side parse and rewrite), bind (which performs the server side plan) and execute commands from this backend main loop.\n\nA prepared statement, or named statement, is a way of performing statement execution where some of the intermediate results are stored in a memory area in the backend and thus allows the backend to persist some of the execution details. Non-prepared statement reuse the memory area, and thus flush any metadata.\n\nThe reason for explaining this is that when preparing a statement, the result of the phases of parse and rewrite, which is the parse tree, is stored.\nThat means that after the prepare, the work of generating the parse tree can be omitted by only performing calling bind and execute for the prepared/named statement.\n\nThe planner statistics are recorded for the calculated cost of a statement with the specified variables/binds, and record a cost of when the specified binds would be “non specific” alias generic.\nAfter 5 times of execution of a prepared statement, if the generic plan is costed equal or lower during than the plan of the statement with the specified bind variables, then the backend will switch to the generic plan. 
\n\nThe advantage of switching to the generic plan is that it will not perform the plan costing and all accompanying transformations, but instead directly use the generic plan.\nFor this question, this would ’solve’ the issue of the plan phase taking more time than the execution, but potentially only after 5 executions of the prepared statement.\nThe downside is that because the costing is skipped, it cannot choose another plan anymore for that named statement for the lifetime of the prepared statement in that backend, unless the backend is explicitly instructed not to use the generic plan.\n\nFrits Hoogland\n\n\n\n\n> On 11 Sep 2023, at 10:13, Laurenz Albe <[email protected]> wrote:\n> \n> On Mon, 2023-09-11 at 12:57 +0800, Mikhail Balayan wrote:\n>> Thanks for the idea. I was surprised to find that this is not the way it works and the planning time remains the same.\n> \n> To benefit from the speed gains of a prepared statement, you'd have to execute it\n> at least seven times. 
If a generic plan is used (which should happen), you will\n> see $1 instead of the literal argument in the execution plan.\n> \n> Prepared statements are probably your best bet.\n> \n> Yours,\n> Laurenz Albe\n> \n> \n",
"msg_date": "Mon, 11 Sep 2023 15:54:59 +0200",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I'm not sure if you're asking for help here because you need planning\n> to be faster than it currently is, or if it's because you believe that\n> planning should always be faster than execution. If you think the\n> latter, then you're mistaken.\n\nYeah. I don't see anything particularly troubling here. Taking\ncirca three-quarters of a millisecond (on typical current hardware)\nto plan a four-way join on large tables is not unreasonable.\nIn most cases one could expect the execution of such a query to\ntake a good bit longer than that. I think the OP is putting too\nmuch emphasis on an edge case where execution finishes quickly\nbecause there are in fact zero rows matching the uuid restriction.\n\nBTW, in addition to the duplicative indexes, I wonder why the\nuuid columns being joined on aren't all of \"uuid\" type. While\nI doubt fixing that would move the needle greatly, it still\nseems sloppy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Sep 2023 10:27:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "Hi Mikhail.\n\nPostgresql version: 15.3 (Debian 15.3-1.pgdg110+1) on x86_64-pc-linux-gnu,\n> compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n> And just in case it matters, this is an experimental setup, so Postgresql\n> running in Docker.\n>\n\nAre you using the official Docker Postgres image, specifically\n`postgres:15.3-bullseye`? ( https://hub.docker.com/_/postgres )\n- If so, consider upgrading to version 15.4. It has some planner fixes not\ndirectly related to your issue. Check details here:\n PostgreSQL 15.4 Release Notes\nhttps://www.postgresql.org/docs/release/15.4/\n- For all technical text type id columns *apply the `Collate \"C\"` *. (\nlike `assign_time_str` and `cti_entity_uuid` )\n Alternatively, use the \"uuid\" column type everywhere, as Tom Lane\nsuggests.\n\n- Could you provide details on your current tuning settings? I'm interested\nin `work_mem`, `shared_buffers`, `effective_cache_size`, and others.\n- Please test with different `work_mem` values.\n\nIf it's not too much trouble, can you also test with: ( These version uses\na different locale and LLVM (JIT). )\n- postgres:15.4-bookworm\n- postgres:15.4-alpine3.18\n\nRegards,\n Imre\n\nHi Mikhail.Postgresql version: 15.3 (Debian 15.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bitAnd just in case it matters, this is an experimental setup, so Postgresql running in Docker.Are you using the official Docker Postgres image, specifically `postgres:15.3-bullseye`? ( https://hub.docker.com/_/postgres )- If so, consider upgrading to version 15.4. It has some planner fixes not directly related to your issue. Check details here: PostgreSQL 15.4 Release Notes https://www.postgresql.org/docs/release/15.4/- For all technical text type id columns apply the `Collate \"C\"` . 
( like `assign_time_str` and `cti_entity_uuid` ) Alternatively, use the \"uuid\" column type everywhere, as Tom Lane suggests.- Could you provide details on your current tuning settings? I'm interested in `work_mem`, `shared_buffers`, `effective_cache_size`, and others.- Please test with different `work_mem` values.If it's not too much trouble, can you also test with: ( These version uses a different locale and LLVM (JIT). )- postgres:15.4-bookworm- postgres:15.4-alpine3.18Regards, Imre",
"msg_date": "Mon, 11 Sep 2023 17:17:33 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "On Tue, 12 Sept 2023 at 02:27, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I'm not sure if you're asking for help here because you need planning\n> > to be faster than it currently is, or if it's because you believe that\n> > planning should always be faster than execution. If you think the\n> > latter, then you're mistaken.\n>\n> Yeah. I don't see anything particularly troubling here. Taking\n> circa three-quarters of a millisecond (on typical current hardware)\n> to plan a four-way join on large tables is not unreasonable.\n\nI took a few minutes to reverse engineer the tables in question (with\nassistance from an AI bot) and ran the query in question.\nUnsurprisingly, I also see planning as slower than execution, but with\na ratio of about planning being 12x slower than execution vs the\nreported ~18x.\n\nPlanning Time: 0.581 ms\nExecution Time: 0.048 ms\n\nNothing alarming in perf top of executing the query in pgbench with -M\nsimple. I think this confirms the problem is just with expectations.\n\n 5.09% postgres [.] AllocSetAlloc\n 2.99% postgres [.] SearchCatCacheInternal\n 2.52% postgres [.] palloc\n 2.38% postgres [.] expression_tree_walker_impl\n 1.82% postgres [.] add_path_precheck\n 1.78% postgres [.] add_path\n 1.73% postgres [.] MemoryContextAllocZeroAligned\n 1.63% postgres [.] base_yyparse\n 1.61% postgres [.] CatalogCacheComputeHashValue\n 1.38% postgres [.] try_nestloop_path\n 1.36% postgres [.] stack_is_too_deep\n 1.33% postgres [.] add_paths_to_joinrel\n 1.19% postgres [.] core_yylex\n 1.18% postgres [.] lappend\n 1.15% postgres [.] initial_cost_nestloop\n 1.13% postgres [.] hash_search_with_hash_value\n 1.01% postgres [.] palloc0\n 0.95% postgres [.] get_memoize_path\n 0.90% postgres [.] equal\n 0.88% postgres [.] get_eclass_for_sort_expr\n 0.81% postgres [.] compare_pathkeys\n 0.80% postgres [.] bms_is_subset\n 0.77% postgres [.] ResourceArrayRemove\n 0.77% postgres [.] 
check_stack_depth\n 0.77% libc.so.6 [.] __memmove_avx_unaligned_erms\n 0.74% libc.so.6 [.] __memset_avx2_unaligned\n 0.73% postgres [.] AllocSetFree\n 0.71% postgres [.] final_cost_nestloop\n 0.69% postgres [.] compare_path_costs_fuzzily\n 0.68% postgres [.] initial_cost_mergejoin\n 0.64% libc.so.6 [.] __memset_avx2_unaligned_erms\n 0.61% postgres [.] create_nestloop_path\n 0.61% postgres [.] examine_variable\n 0.59% postgres [.] hash_bytes\n 0.56% postgres [.] truncate_useless_pathkeys\n 0.56% postgres [.] bms_overlap\n\nDavid\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:06:12 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 11:07 PM David Rowley <[email protected]> wrote:\n\n> On Tue, 12 Sept 2023 at 02:27, Tom Lane <[email protected]> wrote:\n> >\n> > David Rowley <[email protected]> writes:\n> > > I'm not sure if you're asking for help here because you need planning\n> > > to be faster than it currently is, or if it's because you believe that\n> > > planning should always be faster than execution. If you think the\n> > > latter, then you're mistaken.\n> >\n> > Yeah. I don't see anything particularly troubling here. Taking\n> > circa three-quarters of a millisecond (on typical current hardware)\n> > to plan a four-way join on large tables is not unreasonable.\n>\n> I took a few minutes to reverse engineer the tables in question (with\n> assistance from an AI bot) and ran the query in question.\n> Unsurprisingly, I also see planning as slower than execution, but with\n> a ratio of about planning being 12x slower than execution vs the\n> reported ~18x.\n>\n> Planning Time: 0.581 ms\n> Execution Time: 0.048 ms\n>\n> Nothing alarming in perf top of executing the query in pgbench with -M\n> simple. I think this confirms the problem is just with expectations.\n>\n\nYep. Very fast executing queries often have faster execution than plan\ntimes. Postgres has a really dynamic version of SQL, for example,\noperator overloading for example, which probably doesn't help things. This\nis just the nature of SQL really. To improve things, just use prepared\nstatements -- that's why they are there.\n\nAside, this style of SQL as produced for this test, guids, and record at a\ntime thinking, is also not my cup of tea. There are some pros to it, but\nit tends to beat on a database. If you move this logic into the database,\nthis kind of problem tends to evaporate. 
It's a very curious mode of\nthinking I see, that in order to \"reduce load on the database\", it is asked\nto set up and tear down a transaction for every single record fetched :).\n\nmerlin",
"msg_date": "Fri, 15 Dec 2023 15:49:43 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
},
{
"msg_contents": "> On 15 Dec 2023, at 22:49, Merlin Moncure <[email protected]> wrote:\n> \n> On Mon, Sep 11, 2023 at 11:07 PM David Rowley <[email protected] <mailto:[email protected]>> wrote:\n>> \nSnip\n>> I took a few minutes to reverse engineer the tables in question (with\n>> assistance from an AI bot) and ran the query in question.\n>> Unsurprisingly, I also see planning as slower than execution, but with\n>> a ratio of about planning being 12x slower than execution vs the\n>> reported ~18x.\n>> \n>> Planning Time: 0.581 ms\n>> Execution Time: 0.048 ms\n>> \n>> Nothing alarming in perf top of executing the query in pgbench with -M\n>> simple. I think this confirms the problem is just with expectations.\n> \n> Yep. Very fast executing queries often have faster execution than plan times. Postgres has a really dynamic version of SQL, for example, operator overloading for example, which probably doesn't help things. This is just the nature of SQL really. To improve things, just use prepared statements -- that's why they are there. \n\nJust to add my 2 cents: use prepared statements and - when applicable force generic plans: https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PLAN-CACHE-MODE\n\n—\nMichal\n\n\nOn 15 Dec 2023, at 22:49, Merlin Moncure <[email protected]> wrote:On Mon, Sep 11, 2023 at 11:07 PM David Rowley <[email protected]> wrote:Snip\nI took a few minutes to reverse engineer the tables in question (with\nassistance from an AI bot) and ran the query in question.\nUnsurprisingly, I also see planning as slower than execution, but with\na ratio of about planning being 12x slower than execution vs the\nreported ~18x.\n\nPlanning Time: 0.581 ms\nExecution Time: 0.048 ms\n\nNothing alarming in perf top of executing the query in pgbench with -M\nsimple. I think this confirms the problem is just with expectations.Yep. Very fast executing queries often have faster execution than plan times. 
Postgres has a really dynamic version of SQL, for example, operator overloading for example, which probably doesn't help things. This is just the nature of SQL really. To improve things, just use prepared statements -- that's why they are there. Just to add my 2 cents: use prepared statements and - when applicable force generic plans: https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PLAN-CACHE-MODE—Michal",
"msg_date": "Sat, 16 Dec 2023 05:59:20 +0100",
"msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning time is time-consuming"
}
] |
[
{
"msg_contents": "Hello.\n\nI just had an outage on postgres 14 due to multixact members limit exceeded.\n\nSo the documentation says \"There is a separate storage area which holds the\nlist of members in each multixact, which also uses a 32-bit counter and\nwhich must also be managed.\"\n\nQuestions:\nhaving a 32-bit counter on this separated storage means that there is a\nglobal limit of multixact IDs for a database OID?\n\nIs there a way to monitor this storage limit or counter using any pg_stat\ntable/view?\n\nare foreign keys a big source of multixact IDs so not recommended on tables\nwith a lot of data and a lot of churn?\n\nThanks\n\n-- \nBruno da Silva\n\nHello.I just had an outage on postgres 14 due to multixact members limit exceeded.So the documentation says \"There is a separate storage area which holds the list of members in each multixact, which also uses a 32-bit counter and which must also be managed.\"Questions:having a 32-bit counter on this separated storage means that there is a global limit of multixact IDs for a database OID?Is there a way to monitor this storage limit or counter using any pg_stat table/view?are foreign keys a big source of multixact IDs so not recommended on tables with a lot of data and a lot of churn? Thanks-- Bruno da Silva",
"msg_date": "Wed, 13 Sep 2023 08:21:40 -0400",
"msg_from": "bruno da silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multixact wraparound monitoring"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 8:29 AM bruno da silva <[email protected]> wrote:\n\n>\n> are foreign keys a big source of multixact IDs so not recommended on\n> tables with a lot of data and a lot of churn?\n>\n\nI am curious to hear other answers or if anything has changed. I\nexperienced this problem a couple of times on PG 11. In each situation my\nsetup looked something like this:\n\ncreate table small(id int primary key, data text);\ncreate table large(id bigint primary key, small_id int references\nsmall(id));\n\nTable large is hundreds of GB or more and accepting heavy writes in batches\nof 1K records, and every batch contains exactly one reference to each row\nin small (every batch references the same 1K rows in small).\n\nThe only way I was able to get around problems with multixact member ID\nwraparound was dropping the foreign key constraint.\n\nOn Wed, Sep 13, 2023 at 8:29 AM bruno da silva <[email protected]> wrote:are foreign keys a big source of multixact IDs so not recommended on tables with a lot of data and a lot of churn? I am curious to hear other answers or if anything has changed. I experienced this problem a couple of times on PG 11. In each situation my setup looked something like this:create table small(id int primary key, data text);create table large(id bigint primary key, small_id int references small(id));Table large is hundreds of GB or more and accepting heavy writes in batches of 1K records, and every batch contains exactly one reference to each row in small (every batch references the same 1K rows in small).The only way I was able to get around problems with multixact member ID wraparound was dropping the foreign key constraint.",
"msg_date": "Wed, 13 Sep 2023 22:14:28 -0700",
"msg_from": "Wyatt Alt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multixact wraparound monitoring"
},
{
"msg_contents": "On 2023-Sep-13, bruno da silva wrote:\n\n> I just had an outage on postgres 14 due to multixact members limit exceeded.\n\nSadly, that's not as uncommon as we would like.\n\n> So the documentation says \"There is a separate storage area which holds the\n> list of members in each multixact, which also uses a 32-bit counter and\n> which must also be managed.\"\n\nRight.\n\n> Questions:\n> having a 32-bit counter on this separated storage means that there is a\n> global limit of multixact IDs for a database OID?\n\nA global limit of multixact members (each multixact ID can have one or\nmore members), across the whole instance. It is a shared resource for\nall databases in an instance.\n\n> Is there a way to monitor this storage limit or counter using any pg_stat\n> table/view?\n\nNot at present.\n\n> are foreign keys a big source of multixact IDs so not recommended on tables\n> with a lot of data and a lot of churn?\n\nWell, ideally you shouldn't consider operating without foreign keys at\nany rate, but yes, foreign keys are one of the most common causes of\nmultixacts being used, and removing FKs may mean a decrease in multixact\nusage. (The other use case of multixact usage is tuples being locked\nand updated with an intervening savepoint.)\n\n\nWe could have a mode that we can set on tables with little movement and\nmany incoming FKs, that tells the system something like \"in this table,\ndeletes/updates are disallowed, so FKs don't need to lock rows\". Or\nmaybe \"in this table, deletes are disallowed and updates can only change\ncolumns that aren't used by UNIQUE NOT NULL indexes, so FKs don't need\nto lock rows\". This might save a ton of multixact traffic.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n",
"msg_date": "Thu, 14 Sep 2023 13:23:41 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multixact wraparound monitoring"
},
{
"msg_contents": "This problem is more acute when the FK Table stores a small number of rows\nlike types or codes.\nI think in those cases an enum type should be used instead of a column with\na FK.\nThanks.\n\nOn Thu, Sep 14, 2023 at 7:23 AM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Sep-13, bruno da silva wrote:\n>\n> > I just had an outage on postgres 14 due to multixact members limit\n> exceeded.\n>\n> Sadly, that's not as uncommon as we would like.\n>\n> > So the documentation says \"There is a separate storage area which holds\n> the\n> > list of members in each multixact, which also uses a 32-bit counter and\n> > which must also be managed.\"\n>\n> Right.\n>\n> > Questions:\n> > having a 32-bit counter on this separated storage means that there is a\n> > global limit of multixact IDs for a database OID?\n>\n> A global limit of multixact members (each multixact ID can have one or\n> more members), across the whole instance. It is a shared resource for\n> all databases in an instance.\n>\n> > Is there a way to monitor this storage limit or counter using any pg_stat\n> > table/view?\n>\n> Not at present.\n>\n> > are foreign keys a big source of multixact IDs so not recommended on\n> tables\n> > with a lot of data and a lot of churn?\n>\n> Well, ideally you shouldn't consider operating without foreign keys at\n> any rate, but yes, foreign keys are one of the most common causes of\n> multixacts being used, and removing FKs may mean a decrease in multixact\n> usage. (The other use case of multixact usage is tuples being locked\n> and updated with an intervening savepoint.)\n>\n>\n> We could have a mode that we can set on tables with little movement and\n> many incoming FKs, that tells the system something like \"in this table,\n> deletes/updates are disallowed, so FKs don't need to lock rows\". 
Or\n> maybe \"in this table, deletes are disallowed and updates can only change\n> columns that aren't used by UNIQUE NOT NULL indexes, so FKs don't need\n> to lock rows\". This might save a ton of multixact traffic.\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"Hay dos momentos en la vida de un hombre en los que no debería\n> especular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n>\n\n\n-- \nBruno da Silva",
"msg_date": "Thu, 14 Sep 2023 15:35:10 -0400",
"msg_from": "bruno da silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multixact wraparound monitoring"
},
{
"msg_contents": "On 2023-Sep-14, bruno da silva wrote:\n\n> This problem is more acute when the FK Table stores a small number of rows\n> like types or codes.\n\nRight, because the likelihood of multiple transactions creating\nnew references to the same row is higher.\n\n> I think in those cases an enum type should be used instead of a column with\n> a FK.\n\nRight, that alleviates the issue, but IMO it's a workaround whose need\nis caused by a deficiency in our implementation.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:48:47 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multixact wraparound monitoring"
}
] |
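As the thread above notes, the multixact *members* area has no pg_stat view yet, but the multixact ID side can be watched from SQL. A minimal monitoring sketch, assuming PostgreSQL 9.5 or later (where `mxid_age()` and `pg_database.datminmxid` are available); thresholds are illustrative, not prescriptive:

```sql
-- Age of the oldest multixact ID each database still needs, relative to the
-- instance-wide multixact counter. Compare against
-- autovacuum_multixact_freeze_max_age (default 400 million) to see how close
-- aggressive anti-wraparound vacuuming is.
SELECT datname,
       mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```

Note the caveat from the thread still applies: this only tracks the 32-bit multixact ID counter, not the separate members storage, so member-space exhaustion can occur while these ages still look unremarkable.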
[
{
"msg_contents": "Dear All\n\nI have a weird problem, I am trying to improve performance on this query :\n\nSELECT text('[email protected]') from mail_vessel_addressbook where \ntext('[email protected]') ~* address_regex limit 1;\n\nThe first system (linux) is a linux hosted in a cloud, kernel \n3.16.0-4-amd64, 32GB mem, SSD, 4 x Intel(R) Xeon(R) CPU E7-4860 v2 @ \n2.60GHz ,\n\nThe second (freebsd) system, used as test, is my local FreeBSD \n13.1-RELEASE workstation, 32GB mem, ZFS/magnetic disks ,16 x AMD Ryzen 7 \n5800X 3800.16-MHz .\n\nOverall my workstation is faster, but my issue is not plain speed. The \nproblem is as follows :\n\n*FreeBSD*\n\npostgres@[local]/dynacom=# explain (analyze,buffers) SELECT \ntext('[email protected]') from mail_vessel_addressbook where \ntext('[email protected]') ~* address_regex limit 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nLimit (cost=0.42..5.11 rows=1 width=32) (actual time=96.705..96.706 \nrows=1 loops=1)\n Buffers: shared hit=71\n -> Index Only Scan using mail_vessel_addressbook_address_regex_idx \non mail_vessel_addressbook (cost=0.42..2912.06 rows=620 width=32) \n(actual time=96.704..96.705 rows=1 loops=1)\n Filter: ('[email protected]'::text ~* address_regex)\n Rows Removed by Filter: 14738\n Heap Fetches: 0\n Buffers: shared hit=71\nPlanning time: 0.082 ms\nExecution time: 96.725 ms\n(9 rows)\n\nTime: 97.038 ms\npostgres@[local]/dynacom=#\n\n*Linux*\n\ndynacom=# explain (analyze,buffers) SELECT text('[email protected]') from \nmail_vessel_addressbook where text('[email protected]') ~* address_regex limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ \n\nLimit (cost=0.42..5.12 rows=1 width=32) (actual time=1768.725..1768.727 \nrows=1 loops=1)\n Buffers: shared hit=530\n -> Index Only Scan using mail_vessel_addressbook_address_regex_idx \non mail_vessel_addressbook (cost=0.42..2913.04 rows=620 width=32) \n(actual time=1768.724..1768.725 rows=1 loops=1)\n Filter: ('[email protected]'::text ~* address_regex)\n Rows Removed by Filter: 97781\n Heap Fetches: 0\n Buffers: shared hit=530\nPlanning time: 1.269 ms\nExecution time: 1768.998 ms\n(9 rows)\n\n\nThe file in FreeBSD came by pg_dump from the linux system, I am puzzled \nwhy this huge difference in Buffers: shared hit. All table/index sizes \nare identical on both systems, I did vacuum full on the linux one, and \nalso did vacuum freeze on both. I analyzed both, reindexed both (several \ntimes). Still the FreeBSD seems to access about 7 times less number of \nblocks from shared_buffers than linux : 71 vs 530 . There is no bloat , \nI tested with newly fresh table in both systems as well.\n\nThank you for any help.",
"msg_date": "Fri, 15 Sep 2023 17:30:39 +0300",
"msg_from": "Achilleas Mantzios - cloud <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql 10.23 , different systems, same table , same plan, different\n Buffers: shared hit"
},
{
"msg_contents": "Achilleas Mantzios - cloud <[email protected]> writes:\n> *FreeBSD*\n> \n> -> Index Only Scan using mail_vessel_addressbook_address_regex_idx \n> on mail_vessel_addressbook (cost=0.42..2912.06 rows=620 width=32) \n> (actual time=96.704..96.705 rows=1 loops=1)\n> Filter: ('[email protected]'::text ~* address_regex)\n> Rows Removed by Filter: 14738\n> Heap Fetches: 0\n> Buffers: shared hit=71\n> \n> *Linux*\n> \n> -> Index Only Scan using mail_vessel_addressbook_address_regex_idx \n> on mail_vessel_addressbook (cost=0.42..2913.04 rows=620 width=32) \n> (actual time=1768.724..1768.725 rows=1 loops=1)\n> Filter: ('[email protected]'::text ~* address_regex)\n> Rows Removed by Filter: 97781\n> Heap Fetches: 0\n> Buffers: shared hit=530\n\n> The file in FreeBSD came by pg_dump from the linux system, I am puzzled \n> why this huge difference in Buffers: shared hit.\n\nThe \"rows removed\" value is also quite a bit different, so it's not\njust a matter of buffer touches --- there's evidently some real difference\nin how much of the index is being scanned. I speculate that you are\nusing different collations on the two systems, and FreeBSD's collation\nhappens to place the first matching row earlier in the index.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:23:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
},
{
"msg_contents": "Στις 15/9/23 18:23, ο/η Tom Lane έγραψε:\n> Achilleas Mantzios - cloud<[email protected]> writes:\n>> *FreeBSD*\n>>\n>> -> Index Only Scan using mail_vessel_addressbook_address_regex_idx\n>> on mail_vessel_addressbook (cost=0.42..2912.06 rows=620 width=32)\n>> (actual time=96.704..96.705 rows=1 loops=1)\n>> Filter: ('[email protected]'::text ~* address_regex)\n>> Rows Removed by Filter: 14738\n>> Heap Fetches: 0\n>> Buffers: shared hit=71\n>>\n>> *Linux*\n>>\n>> -> Index Only Scan using mail_vessel_addressbook_address_regex_idx\n>> on mail_vessel_addressbook (cost=0.42..2913.04 rows=620 width=32)\n>> (actual time=1768.724..1768.725 rows=1 loops=1)\n>> Filter: ('[email protected]'::text ~* address_regex)\n>> Rows Removed by Filter: 97781\n>> Heap Fetches: 0\n>> Buffers: shared hit=530\n>> The file in FreeBSD came by pg_dump from the linux system, I am puzzled\n>> why this huge difference in Buffers: shared hit.\n> The \"rows removed\" value is also quite a bit different, so it's not\n> just a matter of buffer touches --- there's evidently some real difference\n> in how much of the index is being scanned. I speculate that you are\n> using different collations on the two systems, and FreeBSD's collation\n> happens to place the first matching row earlier in the index.\n\nThank you, I see that both systems use en_US.UTF-8 as lc_collate and \nlc_ctype, and that in both systems :\n\ndynacom=# \\dOS+\n List of collations\n Schema | Name | Collate | Ctype | Provider | Description\n------------+---------+---------+-------+----------+------------------------------ \n\npg_catalog | C | C | C | libc | standard C collation\npg_catalog | POSIX | POSIX | POSIX | libc | standard POSIX \ncollation\npg_catalog | default | | | default | database's default \ncollation\n(3 rows)\n\n\ndynacom=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access \nprivileges\n-----------+----------+-----------+-------------+-------------+------------------------ \n\ndynacom | postgres | SQL_ASCII | en_US.UTF-8 | en_US.UTF-8 |\n\nthe below seems ok\n\nFreeBSD :\n\npostgres@[local]/dynacom=# select * from (values \n('a'),('Z'),('_'),('.'),('0')) as qry order by column1::text;\ncolumn1\n---------\n_\n.\n0\na\nZ\n(5 rows)\n\nLinux:\n\ndynacom=# select * from (values ('a'),('Z'),('_'),('.'),('0')) as qry \norder by column1::text;\ncolumn1\n---------\n_\n.\n0\na\nZ\n(5 rows)\n\ndynacom=#\n\nbut :\n\nFreebsd :\n\npostgres@[local]/dynacom=# select distinct address_regex from \nmail_vessel_addressbook order by address_regex::text ASC limit 5;\n address_regex\n----------------------------------------------------------\n_cmo.ship.inf@<hide>.<hid>\n_EMD_REEFER@hide>.<hid>\n_OfficeHayPoint@hide>.<hid>\n_Sabtank_PCQ1_All_SSVSSouth_area@hide>.<hid>\n_Sabtank_PCQ1_Lead_OperatorsSouth_area@hide>.<hid>\n(5 rows)\n\nWhile in Linux :\n\ndynacom=# select distinct address_regex from mail_vessel_addressbook \norder by address_regex::text ASC limit 5;\n address_regex\n-----------------------------------\n0033240902573@<hidden>.<hid>\n0033442057364@<hidden>.<hid>\n0072usl@<hidden>.<hid>\n\n0081354426912@<hidden>.<hid>\n00862163602861@<hidden>.<hid>\n(5 rows)\n\nsomethings does not seem right.\n\n>\n> \t\t\tregards, tom lane\n\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt",
"msg_date": "Fri, 15 Sep 2023 22:31:37 +0300",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
},
{
"msg_contents": "Achilleas Mantzios <[email protected]> writes:\n> Thank you, I see that both systems use en_US.UTF-8 as lc_collate and \n> lc_ctype,\n\nDoesn't necessarily mean they interpret that the same way, though :-(\n\n> the below seems ok\n\n> FreeBSD :\n\n> postgres@[local]/dynacom=# select * from (values \n> ('a'),('Z'),('_'),('.'),('0')) as qry order by column1::text;\n> column1\n> ---------\n> _\n> .\n> 0\n> a\n> Z\n> (5 rows)\n\nSadly, this proves very little about Linux's behavior. glibc's idea\nof en_US involves some very complicated multi-pass sort rules.\nAFAICT from the FreeBSD sort(1) man page, FreeBSD defines en_US\nas \"same as C except case-insensitive\", whereas I'm pretty sure\nthat underscores and other punctuation are nearly ignored in\nglibc's interpretation; they'll only be taken into account if the\nalphanumeric parts of the strings sort equal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Sep 2023 15:42:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
},
{
"msg_contents": "Στις 15/9/23 22:42, ο/η Tom Lane έγραψε:\n> Achilleas Mantzios <[email protected]> writes:\n>> Thank you, I see that both systems use en_US.UTF-8 as lc_collate and\n>> lc_ctype,\n> Doesn't necessarily mean they interpret that the same way, though :-(\n>\n>> the below seems ok\n>> FreeBSD :\n>> postgres@[local]/dynacom=# select * from (values\n>> ('a'),('Z'),('_'),('.'),('0')) as qry order by column1::text;\n>> column1\n>> ---------\n>> _\n>> .\n>> 0\n>> a\n>> Z\n>> (5 rows)\n> Sadly, this proves very little about Linux's behavior. glibc's idea\n> of en_US involves some very complicated multi-pass sort rules.\n> AFAICT from the FreeBSD sort(1) man page, FreeBSD defines en_US\n> as \"same as C except case-insensitive\", whereas I'm pretty sure\n> that underscores and other punctuation are nearly ignored in\n> glibc's interpretation; they'll only be taken into account if the\n\nThank you so much. Makes perfect sense.\n\nThis begs the question asked also in the -sql list : how do I index on \nregex'es, or at least have a barely scalable solution? Here I try to \nmatch a given string against a stored regex, whereas in pg_trgm's case \nthe user tries to match a stored text against a given regex.\n\n> alphanumeric parts of the strings sort equal.\n>\n> \t\t\tregards, tom lane\n\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 23:36:48 +0300",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
},
{
"msg_contents": "On Sat, Sep 16, 2023 at 7:42 AM Tom Lane <[email protected]> wrote:\n> Sadly, this proves very little about Linux's behavior. glibc's idea\n> of en_US involves some very complicated multi-pass sort rules.\n> AFAICT from the FreeBSD sort(1) man page, FreeBSD defines en_US\n> as \"same as C except case-insensitive\", whereas I'm pretty sure\n> that underscores and other punctuation are nearly ignored in\n> glibc's interpretation; they'll only be taken into account if the\n> alphanumeric parts of the strings sort equal.\n\nAchilleas didn't mention the glibc version, but based on the kernel\nvintage mentioned I guess that must be the \"old\" (pre 2.28) glibc\nsorting. In 2.28 they did a big sync-up with ISO 14651, while FreeBSD\nfollows the UCA, a closely related standard[1]. I think newer\nLinux/glibc systems should agree with FreeBSD's libc in more cases\n(and also agree with ICU).\n\n[1] https://unicode.org/reports/tr10/#Synch_ISO14651\n\n\n",
"msg_date": "Sat, 16 Sep 2023 11:08:55 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
},
{
"msg_contents": "Στις 16/9/23 02:08, ο/η Thomas Munro έγραψε:\n> On Sat, Sep 16, 2023 at 7:42 AM Tom Lane<[email protected]> wrote:\n>> Sadly, this proves very little about Linux's behavior. glibc's idea\n>> of en_US involves some very complicated multi-pass sort rules.\n>> AFAICT from the FreeBSD sort(1) man page, FreeBSD defines en_US\n>> as \"same as C except case-insensitive\", whereas I'm pretty sure\n>> that underscores and other punctuation are nearly ignored in\n>> glibc's interpretation; they'll only be taken into account if the\n>> alphanumeric parts of the strings sort equal.\n> Achilleas didn't mention the glibc version, but based on the kernel\n> vintage mentioned I guess that must be the \"old\" (pre 2.28) glibc\n> sorting. In 2.28 they did a big sync-up with ISO 14651, while FreeBSD\n> follows the UCA, a closely related standard[1]. I think newer\n> Linux/glibc systems should agree with FreeBSD's libc in more cases\n> (and also agree with ICU).\nThank you Thomas , our linux's glibc is on version : 2.19-18+deb8u10, we \nneed to upgrade on so many levels.\n>\n> [1]https://unicode.org/reports/tr10/#Synch_ISO14651\n\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt",
"msg_date": "Sat, 16 Sep 2023 06:27:16 +0300",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql 10.23 , different systems, same table , same plan,\n different Buffers: shared hit"
}
] |
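One way to take the platform-dependent libc rules discussed in this thread out of the picture is to pin an explicit collation on the expression or the index. A hedged sketch (the `COLLATE "C"` index below is an illustration, not something proposed in the thread; it makes both systems sort by raw byte order, so the index visit order — and hence the `Rows Removed by Filter` count — becomes identical, though it does not by itself make the `~*` filter cheaper):

```sql
-- With COLLATE "C", glibc and FreeBSD libc agree: plain byte-order sorting.
-- This yields '.', '0', 'Z', '_', 'a' on any platform.
SELECT *
FROM (VALUES ('a'), ('Z'), ('_'), ('.'), ('0')) AS qry
ORDER BY column1 COLLATE "C";

-- Hypothetical byte-ordered variant of the index from the thread, so both
-- systems scan entries in the same deterministic order:
CREATE INDEX mail_vessel_addressbook_address_regex_c_idx
    ON mail_vessel_addressbook (address_regex COLLATE "C");
```

The trade-off: a "C"-collated index no longer supports locale-aware range queries on that column, but for a column holding regexes that are only ever tested with `~*`, the linguistic ordering carries no value anyway.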
[
{
"msg_contents": "I'm researching a query that's slow occasionally, and I'm seeing dirtied\nreads and am asking for some help in understanding.\n\nThe table has the following relevant fields:\n- insert_timestamp (timestamp without timezone, nullable, default now())\n- hasbeenchecked ( boolean, not null )\n- hasbeenverified ( boolean. not null )\n\nI'm doing the following query:\nselect * from my_table where hasbeenchecked = true and hasbeenverified =\ntrue and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n\nThe date is an example, it is the format that is used in the query.\n\nThe table has 81M rows. Is 50GB in size. And the index is 34MB\n\nThe index is as follows:\nbtree (insert_timestamp DESC) WHERE hasbeenchecked = true\nAND hasbeenverified = true\n\nI'm seeing a slow query first, then a fast one, and if I move the date, a\nslow query again.\n\nWhat I'm seeing is:\nAttempt 1:\nHit: 5171(40MB)\nRead: 16571(130MB)\nDirtied: 3940(31MB)\n\nAttempt 2:\nHit: 21745 (170MB)\nRead: Nothing\nDirtied: Nothing.\n\nIt's slow once, then consistently fast, and then slow again if I move the\ndate around.\nAnd by slow I mean: around 60 seconds. And fast is below 1 second.\n\nMy settings:\nshared_buffers = 2048MB\neffective_cache_size = 6GB\ncheckpoint_completion_target = 0.5\ndefault_statistics_target = 100\nrandom_page_cost = 1.1\neffective_io_concurrency = 200\nwork_mem = 64MB\n\nThe data is on an SSD. 4CPU/32GB ram.\n\nI've tried increasing the amount of CPUs, but that doesn't seem to affect\nthe performance.\n\nI'm having trouble identifying what exactly is the culprit here, or if\nthere are multiple. Is the table simply too big? Is the query always going\nto be problematic and I probably need to look at a fundamentally different\nway of gathering this data? Is it not enough memory? Something else?\n\nAny help would be appreciated.\n\nI'm using the analysis methods explained here to gather this data:\nhttps://github.com/dalibo/pev2\n\nRegards,\nKoen De Groote",
"msg_date": "Thu, 21 Sep 2023 17:05:06 +0200",
"msg_from": "Koen De Groote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dirty reads on index scan,"
},
{
"msg_contents": "On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:\n> I'm doing the following query:\n> select * from my_table where hasbeenchecked = true and hasbeenverified = true and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n> \n> The date is an example, it is the format that is used in the query.\n> \n> The table has 81M rows. Is 50GB in size. And the index is 34MB\n> \n> The index is as follows:\n> btree (insert_timestamp DESC) WHERE hasbeenchecked = true AND hasbeenverified = true\n> \n> I'm seeing a slow query first, then a fast one, and if I move the date, a slow query again.\n> \n> What I'm seeing is:\n> Attempt 1:\n> Hit: 5171(40MB)\n> Read: 16571(130MB)\n> Dirtied: 3940(31MB)\n> \n> Attempt 2:\n> Hit: 21745 (170MB)\n> Read: Nothing\n> Dirtied: Nothing.\n> \n> It's slow once, then consistently fast, and then slow again if I move the date around.\n> And by slow I mean: around 60 seconds. And fast is below 1 second.\n\nThat's normal behavior: after the first execution, the data are cached, so the query\nbecomes much faster.\n\nDirtying pages happens because the first reader has to set hint bits, which is an extra\nchore. You can avoid that if you VACUUM the table before you query it.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 21 Sep 2023 21:30:37 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dirty reads on index scan,"
},
{
"msg_contents": "Alright.\n\nSo, if I want to speed up the query, apart from trying to vacuum it\nbeforehand, I suspect I've hit the limit of what this query can do?\n\nBecause, the table is just going to keep growing. And it's a usually a\nquery that runs one time per day, so it's a cold run each time.\n\nIs this just going to get slower and slower and there's nothing that can be\ndone about it?\n\nRegards,\nKoen De Groote\n\n\n\nOn Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:\n> > I'm doing the following query:\n> > select * from my_table where hasbeenchecked = true and hasbeenverified =\n> true and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n> >\n> > The date is an example, it is the format that is used in the query.\n> >\n> > The table has 81M rows. Is 50GB in size. And the index is 34MB\n> >\n> > The index is as follows:\n> > btree (insert_timestamp DESC) WHERE hasbeenchecked = true\n> AND hasbeenverified = true\n> >\n> > I'm seeing a slow query first, then a fast one, and if I move the date,\n> a slow query again.\n> >\n> > What I'm seeing is:\n> > Attempt 1:\n> > Hit: 5171(40MB)\n> > Read: 16571(130MB)\n> > Dirtied: 3940(31MB)\n> >\n> > Attempt 2:\n> > Hit: 21745 (170MB)\n> > Read: Nothing\n> > Dirtied: Nothing.\n> >\n> > It's slow once, then consistently fast, and then slow again if I move\n> the date around.\n> > And by slow I mean: around 60 seconds. And fast is below 1 second.\n>\n> That's normal behavior: after the first execution, the data are cached, so\n> the query\n> becomes much faster.\n>\n> Dirtying pages happens because the first reader has to set hint bits,\n> which is an extra\n> chore. You can avoid that if you VACUUM the table before you query it.\n>\n> Yours,\n> Laurenz Albe\n>",
"msg_date": "Fri, 22 Sep 2023 10:35:12 +0200",
"msg_from": "Koen De Groote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dirty reads on index scan,"
},
{
"msg_contents": "On Fri, 2023-09-22 at 10:35 +0200, Koen De Groote wrote:\n> On Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe <[email protected]> wrote:\n> > On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:\n> > > I'm doing the following query:\n> > > select * from my_table where hasbeenchecked = true and hasbeenverified = true and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n> > > \n> > > The date is an example, it is the format that is used in the query.\n> > > \n> > > The table has 81M rows. Is 50GB in size. And the index is 34MB\n> > > \n> > > The index is as follows:\n> > > btree (insert_timestamp DESC) WHERE hasbeenchecked = true AND hasbeenverified = true\n> > > \n> > > I'm seeing a slow query first, then a fast one, and if I move the date, a slow query again.\n> > > \n> > > What I'm seeing is:\n> > > Attempt 1:\n> > > Hit: 5171(40MB)\n> > > Read: 16571(130MB)\n> > > Dirtied: 3940(31MB)\n> > > \n> > > Attempt 2:\n> > > Hit: 21745 (170MB)\n> > > Read: Nothing\n> > > Dirtied: Nothing.\n> > > \n> > > It's slow once, then consistently fast, and then slow again if I move the date around.\n> > > And by slow I mean: around 60 seconds. And fast is below 1 second.\n> > \n> > That's normal behavior: after the first execution, the data are cached, so the query\n> > becomes much faster.\n> > \n> > Dirtying pages happens because the first reader has to set hint bits, which is an extra\n> > chore. You can avoid that if you VACUUM the table before you query it.\n>\n> So, if I want to speed up the query, apart from trying to vacuum it beforehand, I suspect\n> I've hit the limit of what this query can do?\n>\n> Because, the table is just going to keep growing. 
And it's a usually a query that runs one\n> time per day, so it's a cold run each time.\n>\n> Is this just going to get slower and slower and there's nothing that can be done about it?\n\nEssentially yes.\n\nIf the table does not have too many columns, or you can be more selective than \"SELECT *\",\nyou could use an index-only scan with an index like\n\n CREATE INDEX ON my_table (insert_timestamp)\n INCLUDE (/* all the columns in the SELECT list */)\n WHERE hasbeenchecked AND hasbeenverified;\n\n VACUUM my_table;\n\nYou need to configure autovacuum so that it vacuums the table often enough if you want\nan efficient index-only scan.\n\nIf that is not feasible, you can gain speed by clustering the table. For that, you need\na different index:\n\n CREATE INDEX cluster_idx ON my_table (hasbeenchecked, hasbeenverified, insert_timestamp);\n\n CLUSTER my_table USING cluster_idx; -- attention: rewrites the table\n\nThat should speed up the query considerably, because it will have to read way fewer pages\nfrom disk. However, CLUSTER is not without problems. Look at the documentation for the\ncaveats.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 22 Sep 2023 11:14:27 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dirty reads on index scan,"
},
{
"msg_contents": "The actual thing that might be good to see is the query plan (explain).\nIt is commonly regarded an issue to select ‘*’, in many cases only a subset of the rows are needed, but I don’t know your exact case.\nIf a limited number of columns are actually needed from the table, it might help to create an index which has got all the columns in the index, either directly for the index, or included with the index.\nThis is called a covering index, and could prevent the need to read the actual table, which is visible by the row source 'index only scan’.\nBut that potential can only be assessed by looking at the explain output.\n\nA covering index needs the visibility map to be recent for the blocks, otherwise a table visit must be done to get the latest tuple state. This can be done by vacuuming.\n\nWhen your query is as efficient as it can be, there are two things left.\nOne is that blocks in the database buffer cache that are not frequently accessed will age out in favour of blocks that are accessed more recently. \nOn the operating system, the same mechanism takes place, postgres reads data buffered, which means the operating system caches the IOs for the database blocks too.\n\nThis means that if you query data that is stored in blocks that are not recently used, these will not be present in the database cache, and not in the operating system cache, and thus require a physical IO from disk to be obtained. If the amount of blocks relative to the caches is modest, another execute of the same SQL can take advantage, and thus result in much lower latency.\n\nYou describe the query to be using a timestamp. If the timestamp moves forward in time, and the amount of data is equal over time, then the latency for the two scenario’s should remain stable. 
\nIf the amount of data increases over time, and thus more blocks are needed to be read because more rows are stored that needs scanning to get a result, then the latency will increase.\n\nFrits Hoogland\n\n\n\n\n> On 22 Sep 2023, at 10:35, Koen De Groote <[email protected]> wrote:\n> \n> Alright.\n> \n> So, if I want to speed up the query, apart from trying to vacuum it beforehand, I suspect I've hit the limit of what this query can do?\n> \n> Because, the table is just going to keep growing. And it's a usually a query that runs one time per day, so it's a cold run each time.\n> \n> Is this just going to get slower and slower and there's nothing that can be done about it?\n> \n> Regards,\n> Koen De Groote\n> \n> \n> \n> On Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe <[email protected] <mailto:[email protected]>> wrote:\n>> On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:\n>> > I'm doing the following query:\n>> > select * from my_table where hasbeenchecked = true and hasbeenverified = true and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n>> > \n>> > The date is an example, it is the format that is used in the query.\n>> > \n>> > The table has 81M rows. Is 50GB in size. And the index is 34MB\n>> > \n>> > The index is as follows:\n>> > btree (insert_timestamp DESC) WHERE hasbeenchecked = true AND hasbeenverified = true\n>> > \n>> > I'm seeing a slow query first, then a fast one, and if I move the date, a slow query again.\n>> > \n>> > What I'm seeing is:\n>> > Attempt 1:\n>> > Hit: 5171(40MB)\n>> > Read: 16571(130MB)\n>> > Dirtied: 3940(31MB)\n>> > \n>> > Attempt 2:\n>> > Hit: 21745 (170MB)\n>> > Read: Nothing\n>> > Dirtied: Nothing.\n>> > \n>> > It's slow once, then consistently fast, and then slow again if I move the date around.\n>> > And by slow I mean: around 60 seconds. 
And fast is below 1 second.\n>> \n>> That's normal behavior: after the first execution, the data are cached, so the query\n>> becomes much faster.\n>> \n>> Dirtying pages happens because the first reader has to set hint bits, which is an extra\n>> chore. You can avoid that if you VACUUM the table before you query it.\n>> \n>> Yours,\n>> Laurenz Albe",
"msg_date": "Fri, 22 Sep 2023 15:22:22 +0200",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dirty reads on index scan,"
},
{
"msg_contents": "The \"select * \" is a replacement for the actual fields, which are all\nqueried. I simply want to avoid pasting the entire query. The names that\nare there, too, are edited.\n\n From what I'm reading, my best chance is to limit the amount of variables I\nneed and change to index, plus tune for more frequent vacuuming of the\ntable. I'll look into that.\n\nThanks for the advice, both of you.\n\nRegards,\nKoen De Groote\n\nOn Fri, Sep 22, 2023 at 3:22 PM Frits Hoogland <[email protected]>\nwrote:\n\n> The actual thing that might be good to see is the query plan (explain).\n> It is commonly regarded an issue to select ‘*’, in many cases only a\n> subset of the rows are needed, but I don’t know your exact case.\n> If a limited number of columns are actually needed from the table, it\n> might help to create an index which has got all the columns in the index,\n> either directly for the index, or included with the index.\n> This is called a covering index, and could prevent the need to read the\n> actual table, which is visible by the row source 'index only scan’.\n> But that potential can only be assessed by looking at the explain output.\n>\n> A covering index needs the visibility map to be recent for the blocks,\n> otherwise a table visit must be done to get the latest tuple state. 
This\n> can be done by vacuuming.\n>\n> When your query is as efficient as it can be, there are two things left.\n> One is that blocks in the database buffer cache that are not frequently\n> accessed will age out in favour of blocks that are accessed more recently.\n> On the operating system, the same mechanism takes place, postgres reads\n> data buffered, which means the operating system caches the IOs for the\n> database blocks too.\n>\n> This means that if you query data that is stored in blocks that are not\n> recently used, these will not be present in the database cache, and not in\n> the operating system cache, and thus require a physical IO from disk to be\n> obtained. If the amount of blocks relative to the caches is modest, another\n> execute of the same SQL can take advantage, and thus result in much lower\n> latency.\n>\n> You describe the query to be using a timestamp. If the timestamp moves\n> forward in time, and the amount of data is equal over time, then the\n> latency for the two scenario’s should remain stable.\n> If the amount of data increases over time, and thus more blocks are needed\n> to be read because more rows are stored that needs scanning to get a\n> result, then the latency will increase.\n>\n> *Frits Hoogland*\n>\n>\n>\n>\n> On 22 Sep 2023, at 10:35, Koen De Groote <[email protected]> wrote:\n>\n> Alright.\n>\n> So, if I want to speed up the query, apart from trying to vacuum it\n> beforehand, I suspect I've hit the limit of what this query can do?\n>\n> Because, the table is just going to keep growing. 
And it's a usually a\n> query that runs one time per day, so it's a cold run each time.\n>\n> Is this just going to get slower and slower and there's nothing that can\n> be done about it?\n>\n> Regards,\n> Koen De Groote\n>\n>\n>\n> On Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe <[email protected]>\n> wrote:\n>\n>> On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:\n>> > I'm doing the following query:\n>> > select * from my_table where hasbeenchecked = true and hasbeenverified\n>> = true and insert_timestamp <= '2023-09-01 00:00:00.000' limit 1000;\n>> >\n>> > The date is an example, it is the format that is used in the query.\n>> >\n>> > The table has 81M rows. Is 50GB in size. And the index is 34MB\n>> >\n>> > The index is as follows:\n>> > btree (insert_timestamp DESC) WHERE hasbeenchecked = true\n>> AND hasbeenverified = true\n>> >\n>> > I'm seeing a slow query first, then a fast one, and if I move the date,\n>> a slow query again.\n>> >\n>> > What I'm seeing is:\n>> > Attempt 1:\n>> > Hit: 5171(40MB)\n>> > Read: 16571(130MB)\n>> > Dirtied: 3940(31MB)\n>> >\n>> > Attempt 2:\n>> > Hit: 21745 (170MB)\n>> > Read: Nothing\n>> > Dirtied: Nothing.\n>> >\n>> > It's slow once, then consistently fast, and then slow again if I move\n>> the date around.\n>> > And by slow I mean: around 60 seconds. And fast is below 1 second.\n>>\n>> That's normal behavior: after the first execution, the data are cached,\n>> so the query\n>> becomes much faster.\n>>\n>> Dirtying pages happens because the first reader has to set hint bits,\n>> which is an extra\n>> chore. You can avoid that if you VACUUM the table before you query it.\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>\n>",
"msg_date": "Fri, 22 Sep 2023 15:54:18 +0200",
"msg_from": "Koen De Groote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dirty reads on index scan,"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 5:44 AM Koen De Groote <[email protected]> wrote:\n\n> Alright.\n>\n> So, if I want to speed up the query, apart from trying to vacuum it\n> beforehand, I suspect I've hit the limit of what this query can do?\n>\n\nIt is more a limit on the system as a whole, not just one query. How is\nthis table being inserted? updated? deleted? Is the physical row order\ncorrelated on the insert_timestamp column (look at pg_stats.correlation)?\nIf not, why not? (Based on the name of the column, i would expect it to be\nhighly correlated)\n\nDid you try the VACUUM and if so did it work? Knowing that might help us\nfigure out what else might work, even if you don't want to do the vacuum.\nBut why not just do the vacuum?\n\nYou should show us the actual plans, not just selected excerpts from it.\nThere might be clues there that you haven't excerpted. Turn on\ntrack_io_timing first if it is not on already.\n\n\n> Because, the table is just going to keep growing. And it's a usually a\n> query that runs one time per day, so it's a cold run each time.\n>\n\nWhy do you care if a query run once per day takes 1 minute to run?\n\n\n> Is this just going to get slower and slower and there's nothing that can\n> be done about it?\n>\n\nIt is probably not so much the size of the data (given that it is already\nfar too large to stay in cache) as the number of dead tuples it had to wade\nthrough. Having to read 16571 pages just to find 1000 tuples from a\nsingle-loop index scan suggests you have a lot of dead tuples. Like, 16\nfor every live tuple. Why do you have so many, and why isn't index\nmicro-vacuuming cleaning them up? Do you have long-running transactions\nwhich are preventing clean up? 
Are you running this on a standby?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 24 Sep 2023 16:17:45 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dirty reads on index scan,"
}
] |
[
{
"msg_contents": "Have encountered an intriguing issue processing a table with a large \nnumber of rows. The process is to loop over the table processing each \nrow executing the Apache AGE cypher function over each row \nindividually.The looping is necessary because of encountering a limit on \nthe number of rows that the cypher function would process.\n\nThe process terminates unexpectedly with the following message. Notable \nthat it runs for quite some time before termination.:\n\nSQL Error [42703]: ERROR: could not find rte for \na01a724103fbb3d059b8387bf043dbc8\n Where: PL/pgSQL function \nanalysis.create_trips(text,text,text,text,text,text,integer,text,integer) \nline 5 at EXECUTE\n\nOf note the the string refers to a value in the field service_key.\n\nThe first instance of the service_key when ordered is in row 7741 shown \nbelow.\n\n*row_num* \t*service_key* \t*service_id* \t*trip_id* \t*trip_headsign* \n*route_id* \t*direction_id* \t*shape_id* \t*wheelchair_accessible*\n7741 \ta01a724103fbb3d059b8387bf043dbc8 \tFR \t307 \tGungahlin Pl \tX1 \t0 \n1002 \t1\n\nThe database version is PostgreSQL 15.4 (Debian 15.4-2.pgdg120+1) on \nx86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit.\n\nThe attributes of the table allservices.trips are as follows:\n\n * Total size with indexes: 60 MB\n * Number of rows: 231,131\n\nThis is the function definition and the process to load the table \nallservices.trips into the Apache AGE graph schema.\n\nAny assistance in refining the process to ensure completion welcome.\n\nRegards\n\nMatt.\n\nSELECT create_graph('transport_network');\n\nCREATE OR REPLACE FUNCTION analysis.create_trips\n\n(graph_name text,\n\nservice_key text,service_id text, trip_id text, trip_headsign text, \nroute_id text, direction_id int, shape_id text, wheelchair_accessible int)\n\nreturns text\n\nLANGUAGE plpgsql\n\nVOLATILE\n\nas $trips$\n\ndeclare\n\nnodename text := graph_name || '.' 
|| 'trips';\n\nBEGIN\n\nexecute\n\nformat ('select * from cypher(''%1$s'', $$match (v:routes {id: %6$s})\n\ncreate(v)-[:USES]->\n\n(t:trips\n\n{service_key: %2$s, service_id: %3$s, id: %4$s, headsign: %5$s, \nroute_id: %6$s, direction_id: %7$s, shape_id: %8$s,\n\nwheelchair_accessible: %9$s})$$) as (t agtype);',\n\nquote_ident(graph_name),\n\nquote_ident(service_key),quote_ident(service_id),\n\nquote_ident(trip_id),quote_ident(trip_headsign),\n\nquote_ident(route_id),to_char(direction_id,'9'),quote_ident(shape_id),to_char(wheelchair_accessible,'9'));\n\nreturn nodename;\n\nEND\n\n$trips$\n\n;\n\nselect create_vlabel('transport_network','trips');\n\ndo $$\n\ndeclare temprow record;\n\ngraph_name text:='transport_network';\n\ncounter integer := 0 ;\n\nbegin\n\nfor temprow in select service_key, service_id, trip_id\tfrom \nallservices.trips\n\norder by service_key,trip_id\n\nloop\n\ncounter := counter+1; -- Prevent replication of row\n\nperform\n\nanalysis.create_trips\n\n(graph_name,\n\na.service_key, a.service_id,\n\na.trip_id, a.trip_headsign,\n\na.route_id, a.direction_id, a.shape_id,\n\na.wheelchair_accessible)\n\nfrom\n\n(select row_number() over (order by service_key,trip_id) as row_num,\n\nservice_key, service_id,\n\ntrip_id, trip_headsign,\n\nroute_id, direction_id, shape_id,\n\ncoalesce(wheelchair_accessible,0) as wheelchair_accessible from \nallservices.trips) a\n\nwhere a.row_num=counter\n\n;\n\nend loop;\n\nend; $$;",
"msg_date": "Mon, 2 Oct 2023 11:57:01 +1100",
"msg_from": "Matt Gibbins <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unexpected termination looping over table."
},
{
"msg_contents": "Matt Gibbins <[email protected]> writes:\n> Have encountered an intriguing issue processing a table with a large \n> number of rows. The process is to loop over the table processing each \n> row executing the Apache AGE cypher function over each row \n> individually.The looping is necessary because of encountering a limit on \n> the number of rows that the cypher function would process.\n\n> The process terminates unexpectedly with the following message. Notable \n> that it runs for quite some time before termination.:\n\n> SQL Error [42703]: ERROR: could not find rte for \n> a01a724103fbb3d059b8387bf043dbc8\n> Where: PL/pgSQL function \n> analysis.create_trips(text,text,text,text,text,text,integer,text,integer) \n> line 5 at EXECUTE\n\nThere is no occurrence of \"could not find rte\" anywhere in the\ncore Postgres source code. I surmise that you're using some\nextension that isn't happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Oct 2023 21:09:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected termination looping over table."
},
{
"msg_contents": "On 2/10/23 12:09, Tom Lane wrote:\n> Matt Gibbins <[email protected]> writes:\n>> Have encountered an intriguing issue processing a table with a large\n>> number of rows. The process is to loop over the table processing each\n>> row executing the Apache AGE cypher function over each row\n>> individually.The looping is necessary because of encountering a limit on\n>> the number of rows that the cypher function would process.\n>> The process terminates unexpectedly with the following message. Notable\n>> that it runs for quite some time before termination.:\n>> SQL Error [42703]: ERROR: could not find rte for\n>> a01a724103fbb3d059b8387bf043dbc8\n>> Where: PL/pgSQL function\n>> analysis.create_trips(text,text,text,text,text,text,integer,text,integer)\n>> line 5 at EXECUTE\n> There is no occurrence of \"could not find rte\" anywhere in the\n> core Postgres source code. I surmise that you're using some\n> extension that isn't happy.\n>\n> \t\t\tregards, tom lane\n\nTom\n\nThanks for your response. I am using the Apache AGE extension which \nenables graph data processing on PostgreSQL.\n\nhttps://age.apache.org/\n\nI'll enquire with the Apache AGE list for an answer.\n\nRegards\n\nMatt.\n\n\n\n\n\n",
"msg_date": "Mon, 2 Oct 2023 13:36:06 +1100",
"msg_from": "Matt Gibbins <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected termination looping over table."
}
] |
[
{
"msg_contents": "Hello everyone!\n\n\nRecently, we upgraded the AWS RDS instance from Postgres 12.14 to 15.4 \nand noticed extremely high disk consumption on the following query \nexecution:\n\nselect (exists (select 1 as \"one\" from \"public\".\"indexed_commit\" where \n\"public\".\"indexed_commit\".\"repo_id\" in (964992,964994,964999, ...);\n\nFor some reason, the query planner starts using Seq Scan instead of the \nindex on the \"repo_id\" column when requesting under user limited with \nRLS. On prod, it happens when there are more than 316 IDs in the IN part \nof the query, on stage - 3. If we execute the request from Superuser, \nthe planner always uses the \"repo_id\" index.\n\nLuckily, we can easily reproduce this on our stage database (which is \nsmaller). If we add a multicolumn \"repo_id, tenant_id\" index, the \nplanner uses it (Index Only Scan) with any IN params count under RLS.\n\nCould you please clarify if this is a Postgres bug or not? Should we \ninclude the \"tenant_id\" column in all our indexes to make them work \nunder RLS?\n\n\n Postgres version / Operating system+version\n\n\nPostgreSQL 15.4 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) \n7.3.1 20180712 (Red Hat 7.3.1-6), 64-bit\n\n\n Full Table and Index Schema\n\n\\d indexed_commit\n Table \"public.indexed_commit\"\n Column | Type | Collation | Nullable | \nDefault\n---------------+-----------------------------+-----------+----------+---------\n id | bigint | | not null |\n commit_hash | character varying(40) | | not null |\n parent_hash | text | | |\n created_ts | timestamp without time zone | | not null |\n repo_id | bigint | | not null |\n lines_added | bigint | | |\n lines_removed | bigint | | |\n tenant_id | uuid | | not null |\n author_id | uuid | | not null |\nIndexes:\n \"indexed-commit-repo-idx\" btree (repo_id)\n \"indexed_commit_commit_hash_repo_id_key\" UNIQUE CONSTRAINT, btree \n(commit_hash, repo_id) REPLICA IDENTITY\n \"indexed_commit_repo_id_without_loc_idx\" btree 
(repo_id) WHERE \nlines_added IS NULL OR lines_removed IS NULL\nPolicies:\n POLICY \"commit_isolation_policy\"\n USING ((tenant_id = \n(current_setting('app.current_tenant_id'::text))::uuid))\n\n\n Table Metadata\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, \nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE \nrelname='indexed_commit';\n relname | relpages | reltuples | relallvisible | relkind | \nrelnatts | relhassubclass | reloptions | pg_table_size\n----------------+----------+--------------+---------------+---------+----------+----------------+---------------------------------------------------------------------------------------------------------------------------------------------+---------------\n indexed_commit | 18170522 | 7.451964e+08 | 18104744 | r | \n9 | f | \n{autovacuum_vacuum_scale_factor=0,autovacuum_analyze_scale_factor=0,autovacuum_vacuum_threshold=200000,autovacuum_analyze_threshold=100000} \n| 148903337984\n\n\n EXPLAIN (ANALYZE, BUFFERS), not just EXPLAIN\n\nProduction queries:\n\n316 ids under RLS limited user\n<https://explain.depesz.com/s/X7Iq>\n\n392 ids under RLS limited user <https://explain.depesz.com/s/lbkX>\n\n392 ids under Superuser <https://explain.depesz.com/s/uKSG>\n\n\n History\n\nIt became slow after the upgrade to 15.4. We never had any issues before.\n\n\n Hardware\n\nAWS DB class db.t4g.large + GP3 400GB disk\n\n\n Maintenance Setup\n\nAre you running autovacuum? 
Yes\n\nIf so, with what settings?\n\nautovacuum_vacuum_scale_factor=0,autovacuum_analyze_scale_factor=0,autovacuum_vacuum_threshold=200000,autovacuum_analyze_threshold=100000\n\nSELECT * FROM pg_stat_user_tables WHERE relname='indexed_commit';\n relid | schemaname | relname | seq_scan | seq_tup_read | \nidx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | \nn_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | \nn_ins_since_vacuum | last_vacuum | last_autovacuum | \nlast_analyze | last_autoanalyze | vacuum_count | \nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+----------------+----------+--------------+-----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-------------------------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------\n 24662 | public | indexed_commit | 2485 | 49215378424 | \n374533865 | 4050928807 | 764089750 | 2191615 | 18500311 \n| 0 | 745241398 | 383 | 46018 \n| 45343 | | 2023-10-11 23:51:29.170378+00 \n| | 2023-10-11 23:50:18.922351+00 | 0 \n| 672 | 0 | 753\n\n\n WAL Configuration\n\nFor data writing queries: have you moved the WAL to a different disk? \nChanged the settings? No.\n\n\n GUC Settings\n\nWhat database configuration settings have you changed? 
We use default \nsettings.\n\nWhat are their values?\n\nSELECT * FROM pg_settings WHERE name IN ('effective_cache_size', \n'shared_buffers', 'work_mem');\n name | setting | unit | category | \nshort_desc | extra_desc | context | vartype | source | \nmin_val | max_val | enumvals | boot_val | reset_val | sourcefile | \nsourceline | pending_restart\n----------------------+---------+------+---------------------------------------+------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+---------+--------------------+---------+------------+----------+----------+-----------+------------+------------+-----------------\n effective_cache_size | 494234 | 8kB | Query Tuning / Planner Cost \nConstants | Sets the planner's assumption about the total size of the \ndata caches. | That is, the total size of the caches (kernel cache and \nshared buffers) used for PostgreSQL data files. This is measured in disk \npages, which are normally 8 kB each. | user | integer | \nconfiguration file | 1 | 2147483647 | | 524288 | \n494234 | | | f\n shared_buffers | 247117 | 8kB | Resource Usage / \nMemory | Sets the number of shared memory buffers used by \nthe server. | | postmaster | integer | configuration file | 16 | \n1073741823 | | 16384 | 247117 | | | f\n work_mem | 4096 | kB | Resource Usage / \nMemory | Sets the maximum memory to be used for query \nworkspaces. | This much memory can be used by each \ninternal sort operation and hash table before switching to temporary \ndisk files. | user \n| integer | default | 64 | 2147483647 | | \n4096 | 4096 | | | f\n\n\n Statistics: n_distinct, MCV, histogram\n\nUseful to check statistics leading to bad join plan. 
SELECT (SELECT \nsum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, \ninherited, null_frac, n_distinct, array_length(most_common_vals,1) \nn_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM \npg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC;\n\nReturns 0 rows.\n\n\nKind regards,\n\nAlexander",
"msg_date": "Thu, 12 Oct 2023 18:41:40 +0200",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Alexander Okulovich <[email protected]> writes:\n> Recently, we upgraded the AWS RDS instance from Postgres 12.14 to 15.4 \n> and noticed extremely high disk consumption on the following query \n> execution:\n> select (exists (select 1 as \"one\" from \"public\".\"indexed_commit\" where \n> \"public\".\"indexed_commit\".\"repo_id\" in (964992,964994,964999, ...);\n> For some reason, the query planner starts using Seq Scan instead of the \n> index on the \"repo_id\" column when requesting under user limited with \n> RLS. On prod, it happens when there are more than 316 IDs in the IN part \n> of the query, on stage - 3. If we execute the request from Superuser, \n> the planner always uses the \"repo_id\" index.\n\nThe superuser bypasses the RLS policy. When that's enforced, the\nquery can no longer use an index-only scan (because it needs to fetch\ntenant_id too). Moreover, it may be that only a small fraction of the\nrows fetched via the index will satisfy the RLS condition. So the\nestimated cost of an indexscan query could be high enough to persuade\nthe planner that a seqscan is a better idea.\n\n> Luckily, we can easily reproduce this on our stage database (which is \n> smaller). If we add a multicolumn \"repo_id, tenant_id\" index, the \n> planner uses it (Index Only Scan) with any IN params count under RLS.\n\nYeah, that would be the obvious way to ameliorate both problems.\n\nIf in fact you were getting decent performance from an indexscan plan\nbefore, the only explanation I can think of is that the repo_ids you\nare querying for are correlated with the tenant_id, so that the RLS\nfilter doesn't eliminate very many rows from the index result. The\nplanner wouldn't realize that by default, but if you create extended\nstatistics on repo_id and tenant_id then it might do better. Still,\nyou probably want the extra index.\n\n> Could you please clarify if this is a Postgres bug or not?\n\nYou haven't shown any evidence suggesting that.\n\n> Should we \n> include the \"tenant_id\" column in all our indexes to make them work \n> under RLS?\n\nAdding tenant_id is going to bloat your indexes quite a bit,\nso I wouldn't do that except in cases where you've demonstrated\nit's important.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Oct 2023 16:26:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Oscar,\n\nThank you for the suggestion.\n\nUnfortunately, I didn't mention that on prod we performed the upgrade \nfrom Postgres 12 to 15 using replication to another instance with \npglogical, so I assume that the index was filled from scratch by \nPostgres 15.\n\nWe upgraded stage instance by changing Postgres version only, so \npotentially could run into the index issue there. I've tried to execute \nREINDEX CONCURRENTLY, but the performance issue hasn't gone. The problem \nis probably somewhere else. However, I do not exclude that we'll perform \nREINDEX on prod.\n\nKind regards,\n\nAlexander\n\nOn 13.10.2023 11:44, Oscar van Baten wrote:\n\n> Hi Alexander,\n>\n> I think this is caused by the de-duplication of B-tree index entries \n> which was added to postgres in version 13\n> https://www.postgresql.org/docs/release/13.0/\n>\n> \"\n> More efficiently store duplicates in B-tree indexes (Anastasia \n> Lubennikova, Peter Geoghegan)\n> This allows efficient B-tree indexing of low-cardinality columns by \n> storing duplicate keys only once. Users upgrading with pg_upgrade will \n> need to use REINDEX to make an existing index use this feature.\n> \"\n>\n> When we upgraded from 12->13 we had a similar issue. We had to rebuild \n> the indexes and it was fixed..\n>\n>\n> regards,\n> Oscar",
"msg_date": "Wed, 18 Oct 2023 12:29:26 +0200",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Tom,\n\n> If in fact you were getting decent performance from an indexscan plan\n> before, the only explanation I can think of is that the repo_ids you\n> are querying for are correlated with the tenant_id, so that the RLS\n> filter doesn't eliminate very many rows from the index result. The\n> planner wouldn't realize that by default, but if you create extended\n> statistics on repo_id and tenant_id then it might do better. Still,\n> you probably want the extra index.\n\nDo you have any idea how to measure that correlation?\n\n> You haven't shown any evidence suggesting that.\nMy suggestion is based on following backward reasoning.\n\nWe used the product with the default settings. The requests are simple. \nWe didn't change the hardware (actually, we use even more performant \nhardware because of that issue) and DDL. I've checked the request on old \nand new databases. Requests that rely on this index execute more than 10 \ntimes longer. Planner indeed used Index Scan before, but now it doesn't.\n\nSo, from my perspective, the only reason we experience that is database \nlogic change. 
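As a rough sketch of how one could try to expose that correlation to the planner via the extended statistics Tom mentioned (the statistics name below is made up; columns repo_id/tenant_id as in this thread; not verified on our data):

```sql
-- Hypothetical: let ANALYZE collect multi-column stats on the two columns.
CREATE STATISTICS indexed_commit_repo_tenant (dependencies, ndistinct)
    ON repo_id, tenant_id FROM indexed_commit;
ANALYZE indexed_commit;

-- Then inspect what was recorded:
SELECT statistics_name, n_distinct, dependencies
FROM pg_stats_ext
WHERE statistics_name = 'indexed_commit_repo_tenant';
```

A strong functional dependency of tenant_id on repo_id reported there would back that theory.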
I think we could probably try to reproduce the issue on \ndifferent Postgres versions and find the specific version that causes this.\n\n> Adding tenant_id is going to bloat your indexes quite a bit,\n> so I wouldn't do that except in cases where you've demonstrated\n> it's important.\n\nAny recommendations from the Postgres team on how to use the indexes \nunder RLS would help a lot here, but I didn't find them.\n\nKind regards,\n\nAlexander\n\nOn 13.10.2023 22:26, Tom Lane wrote:\n> Alexander Okulovich <[email protected]> writes:\n>> Recently, we upgraded the AWS RDS instance from Postgres 12.14 to 15.4\n>> and noticed extremely high disk consumption on the following query\n>> execution:\n>> select (exists (select 1 as \"one\" from \"public\".\"indexed_commit\" where\n>> \"public\".\"indexed_commit\".\"repo_id\" in (964992,964994,964999, ...);\n>> For some reason, the query planner starts using Seq Scan instead of the\n>> index on the \"repo_id\" column when requesting under user limited with\n>> RLS. On prod, it happens when there are more than 316 IDs in the IN part\n>> of the query, on stage - 3. If we execute the request from Superuser,\n>> the planner always uses the \"repo_id\" index.\n> The superuser bypasses the RLS policy. When that's enforced, the\n> query can no longer use an index-only scan (because it needs to fetch\n> tenant_id too). Moreover, it may be that only a small fraction of the\n> rows fetched via the index will satisfy the RLS condition. So the\n> estimated cost of an indexscan query could be high enough to persuade\n> the planner that a seqscan is a better idea.\n>\n>> Luckily, we can easily reproduce this on our stage database (which is\n>> smaller). 
If we add a multicolumn \"repo_id, tenant_id\" index, the\n>> planner uses it (Index Only Scan) with any IN params count under RLS.\n> Yeah, that would be the obvious way to ameliorate both problems.\n>\n> If in fact you were getting decent performance from an indexscan plan\n> before, the only explanation I can think of is that the repo_ids you\n> are querying for are correlated with the tenant_id, so that the RLS\n> filter doesn't eliminate very many rows from the index result. The\n> planner wouldn't realize that by default, but if you create extended\n> statistics on repo_id and tenant_id then it might do better. Still,\n> you probably want the extra index.\n>\n>> Could you please clarify if this is a Postgres bug or not?\n> You haven't shown any evidence suggesting that.\n>\n>> Should we\n>> include the \"tenant_id\" column in all our indexes to make them work\n>> under RLS?\n> Adding tenant_id is going to bloat your indexes quite a bit,\n> so I wouldn't do that except in cases where you've demonstrated\n> it's important.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:07:38 +0200",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Alexander Okulovich <[email protected]> writes:\n> We used the product with the default settings. The requests are simple. \n> We didn't change the hardware (actually, we use even more performant \n> hardware because of that issue) and DDL. I've checked the request on old \n> and new databases. Requests that rely on this index execute more than 10 \n> times longer. Planner indeed used Index Scan before, but now it doesn't.\n\n> So, from my perspective, the only reason we experience that is database \n> logic change.\n\n[ shrug... ] Maybe, but it's still not clear if it's a bug, or an\nintentional change, or just a cost estimate that was on the hairy\nedge before and your luck ran out.\n\nIf you could provide a self-contained test case that performs 10x worse\nunder v15 than v12, we'd surely take a look at it. But with the\ninformation you've given so far, little is possible beyond speculation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:35:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Alexander!\nApart from the problem you are writing about I'd like to ask you to explain\nhow you interpret counted frac_MCV - for me it has no sense at all to\nsummarize most_common_freqs.\nPlease rethink it and explain what was the idea of such SUM ? I understand\nthat it can be some measure for ratio of NULL values but only in some cases\nwhen n_distinct is small.\n\nregards\n\n> Statistics: n_distinct, MCV, histogram\n>>\n>> Useful to check statistics leading to bad join plan. SELECT (SELECT\n>> sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname,\n>> inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv,\n>> array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE\n>> attname='...' AND tablename='...' ORDER BY 1 DESC;\n>>\n>> Returns 0 rows.\n>>\n>>\n>> Kind regards,\n>>\n>> Alexander\n>>\n>",
"msg_date": "Thu, 19 Oct 2023 09:43:44 +0200",
"msg_from": "Tomek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Tomek,\n\nUnfortunately, I didn't dig into this. This request is recommended to \nprovide when describing \n<https://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct,_MCV,_histogram> \nslow query issues, but looks like it relates to JOINs in the query, \nwhich we don't have.\n\nKind regards,\n\nAlexander\n\nOn 19.10.2023 09:43, Tomek wrote:\n> Hi Alexander!\n> Apart from the problem you are writing about I'd like to ask you to \n> explain how you interpret counted frac_MCV - for me it has no sense at \n> all to summarize most_common_freqs.\n> Please rethink it and explain what was the idea of such SUM ? I \n> understand that it can be some measure for ratio of NULL values but \n> only in some cases when n_distinct is small.\n>\n> regards\n>\n>>\n>> Statistics: n_distinct, MCV, histogram\n>>\n>> Useful to check statistics leading to bad join plan. SELECT\n>> (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\n>> tablename, attname, inherited, null_frac, n_distinct,\n>> array_length(most_common_vals,1) n_mcv,\n>> array_length(histogram_bounds,1) n_hist, correlation FROM\n>> pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1\n>> DESC;\n>>\n>> Returns 0 rows.\n>>\n>>\n>> Kind regards,\n>>\n>> Alexander\n>>",
"msg_date": "Thu, 19 Oct 2023 11:58:30 +0200",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Tom,\n\nI've attempted to reproduce this on my PC in Docker from the stage \ndatabase dump, but no luck. The first query execution on Postgres 15 \nbehaves like on the real stage, but subsequent ones use the index. Also, \nthey execute much faster. Looks like the hardware and(or) the data \nstructure on disk matters.\n\nHere is the Docker Compose sample config:\n\n> version: '2.4'\n>\n> services:\n>   database-15:\n>     image: postgres:15.4\n>     ports:\n>       - \"7300:5432\"\n>     environment:\n>       POSTGRES_DB: stage_db\n>       POSTGRES_USER: stage\n>       POSTGRES_PASSWORD: stage\n>     volumes:\n>       - \"./init.sql:/docker-entrypoint-initdb.d/init.sql\"\n>       - \"./pgdb/aws-15:/var/lib/postgresql/data\"\n>     mem_limit: 512M\n>     cpus: 2\n>     blkio_config:\n>       device_read_bps:\n>         - path: /dev/nvme0n1\n>           rate: '10mb'\n>       device_read_iops:\n>         - path: /dev/nvme0n1\n>           rate: 2000\n>       device_write_bps:\n>         - path: /dev/nvme0n1\n>           rate: '10mb'\n>       device_write_iops:\n>         - path: /dev/nvme0n1\n>           rate: 2000\n\nI performed tests only with CPU and memory limits. If I try to limit the \ndisk (blkio_config), my system hangs on container startup after a while.\nCould you please share your thoughts on how to create such a \nself-contained test case.\n\nKind regards,\n\nAlexander\n\nOn 18.10.2023 22:35, Tom Lane wrote:\n> If you could provide a self-contained test case that performs 10x \n> worse under v15 than v12, we'd surely take a look at it. But with the \n> information you've given so far, little is possible beyond speculation. ",
"msg_date": "Thu, 26 Oct 2023 15:47:03 +0200",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Alexander Okulovich <[email protected]> writes:\n> I've attempted to reproduce this on my PC in Docker from the stage \n> database dump, but no luck. The first query execution on Postgres 15 \n> behaves like on the real stage, but subsequent ones use the index.\n\nCan you force it in either direction with \"set enable_seqscan = off\"\n(resp. \"set enable_indexscan = off\")? If so, how do the estimated\ncosts compare for the two plan shapes?\n\n> Also, \n> they execute much faster. Looks like the hardware and(or) the data \n> structure on disk matters.\n\nMaybe your prod installation has a bloated index, and that's driving\nup the estimated cost enough to steer the planner away from it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Oct 2023 10:09:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Hi Tom,\n\n> Can you force it in either direction with \"set enable_seqscan = off\"\n> (resp. \"set enable_indexscan = off\")? If so, how do the estimated\n> costs compare for the two plan shapes?\nHere are the results from the prod instance:\n\nseqscan off <https://explain.depesz.com/s/9AWx>\n\nindexscan_off <https://explain.depesz.com/s/mTU2>\n\nJust noticed that the WHEN clause differs from the initial one (392 ids \nunder RLS). Probably, this is why the execution time isn't so \ncatastrophic. Please let me know if this matters, and I'll rerun this \nwith the initial request.\n\nSpeaking of the stage vs local Docker Postgres instance, the execution \ntime on stage is so short (0.1 ms with seq scan, 0.195 with index scan) \nthat we probably should not consider them. But I'll execute the requests \nif it's necessary.\n\n> Maybe your prod installation has a bloated index, and that's driving\n> up the estimated cost enough to steer the planner away from it.\nWe tried to make REINDEX CONCURRENTLY on a prod copy, but the planner \nstill used Seq Scan instead of Index Scan afterward.\n\nKind regards,\n\nAlexander",
"msg_date": "Tue, 31 Oct 2023 17:01:29 +0100",
"msg_from": "Alexander Okulovich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 15 SELECT query doesn't use index under RLS"
},
{
"msg_contents": "Relatively new to Postgres. Running into a locking situation and I need to make sure I understand output. I found this query to show a lock tree:\r\n\r\nwldomart01a=> WITH\r\nwldomart01a-> RECURSIVE l AS (\r\nwldomart01a(> SELECT pid, locktype, mode, granted,\r\nwldomart01a(> ROW(locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid) obj\r\nwldomart01a(> FROM pg_locks),\r\nwldomart01a-> pairs AS (\r\nwldomart01a(> SELECT w.pid waiter, l.pid locker, l.obj, l.mode\r\nwldomart01a(> FROM l w\r\nwldomart01a(> JOIN l\r\nwldomart01a(> ON l.obj IS NOT DISTINCT FROM w.obj\r\nwldomart01a(> AND l.locktype=w.locktype\r\nwldomart01a(> AND NOT l.pid=w.pid\r\nwldomart01a(> AND l.granted\r\nwldomart01a(> WHERE NOT w.granted),\r\nwldomart01a-> tree AS (\r\nwldomart01a(> SELECT l.locker pid, l.locker root, NULL::record obj, NULL AS mode, 0 lvl, locker::text path, array_agg(l.locker) OVER () all_pids\r\nwldomart01a(> FROM ( SELECT DISTINCT locker FROM pairs l WHERE NOT EXISTS (SELECT 1 FROM pairs WHERE waiter=l.locker) ) l\r\nwldomart01a(> UNION ALL\r\nwldomart01a(> SELECT w.waiter pid, tree.root, w.obj, w.mode, tree.lvl+1, tree.path||'.'||w.waiter, all_pids || array_agg(w.waiter) OVER ()\r\nwldomart01a(> FROM tree\r\nwldomart01a(> JOIN pairs w\r\nwldomart01a(> ON tree.pid=w.locker\r\nwldomart01a(> AND NOT w.waiter = ANY ( all_pids ))\r\nwldomart01a-> SELECT\r\nwldomart01a-> path, repeat(' .', lvl)||' '|| tree.pid as pid_tree, tree.pid,\r\nwldomart01a-> (clock_timestamp() - a.xact_start)::interval(3) AS ts_age,\r\nwldomart01a-> replace(a.state, 'idle in transaction', 'idletx') state,\r\nwldomart01a-> wait_event_type wait_type,\r\nwldomart01a-> wait_event,\r\nwldomart01a-> (clock_timestamp() - state_change)::interval(3) AS change_age,\r\nwldomart01a-> lvl,\r\nwldomart01a-> (SELECT count(*) FROM tree p WHERE p.path ~ ('^'||tree.path) AND NOT p.path=tree.path) blocked,\r\nwldomart01a-> repeat(' .', lvl)||' '||left(query,100) 
query\r\nwldomart01a-> FROM tree\r\nwldomart01a-> JOIN pg_stat_activity a\r\nwldomart01a-> USING (pid)\r\nwldomart01a-> ORDER BY path;\r\n path | pid_tree | pid | ts_age | state | wait_type | wait_event | change_age | lvl | blocked | query\r\n-----------+----------+------+--------------+--------+-----------+---------------+--------------+-----+---------+------------------------------------\r\n3740 | 3740 | 3740 | 01:23:03.294 | idletx | Client | ClientRead | 00:00:00.004 | 0 | 1 | update \"wln_mart\".\"ee_fact\" set +\r\n | | | | | | | | | | \"changed_on\" = $1 +\r\n | | | | | | | | | | where \"ee_fact_id\" = $2\r\n3740.3707 | . 3707 | 3707 | 01:23:03.294 | active | Lock | transactionid | 01:23:03.29 | 1 | 0 | . update \"wln_mart\".\"ee_fact\" set+\r\n | | | | | | | | | | \"changed_on\" = $1 +\r\n | | | | | | | | | | where \"ee_fact_id\" = $2\r\n(2 rows)\r\n\r\nAbove I can see PID 3740 is blocking PID 3707. The PK on table wln_mart.ee_fact is ee_fact_id. I assume PID 3740 has updated a row (but not committed it yet) that PID 3707 is also trying to update. 
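As a shorter cross-check of who blocks whom, a sketch using the built-in pg_blocking_pids() (available since 9.6):

```sql
-- List each waiting backend together with the PIDs currently blocking it.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```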
But I am being told those 2 sessions should not be trying to process the same PK rows.\r\n\r\nHere is output from pg_locks for those 2 sessions:\r\n\r\nwldomart01a=> select * from pg_locks where pid in (3740,3707) order by pid;\r\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart\r\n---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------\r\ntransactionid | | | | | | 251189989 | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\nrelation | 91999 | 94619 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94615 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94611 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94610 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94609 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94569 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 93050 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nvirtualxid | | | | | 54/196626 | | | | | 54/196626 | 3707 | ExclusiveLock | t | t |\r\ntransactionid | | | | | | 251189988 | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\ntransactionid | | | | | | 251189986 | | | | 54/196626 | 3707 | ShareLock | f | f | 2023-10-31 14:40:21.837507-05\r\ntuple | 91999 | 93050 | 0 | 1 | | | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\nrelation | 91999 | 308853 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94693 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94693 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t 
|\r\nrelation | 91999 | 94619 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94615 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94611 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94610 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94609 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94569 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 93050 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nvirtualxid | | | | | 60/259887 | | | | | 60/259887 | 3740 | ExclusiveLock | t | t |\r\ntransactionid | | | | | | 251189986 | | | | 60/259887 | 3740 | ExclusiveLock | t | f |\r\nrelation | 91999 | 308853 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\n(25 rows)\r\n\r\n\r\nI believe the locktype relation is pointing to the table and the indexes on the table. Which data point(s) above point to this being row-level locking and not some other level of locking? I am very familiar with Oracle locking and different levels and am trying to quickly get up-to-speed on Postgres locking. I am continuing to google for this but figured I could post this to see if someone can provide a quick response.\r\n\r\nThanks\r\nSteve\r\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html\r\n",
"msg_date": "Tue, 31 Oct 2023 21:12:24 +0000",
"msg_from": "\"Dirschel, Steve\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres Locking"
},
{
"msg_contents": "Hi Steve,\r\n\r\nIts literally the same query. I would try to extract the actual named object that is in the lock to verify. Is there any partitioning? An explain plan may be helpful.\r\n\r\n\r\n\r\nThank you\r\nTravis\r\n\r\nFrom: Dirschel, Steve <[email protected]>\r\nSent: Tuesday, October 31, 2023 4:12 PM\r\nTo: [email protected]\r\nCc: Wong, Kam Fook (TR Technology) <[email protected]>\r\nSubject: Postgres Locking\r\n\r\n\r\n***ATTENTION!! This message originated from outside of Circana. Treat hyperlinks and attachments in this email with caution.***\r\n\r\nRelatively new to Postgres. Running into a locking situation and I need to make sure I understand output. I found this query to show a lock tree:\r\n\r\nwldomart01a=> WITH\r\nwldomart01a-> RECURSIVE l AS (\r\nwldomart01a(> SELECT pid, locktype, mode, granted,\r\nwldomart01a(> ROW(locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid) obj\r\nwldomart01a(> FROM pg_locks),\r\nwldomart01a-> pairs AS (\r\nwldomart01a(> SELECT w.pid waiter, l.pid locker, l.obj, l.mode\r\nwldomart01a(> FROM l w\r\nwldomart01a(> JOIN l\r\nwldomart01a(> ON l.obj IS NOT DISTINCT FROM w.obj\r\nwldomart01a(> AND l.locktype=w.locktype\r\nwldomart01a(> AND NOT l.pid=w.pid\r\nwldomart01a(> AND l.granted\r\nwldomart01a(> WHERE NOT w.granted),\r\nwldomart01a-> tree AS (\r\nwldomart01a(> SELECT l.locker pid, l.locker root, NULL::record obj, NULL AS mode, 0 lvl, locker::text path, array_agg(l.locker) OVER () all_pids\r\nwldomart01a(> FROM ( SELECT DISTINCT locker FROM pairs l WHERE NOT EXISTS (SELECT 1 FROM pairs WHERE waiter=l.locker) ) l\r\nwldomart01a(> UNION ALL\r\nwldomart01a(> SELECT w.waiter pid, tree.root, w.obj, w.mode, tree.lvl+1, tree.path||'.'||w.waiter, all_pids || array_agg(w.waiter) OVER ()\r\nwldomart01a(> FROM tree\r\nwldomart01a(> JOIN pairs w\r\nwldomart01a(> ON tree.pid=w.locker\r\nwldomart01a(> AND NOT w.waiter = ANY ( all_pids ))\r\nwldomart01a-> SELECT\r\nwldomart01a-> 
path, repeat(' .', lvl)||' '|| tree.pid as pid_tree, tree.pid,\r\nwldomart01a-> (clock_timestamp() - a.xact_start)::interval(3) AS ts_age,\r\nwldomart01a-> replace(a.state, 'idle in transaction', 'idletx') state,\r\nwldomart01a-> wait_event_type wait_type,\r\nwldomart01a-> wait_event,\r\nwldomart01a-> (clock_timestamp() - state_change)::interval(3) AS change_age,\r\nwldomart01a-> lvl,\r\nwldomart01a-> (SELECT count(*) FROM tree p WHERE p.path ~ ('^'||tree.path) AND NOT p.path=tree.path) blocked,\r\nwldomart01a-> repeat(' .', lvl)||' '||left(query,100) query\r\nwldomart01a-> FROM tree\r\nwldomart01a-> JOIN pg_stat_activity a\r\nwldomart01a-> USING (pid)\r\nwldomart01a-> ORDER BY path;\r\n path | pid_tree | pid | ts_age | state | wait_type | wait_event | change_age | lvl | blocked | query\r\n-----------+----------+------+--------------+--------+-----------+---------------+--------------+-----+---------+------------------------------------\r\n3740 | 3740 | 3740 | 01:23:03.294 | idletx | Client | ClientRead | 00:00:00.004 | 0 | 1 | update \"wln_mart\".\"ee_fact\" set +\r\n | | | | | | | | | | \"changed_on\" = $1 +\r\n | | | | | | | | | | where \"ee_fact_id\" = $2\r\n3740.3707 | . 3707 | 3707 | 01:23:03.294 | active | Lock | transactionid | 01:23:03.29 | 1 | 0 | . update \"wln_mart\".\"ee_fact\" set+\r\n | | | | | | | | | | \"changed_on\" = $1 +\r\n | | | | | | | | | | where \"ee_fact_id\" = $2\r\n(2 rows)\r\n\r\nAbove I can see PID 3740 is blocking PID 3707. The PK on table wln_mart.ee_fact is ee_fact_id. I assume PID 3740 has updated a row (but not committed it yet) that PID 3707 is also trying to update. 
But I am being told those 2 sessions should not be trying to process the same PK rows.\r\n\r\nHere is output from pg_locks for those 2 sessions:\r\n\r\nwldomart01a=> select * from pg_locks where pid in (3740,3707) order by pid;\r\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart\r\n---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------\r\ntransactionid | | | | | | 251189989 | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\nrelation | 91999 | 94619 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94615 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94611 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94610 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94609 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94569 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 93050 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nvirtualxid | | | | | 54/196626 | | | | | 54/196626 | 3707 | ExclusiveLock | t | t |\r\ntransactionid | | | | | | 251189988 | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\ntransactionid | | | | | | 251189986 | | | | 54/196626 | 3707 | ShareLock | f | f | 2023-10-31 14:40:21.837507-05\r\ntuple | 91999 | 93050 | 0 | 1 | | | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\r\nrelation | 91999 | 308853 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94693 | | | | | | | | 54/196626 | 3707 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94693 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t 
|\r\nrelation | 91999 | 94619 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94615 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94611 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94610 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94609 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 94569 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nrelation | 91999 | 93050 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\nvirtualxid | | | | | 60/259887 | | | | | 60/259887 | 3740 | ExclusiveLock | t | t |\r\ntransactionid | | | | | | 251189986 | | | | 60/259887 | 3740 | ExclusiveLock | t | f |\r\nrelation | 91999 | 308853 | | | | | | | | 60/259887 | 3740 | RowExclusiveLock | t | t |\r\n(25 rows)\r\n\r\n\r\nI believe the locktype relation is pointing to the table and the indexes on the table. Which data point(s) above point to this being row-level locking and not some other level of locking? I am very familiar with Oracle locking and different levels and am trying to quickly get up-to-speed on Postgres locking. I am continuing to google for this but figured I could post this to see if someone can provide a quick response.\r\n\r\nThanks\r\nSteve\r\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html\r\n",
"msg_date": "Tue, 31 Oct 2023 21:43:57 +0000",
"msg_from": "\"Smith, Travis\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Postgres Locking"
},
{
"msg_contents": "\"Dirschel, Steve\" <[email protected]> writes:\n> Above I can see PID 3740 is blocking PID 3707. The PK on table\n> wln_mart.ee_fact is ee_fact_id. I assume PID 3740 has updated a row\n> (but not committed it yet) that PID 3707 is also trying to update.\n\nHmm. We can see that 3707 is waiting for 3740 to commit, because it's\ntrying to take ShareLock on 3740's transactionid:\n\n> transactionid | | | | | | 251189986 | | | | 54/196626 | 3707 | ShareLock | f | f | 2023-10-31 14:40:21.837507-05\n\n251189986 is indeed 3740's, because it has ExclusiveLock on that:\n\n> transactionid | | | | | | 251189986 | | | | 60/259887 | 3740 | ExclusiveLock | t | f |\n\nThere are many reasons why one xact might be waiting on another to commit,\nnot only that they tried to update the same tuple. However, in this case\nI suspect that that is the problem, because we can also see that 3707 has\nan exclusive tuple-level lock:\n\n> tuple | 91999 | 93050 | 0 | 1 | | | | | | 54/196626 | 3707 | ExclusiveLock | t | f |\n\nThat kind of lock would only be held while queueing to modify a tuple.\n(Basically, it establishes that 3707 is next in line, in case some\nother transaction comes along and also wants to modify the same tuple.)\nIt should be released as soon as the tuple update is made, so 3707 is\ndefinitely stuck waiting to modify a tuple, and AFAICS it must be stuck\nbecause of 3740's uncommitted earlier update.\n\n> But I am being told those 2 sessions should not be trying to process the\n> same PK rows.\n\nPerhaps \"should not\" is wrong. Or it could be some indirect update\n(caused by a foreign key with CASCADE, or the like).\n\nYou have here the relation OID (try \"SELECT 93050::regclass\" to\ndecode it) and the tuple ID, so it should work to do\n\nSELECT * FROM that_table WHERE ctid = '(0,1)';\n\nto see the previous state of the problematic tuple. Might\nhelp to decipher the problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Oct 2023 17:45:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Locking"
},
{
"msg_contents": "Hi,\n\nAsking for help with a JDBC related issue.\nEnvironment: Linux 7.9 PG 14.9 , very busy PG Server.\n\nA big query - 3 unions and about 10 joins runs :\n- 70ms on psql , DBeaver with JDBC 42 and in our Server using old JDBC 9.2\n- 2500 ms in our Server using new JDBC 42 driver. ( and this is running many times) \n\nQuestion: Is there a structured way to identify optimization setup ( Planner Method s ) changes?\nAre there any known changes specific to JDBC 42. \nCapture a vector of session optimization setup? \nAny other Idea ?\n\nThanks\n\nDanny\n\n\n\n",
"msg_date": "Sat, 4 Nov 2023 19:08:22 +0000",
"msg_from": "\"Abraham, Danny\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Performance down with JDBC 42"
},
{
"msg_contents": "On Sat, 2023-11-04 at 19:08 +0000, Abraham, Danny wrote:\n> Asking for help with a JDBC related issue.\n> Environment: Linux 7.9 PG 14.9 , very busy PG Server.\n> \n> A big query - 3 unions and about 10 joins runs :\n> - 70ms on psql , DBeaver with JDBC 42 and in our Server using old JDBC 9.2\n> - 2500 ms in our Server using new JDBC 42 driver. ( and this is running many times) \n> \n> Question: Is there a structured way to identify optimization setup ( Planner Method s ) changes?\n> Are there any known changes specific to JDBC 42. \n\nWhat I would do is enable auto_explain and look at the execution plan\nwhen the statement is run by the JDBC driver. Then you can compare the\nexecution plans and spot the difference.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 04 Nov 2023 22:07:19 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance down with JDBC 42"
},
{
    "msg_contents": "Thanks Laurenz,\r\n\r\nTraced two huge plans. They differ.\r\nThe fast one does use Materialize and Memoize (the psql).\r\nIs there something in JDBC 42 that blocks these algorithms?\r\n\r\nThanks again\r\n\r\nDanny\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Saturday, November 4, 2023 11:07 PM\r\nTo: Abraham, Danny <[email protected]>; psql-performance <[email protected]>\r\nSubject: [EXTERNAL] Re: Performance down with JDBC 42\r\n\r\nOn Sat, 2023-11-04 at 19:08 +0000, Abraham, Danny wrote:\r\n> Asking for help with a JDBC related issue.\r\n> Environment: Linux 7.9 PG 14.9 , very busy PG Server.\r\n> \r\n> A big query - 3 unions and about 10 joins runs :\r\n> - 70ms on psql , DBeaver with JDBC 42 and in our Server using old \r\n> JDBC 9.2\r\n> - 2500 ms in our Server using new JDBC 42 driver. ( and this is \r\n> running many times)\r\n> \r\n> Question: Is there a structured way to identify optimization setup ( Planner Method s ) changes?\r\n> Are there any known changes specific to JDBC 42. \r\n\r\nWhat I would do is enable auto_explain and look at the execution plan when the statement is run by the JDBC driver. Then you can compare the execution plans and spot the difference.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Sun, 5 Nov 2023 16:20:07 +0000",
"msg_from": "\"Abraham, Danny\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
"msg_contents": "\n\nAm 05.11.23 um 17:20 schrieb Abraham, Danny:\n> Thanks Laurenz,\n>\n> Traced two huge plans. They differ.\n> The fast one does use Materialize and Memoize (the psql).\n> Is there something in JDBC 42 that blocks these algoruthms?\n\n*maybe* the driver changed some settings. You can check it with\n\nselect name, setting from pg_settings where name ~ 'enable';\n\nusing the JDBC-connection.\n\n\nRegards, Andreas\n\n\n>\n> Thanks again\n>\n> Danny\n>\n> -----Original Message-----\n> From: Laurenz Albe <[email protected]>\n> Sent: Saturday, November 4, 2023 11:07 PM\n> To: Abraham, Danny <[email protected]>; psql-performance <[email protected]>\n> Subject: [EXTERNAL] Re: Performance down with JDBC 42\n>\n> On Sat, 2023-11-04 at 19:08 +0000, Abraham, Danny wrote:\n>> Asking for help with a JDBC related issue.\n>> Environment: Linux 7.9 PG 14.9 , very busy PG Server.\n>>\n>> A big query - 3 unions and about 10 joins runs :\n>> - 70ms on psql , DBeaver with JDBC 42 and in our Server using old\n>> JDBC 9.2\n>> - 2500 ms in our Server using new JDBC 42 driver. ( and this is\n>> running many times)\n>>\n>> Question: Is there a structured way to identify optimization setup ( Planner Method s ) changes?\n>> Are there any known changes specific to JDBC 42.\n> What I would do is enable auto_explain and look at the execution plan when the statement is run by the JDBC driver. Then you can compare the execution plans and spot the difference.\n>\n> Yours,\n> Laurenz Albe\n\n-- \nAndreas Kretschmer - currently still (garden leave)\nTechnical Account Manager (TAM)\nwww.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 5 Nov 2023 17:52:13 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
"msg_contents": "Are you absolutely sure that the two databases you’re comparing the executing with are identical, and that the objects involved in the query are physically and logically identical?\n\nThe planning is done based on cost/statistics of the objects. If the statistics are different, the planner may come up with another plan.\n\nFrits\n\n\n\n> Op 5 nov 2023 om 17:20 heeft Abraham, Danny <[email protected]> het volgende geschreven:\n> \n> Thanks Laurenz,\n> \n> Traced two huge plans. They differ.\n> The fast one does use Materialize and Memoize (the psql).\n> Is there something in JDBC 42 that blocks these algoruthms?\n> \n> Thanks again\n> \n> Danny\n> \n> -----Original Message-----\n> From: Laurenz Albe <[email protected]>\n> Sent: Saturday, November 4, 2023 11:07 PM\n> To: Abraham, Danny <[email protected]>; psql-performance <[email protected]>\n> Subject: [EXTERNAL] Re: Performance down with JDBC 42\n> \n>> On Sat, 2023-11-04 at 19:08 +0000, Abraham, Danny wrote:\n>> Asking for help with a JDBC related issue.\n>> Environment: Linux 7.9 PG 14.9 , very busy PG Server.\n>> \n>> A big query - 3 unions and about 10 joins runs :\n>> - 70ms on psql , DBeaver with JDBC 42 and in our Server using old\n>> JDBC 9.2\n>> - 2500 ms in our Server using new JDBC 42 driver. ( and this is\n>> running many times)\n>> \n>> Question: Is there a structured way to identify optimization setup ( Planner Method s ) changes?\n>> Are there any known changes specific to JDBC 42.\n> \n> What I would do is enable auto_explain and look at the execution plan when the statement is run by the JDBC driver. Then you can compare the execution plans and spot the difference.\n> \n> Yours,\n> Laurenz Albe\n\n\n",
"msg_date": "Sun, 5 Nov 2023 18:34:57 +0100",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
    "msg_contents": "On Sun, Nov 5, 2023 at 11:20 AM Abraham, Danny <[email protected]>\nwrote:\n\n> Thanks Laurenz,\n>\n> Traced two huge plans. They differ.\n> The fast one does use Materialize and Memoize (the psql).\n> Is there something in JDBC 42 that blocks these algoruthms?\n\n\nDirectly blocking those is not likely. Maybe the way the drivers fetch\npartial results is different, such that with one the planner knows to\nexpect only partial results to be fetched and with the other it does not.\nSo in one case it chooses the fast-start plan, and in the other it\ndoesn't. But it will be hard to get anywhere if you just dribble\ninformation at us a bit at a time. Can you come up with a self-contained\ntest case? Or at least show the entirety of both plans?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 5 Nov 2023 14:11:46 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
"msg_contents": "Thanks for the help.\r\nBoth plans refer to the same DB.\r\n\r\n#1 – Fast – using psql or old JDBC driver\r\n==>\r\nSort (cost=13113.27..13113.33 rows=24 width=622)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Sort Key: dm.calname, dm.jobyear\r\n -> HashAggregate (cost=13112.24..13112.48 rows=24 width=622)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Group Key: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Append (cost=4603.96..13112.00 rows=24 width=622)\r\n -> Unique (cost=4603.96..4604.20 rows=19 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Sort (cost=4603.96..4604.01 rows=19 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Sort Key: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Nested Loop (cost=0.00..4603.56 rows=19 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, (dm.daymask)::character varying(400)\r\n Join Filter: (((dm.calname)::text = (jd.dayscal)::text) OR ((dm.calname)::text = (jd.weekcal)::text) OR ((dm.calname)::text = (jd.confcal)::text))\r\n -> Seq Scan on public.cms_datemm dm (cost=0.00..16.33 rows=171 width=389)\r\n Output: dm.calname, dm.jobyear, dm.daymask, dm.caltype, dm.caldesc\r\n Filter: ((dm.jobyear >= '2021'::bpchar) AND (dm.jobyear <= '2025'::bpchar))\r\n -> Materialize (cost=0.00..4559.84 rows=8 width=3)\r\n Output: jd.dayscal, jd.weekcal, jd.confcal\r\n -> Seq Scan on public.cms_jobdef jd (cost=0.00..4559.80 rows=8 width=3)\r\n Output: jd.dayscal, jd.weekcal, jd.confcal\r\n Filter: (((jd.schedtab)::text = 'PZL-ZETA_REDIS_UTILITY_PSE'::text) OR ((jd.schedtab)::text ~~ 'PZL-ZETA_REDIS_UTILITY_PSE/%'::text))\r\n -> Unique (cost=3857.44..3857.46 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character 
varying(400))\r\n -> Sort (cost=3857.44..3857.45 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character varying(400))\r\n Sort Key: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character varying(400))\r\n -> Nested Loop (cost=0.30..3857.43 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, (dm_1.daymask)::character varying(400)\r\n Join Filter: (((dm_1.calname)::text = (tag.dayscal)::text) OR ((dm_1.calname)::text = (tag.weekcal)::text) OR ((dm_1.calname)::text = (tag.confcal)::text))\r\n -> Nested Loop (cost=0.30..3838.11 rows=1 width=3)\r\n Output: tag.dayscal, tag.weekcal, tag.confcal\r\n Inner Unique: true\r\n -> Seq Scan on public.cms_tag tag (cost=0.00..30.96 rows=1396 width=7)\r\n Output: tag.tagname, tag.groupid, tag.maxwait, tag.cal_andor, tag.monthstr, tag.dayscal, tag.weekcal, tag.confcal, tag.shift, tag.retro, tag.daystr, tag.wdaystr, tag.tagfrom, tag.tagtill, tag.roworder, tag.exclude_rbc\r\n -> Memoize (cost=0.30..4.02 rows=1 width=4)\r\n Output: jd_1.jobno\r\n Cache Key: tag.groupid\r\n Cache Mode: logical\r\n -> Index Scan using job on public.cms_jobdef jd_1 (cost=0.29..4.01 rows=1 width=4)\r\n Output: jd_1.jobno\r\n Index Cond: (jd_1.jobno = tag.groupid)\r\n Filter: (((jd_1.schedtab)::text = 'PZL-ZETA_REDIS_UTILITY_PSE'::text) OR ((jd_1.schedtab)::text ~~ 'PZL-ZETA_REDIS_UTILITY_PSE/%'::text))\r\n -> Seq Scan on public.cms_datemm dm_1 (cost=0.00..16.33 rows=171 width=389)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.daymask, dm_1.caltype, dm_1.caldesc\r\n Filter: ((dm_1.jobyear >= '2021'::bpchar) AND (dm_1.jobyear <= '2025'::bpchar))\r\n -> Unique (cost=4649.93..4649.98 rows=4 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n -> Sort (cost=4649.93..4649.94 rows=4 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n Sort Key: dm_2.calname, dm_2.jobyear, 
dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n -> Nested Loop (cost=0.56..4649.89 rows=4 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, (dm_2.daymask)::character varying(400)\r\n Join Filter: (((dm_2.calname)::text = (tag_1.dayscal)::text) OR ((dm_2.calname)::text = (tag_1.weekcal)::text) OR ((dm_2.calname)::text = (tag_1.confcal)::text))\r\n -> Seq Scan on public.cms_datemm dm_2 (cost=0.00..16.33 rows=171 width=389)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.daymask, dm_2.caltype, dm_2.caldesc\r\n Filter: ((dm_2.jobyear >= '2021'::bpchar) AND (dm_2.jobyear <= '2025'::bpchar))\r\n -> Materialize (cost=0.56..4626.72 rows=2 width=3)\r\n Output: tag_1.dayscal, tag_1.weekcal, tag_1.confcal\r\n -> Nested Loop (cost=0.56..4626.71 rows=2 width=3)\r\n Output: tag_1.dayscal, tag_1.weekcal, tag_1.confcal\r\n Inner Unique: true\r\n -> Nested Loop (cost=0.29..4626.32 rows=1 width=5)\r\n Output: tl.tagname\r\n -> Seq Scan on public.cms_jobdef jd_2 (cost=0.00..4559.80 rows=8 width=4)\r\n Output: jd_2.jobname, jd_2.jobno, jd_2.descript, jd_2.applic, jd_2.applgroup, jd_2.schedtab, jd_2.author, jd_2.owner, jd_2.priority, jd_2.critical, jd_2.cyclic, jd_2.retro, jd_2.autoarch, jd_2.taskclass, jd_2.cyclicint, jd_2.tasktype, jd_2.datemem, jd_2.nodegrp, jd_2.platform, jd_2.nodeid, jd_2.doclib, jd_2.docmem, jd_2.memlib, jd_2.memname, jd_2.overlib, jd_2.cmdline, jd_2.maxrerun, jd_2.maxdays, jd_2.maxruns, jd_2.fromtime, jd_2.until, jd_2.maxwait, jd_2.daystr, jd_2.wdaystr, jd_2.monthstr, jd_2.ajfsonstr, jd_2.conf, jd_2.unknowntim, jd_2.dayscal, jd_2.weekcal, jd_2.confcal, jd_2.cal_andor, jd_2.shift, jd_2.adjust_cond, jd_2.startendcycind, jd_2.creationuserid, jd_2.creationdatetime, jd_2.changeuserid, jd_2.changedatetime, jd_2.relationship, jd_2.groupid, jd_2.tabrowno, jd_2.multiagent, jd_2.appltype, jd_2.timezone, jd_2.statemsk, jd_2.applver, jd_2.timeref, jd_2.actfrom, jd_2.acttill, jd_2.cmver, jd_2.applform, jd_2.instream_ind, jd_2.instream_script, 
jd_2.run_times, jd_2.interval_sequence, jd_2.tolerance, jd_2.cyclic_type, jd_2.removeatonce, jd_2.dayskeepinnotok, jd_2.delay, jd_2.end_folder, jd_2.is_reference, jd_2.referenced_path\r\n Filter: (((jd_2.schedtab)::text = 'PZL-ZETA_REDIS_UTILITY_PSE'::text) OR ((jd_2.schedtab)::text ~~ 'PZL-ZETA_REDIS_UTILITY_PSE/%'::text))\r\n -> Index Scan using job_tag on public.cms_taglink tl (cost=0.29..8.30 rows=1 width=9)\r\n Output: tl.tagname, tl.groupid, tl.jobno, tl.roworder, tl.exclude_rbc\r\n Index Cond: (tl.jobno = jd_2.jobno)\r\n Filter: (tl.groupid = 0)\r\n -> Index Scan using gro_tag on public.cms_tag tag_1 (cost=0.28..0.39 rows=1 width=14)\r\n Output: tag_1.tagname, tag_1.groupid, tag_1.maxwait, tag_1.cal_andor, tag_1.monthstr, tag_1.dayscal, tag_1.weekcal, tag_1.confcal, tag_1.shift, tag_1.retro, tag_1.daystr, tag_1.wdaystr, tag_1.tagfrom, tag_1.tagtill, tag_1.roworder, tag_1.exclude_rbc\r\n Index Cond: ((tag_1.groupid = 0) AND ((tag_1.tagname)::text = (tl.tagname)::text))\r\n==>\r\nSlow – when using JDBC 42\r\n==>\r\nSort (cost=11316.99..11317.00 rows=3 width=622)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Sort Key: dm.calname, dm.jobyear\r\n -> HashAggregate (cost=11316.91..11316.94 rows=3 width=622)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Group Key: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Append (cost=10252.89..11316.88 rows=3 width=622)\r\n -> Unique (cost=10252.89..10252.92 rows=1 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Sort (cost=10252.89..10252.89 rows=3 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n Sort Key: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character varying(400))\r\n -> Nested Loop (cost=0.14..10252.86 rows=3 width=535)\r\n Output: dm.calname, dm.jobyear, dm.caltype, 
(dm.daymask)::character varying(400)\r\n Join Filter: (((dm.calname)::text = (jd.dayscal)::text) OR ((dm.calname)::text = (jd.weekcal)::text) OR ((dm.calname)::text = (jd.confcal)::text))\r\n -> Index Scan using calendar on public.cms_datemm dm (cost=0.14..14.38 rows=1 width=389)\r\n Output: dm.calname, dm.jobyear, dm.daymask, dm.caltype, dm.caldesc\r\n Index Cond: ((dm.jobyear >= ($3)::bpchar) AND (dm.jobyear <= ($4)::bpchar))\r\n -> Seq Scan on public.cms_jobdef jd (cost=0.00..10235.19 rows=188 width=3)\r\n Output: jd.dayscal, jd.weekcal, jd.confcal\r\n Filter: (((jd.schedtab)::text = ($1)::text) OR ((jd.schedtab)::text ~~ ($2)::text))\r\n -> Unique (cost=180.91..180.93 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character varying(400))\r\n -> Sort (cost=180.91..180.92 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character varying(400))\r\n Sort Key: dm_1.calname, dm_1.jobyear, dm_1.caltype, ((dm_1.daymask)::character varying(400))\r\n -> Nested Loop (cost=0.56..180.90 rows=1 width=535)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, (dm_1.daymask)::character varying(400)\r\n Inner Unique: true\r\n -> Nested Loop (cost=0.14..74.77 rows=18 width=393)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.caltype, dm_1.daymask, tag.groupid\r\n Join Filter: (((dm_1.calname)::text = (tag.dayscal)::text) OR ((dm_1.calname)::text = (tag.weekcal)::text) OR ((dm_1.calname)::text = (tag.confcal)::text))\r\n -> Index Scan using calendar on public.cms_datemm dm_1 (cost=0.14..14.38 rows=1 width=389)\r\n Output: dm_1.calname, dm_1.jobyear, dm_1.daymask, dm_1.caltype, dm_1.caldesc\r\n Index Cond: ((dm_1.jobyear >= ($7)::bpchar) AND (dm_1.jobyear <= ($8)::bpchar))\r\n -> Seq Scan on public.cms_tag tag (cost=0.00..35.96 rows=1396 width=7)\r\n Output: tag.tagname, tag.groupid, tag.maxwait, tag.cal_andor, tag.monthstr, tag.dayscal, tag.weekcal, tag.confcal, tag.shift, tag.retro, tag.daystr, 
tag.wdaystr, tag.tagfrom, tag.tagtill, tag.roworder, tag.exclude_rbc\r\n -> Index Scan using job on public.cms_jobdef jd_1 (cost=0.41..5.89 rows=1 width=4)\r\n Output: jd_1.jobno\r\n Index Cond: (jd_1.jobno = tag.groupid)\r\n Filter: (((jd_1.schedtab)::text = ($5)::text) OR ((jd_1.schedtab)::text ~~ ($6)::text))\r\n -> Unique (cost=882.97..882.99 rows=1 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n -> Sort (cost=882.97..882.98 rows=1 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n Sort Key: dm_2.calname, dm_2.jobyear, dm_2.caltype, ((dm_2.daymask)::character varying(400))\r\n -> Nested Loop (cost=67.06..882.96 rows=1 width=535)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, (dm_2.daymask)::character varying(400)\r\n Inner Unique: true\r\n -> Hash Join (cost=66.64..225.90 rows=104 width=393)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, dm_2.daymask, tl.jobno\r\n Hash Cond: ((tl.tagname)::text = (tag_1.tagname)::text)\r\n -> Bitmap Heap Scan on public.cms_taglink tl (cost=16.79..169.52 rows=1098 width=13)\r\n Output: tl.tagname, tl.groupid, tl.jobno, tl.roworder, tl.exclude_rbc\r\n Recheck Cond: (tl.groupid = 0)\r\n -> Bitmap Index Scan on tl_groupid (cost=0.00..16.52 rows=1098 width=0)\r\n Index Cond: (tl.groupid = 0)\r\n -> Hash (cost=49.82..49.82 rows=2 width=404)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, dm_2.daymask, tag_1.tagname, tag_1.groupid\r\n -> Nested Loop (cost=9.48..49.82 rows=2 width=404)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.caltype, dm_2.daymask, tag_1.tagname, tag_1.groupid\r\n Join Filter: (((dm_2.calname)::text = (tag_1.dayscal)::text) OR ((dm_2.calname)::text = (tag_1.weekcal)::text) OR ((dm_2.calname)::text = (tag_1.confcal)::text))\r\n -> Index Scan using calendar on public.cms_datemm dm_2 (cost=0.14..14.38 rows=1 width=389)\r\n Output: dm_2.calname, dm_2.jobyear, dm_2.daymask, 
dm_2.caltype, dm_2.caldesc\r\n Index Cond: ((dm_2.jobyear >= ($11)::bpchar) AND (dm_2.jobyear <= ($12)::bpchar))\r\n -> Bitmap Heap Scan on public.cms_tag tag_1 (cost=9.34..33.05 rows=137 width=18)\r\n Output: tag_1.tagname, tag_1.groupid, tag_1.maxwait, tag_1.cal_andor, tag_1.monthstr, tag_1.dayscal, tag_1.weekcal, tag_1.confcal, tag_1.shift, tag_1.retro, tag_1.daystr, tag_1.wdaystr, tag_1.tagfrom, tag_1.tagtill, tag_1.roworder, tag_1.exclude_rbc\r\n Recheck Cond: (tag_1.groupid = 0)\r\n -> Bitmap Index Scan on gro_tag (cost=0.00..9.30 rows=137 width=0)\r\n Index Cond: (tag_1.groupid = 0)\r\n -> Index Scan using job on public.cms_jobdef jd_2 (cost=0.41..6.32 rows=1 width=4)\r\n Output: jd_2.jobno\r\n Index Cond: (jd_2.jobno = tl.jobno)\r\n Filter: (((jd_2.schedtab)::text = ($9)::text) OR ((jd_2.schedtab)::text ~~ ($10)::text))\r\n==>\r\n",
"msg_date": "Sun, 5 Nov 2023 19:37:12 +0000",
"msg_from": "\"Abraham, Danny\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 08:37, Abraham, Danny <[email protected]> wrote:\n>\n> Both plans refer to the same DB.\n\nJDBC is making use of PREPARE statements, whereas psql, unless you're\nusing PREPARE is not.\n\n> #1 – Fast – using psql or old JDBC driver\n\nThe absence of any $1 type parameters here shows that's a custom plan\nthat's planned specifically using the parameter values given.\n\n> Slow – when using JDBC 42\n\nBecause this query has $1, $2, etc, that's a generic plan. When\nlooking up statistics histogram bounds and MCV slots cannot be\nchecked. Only ndistinct is used. If you have a skewed dataset, then\nthis might not be very good.\n\nYou might find things run better if you adjust postgresql.conf and set\nplan_cache_mode = force_custom_plan then select pg_reload_conf();\n\nPlease also check the documentation so that you understand the full\nimplications for that.\n\nDavid\n\n\n",
"msg_date": "Mon, 6 Nov 2023 08:47:05 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Performance down with JDBC 42"
},
{
"msg_contents": "Very good point from Danny: generic and custom plans.\n\nOne thing that is almost certainly not at play here, and is mentioned: there are some specific cases where the planner does not optimise for the query in total to be executed as fast/cheap as possible, but for the first few rows. One reason for that to happen is if a query is used as a cursor.\n\n(Warning: shameless promotion) I did a writeup on JDBC clientside/serverside prepared statements and custom and generic plans: https://dev.to/yugabyte/postgres-query-execution-jdbc-prepared-statements-51e2\nThe next obvious question then is if something material did change with JDBC for your old and new JDBC versions, I do believe the prepareThreshold did not change.\n\n\nFrits Hoogland\n\n\n\n\n> On 5 Nov 2023, at 20:47, David Rowley <[email protected]> wrote:\n> \n> On Mon, 6 Nov 2023 at 08:37, Abraham, Danny <[email protected]> wrote:\n>> \n>> Both plans refer to the same DB.\n> \n> JDBC is making use of PREPARE statements, whereas psql, unless you're\n> using PREPARE is not.\n> \n>> #1 – Fast – using psql or old JDBC driver\n> \n> The absence of any $1 type parameters here shows that's a custom plan\n> that's planned specifically using the parameter values given.\n> \n>> Slow – when using JDBC 42\n> \n> Because this query has $1, $2, etc, that's a generic plan. When\n> looking up statistics histogram bounds and MCV slots cannot be\n> checked. Only ndistinct is used. 
If you have a skewed dataset, then\n> this might not be very good.\n> \n> You might find things run better if you adjust postgresql.conf and set\n> plan_cache_mode = force_custom_plan then select pg_reload_conf();\n> \n> Please also check the documentation so that you understand the full\n> implications for that.\n> \n> David\n> \n> \n",
"msg_date": "Mon, 6 Nov 2023 10:24:43 +0100",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Performance down with JDBC 42"
},
{
"msg_contents": "Dear all,\n\nI'm running a query from Java on a postgres database:\n\nJava version: 17\nJDBC version: 42.4.2\nPostgres version: 13.1\n\nIn parallel I'm testing the same queries from pgAdmin 4 version 6.13\n\nThe tables I'm using contains more than 10million rows each and I have two\nquestions here:\n\n1. I need to extract the path of a file without the file itself. For this I\nuse two alternatives as I found that sentence \"A\" is much faster than the\n\"B\" one:\n\n\"A\" sentence:\n\nSELECT DISTINCT ( LEFT(opf.file_path, length(opf.file_path) - position('/'\nin reverse(opf.file_path))) ) AS path\n FROM product AS op JOIN product_file AS opf ON\nopf.product_id = op.id\n WHERE op.proprietary_end_date <= CURRENT_DATE\nAND op.id LIKE 'urn:esa:psa:%'\n\n\"B\" sentence:\n\nSELECT DISTINCT ( regexp_replace(opf.file_path, '(.*)\\/(.*)$', '\\1') ) AS\npath\n FROM product AS op JOIN product_file AS opf ON\nopf.product_id = op.id\n WHERE op.proprietary_end_date <= CURRENT_DATE\nAND op.id LIKE 'urn:esa:psa:%'\n\n2. Running sentence \"A\" on the pgAdmin client takes 4-5 minutes to finish\nbut running it from a Java program it never ends. This is still the case\nwhen I limit the output to the first 100 rows so I assume this is not a\nproblem with the amount of data being transferred but the way postgres\nresolve the query. To make it work in Java I had to define a postgres\nfunction that I call from the Java code instead of running the query\ndirectly.\n\nI had a similar problem in the past with a query that performed very poorly\nfrom a Java client while it was fine from pgAdmin or a python script. In\nthat case it was a matter of column types not compatible with the JDBC (citext)\nderiving in an implicit cast that prevented the postgres engine from using\na given index or to cast all the values of that column before using it, not\nsure now. 
But I don't think this is the case here.\n\nCould anyone help me again?\n\nMany thanks in advance\nJose",
"msg_date": "Mon, 6 Nov 2023 15:59:24 +0100",
"msg_from": "Jose Osinde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Performance problems with Postgres JDBC 42.4.2"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 09:59, Jose Osinde <[email protected]> wrote:\n\n>\n> Dear all,\n>\n> I'm running a query from Java on a postgres database:\n>\n> Java version: 17\n> JDBC version: 42.4.2\n> Postgres version: 13.1\n>\n> In parallel I'm testing the same queries from pgAdmin 4 version 6.13\n>\n> The tables I'm using contains more than 10million rows each and I have two\n> questions here:\n>\n> 1. I need to extract the path of a file without the file itself. For this\n> I use two alternatives as I found that sentence \"A\" is much faster than\n> the \"B\" one:\n>\n> \"A\" sentence:\n>\n> SELECT DISTINCT ( LEFT(opf.file_path, length(opf.file_path) - position('/'\n> in reverse(opf.file_path))) ) AS path\n> FROM product AS op JOIN product_file AS opf ON\n> opf.product_id = op.id\n> WHERE op.proprietary_end_date <= CURRENT_DATE\n> AND op.id LIKE 'urn:esa:psa:%'\n>\n> \"B\" sentence:\n>\n> SELECT DISTINCT ( regexp_replace(opf.file_path, '(.*)\\/(.*)$', '\\1') ) AS\n> path\n> FROM product AS op JOIN product_file AS opf ON\n> opf.product_id = op.id\n> WHERE op.proprietary_end_date <= CURRENT_DATE\n> AND op.id LIKE 'urn:esa:psa:%'\n>\n> 2. Running sentence \"A\" on the pgAdmin client takes 4-5 minutes to finish\n> but running it from a Java program it never ends. This is still the case\n> when I limit the output to the first 100 rows so I assume this is not a\n> problem with the amount of data being transferred but the way postgres\n> resolve the query. To make it work in Java I had to define a postgres\n> function that I call from the Java code instead of running the query\n> directly.\n>\n> I had a similar problem in the past with a query that performed very\n> poorly from a Java client while it was fine from pgAdmin or a python\n> script. 
In that case it was a matter of column types not compatible with\n> the JDBC (citext) deriving in an implicit cast that prevented the\n> postgres engine from using a given index or to cast all the values of that\n> column before using it, not sure now. But I don't think this is not the\n> case here.\n>\n> Could anyone help me again?\n>\n\nCan you share your java code ?\n\nIf you are using a PreparedStatement the driver will use the extended\nprotocol which may be slower. Statements use SimpleQuery which is faster\nand more like pgadmin\n\nIssuing a Query and Processing the Result | pgJDBC (postgresql.org)\n<https://jdbc.postgresql.org/documentation/query/#example51processing-a-simple-query-in-jdbc>\n\n<https://jdbc.postgresql.org/documentation/query/#example51processing-a-simple-query-in-jdbc>\nDave\n\n>\n>",
"msg_date": "Wed, 8 Nov 2023 11:55:32 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems with Postgres JDBC 42.4.2"
},
{
"msg_contents": "Hi guys,\r\nThanks for the help.\r\nI was able to recreate the problem, on the same DB, with psql only. No JDBC.\r\n\r\nA plain run of a complicated query: 50ms\r\nA prepare and then execute of the same query: 2500ms.\r\n\r\nThe plans are different, as discussed above. The fast one is using Materialize and Memoize.\r\n\r\nThanks\r\n\r\nDanny\r\n\r\n\r\n",
"msg_date": "Thu, 9 Nov 2023 14:00:29 +0000",
"msg_from": "\"Abraham, Danny\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Performance down with JDBC 42"
}
] |
[
{
"msg_contents": "Hi all,\n\nWorking on the emaj extension (for the curious ones, \nhttps://emaj.readthedocs.io/en/latest/ and \nhttps://github.com/dalibo/emaj), I recently faced a performance problem \nwhen querying and aggregating data changes. A query with 3 CTE has a O^2 \nbehavior (https://explain.dalibo.com/plan/1ded242d4ebf3gch#plan). I have \nfound a workaround by setting enable_nestloop to FALSE. But this has \ndrawbacks. So I want to better understand the issue.\n\nDuring my analysis, I realized that the output rows estimate of the \nsecond CTE is really bad, leading to a bad plan for the next CTE.\n\nI reproduced the issue in a very small test case with a simplified \nquery. Attached is a shell script and its output.\n\nA simple table is created, filled and analyzed.\n\nThe simplified statement is:\n WITH keys AS (\n SELECT c1, min(seq) AS seq FROM perf GROUP BY c1\n )\n SELECT tbl.*\n FROM perf tbl JOIN keys ON (keys.c1 = tbl.c1 AND keys.seq = tbl.seq);\n\nIts plan is:\n Hash Join (cost=958.00..1569.00 rows=1 width=262) (actual \ntime=18.516..30.702 rows=10000 loops=1)\n Output: tbl.c1, tbl.seq, tbl.c2\n Inner Unique: true\n Hash Cond: ((tbl.c1 = perf.c1) AND (tbl.seq = (min(perf.seq))))\n Buffers: shared hit=856\n -> Seq Scan on public.perf tbl (cost=0.00..548.00 rows=12000 \nwidth=262) (actual time=0.007..2.323 rows=12000 loops=1)\n Output: tbl.c1, tbl.seq, tbl.c2\n Buffers: shared hit=428\n -> Hash (cost=808.00..808.00 rows=10000 width=8) (actual \ntime=18.480..18.484 rows=10000 loops=1)\n Output: perf.c1, (min(perf.seq))\n Buckets: 16384 Batches: 1 Memory Usage: 519kB\n Buffers: shared hit=428\n -> HashAggregate (cost=608.00..708.00 rows=10000 width=8) \n(actual time=10.688..14.321 rows=10000 loops=1)\n Output: perf.c1, min(perf.seq)\n Group Key: perf.c1\n Batches: 1 Memory Usage: 1425kB\n Buffers: shared hit=428\n -> Seq Scan on public.perf (cost=0.00..548.00 \nrows=12000 width=8) (actual time=0.002..2.330 rows=12000 loops=1)\n Output: perf.c1, 
perf.seq, perf.c2\n Buffers: shared hit=428\n\nIt globally looks good to me, with 2 sequential scans and a hash join.\nBut the number of returned rows estimate is always 1, while it actually \ndepends on the data content (here 10000).\n\nFor the hash join node, the plan shows a \"Inner Unique: true\" property. \nI wonder if this is normal. It looks like the optimizer doesn't take \ninto account the presence of the GROUP BY clause in its estimate.\n\nI reproduce the case with all supported postgres versions.\n\nThanks in advance for any explanation.\nPhilippe.",
"msg_date": "Sun, 15 Oct 2023 10:38:22 +0200",
"msg_from": "Philippe BEAUDOIN <[email protected]>",
"msg_from_op": true,
"msg_subject": "Underestimated number of output rows with an aggregate function"
},
{
"msg_contents": "Philippe BEAUDOIN <[email protected]> writes:\n> During my analysis, I realized that the output rows estimate of the \n> second CTE is really bad, leading to a bad plan for the next CTE.\n> I reproduced the issue in a very small test case with a simplified \n> query. Attached is a shell script and its output.\n\nYeah. If you try it you'll see that the estimates for the\n\"keys.c1 = tbl.c1\" and \"keys.seq = tbl.seq\" clauses are spot-on\nindividually. The problem is that the planner assumes that they\nare independent clauses, so it multiplies those selectivities together.\nIn reality, because seq is already unique, the condition on c1 adds\nno additional selectivity.\n\nIf seq is guaranteed unique in your real application, you could just\ndrop the condition on c1. Otherwise I'm not sure about a good\nanswer. In principle creating extended stats on c1 and seq should\nhelp, but I think we don't yet apply those for join clauses.\n\nA partial answer could be to defeat application of the table's\nstatistics by writing\n\n JOIN keys ON (keys.c1 = tbl.c1+0 AND keys.seq = tbl.seq+0)\n\nFor me this gives an output estimate of 3000 rows, which is still not\ngreat but should at least prevent choice of an insane plan at the\nnext join level. However, it pessimizes the plan for this query\nitself a little bit (about doubling the runtime).\n\n> For the hash join node, the plan shows a \"Inner Unique: true\" property. \n> I wonder if this is normal.\n\nSure. The output of the WITH is visibly unique on c1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 12:37:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Underestimated number of output rows with an aggregate function"
},
{
"msg_contents": "Le 15/10/2023 à 18:37, Tom Lane a écrit :\n> Philippe BEAUDOIN<[email protected]> writes:\n>> During my analysis, I realized that the output rows estimate of the\n>> second CTE is really bad, leading to a bad plan for the next CTE.\n>> I reproduced the issue in a very small test case with a simplified\n>> query. Attached is a shell script and its output.\n> Yeah. If you try it you'll see that the estimates for the\n> \"keys.c1 = tbl.c1\" and \"keys.seq = tbl.seq\" clauses are spot-on\n> individually. The problem is that the planner assumes that they\n> are independent clauses, so it multiplies those selectivities together.\n> In reality, because seq is already unique, the condition on c1 adds\n> no additional selectivity.\n>\n> If seq is guaranteed unique in your real application, you could just\n> drop the condition on c1. Otherwise I'm not sure about a good\n> answer. In principle creating extended stats on c1 and seq should\n> help, but I think we don't yet apply those for join clauses.\n>\n> A partial answer could be to defeat application of the table's\n> statistics by writing\n>\n> JOIN keys ON (keys.c1 = tbl.c1+0 AND keys.seq = tbl.seq+0)\n>\n> For me this gives an output estimate of 3000 rows, which is still not\n> great but should at least prevent choice of an insane plan at the\n> next join level. However, it pessimizes the plan for this query\n> itself a little bit (about doubling the runtime).\n\nThanks for the trick (and the quick answer). In the test case, it \neffectively brings a pretty good plan.\n\nUnfortunately, as these statements are generated and depend on the base \ntable structure, the issue remains for some of them (but not all). So, \nfor the moment at least, I keep the previous workaround (disabling \nnested loops).\n\n>> For the hash join node, the plan shows a \"Inner Unique: true\" property.\n>> I wonder if this is normal.\n> Sure. 
The output of the WITH is visibly unique on c1.\nOK, I see.\n> \t\t\tregards, tom lane",
"msg_date": "Mon, 16 Oct 2023 18:52:11 +0200",
"msg_from": "Philippe BEAUDOIN <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Underestimated number of output rows with an aggregate function"
}
] |
[
{
"msg_contents": "Hello! We have an issue with database planner choosing really expensive sequence scan instead of an index scan in some cases.\nI'm reaching out in order to maybe get some idea on what we're dealing with / what could be the actual issue here.\nThe table affected is the users table with a field called \"private_metadata\" which is JSONB type.\nThere are currently two GIN indexes on that table - one GIN and second which is GIN jsonb_path_ops.\nThe operator that is used for the problematic query is contains (@>) so jsonb_path_ops should be preferred according to docs https://www.postgresql.org/docs/11/gin-builtin-opclasses.html.\nHowever the database sometimes decides NOT to use the index and perform the sequence scan.\nI have observed it happens in intervals - ex. 10 hours everything is ok, then something snaps and there are sequence scans for few more hours, then back to index and so on.\nMore context:\n- Database version: 11.18\n- The table schema with indexes\nCREATE TABLE account_user (\n id integer NOT NULL,\n private_metadata jsonb,\n);\nCREATE INDEX user_p_meta_idx ON account_user USING gin (private_metadata);\nCREATE INDEX user_p_meta_jsonb_path_idx ON account_user USING gin (private_metadata jsonb_path_ops);\n- The query that is perfomed\nSELECT \"account_user\".\"id\", \"account_user\".\"private_metadata\"\nFROM \"account_user\"\nWHERE \"account_user\".\"private_metadata\" @ > '{\"somekey\": \"somevalue\"}'\nLIMIT 21;\n- Plan when it uses an index\n{\n \"Plan\": {\n \"Node Type\": \"Limit\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1091.29,\n \"Total Cost\": 1165.26,\n \"Plan Rows\": 21,\n \"Plan Width\": 4,\n \"Plans\": [\n {\n \"Node Type\": \"Bitmap Heap Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"account_user\",\n \"Alias\": \"account_user\",\n \"Startup Cost\": 1091.29,\n \"Total Cost\": 17129.12,\n \"Plan Rows\": 4553,\n \"Plan Width\": 4,\n \"Recheck Cond\": \"( 
private_metadata @ > ? :: jsonb )\",\n \"Plans\": [\n {\n \"Node Type\": \"Bitmap Index Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Index Name\": \"user_p_meta_jsonb_path_idx\",\n \"Startup Cost\": 0,\n \"Total Cost\": 1090.15,\n \"Plan Rows\": 4553,\n \"Plan Width\": 0,\n \"Index Cond\": \"( private_metadata @ > ? :: jsonb )\"\n }\n ]\n }\n ]\n }\n}\n- Plan when it doesn't use an index\n{\n \"Plan\": {\n \"Node Type\": \"Limit\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 0,\n \"Total Cost\": 1184.3,\n \"Plan Rows\": 21,\n \"Plan Width\": 4,\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"account_user\",\n \"Alias\": \"account_user\",\n \"Startup Cost\": 0,\n \"Total Cost\": 256768.27,\n \"Plan Rows\": 4553,\n \"Plan Width\": 4,\n \"Filter\": \"( private_metadata @ > ? :: jsonb )\"\n }\n ]\n }\n}\n- There are ~4.5M rows on the table\n- We currently have a script that is heavily inserting rows 24/7, about 130k rows / day (1,5 row/s)\n- It seems maybe the index can't keep up(?) 
because of this heavy insertion\nSELECT * FROM pgstatginindex('user_p_meta_jsonb_path_idx');\n version | pending_pages | pending_tuples\n---------+---------------+----------------\n 2 | 98 | 28807\n(1 row)\nMight it be the case that it clogs up and cannot use the index when reading?\n- Last autovacuum for some reason happened 4 days ago\nselect * from pg_stat_user_tables where relname = 'account_user';\n-[ RECORD 1\n]-------+-----------------------------\nrelid | 74937\nschemaname | public\nrelname | account_user\nseq_scan | 66578\nseq_tup_read | 99542628744\nidx_scan | 9342647\nidx_tup_fetch | 105527685\nn_tup_ins | 518901\nn_tup_upd | 684607\nn_tup_del | 25\nn_tup_hot_upd | 82803\nn_live_tup | 4591412\nn_dead_tup | 370828\nn_mod_since_analyze | 152968\nlast_vacuum |\nlast_autovacuum | 2023-10-13 07:35:29.11448+00\nlast_analyze |\nlast_autoanalyze | 2023-10-13 07:44:31.90437+00\nvacuum_count | 0\nautovacuum_count | 2\nanalyze_count | 0\n- The plans above come from a production system, I failed to reproduce it locally, only two cases in which I managed to get a seq scan were:\na) when there are very few rows in the table\nb) when I run the filter @> '{}' in which case I suppose postgres deduces \"you want everything\" so additional index lookup is not necessary (checked and this filter is impossible to induce by our code).\n\nAny feedback is appreciated! What would be the possible reason the planner chooses seq scan in that case?\n\n",
"msg_date": "Tue, 17 Oct 2023 15:48:41 +0200",
"msg_from": "=?utf-8?Q?Tomasz_Szyma=C5=84ski?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN JSONB path index is not always used"
},
{
"msg_contents": "On Tue, 2023-10-17 at 15:48 +0200, Tomasz Szymański wrote:\n> Hello! We have an issue with database planner choosing really expensive sequence scan instead of an index scan in some cases.\n\nTo analyze that, we'd need the output from EXPLAIN (ANALYZE, BUFFERS) SELECT ...\nPlain text format please, no JSON.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 17 Oct 2023 16:17:26 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN JSONB path index is not always used"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 10:09 AM Tomasz Szymański <[email protected]> wrote:\n\n> - Database version: 11.18\n\nThat is pretty old. It is 3 bug-fix releases out of date even for its\nmajor version, and the major version itself is just about to reach EOL and\nis missing relevant improvements.\n\n- Plan when it uses an index\n> \"Total Cost\": 1165.26,\n> - Plan when it doesn't use an index\n> \"Total Cost\": 1184.3,\n>\n\nThe JSON format for plans is pretty non-ideal for human inspection;\nespecially so once you include ANALYZE and BUFFERS, which you should do.\nPlease use the plain text format instead. But I can see that the plans are\nvery similar in cost, so it wouldn't take much to shift between them.\nShould we assume that not using the index is much slower (otherwise, why\nwould you be asking the question?)?\n\n\n\n> - It seems maybe the index can't keep up(?) because of this heavy insertion\n> SELECT * FROM pgstatginindex('user_p_meta_jsonb_path_idx');\n> version | pending_pages | pending_tuples\n> ---------+---------------+----------------\n> 2 | 98 | 28807\n> (1 row)\n> Might it be the case that is cloggs up and cannot use the index when\n> reading?\n>\n\nDefinitely possible. The planner does take those numbers into account when\nplanning. The easiest thing would be to just turn off fastupdate for those\nindexes. That might make the INSERTs somewhat slower (it is hard to\npredict how much and you haven't complained about the performance of the\nINSERTs anyway) but should make the SELECTs more predictable and generally\nfaster. 
I habitually turn fastupdate off and then turn it back on only if\nI have an identifiable cause to do so.\n\nIf you don't want to turn fastupdate off, you could instead change the\ntable's autovac parameters to be more aggressive (particularly\nn_ins_since_vacuum, except that that doesn't exist until v13), or have a\ncron job call gin_clean_pending_list periodically.\n\n\n- Last autovacuum for some reason happened 4 days ago\n>\n> n_live_tup | 4591412\n> n_dead_tup | 370828\n>\n\nBased on those numbers and default parameters, there is no reason for it to\nbe running any sooner. That reflects only 8% turnover while the default\nfactor is 20%.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 17 Oct 2023 13:05:42 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN JSONB path index is not always used"
},
{
"msg_contents": "Sorry for missing analyze and buffers, we only had these plans at the time; providing ones performed with those options:\n\nWhen it does use an index:\n----------------------------------------------------------------------------------------------------------------------------------+\nLimit (cost=255.29..329.26 rows=21 width=0) (actual time=8.023..8.025 rows=1 loops=1) \n Buffers: shared hit=54 read=6 \n I/O Timings: read=7.094 \n -> Bitmap Heap Scan on account_user (cost=255.29..16293.12 rows=4553 width=0) (actual time=8.022..8.023 rows=1 loops=1) \n Recheck Cond: (private_metadata @> '{\"somekey\": \"somevalue\"}'::jsonb) \n Heap Blocks: exact=2 \n Buffers: shared hit=54 read=6 \n I/O Timings: read=7.094 \n -> Bitmap Index Scan on user_p_meta_idx (cost=0.00..254.15 rows=4553 width=0) (actual time=7.985..7.985 rows=2 loops=1) |\n Index Cond: (private_metadata @> '{\"somekey\": \"somevalue\"}'::jsonb)|\n Buffers: shared hit=52 read=6 \n I/O Timings: read=7.094 \nPlanning Time: 1.134 ms \nExecution Time: 8.065 ms \n----------------------------------------------------------------------------------------------------------------------------------+\n\nWhen it does not:\n----------------------------------------------------------------------------------------------------------------------------------+\n Limit (cost=0.00..1184.30 rows=21 width=4) (actual time=1567.136..1619.956 rows=1 loops=1)\n Buffers: shared hit=199857\n -> Seq Scan on account_user (cost=0.00..256768.27 rows=4553 width=4) (actual time=1567.135..1619.953 rows=1 loops=1)\n Filter: (private_metadata @> '{\"somekey\": \"somevalue\"}'::jsonb)\n Rows Removed by Filter: 4592408\n Buffers: shared hit=199857\n Planning Time: 0.072 ms\n Execution Time: 1619.972 ms\n----------------------------------------------------------------------------------------------------------------------------------+\n\n> Should we assume that not using the index is much slower (otherwise, why would you be asking the question?)?\nYes, the issue is the sequence scan being expensive and slow.\n\n\n\n\n",
"msg_date": "Mon, 23 Oct 2023 12:33:46 +0200",
"msg_from": "=?utf-8?Q?Tomasz_Szyma=C5=84ski?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIN JSONB path index is not always used"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 6:33 AM Tomasz Szymański <[email protected]> wrote:\n\n\n> Limit (cost=0.00..1184.30 rows=21 width=4) (actual\n> time=1567.136..1619.956 rows=1 loops=1)\n> -> Seq Scan on account_user (cost=0.00..256768.27 rows=4553 width=4)\n> (actual time=1567.135..1619.953 rows=1 loops=1)\n>\n\n\nIt thinks the seq scan will stop 99.5% early, after finding 21 out of 4553\nqualifying tuples. But instead it has to read the entire table to actually\nfind only 1.\n\nThe selectivity estimate of the @> operator has been substantially improved\nin v13. It is still far from perfect, but should be good enough to solve\nthis problem for this case and most similar cases. Turning off fastupdate\non the index would probably also solve the problem, for a different reason.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 23 Oct 2023 10:05:01 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN JSONB path index is not always used"
}
] |
[
{
"msg_contents": "Hi all--\n\nI'm having a performance problem in 12.16 that I'm hoping someone can help\nwith.\n\nI have a table shaped roughly like this:\n\n Table \"public.data\"\n\n Column | Type | Collation | Nullable |\n Default\n-----------------+-----------------------------+-----------+----------+------------------------------------------------\n id | integer | | not null |\nnextval('data_seq'::regclass)\n timestamp | timestamp without time zone | | not null |\n sn | character varying(36) | | not null |\n\nIndexes:\n \"data_pkey\" PRIMARY KEY, btree (id)\n \"data_multicol_sn_and_timestamp_desc_idx\" btree (sn, \"timestamp\" DESC)\n\nwith something in the 10M to 100M range in terms of number of rows.\n\n\nI have a query more or less like:\n\n\nWITH periods as (\n SELECT\n *\n FROM\n (\n SELECT\n s at time zone 'utc' AS period_start,\n LEAD(s) OVER (\n ORDER BY\n s\n ) at time zone 'utc' AS period_end\n FROM\n generate_series(\n ('2023-01-01T00:00:00-08:00') :: timestamptz,\n ('2023-11-01T00:00:00-07:00') :: timestamptz,\n ('1 day') :: interval\n ) s\n ) with_junk_period\n WHERE\n period_end IS NOT NULL\n)\nSELECT\n p.period_start,\n p.period_end,\n COUNT (distinct d.id)\nFROM\n periods p\n LEFT JOIN data d\n ON\n d.timestamp >= (p.period_start)\n AND d.\"timestamp\" < (p.period_end)\n AND d.sn = 'BLAH'\nGROUP BY\n p.period_start,\n p.period_end\nORDER BY\n p.period_start ASC\n;\n\n\nThis worked fine on a smaller, but same-shaped, set of data on a staging\ndatabase, where the query plan was:\n\nGroupAggregate (cost=14843021.48..15549022.41 rows=200 width=24) (actual\ntime=1311.052..1515.344 rows=303 loops=1)\n Group Key: with_junk_period.period_start, with_junk_period.period_end\n -> Sort (cost=14843021.48..15019521.21 rows=70599893 width=20) (actual\ntime=1305.969..1384.676 rows=463375 loops=1)\n Sort Key: with_junk_period.period_start, with_junk_period.period_end\n Sort Method: external merge Disk: 13632kB\n -> Nested Loop Left Join (cost=60.26..2329833.01 
rows=70599893\nwidth=20) (actual time=355.379..917.049 rows=463375 loops=1)\n -> Subquery Scan on with_junk_period (cost=59.83..92.33\nrows=995 width=16) (actual time=355.307..358.978 rows=303 loops=1)\n Filter: (with_junk_period.period_end IS NOT NULL)\n Rows Removed by Filter: 1\n -> WindowAgg (cost=59.83..82.33 rows=1000 width=24)\n(actual time=355.302..358.723 rows=304 loops=1)\n -> Sort (cost=59.83..62.33 rows=1000 width=8)\n(actual time=355.265..355.510 rows=304 loops=1)\n Sort Key: s.s\n Sort Method: quicksort Memory: 39kB\n -> Function Scan on generate_series s\n (cost=0.00..10.00 rows=1000 width=8) (actual time=355.175..355.215\nrows=304 loops=1)\n -> Index Scan using data_multicol_sn_and_timestamp_desc_idx\non data d (cost=0.43..1631.90 rows=70955 width=12) (actual\ntime=0.042..1.516 rows=1529 loops=303)\n\" Index Cond: (((sn)::text = 'BLAH'::text) AND\n(\"\"timestamp\"\" >= with_junk_period.period_start) AND (\"\"timestamp\"\" <\nwith_junk_period.period_end))\"\nPlanning Time: 0.283 ms\nJIT:\n Functions: 20\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 1.570 ms, Inlining 29.051 ms, Optimization 192.084 ms,\nEmission 134.022 ms, Total 356.727 ms\nExecution Time: 1523.983 ms\n\nBut on the production database, the query is no longer runnable for the\nsame frame, and when we restrict, it we can see that index is no longer\nbeing fully utilized:\n\nGroupAggregate (cost=56901238.84..58446602.98 rows=200 width=24) (actual\ntime=3652.892..3653.669 rows=4 loops=1)\n Group Key: with_junk_period.period_start, with_junk_period.period_end\n -> Sort (cost=56901238.84..57287579.38 rows=154536214 width=20) (actual\ntime=3652.544..3652.765 rows=5740 loops=1)\n Sort Key: with_junk_period.period_start, with_junk_period.period_end\n Sort Method: quicksort Memory: 641kB\n -> Nested Loop Left Join (cost=60.40..32259766.02 rows=154536214\nwidth=20) (actual time=172.908..3651.658 rows=5740 loops=1)\n\" Join Filter: 
((d.\"\"timestamp\"\" >=\nwith_junk_period.period_start) AND (d.\"\"timestamp\"\" <\nwith_junk_period.period_end))\"\n Rows Removed by Join Filter: 5310656\n -> Subquery Scan on with_junk_period (cost=59.83..92.33\nrows=995 width=16) (actual time=152.963..153.014 rows=4 loops=1)\n Filter: (with_junk_period.period_end IS NOT NULL)\n Rows Removed by Filter: 1\n -> WindowAgg (cost=59.83..82.33 rows=1000 width=24)\n(actual time=152.958..153.004 rows=5 loops=1)\n -> Sort (cost=59.83..62.33 rows=1000 width=8)\n(actual time=152.937..152.945 rows=5 loops=1)\n Sort Key: s.s\n Sort Method: quicksort Memory: 25kB\n -> Function Scan on generate_series s\n (cost=0.00..10.00 rows=1000 width=8) (actual time=152.928..152.930 rows=5\nloops=1)\n -> Materialize (cost=0.57..1138670.54 rows=1397815\nwidth=12) (actual time=0.017..811.790 rows=1329099 loops=4)\n -> Index Scan using\nmdata_multicol_sn_and_timestamp_desc_idx on data d (cost=0.57..1124855.46\nrows=1397815 width=12) (actual time=0.051..2577.248 rows=1329099 loops=1)\n\" Index Cond: ((sn)::text = 'BLAH'::text)\"\nPlanning Time: 0.184 ms\nJIT:\n Functions: 22\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 1.429 ms, Inlining 11.254 ms, Optimization 75.746 ms,\nEmission 65.959 ms, Total 154.388 ms\nExecution Time: 3662.526 ms\n\nInstead, the time filtering has been moved up to the join condition, which\nmeans that the database needs to look at many, many more rows.\n\nOne optimization I was able to make was to manually add the starts and end\ndates of the generate_series as an overall filter whole for the right side:\n\n periods p\n LEFT JOIN data d\n ON\n d.timestamp >= (p.period_start)\n AND d.\"timestamp\" < (p.period_end)\n AND d.sn = 'BLAH'\n AND d.timestamp >= ('2023-01-01T00:00:00-08:00'::timestamptz) at\ntime zone 'utc'\n AND d.timestamp < ('2023-11-01T00:00:00-08:00'::timestamptz) at\ntime zone 'utc'\n\nThis improves the query's performance somewhat, because the last 
two\nconditions make it into the index usage, but the time-related join\nconditions remain as part of the join filter, so the upside is limited. It\nalso seems like this is a kind of \"hint\" that the planner should really be\nable to infer from the structure of \"periods\".\n\nStepping back a bit, I think there should be a way to write this query so\nthat the planner recognizes that each row of data should only fit into one\nperiods bucket, and to utilize the fact that the table are both easily\nsortable to perform a more efficient join (merge?) than the nested loop.\nBut I'd definitely settle for getting it to use the index like it does on\nthe staging data.\n\nA few things I've tried to no avail:\n- moving around the non-joiny parts of the join to where, or to a CTE of\ndata\n- ordering each of the components descending in various ways to match\ndata's index\n- starting with data as the left table and periods on the right (I'd have\nto do some post-processing here, but if it was fast it would be worth it)\n\nThe only thing I can think of left to do is to run 12 separate queries--one\nfor each month, which is a time period that blows up the nested loop just\nenough that the query can finish--and then concatenate them after the fact.\nBut it seems like that's just using some knowledge about the structure of\nthe tables that I should be able to communicate to the planner, and I'd\nreally love to keep it all in the database!\n\nThanks for any and all help and suggestions.\n\n\n\n-- \nLincoln Swaine-Moore",
"msg_date": "Wed, 8 Nov 2023 17:26:39 -0800",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Awkward Join between generate_series and long table"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 6:26 PM Lincoln Swaine-Moore <[email protected]>\nwrote:\n\n> SELECT\n>\n s at time zone 'utc' AS period_start,\n> LEAD(s) OVER (\n> ORDER BY\n> s\n> ) at time zone 'utc' AS period_end\n>\n\nMaybe doesn't help overall but this can be equivalently written as:\ns + '1 day'::interval as period_end\n\nResorting to a window function here is expensive waste, the lead() value\ncan be computed, not queried.\n\n\n> SELECT\n> p.period_start,\n> p.period_end,\n> COUNT (distinct d.id)\n> FROM\n> periods p\n> LEFT JOIN data d\n> ON\n> d.timestamp >= (p.period_start)\n> AND d.\"timestamp\" < (p.period_end)\n> AND d.sn = 'BLAH'\n>\n\nThis seems better written (semantically, not sure about execution dynamics)\nas:\n\nFROM periods AS p\nLEFT JOIN LATERAL (SELECT count(distinct? d.id) FROM data AS d WHERE\nd.timestamp >= p.period_start AND d.timestamp < p.period_end AND d.sn =\n'BLAH') AS cnt_d\n-- NO grouping required at this query level\n\nDavid J.",
"msg_date": "Wed, 8 Nov 2023 18:44:53 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Awkward Join between generate_series and long table"
},
{
"msg_contents": "> Maybe doesn't help overall but this can be equivalently written as:\ns + '1 day'::interval as period_end\n\nAh, so I've glossed over a detail here which is that I'm relying on some\ntimezone specific behavior and not actually generate_series itself. If\nyou're curious, the details are here:\nhttps://www.postgresql.org/message-id/2582288.1696428710%40sss.pgh.pa.us\n\nI think that makes the window function necessary, or at least something a\nlittle more sophisticated than addition of a day (though I'd be happy to be\nwrong about that).\n\n> LEFT JOIN LATERAL (SELECT\n\nOh wow, this seems to get the index used! That's wonderful news--thank you.\n\nI'd be super curious if anyone has any intuition about why the planner is\nso much more successful there--most of what I see online about LATERAL\nJOINs is focused as you said on semantics not performance. But in terms of\nsolving my problem, this seems to do the trick.\n\nThanks again!\n\n\nOn Wed, Nov 8, 2023 at 5:45 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Wed, Nov 8, 2023 at 6:26 PM Lincoln Swaine-Moore <\n> [email protected]> wrote:\n>\n>> SELECT\n>>\n> s at time zone 'utc' AS period_start,\n>> LEAD(s) OVER (\n>> ORDER BY\n>> s\n>> ) at time zone 'utc' AS period_end\n>>\n>\n> Maybe doesn't help overall but this can be equivalently written as:\n> s + '1 day'::interval as period_end\n>\n> Resorting to a window function here is expensive waste, the lead() value\n> can be computed, not queried.\n>\n>\n>> SELECT\n>> p.period_start,\n>> p.period_end,\n>> COUNT (distinct d.id)\n>> FROM\n>> periods p\n>> LEFT JOIN data d\n>> ON\n>> d.timestamp >= (p.period_start)\n>> AND d.\"timestamp\" < (p.period_end)\n>> AND d.sn = 'BLAH'\n>>\n>\n> This seems better written (semantically, not sure about execution\n> dynamics) as:\n>\n> FROM periods AS p\n> LEFT JOIN LATERAL (SELECT count(distinct? 
d.id) FROM data AS d WHERE\n> d.timestamp >= p.period_start AND d.timestamp < p.period_end AND d.sn =\n> 'BLAH') AS cnt_d\n> -- NO grouping required at this query level\n>\n> David J.\n>\n>\n\n-- \nLincoln Swaine-Moore",
"msg_date": "Wed, 8 Nov 2023 18:19:34 -0800",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Awkward Join between generate_series and long table"
},
{
"msg_contents": "\n\n> On Nov 8, 2023, at 8:26 PM, Lincoln Swaine-Moore <[email protected]> wrote:\n> \n> Hi all--\n> \n> I'm having a performance problem in 12.16 that I'm hoping someone can help with.\n\n<much useful info snipped>\n\n> Thanks for any and all help and suggestions.\n\n\nHi Lincoln,\nI haven't read your SQL carefully so I may be completely off base, but I wanted to share an experience I had with generate_series() that caused some planner headaches that may be affecting you too. \n\nI was using generate_series() in a CROSS JOIN with a large table. The call to generate_series() only emitted a small number of rows (4 - 24) but the planner estimated it would emit 1000 rows because that's Postgres' default in absence of other info. (See https://www.postgresql.org/docs/current/sql-createfunction.html, \"The default assumption is 1000 rows.\") I see an estimate for 1000 rows in your EXPLAIN output too, so you're experiencing the same although in your case the estimate of 1000 might be more accurate. The misestimation was causing significant performance problems for me.\n\nMy solution was to wrap generate_series() in a custom function that had a ROWS qualifier (documented at the same link as above) to better inform the planner. In my case I used ROWS 16 since that was relatively accurate -- a lot more accurate than 1000, anyway.\n\nThen I found that my pure SQL custom function was getting inlined, which caused the information in the ROWS qualifier to get lost. :-) I rewrote it in PL/pgSQL to prevent the inlining, and that solution worked well for me. (See convo at https://www.postgresql.org/message-id/flat/76B16E5F-59D0-4C97-8DBA-4B3BB21E2009%40americanefficient.com)\n\nOn another note, I have also seen unexpected performance gains from introducing LATERAL into a JOIN. My guess is that I got lucky, and that the use of LATERAL sent the planner down a better path. \n\nHope this is at least a little helpful!\n\nGood luck,\nPhilip\n\n",
"msg_date": "Thu, 9 Nov 2023 08:54:36 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Awkward Join between generate_series and long table"
},
{
"msg_contents": "> I see an estimate for 1000 rows in your EXPLAIN output too, so you're\nexperiencing the same\n> although in your case the estimate of 1000 might be more accurate. The\nmisestimation was causing\n> significant performance problems for me.\n\n> My solution was to wrap generate_series() in a custom function that had a\nROWS qualifier\n\nThat's interesting! I actually wasn't familiar with the ROWs feature at\nall, so that is good knowledge to pocket.\n\nIn my case, I think the number of rows will vary quite a bit for different\ntime periods/resolutions (and 1000 might not be a bad estimate for some of\nthe workloads). I do wonder whether if the planner had a sense of how big\nthe series result could be for longer periods/finer resolutions (which is a\nbit of information I could actually trivially generate outside and encode\ninto the query explicitly if need be), it might avoid/minimize the NESTED\nLOOP at all costs, but I'm not sure how to communicate that information.\n\nAnyway, thank you for sharing! Very helpful to hear what other people have\ndealt with in similar situations.",
"msg_date": "Thu, 9 Nov 2023 10:00:24 -0800",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Awkward Join between generate_series and long table"
}
] |
[
{
"msg_contents": "Hi,\n We found one simple query manually run very fast(finished in several milliseconds), but there are 2 sessions within long transaction to run same sql with same bind variables took tens of seconds.\nManually run this sql only show <100 shared_blks_hit and very small reads, but for these 2 long running SQL from pg_stat_statements, it show huge shared_blks_hits/reads, and some shared_blks_dirtied/written for this query too. It's a very hot table and a lot of sessions update/delete/insert on this table. It looks like the query have scan huge blocks for MVCC ? any suggestions to tune this case ? any idea why shared_blks_dirtied/written for this query?\n\n\nuserid | 17443\ndbid | 16384\ntoplevel | t\nqueryid | 6334282481325858045\nquery | SELECT xxxx, xxxx FROM test.xxxxx\nWHERE ( ( id1 = $1 ) ) AND ( ( id2 = $2 ) ) AND ( ( id3 = $3 ) )\nplans | 0\ntotal_plan_time | 0\nmin_plan_time | 0\nmax_plan_time | 0\nmean_plan_time | 0\nstddev_plan_time | 0\ncalls | 2142\ntotal_exec_time | 66396685.619224936\nmin_exec_time | 7221.611607\nmax_exec_time | 391486.974656\nmean_exec_time | 30997.51896322356\nstddev_exec_time | 31073.83250436726\nrows | 153\nshared_blks_hit | 7133350479\nshared_blks_read | 2783620426\nshared_blks_dirtied | 1853702\nshared_blks_written | 2329513\nlocal_blks_hit | 0\nlocal_blks_read | 0\nlocal_blks_dirtied | 0\nlocal_blks_written | 0\ntemp_blks_read | 0\ntemp_blks_written | 0\nblk_read_time | 0\nblk_write_time | 0\nwal_records | 237750\nwal_fpi | 207790\nwal_bytes | 442879812\n\npid | state | xact_start | query_start | wait_event_type | wait_event | backend_xid | backend\n_xmin\n---------+------------+-------------------------------+-------------------------------+-----------------+------------+-------------+--------\n------\n 3671416 | active | 2023-11-16 06:38:16.802127+00 | 2023-11-16 08:08:15.739509+00 | | | 159763259 | 159763259\n 3671407 | active | 2023-11-16 06:38:16.807064+00 | 2023-11-16 08:08:17.195405+00 | | | 159764118 | 
159763259\n\n--table size\nrelpages | reltuples\n----------+---------------\n 3146219 | 1.9892568e+08\n--index size\nrelpages | reltuples\n----------+---------------\n 1581759 | 1.9892568e+08\n\nThanks,\n\nJames",
"msg_date": "Fri, 17 Nov 2023 08:10:45 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "simple query running long time within a long transaction."
},
{
"msg_contents": "\n\nAm 17.11.23 um 09:10 schrieb James Pang (chaolpan):\n>\n> Hi,\n>\n> We found one simple query manually run very fast(finished in \n> several milliseconds), but there are 2 sessions within long \n> transaction to run same sql with same bind variables took tens of seconds.\n>\nYou can try setting plan_cache_mode to force_custom_plan; the default is auto, and\nwith that and bind variables pg will use a generic plan.\n\n\nRegards, Andreas\n\n-- \nAndreas Kretschmer - currently still (garden leave)\nTechnical Account Manager (TAM)\nwww.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 17 Nov 2023 10:17:18 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple query running long time within a long transaction."
},
{
"msg_contents": "Looks like it's not sql issue, manually running still use prepared statements and use same sql plan. \r\n\r\n-----Original Message-----\r\nFrom: Andreas Kretschmer <[email protected]> \r\nSent: Friday, November 17, 2023 5:17 PM\r\nTo: [email protected]\r\nSubject: Re: simple query running long time within a long transaction.\r\n\r\n\r\n\r\nAm 17.11.23 um 09:10 schrieb James Pang (chaolpan):\r\n>\r\n> Hi,\r\n>\r\n> We found one simple query manually run very fast(finished in \r\n> several milliseconds), but there are 2 sessions within long \r\n> transaction to run same sql with same bind variables took tens of seconds.\r\n>\r\nyou try to set plan_cache_mode to force_custom_plan, default is auto and with that and bind variables pg will use a generic plan.\r\n\r\n\r\nRegards, Andreas\r\n\r\n-- \r\nAndreas Kretschmer - currently still (garden leave)\r\nTechnical Account Manager (TAM)\r\nwww.enterprisedb.com\r\n\r\n\r\n\r\n",
"msg_date": "Sat, 18 Nov 2023 11:13:50 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: simple query running long time within a long transaction."
},
{
"msg_contents": "Any statement will run using a custom plan at first.\nOnly a prepared statement creates the memory area in the backend that can hold the custom plan statistics, that is why generic plans only work with prepared statements.\nA prepared statement has to run at least 5 times before the planner looks at the plan statistics and determine whether a generic plan would work (=generic plan cost being equal or lower than the average custom plan cost)\nThat means that the values in the binds/filters also play a role (as far as I know).\n\nOnce a generic plan is selected, it doesn’t do the statistics evaluation anymore, and thus the generic plan is fixed until the prepared statement is closed or the session is terminated.\nIt therefore also cannot choose a different plan based on the bind values anymore.\n\nThis means that if you want to manually replay your issue, just issuing it using a prepared statement manually is not exactly what happens in a session.\nSuch a replay will always use a custom plan.\nYou have to perform it at least 5 times for the generic plan to be considered.\nAnd because that is evaluated cost based upon the cost of the previous custom plans, the binds (filter values) have to entered correctly too that have lead up to a potential generic plan having been chosen.\n\nFrits Hoogland\n\n\n\n\n> On 18 Nov 2023, at 12:13, James Pang (chaolpan) <[email protected]> wrote:\n> \n> Looks like it's not sql issue, manually running still use prepared statements and use same sql plan. 
\n> \n> -----Original Message-----\n> From: Andreas Kretschmer <[email protected]> \n> Sent: Friday, November 17, 2023 5:17 PM\n> To: [email protected]\n> Subject: Re: simple query running long time within a long transaction.\n> \n> \n> \n> Am 17.11.23 um 09:10 schrieb James Pang (chaolpan):\n>> \n>> Hi,\n>> \n>> We found one simple query manually run very fast(finished in \n>> several milliseconds), but there are 2 sessions within long \n>> transaction to run same sql with same bind variables took tens of seconds.\n>> \n> you try to set plan_cache_mode to force_custom_plan, default is auto and with that and bind variables pg will use a generic plan.\n> \n> \n> Regards, Andreas\n> \n> -- \n> Andreas Kretschmer - currently still (garden leave)\n> Technical Account Manager (TAM)\n> www.enterprisedb.com\n> \n> \n> \n",
"msg_date": "Sat, 18 Nov 2023 12:46:45 +0100",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple query running long time within a long transaction."
}
] |
[
{
"msg_contents": "\n\n\n",
"msg_date": "Sat, 18 Nov 2023 12:18:44 +0100",
"msg_from": "Gulp <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Hello,\n\nI just switched from PG11 to PG15 on our production server (Version is \n15.5). Just made a vacuum full analyze on the DB.\n\nI have a relatively simple query that used to be fast and is now taking \nvery long (from less than 10 seconds to 3mn+)\n\nIf I remove a WHERE condition changes the calculation time dramatically. \nThe result is not exactly the same but that extra filtering seems very \nlong...\n\nAlso, adding \"materialized\" to both \"withcwrack\" and \"withcwrack0\" CTEs \ngets the result in acceptable timings (a few seconds). The problem with \nthis is that we have some clients with older versions of PG and I guess \nblindly adding the \"materialized\" keyword will cause errors.\n\nIs there anything I can do to prevent that kind of behaviour ? I'm a \nlittle afraid to have to review all the queries in my softwares to keep \ngood performances with PG 15 ? Maybe there's a way to configure the \nserver so that CTEs are materialized by default ? Not ideal but I could \nslowly refine queries to enforce \"not materialized\" and benefit from the \nimprovement without affecting all our users.\n\nThanks for your inputs.\n\nJC\n\n\nHere is the query:\n\nexplain (analyze,buffers)\nWITH myselect AS (\n SELECT DISTINCT og.idoeu\n FROM oegroupes og\n WHERE (og.idgroupe = 4470)\n)\n , withcwrack0 AS (\n SELECT idoeu, idthirdparty, ackcode\n FROM (\n SELECT imd.idoeu,\n imd.idthirdparty,\n imd.ackcode,\n RANK() OVER (PARTITION BY imd.idoeu, \nimd.idthirdparty ORDER BY imd.idimport DESC) AS rang\n FROM importdetails imd\n WHERE imd.ackcode NOT IN ('RA', '')\n ) x\n WHERE x.rang = 1\n)\n , withcwrack AS (\n SELECT idoeu,\n STRING_AGG(DISTINCT tp.nom, ', ' ORDER BY \ntp.nom) FILTER (WHERE ackcode IN ('AS', 'AC', 'NP', 'DU')) AS cwrackok,\n STRING_AGG(DISTINCT tp.nom, ', ' ORDER BY \ntp.nom) FILTER (WHERE ackcode IN ('CO', 'RJ', 'RC')) AS cwracknotok\n FROM withcwrack0\n JOIN thirdparty tp USING (idthirdparty)\n GROUP BY idoeu\n)\nSELECT DISTINCT 
og.idoegroupe,\n og.idoeu,\n o.titrelong,\n o.created,\n o.datedepotsacem,\n s.nom AS companyname,\n na.aggname AS actorsnames,\n COALESCE(TRIM(o.repnom1), '') || COALESCE(' / ' || \nTRIM(o.repnom2), '') ||\n COALESCE(' / ' || TRIM(o.repnom3), '') AS \nactorsnamesinfosrepart,\n o.cocv AS favcode,\n o.contrattiredufilm,\n o.interprete,\n o.codecocv,\n o.idsociete,\n o.idimport,\n o.donotexport,\n o.observations,\n withcwrack.cwracknotok AS cwracknotok,\n withcwrack.cwrackok AS cwrackok,\n oghl.idgroupe IS NOT NULL AS list_highlight1\nFROM oegroupes og\nJOIN myselect ON myselect.idoeu = og.idoeu\nJOIN oeu o ON o.idoeu = og.idoeu\nLEFT JOIN societes s ON s.idsociete = o.idsociete\nLEFT JOIN nomsad na ON na.idoeu = o.idoeu\nLEFT JOIN withcwrack ON withcwrack.idoeu = o.idoeu\nLEFT JOIN oegroupes oghl ON o.idoeu = oghl.idoeu AND oghl.idgroupe = NULL\n\n-- Commenting out the following line makes the query fast :\n\n WHERE (og.idgroupe=4470)\n\n\n\n\n\nFast version (without the final where) :\n\nUnique (cost=8888.76..8906.76 rows=360 width=273) (actual \ntime=343.424..345.687 rows=3004 loops=1)\n Buffers: shared hit=26366\n -> Sort (cost=8888.76..8889.66 rows=360 width=273) (actual \ntime=343.422..343.742 rows=3004 loops=1)\n Sort Key: og.idoegroupe, og.idoeu, o.titrelong, o.created, \no.datedepotsacem, s.nom, na.aggname, (((COALESCE(TRIM(BOTH FROM \no.repnom1), ''::text) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom2)), ''::text)) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom3)), ''::text))), o.cocv, o.contrattiredufilm, o.interprete, \n(codecocv(o.*)), o.idsociete, o.idimport, o.donotexport, o.observations, \n(string_agg(DISTINCT (tp.nom)::text, ', '::text ORDER BY (tp.nom)::text) \nFILTER (WHERE ((x.ackcode)::text = ANY ('{CO,RJ,RC}'::text[])))), \n(string_agg(DISTINCT (tp.nom)::text, ', '::text ORDER BY (tp.nom)::text) \nFILTER (WHERE ((x.ackcode)::text = ANY ('{AS,AC,NP,DU}'::text[])))), \n((idgroupe IS NOT NULL))\n Sort Method: quicksort Memory: 
524kB\n Buffers: shared hit=26366\n -> Nested Loop Left Join (cost=6811.39..8873.48 rows=360 \nwidth=273) (actual time=291.636..340.755 rows=3004 loops=1)\n Join Filter: false\n Buffers: shared hit=26355\n -> Nested Loop (cost=6811.39..8773.58 rows=360 \nwidth=2964) (actual time=290.747..301.506 rows=3004 loops=1)\n Join Filter: (og_1.idoeu = og.idoeu)\n Buffers: shared hit=14173\n -> Hash Left Join (cost=6810.97..8718.89 rows=75 \nwidth=2964) (actual time=290.726..293.678 rows=1453 loops=1)\n Hash Cond: (o.idsociete = s.idsociete)\n Buffers: shared hit=6810\n -> Hash Right Join (cost=6809.36..8717.06 \nrows=75 width=2953) (actual time=290.689..292.781 rows=1453 loops=1)\n Hash Cond: (na.idoeu = o.idoeu)\n Buffers: shared hit=6809\n -> Seq Scan on nomsad na \n(cost=0.00..1592.24 rows=83924 width=41) (actual time=0.011..9.667 \nrows=83924 loops=1)\n Buffers: shared hit=753\n -> Hash (cost=6808.42..6808.42 \nrows=75 width=2916) (actual time=263.634..263.641 rows=1453 loops=1)\n Buckets: 2048 (originally 1024) \nBatches: 1 (originally 1) Memory Usage: 515kB\n Buffers: shared hit=6056\n -> Merge Left Join \n(cost=5108.25..6808.42 rows=75 width=2916) (actual time=256.175..262.913 \nrows=1453 loops=1)\n Merge Cond: (o.idoeu = x.idoeu)\n Buffers: shared hit=6056\n -> Nested Loop \n(cost=268.28..852.37 rows=75 width=2852) (actual time=0.995..7.211 \nrows=1453 loops=1)\n Buffers: shared hit=4375\n -> Unique \n(cost=267.99..268.37 rows=75 width=4) (actual time=0.962..1.693 \nrows=1453 loops=1)\n Buffers: shared \nhit=16\n -> Sort \n(cost=267.99..268.18 rows=75 width=4) (actual time=0.959..1.132 \nrows=1453 loops=1)\n Sort Key: \nog_1.idoeu\n Sort \nMethod: quicksort Memory: 49kB\nBuffers: shared hit=16\n -> Bitmap \nHeap Scan on oegroupes og_1 (cost=5.00..265.66 rows=75 width=4) (actual \ntime=0.183..0.684 rows=1453 loops=1)\nRecheck Cond: (idgroupe = 4470)\nHeap Blocks: exact=10\nBuffers: shared hit=16\n-> Bitmap Index Scan on ix_oegroupes_idgr_idoeu_unique2 
\n(cost=0.00..4.99 rows=75 width=0) (actual time=0.156..0.156 rows=1453 \nloops=1)\nIndex Cond: (idgroupe = 4470)\nBuffers: shared hit=6\n -> Index Scan using \noeu_pkey on oeu o (cost=0.29..7.78 rows=1 width=2848) (actual \ntime=0.003..0.003 rows=1 loops=1453)\n Index Cond: \n(idoeu = og_1.idoeu)\n Buffers: shared \nhit=4359\n -> GroupAggregate \n(cost=4839.96..5953.90 rows=157 width=68) (actual time=52.418..251.636 \nrows=27905 loops=1)\n Group Key: x.idoeu\n Buffers: shared hit=1681\n -> Nested Loop \n(cost=4839.96..5948.97 rows=158 width=24) (actual time=52.369..136.128 \nrows=28325 loops=1)\n Buffers: shared \nhit=1681\n -> Subquery \nScan on x (cost=4839.81..5943.32 rows=158 width=10) (actual \ntime=52.341..108.978 rows=28325 loops=1)\nFilter: (x.rang = 1)\nBuffers: shared hit=1669\n -> \nWindowAgg (cost=4839.81..5549.21 rows=31529 width=22) (actual \ntime=52.340..101.941 rows=28325 loops=1)\nRun Condition: (rank() OVER (?) <= 1)\nBuffers: shared hit=1669\n-> Sort (cost=4839.81..4918.63 rows=31529 width=14) (actual \ntime=52.321..56.410 rows=31526 loops=1)\nSort Key: imd.idoeu, imd.idthirdparty, imd.idimport DESC\nSort Method: quicksort Memory: 2493kB\nBuffers: shared hit=1669\n-> Seq Scan on importdetails imd (cost=0.00..2483.90 rows=31529 \nwidth=14) (actual time=0.028..34.438 rows=31526 loops=1)\n\" Filter: ((ackcode)::text <> ALL ('{RA,\"\"\"\"}'::text[]))\"\nRows Removed by Filter: 33666\nBuffers: shared hit=1669\n -> Memoize \n(cost=0.15..0.30 rows=1 width=22) (actual time=0.000..0.000 rows=1 \nloops=28325)\n Cache \nKey: x.idthirdparty\n Cache \nMode: logical\n Hits: \n28319 Misses: 6 Evictions: 0 Overflows: 0 Memory Usage: 1kB\nBuffers: shared hit=12\n -> Index \nScan using providers_pkey on thirdparty tp (cost=0.14..0.29 rows=1 \nwidth=22) (actual time=0.009..0.009 rows=1 loops=6)\nIndex Cond: (idthirdparty = x.idthirdparty)\nBuffers: shared hit=12\n -> Hash (cost=1.27..1.27 rows=27 width=15) \n(actual time=0.024..0.025 rows=27 loops=1)\n Buckets: 
1024 Batches: 1 Memory \nUsage: 10kB\n Buffers: shared hit=1\n -> Seq Scan on societes s \n(cost=0.00..1.27 rows=27 width=15) (actual time=0.009..0.014 rows=27 \nloops=1)\n Buffers: shared hit=1\n -> Index Scan using ix_oegroupes_idoeu on \noegroupes og (cost=0.42..0.67 rows=5 width=8) (actual time=0.002..0.003 \nrows=2 loops=1453)\n Index Cond: (idoeu = o.idoeu)\n Buffers: shared hit=7363\n -> Result (cost=0.00..0.00 rows=0 width=4) (actual \ntime=0.000..0.000 rows=0 loops=3004)\n One-Time Filter: false\nPlanning:\n Buffers: shared hit=40\nPlanning Time: 3.240 ms\nExecution Time: 346.193 ms\n\n\nSlow version :\n\nUnique (cost=8408.54..8408.59 rows=1 width=273) (actual \ntime=220347.876..220348.736 rows=1453 loops=1)\n Buffers: shared hit=15544\n -> Sort (cost=8408.54..8408.54 rows=1 width=273) (actual \ntime=220347.875..220347.998 rows=1453 loops=1)\n Sort Key: og.idoegroupe, og.idoeu, o.titrelong, o.created, \no.datedepotsacem, s.nom, na.aggname, (((COALESCE(TRIM(BOTH FROM \no.repnom1), ''::text) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom2)), ''::text)) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom3)), ''::text))), o.cocv, o.contrattiredufilm, o.interprete, \n(codecocv(o.*)), o.idsociete, o.idimport, o.donotexport, o.observations, \n(string_agg(DISTINCT (tp.nom)::text, ', '::text ORDER BY (tp.nom)::text) \nFILTER (WHERE ((x.ackcode)::text = ANY ('{CO,RJ,RC}'::text[])))), \n(string_agg(DISTINCT (tp.nom)::text, ', '::text ORDER BY (tp.nom)::text) \nFILTER (WHERE ((x.ackcode)::text = ANY ('{AS,AC,NP,DU}'::text[])))), \n((idgroupe IS NOT NULL))\n Sort Method: quicksort Memory: 255kB\n Buffers: shared hit=15544\n -> Nested Loop Left Join (cost=5383.80..8408.53 rows=1 \nwidth=273) (actual time=288.376..220345.536 rows=1453 loops=1)\n Join Filter: false\n Buffers: shared hit=15544\n -> Nested Loop Left Join (cost=5383.80..8408.25 rows=1 \nwidth=2964) (actual time=287.986..220284.827 rows=1453 loops=1)\n Join Filter: (x.idoeu = o.idoeu)\n Rows Removed 
by Join Filter: 40545965\n Buffers: shared hit=9731\n -> Nested Loop Left Join (cost=543.83..2450.82 \nrows=1 width=2904) (actual time=56.081..68.044 rows=1453 loops=1)\n Buffers: shared hit=8050\n -> Hash Right Join (cost=543.70..2450.66 \nrows=1 width=2893) (actual time=56.066..61.414 rows=1453 loops=1)\n Hash Cond: (na.idoeu = o.idoeu)\n Buffers: shared hit=5144\n -> Seq Scan on nomsad na \n(cost=0.00..1592.24 rows=83924 width=41) (actual time=0.013..15.785 \nrows=83924 loops=1)\n Buffers: shared hit=753\n -> Hash (cost=543.68..543.68 rows=1 \nwidth=2856) (actual time=15.342..15.347 rows=1453 loops=1)\n Buckets: 2048 (originally 1024) \nBatches: 1 (originally 1) Memory Usage: 521kB\n Buffers: shared hit=4391\n -> Nested Loop \n(cost=275.35..543.68 rows=1 width=2856) (actual time=2.628..13.995 \nrows=1453 loops=1)\n Buffers: shared hit=4391\n -> Hash Join \n(cost=275.06..535.91 rows=1 width=12) (actual time=2.593..4.334 \nrows=1453 loops=1)\n Hash Cond: (og.idoeu \n= og_1.idoeu)\n Buffers: shared hit=32\n -> Bitmap Heap Scan \non oegroupes og (cost=5.00..265.66 rows=75 width=8) (actual \ntime=0.181..0.614 rows=1453 loops=1)\n Recheck Cond: \n(idgroupe = 4470)\n Heap Blocks: \nexact=10\n Buffers: shared \nhit=16\n -> Bitmap Index \nScan on ix_oegroupes_idgr_idoeu_unique2 (cost=0.00..4.99 rows=75 \nwidth=0) (actual time=0.158..0.158 rows=1453 loops=1)\n Index \nCond: (idgroupe = 4470)\nBuffers: shared hit=6\n -> Hash \n(cost=269.12..269.12 rows=75 width=4) (actual time=2.394..2.396 \nrows=1453 loops=1)\n Buckets: 2048 \n(originally 1024) Batches: 1 (originally 1) Memory Usage: 68kB\n Buffers: shared \nhit=16\n -> Unique \n(cost=267.99..268.37 rows=75 width=4) (actual time=0.894..1.942 \nrows=1453 loops=1)\nBuffers: shared hit=16\n -> Sort \n(cost=267.99..268.18 rows=75 width=4) (actual time=0.891..1.151 \nrows=1453 loops=1)\nSort Key: og_1.idoeu\nSort Method: quicksort Memory: 49kB\nBuffers: shared hit=16\n-> Bitmap Heap Scan on oegroupes og_1 (cost=5.00..265.66 
rows=75 \nwidth=4) (actual time=0.139..0.658 rows=1453 loops=1)\nRecheck Cond: (idgroupe = 4470)\nHeap Blocks: exact=10\nBuffers: shared hit=16\n-> Bitmap Index Scan on ix_oegroupes_idgr_idoeu_unique2 \n(cost=0.00..4.99 rows=75 width=0) (actual time=0.121..0.122 rows=1453 \nloops=1)\nIndex Cond: (idgroupe = 4470)\nBuffers: shared hit=6\n -> Index Scan using \noeu_pkey on oeu o (cost=0.29..7.78 rows=1 width=2848) (actual \ntime=0.005..0.005 rows=1 loops=1453)\n Index Cond: (idoeu = \nog_1.idoeu)\n Buffers: shared hit=4359\n -> Index Scan using societes_pkey on \nsocietes s (cost=0.14..0.16 rows=1 width=15) (actual time=0.003..0.003 \nrows=1 loops=1453)\n Index Cond: (idsociete = o.idsociete)\n Buffers: shared hit=2906\n -> GroupAggregate (cost=4839.96..5953.90 rows=157 \nwidth=68) (actual time=0.034..148.224 rows=27905 loops=1453)\n Group Key: x.idoeu\n Buffers: shared hit=1681\n -> Nested Loop (cost=4839.96..5948.97 \nrows=158 width=24) (actual time=0.026..61.006 rows=28325 loops=1453)\n Buffers: shared hit=1681\n -> Subquery Scan on x \n(cost=4839.81..5943.32 rows=158 width=10) (actual time=0.025..40.825 \nrows=28325 loops=1453)\n Filter: (x.rang = 1)\n Buffers: shared hit=1669\n -> WindowAgg \n(cost=4839.81..5549.21 rows=31529 width=22) (actual time=0.025..35.958 \nrows=28325 loops=1453)\n Run Condition: (rank() OVER \n(?) 
<= 1)\n Buffers: shared hit=1669\n -> Sort \n(cost=4839.81..4918.63 rows=31529 width=14) (actual time=0.023..3.132 \nrows=31526 loops=1453)\n Sort Key: imd.idoeu, \nimd.idthirdparty, imd.idimport DESC\n Sort Method: \nquicksort Memory: 2493kB\n Buffers: shared hit=1669\n -> Seq Scan on \nimportdetails imd (cost=0.00..2483.90 rows=31529 width=14) (actual \ntime=0.021..22.590 rows=31526 loops=1)\n\" Filter: \n((ackcode)::text <> ALL ('{RA,\"\"\"\"}'::text[]))\"\n Rows Removed by \nFilter: 33666\n Buffers: shared \nhit=1669\n -> Memoize (cost=0.15..0.30 rows=1 \nwidth=22) (actual time=0.000..0.000 rows=1 loops=41156225)\n Cache Key: x.idthirdparty\n Cache Mode: logical\n Hits: 41156219 Misses: 6 \nEvictions: 0 Overflows: 0 Memory Usage: 1kB\n Buffers: shared hit=12\n -> Index Scan using \nproviders_pkey on thirdparty tp (cost=0.14..0.29 rows=1 width=22) \n(actual time=0.006..0.006 rows=1 loops=6)\n Index Cond: (idthirdparty = \nx.idthirdparty)\n Buffers: shared hit=12\n -> Result (cost=0.00..0.00 rows=0 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1453)\n One-Time Filter: false\nPlanning:\n Buffers: shared hit=40\nPlanning Time: 3.302 ms\nExecution Time: 220349.106 ms\n\n\nWith materialized :\n\n\nUnique (cost=8428.96..8429.01 rows=1 width=273) (actual \ntime=8422.790..8423.717 rows=1453 loops=1)\n Buffers: shared hit=15537\n CTE withcwrack0\n -> Subquery Scan on x (cost=4839.81..5943.32 rows=158 width=10) \n(actual time=33.309..85.155 rows=28325 loops=1)\n Filter: (x.rang = 1)\n Buffers: shared hit=1669\n -> WindowAgg (cost=4839.81..5549.21 rows=31529 width=22) \n(actual time=33.307..77.580 rows=28325 loops=1)\n Run Condition: (rank() OVER (?) 
<= 1)\n Buffers: shared hit=1669\n -> Sort (cost=4839.81..4918.63 rows=31529 width=14) \n(actual time=33.291..37.192 rows=31526 loops=1)\n Sort Key: imd.idoeu, imd.idthirdparty, \nimd.idimport DESC\n Sort Method: quicksort Memory: 2493kB\n Buffers: shared hit=1669\n -> Seq Scan on importdetails imd \n(cost=0.00..2483.90 rows=31529 width=14) (actual time=0.024..22.104 \nrows=31526 loops=1)\n\" Filter: ((ackcode)::text <> ALL \n('{RA,\"\"\"\"}'::text[]))\"\n Rows Removed by Filter: 33666\n Buffers: shared hit=1669\n CTE withcwrack\n -> GroupAggregate (cost=17.42..22.75 rows=158 width=68) (actual \ntime=118.918..236.104 rows=27905 loops=1)\n Group Key: withcwrack0.idoeu\n Buffers: shared hit=1672\n -> Sort (cost=17.42..17.81 rows=158 width=80) (actual \ntime=118.874..122.458 rows=28325 loops=1)\n Sort Key: withcwrack0.idoeu\n Sort Method: quicksort Memory: 2320kB\n Buffers: shared hit=1672\n -> Hash Join (cost=8.06..11.65 rows=158 width=80) \n(actual time=33.447..110.595 rows=28325 loops=1)\n Hash Cond: (withcwrack0.idthirdparty = \ntp.idthirdparty)\n Buffers: shared hit=1672\n -> CTE Scan on withcwrack0 (cost=0.00..3.16 \nrows=158 width=66) (actual time=33.311..97.238 rows=28325 loops=1)\n Buffers: shared hit=1669\n -> Hash (cost=5.25..5.25 rows=225 width=22) \n(actual time=0.121..0.121 rows=225 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n Buffers: shared hit=3\n -> Seq Scan on thirdparty tp \n(cost=0.00..5.25 rows=225 width=22) (actual time=0.014..0.063 rows=225 \nloops=1)\n Buffers: shared hit=3\n -> Sort (cost=2462.88..2462.89 rows=1 width=273) (actual \ntime=8422.789..8422.925 rows=1453 loops=1)\n Sort Key: og.idoegroupe, og.idoeu, o.titrelong, o.created, \no.datedepotsacem, s.nom, na.aggname, (((COALESCE(TRIM(BOTH FROM \no.repnom1), ''::text) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom2)), ''::text)) || COALESCE((' / '::text || TRIM(BOTH FROM \no.repnom3)), ''::text))), o.cocv, o.contrattiredufilm, o.interprete, \n(codecocv(o.*)), 
o.idsociete, o.idimport, o.donotexport, o.observations, \nwithcwrack.cwracknotok, withcwrack.cwrackok, ((idgroupe IS NOT NULL))\n Sort Method: quicksort Memory: 255kB\n Buffers: shared hit=15537\n -> Nested Loop Left Join (cost=550.47..2462.87 rows=1 \nwidth=273) (actual time=310.118..8421.261 rows=1453 loops=1)\n Join Filter: false\n Buffers: shared hit=15537\n -> Nested Loop Left Join (cost=550.47..2462.59 rows=1 \nwidth=2964) (actual time=309.673..8392.068 rows=1453 loops=1)\n Join Filter: (withcwrack.idoeu = o.idoeu)\n Rows Removed by Join Filter: 40545965\n Buffers: shared hit=9724\n -> Nested Loop Left Join (cost=550.47..2457.46 \nrows=1 width=2904) (actual time=54.495..60.810 rows=1453 loops=1)\n Buffers: shared hit=8052\n -> Hash Right Join (cost=550.34..2457.30 \nrows=1 width=2893) (actual time=54.471..57.459 rows=1453 loops=1)\n Hash Cond: (na.idoeu = o.idoeu)\n Buffers: shared hit=5146\n -> Seq Scan on nomsad na \n(cost=0.00..1592.24 rows=83924 width=41) (actual time=0.012..14.855 \nrows=83924 loops=1)\n Buffers: shared hit=753\n -> Hash (cost=550.32..550.32 rows=1 \nwidth=2856) (actual time=14.905..14.909 rows=1453 loops=1)\n Buckets: 2048 (originally 1024) \nBatches: 1 (originally 1) Memory Usage: 521kB\n Buffers: shared hit=4393\n -> Nested Loop \n(cost=278.71..550.32 rows=1 width=2856) (actual time=2.513..13.598 \nrows=1453 loops=1)\n Buffers: shared hit=4393\n -> Hash Join \n(cost=278.41..542.54 rows=1 width=12) (actual time=2.483..4.219 \nrows=1453 loops=1)\n Hash Cond: (og.idoeu \n= og_1.idoeu)\n Buffers: shared hit=34\n -> Bitmap Heap Scan \non oegroupes og (cost=5.01..268.94 rows=76 width=8) (actual \ntime=0.171..0.573 rows=1453 loops=1)\n Recheck Cond: \n(idgroupe = 4470)\n Heap Blocks: \nexact=10\n Buffers: shared \nhit=17\n -> Bitmap Index \nScan on ix_oegroupes_idgr_idoeu_unique2 (cost=0.00..4.99 rows=76 \nwidth=0) (actual time=0.150..0.150 rows=1453 loops=1)\nIndex Cond: (idgroupe = 4470)\nBuffers: shared hit=7\n -> Hash 
\n(cost=272.45..272.45 rows=76 width=4) (actual time=2.303..2.305 \nrows=1453 loops=1)\n Buckets: 2048 \n(originally 1024) Batches: 1 (originally 1) Memory Usage: 68kB\n Buffers: shared \nhit=17\n -> Unique \n(cost=271.31..271.69 rows=76 width=4) (actual time=0.800..1.855 \nrows=1453 loops=1)\nBuffers: shared hit=17\n-> Sort (cost=271.31..271.50 rows=76 width=4) (actual \ntime=0.798..1.069 rows=1453 loops=1)\n \nSort Key: og_1.idoeu\n \nSort Method: quicksort Memory: 49kB\nBuffers: shared hit=17\n-> Bitmap Heap Scan on oegroupes og_1 (cost=5.01..268.94 rows=76 \nwidth=4) (actual time=0.128..0.572 rows=1453 loops=1)\nRecheck Cond: (idgroupe = 4470)\nHeap Blocks: exact=10\nBuffers: shared hit=17\n-> Bitmap Index Scan on ix_oegroupes_idgr_idoeu_unique2 \n(cost=0.00..4.99 rows=76 width=0) (actual time=0.113..0.113 rows=1453 \nloops=1)\nIndex Cond: (idgroupe = 4470)\nBuffers: shared hit=7\n -> Index Scan using \noeu_pkey on oeu o (cost=0.29..7.78 rows=1 width=2848) (actual \ntime=0.005..0.005 rows=1 loops=1453)\n Index Cond: (idoeu = \nog_1.idoeu)\n Buffers: shared hit=4359\n -> Index Scan using societes_pkey on \nsocietes s (cost=0.14..0.16 rows=1 width=15) (actual time=0.001..0.001 \nrows=1 loops=1453)\n Index Cond: (idsociete = o.idsociete)\n Buffers: shared hit=2906\n -> CTE Scan on withcwrack (cost=0.00..3.16 \nrows=158 width=68) (actual time=0.082..3.122 rows=27905 loops=1453)\n Buffers: shared hit=1672\n -> Result (cost=0.00..0.00 rows=0 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1453)\n One-Time Filter: false\nPlanning:\n Buffers: shared hit=38\nPlanning Time: 2.927 ms\nExecution Time: 8424.587 ms\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 12:38:38 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation with CTEs, switching from PG 11 to PG 15"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 6:39 PM Jean-Christophe Boggio\n<[email protected]> wrote:\n>\n> Hello,\n>\n> I just switched from PG11 to PG15 on our production server (Version is\n> 15.5). Just made a vacuum full analyze on the DB.\n\nNote that \"vacuum full\" is not recommended practice in most\nsituations. Among the downsides, it removes the visibility map, which\nis necessary to allow index-only scans. Plain vacuum should always be\nused except for certain dire situations. Before proceeding further,\nplease perform a plain vacuum on the DB. After that, check if there\nare still problems with your queries.\n\n> Also, adding \"materialized\" to both \"withcwrack\" and \"withcwrack0\" CTEs\n> gets the result in acceptable timings (a few seconds). The problem with\n> this is that we have some clients with older versions of PG and I guess\n> blindly adding the \"materialized\" keyword will cause errors.\n\nYes, meaning 11 and earlier don't recognize that keyword keyword.\n\n> Is there anything I can do to prevent that kind of behaviour ? I'm a\n> little afraid to have to review all the queries in my softwares to keep\n> good performances with PG 15 ? Maybe there's a way to configure the\n> server so that CTEs are materialized by default ?\n\nThere is no such a way. It would be surely be useful for some users to\nhave a way to slowly migrate query plans to new planner versions, but\nthat's not how it works today.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 20:30:28 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation with CTEs, switching from PG 11 to PG 15"
},
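The vacuum advice above can be sketched as follows; `mytable` is a hypothetical table name, not one from the thread:

```sql
-- Plain VACUUM reclaims dead tuples and (re)builds the visibility map,
-- which index-only scans rely on; ANALYZE refreshes planner statistics.
VACUUM (ANALYZE, VERBOSE) mytable;

-- VACUUM FULL rewrites the table and resets its visibility map, so
-- index-only scans degrade until a subsequent plain VACUUM rebuilds it.
-- Reserve it for severe bloat:
-- VACUUM FULL mytable;
```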
{
"msg_contents": "John,\n\nLe 22/11/2023 à 14:30, John Naylor a écrit :\n> Note that \"vacuum full\" is not recommended practice in most > situations. Among the downsides, it removes the visibility map, > \nwhich is necessary to allow index-only scans. Plain vacuum should > \nalways be used except for certain dire situations. Before proceeding > \nfurther, please perform a plain vacuum on the DB. After that, check > if \nthere are still problems with your queries.\nDid both VACUUM ANALYZE and VACUUM (which one did you recommend \nexactly?) and things go much faster now, thanks a lot. I will also check \nwhy autovacuum did not do its job.\n\n>> Is there anything I can do to prevent that kind of behaviour ? I'm >> a little afraid to have to review all the queries in my softwares \n >> to keep good performances with PG 15 ? Maybe there's a way to >> \nconfigure the server so that CTEs are materialized by default ? > > \nThere is no such a way. It would be surely be useful for some users > to \nhave a way to slowly migrate query plans to new planner versions, > but \nthat's not how it works today.\nThanks for your input so I know I did not miss a parameter. And yes, \nthat would be handy.\n\nBest regards,\n\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:48:07 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation with CTEs, switching from PG 11 to PG 15"
},
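The autovacuum check the poster mentions can be done against the standard statistics view; a minimal sketch (not from the original messages):

```sql
-- When did (auto)vacuum last touch each table, and how much churn has
-- accumulated since? Tables with many dead tuples and a NULL
-- last_autovacuum are candidates for autovacuum tuning.
SELECT relname,
       n_dead_tup,
       n_mod_since_analyze,
       last_vacuum,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```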
{
"msg_contents": "\n\nAm 22.11.23 um 12:38 schrieb Jean-Christophe Boggio:\n>\n>\n> Also, adding \"materialized\" to both \"withcwrack\" and \"withcwrack0\" \n> CTEs gets the result in acceptable timings (a few seconds). The \n> problem with this is that we have some clients with older versions of \n> PG and I guess blindly adding the \"materialized\" keyword will cause \n> errors.\n>\n\nyeah, prior to 11 CTEs are a optimizer barrier. You can try to rewrite \nthe queries to not using CTEs - or upgrade. If i were you i would upgrade.\n\n\nRegards, Andreas\n\n-- \nAndreas Kretschmer - currently still (garden leave)\nTechnical Account Manager (TAM)\nwww.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 15:25:11 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation with CTEs, switching from PG 11 to PG 15"
},
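For reference, the PG 12+ syntax under discussion looks like this; the CTE name echoes the thread, but `some_table` and the column list are illustrative:

```sql
-- Force the pre-12 behaviour: evaluate the CTE once into a tuplestore
-- (an optimizer barrier), then scan that result.
WITH withcwrack AS MATERIALIZED (
    SELECT idoeu, count(*) AS cwrackok
    FROM some_table
    GROUP BY idoeu
)
SELECT * FROM withcwrack;

-- Explicitly allow inlining (the PG 12+ default when the CTE is
-- side-effect free and referenced only once):
-- WITH withcwrack AS NOT MATERIALIZED ( ... )
```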
{
"msg_contents": "Andreas,\n\nLe 22/11/2023 à 15:25, Andreas Kretschmer a écrit :\n> Am 22.11.23 um 12:38 schrieb Jean-Christophe Boggio: >> Also, adding \"materialized\" to both \"withcwrack\" and \"withcwrack0\" \n >> CTEs gets the result in acceptable timings (a few seconds). The >> \nproblem with this is that we have some clients with older versions >> of \nPG and I guess blindly adding the \"materialized\" keyword will >> cause \nerrors. > yeah, prior to 11 CTEs are a optimizer barrier. You can try to \n > rewrite the queries to not using CTEs - or upgrade. If i were you i > \nwould upgrade.\nI did upgrade :-) But we have many users for which we don't decide on \nwhen they do upgrade so we have to keep compatibility with most versions \nof PG and in that particular case (non-existence of the materialized \nkeyword for PG 11 and before) it is a real problem.\n\nBest regards,\n\nJC\n\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:58:01 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation with CTEs, switching from PG 11 to PG 15"
},
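One way to keep a single code base across server versions is to branch client-side on the server version and only emit the keyword on 12+; a minimal sketch (the alias is hypothetical):

```sql
-- server_version_num is e.g. 150005 for 15.5; the MATERIALIZED keyword
-- is accepted from 120000 (PG 12) onward, so an application can test
-- this once per connection and build its query text conditionally.
SELECT current_setting('server_version_num')::int >= 120000
       AS supports_materialized_cte;
```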
{
"msg_contents": "Jean-Christophe Boggio <[email protected]> writes:\n> I did upgrade :-) But we have many users for which we don't decide on \n> when they do upgrade so we have to keep compatibility with most versions \n> of PG and in that particular case (non-existence of the materialized \n> keyword for PG 11 and before) it is a real problem.\n\nPG 11 is out of support as of earlier this month, so your users really\nneed to be prioritizing getting onto more modern versions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Nov 2023 11:07:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation with CTEs, switching from PG 11 to PG 15"
}
] |
[
{
"msg_contents": "Hello,\n\nI am trying to optimize a complex query and while doing some explains, I \nstumbled upon this :\n\n CTE cfg\n -> Result (cost=2.02..2.03 rows=1 width=25) (actual \ntime=7167.478..7167.481 rows=1 loops=1)\n Buffers: shared hit=2\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1.01 rows=1 width=1) (actual \ntime=0.058..0.058 rows=1 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on config (cost=0.00..1.01 rows=1 \nwidth=1) (actual time=0.024..0.024 rows=1 loops=1)\n Buffers: shared hit=1\n InitPlan 2 (returns $1)\n -> Limit (cost=0.00..1.01 rows=1 width=4) (actual \ntime=0.003..0.004 rows=1 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on config config_1 (cost=0.00..1.01 \nrows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Buffers: shared hit=1\n\nThe CTE query is this:\n\nWITH cfg AS (\n SELECT\n (SELECT multidevise FROM config LIMIT 1) AS p_multidevise\n ,(SELECT monnaie FROM config LIMIT 1) AS p_defaultdevise\n ,:datedu::DATE AS p_datedu\n ,:dateau::DATE AS p_dateau\n)\n\nTable config table only has one row. :datedu and :dateau are named params.\n\nHow can this take 7 seconds?\n\nI am creating this CTE at the start of the query and CROSS JOIN it all \nalong the query. Is it a bad practice to do so? 
Are these 7 seconds an \nartefact?\n\nAlso, when that cfg CTE is being used, sometimes it uses close to nothing:\n\n-> CTE Scan on cfg cfg_3 (cost=0.00..0.02 rows=1 width=4) (actual \ntime=0.000..0.001 rows=1 loops=1)\n\nAnd sometimes it takes 7 seconds ?!\n\n-> CTE Scan on cfg cfg_7 (cost=0.00..0.02 rows=1 width=16) (actual \ntime=7167.481..7167.482 rows=1 loops=1)\n\nThis really looks like an artefact (maybe in relation to the JIT compiler?)\n\nThanks for your enlightenments.\n\nJC\n\n\nHere's the full EXPLAIN PLAN:\n\nSort (cost=3837999522.01..3838152992.01 rows=61388000 width=1454) \n(actual time=117437.996..117438.093 rows=492 loops=1)\n Sort Key: s.nom, cl.name, a.nom\n Sort Method: quicksort Memory: 251kB\n Buffers: shared hit=71920\n CTE cfg\n -> Result (cost=2.02..2.03 rows=1 width=25) (actual \ntime=7167.478..7167.481 rows=1 loops=1)\n Buffers: shared hit=2\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1.01 rows=1 width=1) (actual \ntime=0.058..0.058 rows=1 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on config (cost=0.00..1.01 rows=1 \nwidth=1) (actual time=0.024..0.024 rows=1 loops=1)\n Buffers: shared hit=1\n InitPlan 2 (returns $1)\n -> Limit (cost=0.00..1.01 rows=1 width=4) (actual \ntime=0.003..0.004 rows=1 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on config config_1 (cost=0.00..1.01 \nrows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Buffers: shared hit=1\n CTE daz_adinroy\n -> HashAggregate (cost=209985.73..241521.32 rows=3153559 \nwidth=12) (never executed)\n Group Key: ra.idad, ra.idoeu, (COALESCE(ra.controllingsoc, \no.idsociete))\n -> Append (cost=786.61..186334.04 rows=3153559 width=12) \n(never executed)\n -> HashAggregate (cost=786.61..806.47 rows=1986 \nwidth=12) (never executed)\n Group Key: ra.idad, ra.idoeu, \nCOALESCE(ra.controllingsoc, o.idsociete)\n -> Nested Loop (cost=0.43..771.72 rows=1986 \nwidth=12) (never executed)\n -> Seq Scan on royaltiesad ra \n(cost=0.00..50.86 rows=1986 width=12) (never executed)\n -> 
Memoize (cost=0.43..5.34 rows=1 \nwidth=8) (never executed)\n Cache Key: ra.idoeu\n Cache Mode: logical\n -> Index Scan using oeu_pkey on oeu \no (cost=0.42..5.33 rows=1 width=8) (never executed)\n Index Cond: (idoeu = ra.idoeu)\n -> HashAggregate (cost=106708.45..138224.18 \nrows=3151573 width=12) (never executed)\n Group Key: ra_1.idad, o_1.idoeu, \nCOALESCE(ra_1.controllingsoc, a_1.idsociete, o_1.idsociete)\n -> Hash Join (cost=14973.25..83071.65 \nrows=3151573 width=12) (never executed)\n Hash Cond: (ra_1.idagreement = a_1.idagreement)\n -> Hash Join (cost=14923.91..80302.56 \nrows=1033462 width=24) (never executed)\n Hash Cond: (og.idoeu = o_1.idoeu)\n -> Hash Join (cost=264.24..62930.01 \nrows=1033462 width=20) (never executed)\n Hash Cond: (og.idgroupe = \ng.idgroupe)\n -> Seq Scan on oegroupes og \n(cost=0.00..45363.20 rows=1858120 width=8) (never executed)\n -> Hash (cost=248.34..248.34 \nrows=1272 width=20) (never executed)\n -> Hash Join \n(cost=85.46..248.34 rows=1272 width=20) (never executed)\n Hash Cond: \n(ra_1.idagreement = g.idagreement)\n -> Seq Scan on \nroyaltiesad ra_1 (cost=0.00..50.86 rows=1986 width=12) (never executed)\n -> Hash \n(cost=56.87..56.87 rows=2287 width=8) (never executed)\n -> Seq Scan \non groupes g (cost=0.00..56.87 rows=2287 width=8) (never executed)\n -> Hash (cost=11666.52..11666.52 \nrows=239452 width=8) (never executed)\n -> Seq Scan on oeu o_1 \n(cost=0.00..11666.52 rows=239452 width=8) (never executed)\n -> Hash (cost=34.71..34.71 rows=1171 \nwidth=8) (never executed)\n -> Seq Scan on agreements a_1 \n(cost=0.00..34.71 rows=1171 width=8) (never executed)\n CTE currentsoldes2\n -> Subquery Scan on currentsoldes (cost=1197.97..1301.70 rows=23 \nwidth=27) (actual time=17.699..18.476 rows=1767 loops=1)\n Filter: (currentsoldes.rang = 1)\n Buffers: shared hit=304\n -> HashAggregate (cost=1197.97..1244.07 rows=4610 width=39) \n(actual time=17.695..18.209 rows=1767 loops=1)\n Group Key: soldes_2.idad, soldes_2.idsociete, 
rank() \nOVER (?), soldes_2.newbalance, soldes_2.postponed_gross_master, \nCOALESCE(soldes_2.laststatementnet, '0'::double precision)\n Batches: 1 Memory Usage: 473kB\n Buffers: shared hit=304\n -> WindowAgg (cost=997.16..1117.65 rows=5355 \nwidth=39) (actual time=10.749..16.554 rows=1767 loops=1)\n Run Condition: (rank() OVER (?) <= 1)\n Buffers: shared hit=304\n -> Sort (cost=997.16..1010.55 rows=5355 \nwidth=31) (actual time=10.712..11.828 rows=10412 loops=1)\n Sort Key: soldes_2.idad, \nsoldes_2.idsociete, (COALESCE(soldes_2.date_closingperiod, \n'1900-01-01'::date)) DESC\n Sort Method: quicksort Memory: 1117kB\n Buffers: shared hit=304\n -> Nested Loop (cost=0.00..665.50 \nrows=5355 width=31) (actual time=0.017..5.694 rows=10412 loops=1)\n Join Filter: \n((soldes_2.date_closingperiod < cfg_3.p_datedu) OR \n(soldes_2.date_closingperiod IS NULL))\n Rows Removed by Join Filter: 5654\n Buffers: shared hit=304\n -> CTE Scan on cfg cfg_3 \n(cost=0.00..0.02 rows=1 width=4) (actual time=0.000..0.003 rows=1 loops=1)\n -> Seq Scan on soldes soldes_2 \n(cost=0.00..464.66 rows=16066 width=31) (actual time=0.009..2.751 \nrows=16066 loops=1)\n Buffers: shared hit=304\n CTE detailcalcul\n -> Append (cost=68592.14..497586.31 rows=2597561 width=248) \n(actual time=1639.872..11346.625 rows=2598337 loops=1)\n Buffers: shared hit=55228\n\" -> Subquery Scan on \"\"*SELECT* 1_1\"\" \n(cost=68592.14..484569.77 rows=2597528 width=236) (actual \ntime=1639.870..11004.550 rows=2598255 loops=1)\"\n Buffers: shared hit=55217\n -> Hash Join (cost=68592.14..445606.85 rows=2597528 \nwidth=228) (actual time=1639.867..10540.052 rows=2598255 loops=1)\n Hash Cond: (q.idzdroits = d.idzdroits)\n Buffers: shared hit=55217\n -> Seq Scan on zquoteparts q \n(cost=0.00..68558.26 rows=2597528 width=28) (actual time=40.897..766.174 \nrows=2598255 loops=1)\n Filter: (selectedqp > '0'::double precision)\n Rows Removed by Filter: 103467\n Buffers: shared hit=34784\n -> Hash (cost=55214.69..55214.69 
rows=1070196 \nwidth=45) (actual time=1595.683..1595.686 rows=1070196 loops=1)\n Buckets: 2097152 Batches: 1 Memory Usage: \n75956kB\n Buffers: shared hit=20433\n -> Hash Left Join (cost=1.02..55214.69 \nrows=1070196 width=45) (actual time=0.069..1275.813 rows=1070196 loops=1)\n Hash Cond: ((upper((d.devise)::text) \n= upper((p_1.code)::text)) AND (upper((CASE WHEN cfg_4.p_multidevise \nTHEN d.devise ELSE cfg_4.p_defaultdevise END)::text) = \nupper((p_1.codedest)::text)))\n Buffers: shared hit=20433\n -> Nested Loop (cost=0.00..41835.94 \nrows=1070196 width=37) (actual time=0.030..359.845 rows=1070196 loops=1)\n Buffers: shared hit=20432\n -> CTE Scan on cfg cfg_4 \n(cost=0.00..0.02 rows=1 width=17) (actual time=0.003..0.020 rows=1 loops=1)\n -> Seq Scan on zdroits d \n(cost=0.00..31133.96 rows=1070196 width=20) (actual time=0.016..120.020 \nrows=1070196 loops=1)\n Buffers: shared hit=20432\n -> Hash (cost=1.01..1.01 rows=1 \nwidth=16) (actual time=0.023..0.023 rows=1 loops=1)\n Buckets: 1024 Batches: 1 \nMemory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on parites p_1 \n(cost=0.00..1.01 rows=1 width=16) (actual time=0.008..0.009 rows=1 loops=1)\n Buffers: shared hit=1\n\" -> Subquery Scan on \"\"*SELECT* 2_1\"\" (cost=26.90..28.74 \nrows=33 width=232) (actual time=0.322..0.395 rows=82 loops=1)\"\n Buffers: shared hit=11\n -> Nested Loop (cost=26.90..27.91 rows=33 width=64) \n(actual time=0.319..0.356 rows=82 loops=1)\n Buffers: shared hit=11\n -> CTE Scan on cfg cfg_5 (cost=0.00..0.02 \nrows=1 width=16) (actual time=0.001..0.002 rows=1 loops=1)\n -> HashAggregate (cost=26.90..27.23 rows=33 \nwidth=24) (actual time=0.314..0.334 rows=82 loops=1)\n Group Key: ce.idsociete, ce.idad\n Batches: 1 Memory Usage: 32kB\n Buffers: shared hit=11\n -> Nested Loop (cost=0.00..26.57 rows=33 \nwidth=17) (actual time=0.183..0.262 rows=151 loops=1)\n Join Filter: ((ce.datecredit >= \ncfg_6.p_datedu) AND (ce.datecredit <= cfg_6.p_dateau))\n Rows Removed by Join Filter: 738\n 
Buffers: shared hit=11\n -> CTE Scan on cfg cfg_6 \n(cost=0.00..0.02 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=1)\n -> Seq Scan on creditsex ce \n(cost=0.00..22.11 rows=296 width=21) (actual time=0.031..0.180 rows=889 \nloops=1)\n Filter: (COALESCE(idcredextype, \n0) < 1000)\n Buffers: shared hit=11\n CTE result1\n -> WindowAgg (cost=3780661766.78..3780662817.80 rows=32339 \nwidth=259) (actual time=117198.780..117199.680 rows=742 loops=1)\n Buffers: shared hit=58377\n -> HashAggregate (cost=3780661766.78..3780662090.17 \nrows=32339 width=251) (actual time=117198.771..117199.261 rows=742 loops=1)\n\" Group Key: \"\"*SELECT* 1_2\"\".idad, ((\"\"*SELECT* \n1_2\"\".collecte)::numeric), ((\"\"*SELECT* \n1_2\"\".collectepondere)::numeric), ((\"\"*SELECT* 1_2\"\".droits)::numeric), \n((\"\"*SELECT* 1_2\"\".droitsmaster)::numeric), ((\"\"*SELECT* \n1_2\"\".droitsdep)::numeric), ((\"\"*SELECT* 1_2\"\".droitsdrm)::numeric), \n((\"\"*SELECT* 1_2\"\".credex)::double precision), ((\"\"*SELECT* \n1_2\"\".avances)::double precision), \"\"*SELECT* 1_2\"\".idsoc, \"\"*SELECT* \n1_2\"\".devise, (false), (false), (false)\"\n Batches: 1 Memory Usage: 1705kB\n Buffers: shared hit=58377\n -> Append (cost=118530.92..3780660634.92 rows=32339 \nwidth=251) (actual time=21679.748..117196.878 rows=742 loops=1)\n Buffers: shared hit=58377\n\" -> Subquery Scan on \"\"*SELECT* 1_2\"\" \n(cost=118530.92..118778.04 rows=16 width=235) (actual \ntime=21679.746..21681.827 rows=532 loops=1)\"\n Buffers: shared hit=58193\n -> Nested Loop (cost=118530.92..118777.56 \nrows=16 width=59) (actual time=21679.743..21681.564 rows=532 loops=1)\n Buffers: shared hit=58193\n -> Subquery Scan on dc \n(cost=118530.63..118532.03 rows=31 width=56) (actual \ntime=21679.708..21680.034 rows=540 loops=1)\n Filter: (NOT (hashed SubPlan 7))\n Rows Removed by Filter: 2\n Buffers: shared hit=56573\n -> HashAggregate \n(cost=118529.62..118530.24 rows=62 width=56) (actual \ntime=21639.945..21640.121 rows=542 
loops=1)\n Group Key: a_3.idad, \na_3.forcedecomptesoc, cfg_7.p_defaultdevise, (0), (0), (0), (0), (0), \n(0), (0), (0)\n Batches: 1 Memory Usage: \n129kB\n Buffers: shared hit=56572\n -> Append \n(cost=58445.12..118527.92 rows=62 width=56) (actual \ntime=20816.522..21639.564 rows=556 loops=1)\n Buffers: shared \nhit=56572\n -> Nested Loop \n(cost=58445.12..60081.10 rows=51 width=56) (actual \ntime=20816.521..20826.515 rows=20 loops=1)\n Buffers: \nshared hit=56268\n -> CTE Scan \non cfg cfg_7 (cost=0.00..0.02 rows=1 width=16) (actual \ntime=7167.481..7167.482 rows=1 loops=1)\nBuffers: shared hit=2\n -> Seq Scan \non ad a_3 (cost=58445.12..60080.57 rows=51 width=8) (actual \ntime=13649.036..13659.022 rows=20 loops=1)\nFilter: ((forcedecomptesoc IS NOT NULL) AND (NOT (hashed SubPlan 8)))\n Rows \nRemoved by Filter: 47779\nBuffers: shared hit=56266\nSubPlan 8\n-> CTE Scan on detailcalcul (cost=0.00..51951.22 rows=2597561 width=4) \n(actual time=1639.877..13071.959 rows=2598337 loops=1)\nBuffers: shared hit=55228\n -> Nested Loop \n(cost=58445.12..58445.88 rows=11 width=56) (actual time=811.132..812.981 \nrows=536 loops=1)\n Buffers: \nshared hit=304\n -> CTE Scan \non cfg cfg_8 (cost=0.00..0.02 rows=1 width=16) (actual \ntime=0.000..0.002 rows=1 loops=1)\n -> CTE Scan \non currentsoldes2 cs (cost=58445.12..58445.75 rows=11 width=8) (actual \ntime=811.127..812.898 rows=536 loops=1)\nFilter: ((NOT (hashed SubPlan 9)) AND ((newbalance > '0'::double \nprecision) OR (laststatementnet <> '0'::double precision)))\n Rows \nRemoved by Filter: 1231\nBuffers: shared hit=304\nSubPlan 9\n-> CTE Scan on detailcalcul detailcalcul_1 (cost=0.00..51951.22 \nrows=2597561 width=4) (actual time=39.106..363.440 rows=2598337 loops=1)\n SubPlan 7\n -> Seq Scan on \ndeceasedadlinks (cost=0.00..1.01 rows=1 width=4) (actual \ntime=38.904..38.906 rows=1 loops=1)\n Buffers: shared hit=1\n -> Index Scan using ad_pkey on ad \na_2 (cost=0.29..7.92 rows=1 width=4) (actual time=0.002..0.002 rows=1 
\nloops=540)\n Index Cond: (idad = dc.idad)\n Filter: (NOT \nCOALESCE(isgroupead, false))\n Rows Removed by Filter: 0\n Buffers: shared hit=1620\n -> Subquery Scan on p_2 \n(cost=63302.48..3628482823.36 rows=31024 width=251) (actual \ntime=13645.266..93288.857 rows=209 loops=1)\n Buffers: shared hit=180\n -> HashAggregate (cost=63302.48..64078.08 \nrows=31024 width=248) (actual time=13273.998..13275.661 rows=209 loops=1)\n Group Key: a_4.idad, dc_1.idsoc, \ndc_1.devise\n Batches: 1 Memory Usage: 2065kB\n Buffers: shared hit=180\n -> Hash Join (cost=432.55..62449.32 \nrows=31024 width=248) (actual time=7.477..3304.431 rows=19190470 loops=1)\n Hash Cond: (dc_1.idad = \ngl.idadmembre)\n Buffers: shared hit=180\n -> CTE Scan on detailcalcul \ndc_1 (cost=0.00..51951.22 rows=2597561 width=248) (actual \ntime=0.001..349.551 rows=2598337 loops=1)\n -> Hash (cost=432.41..432.41 \nrows=11 width=8) (actual time=7.463..7.465 rows=9850 loops=1)\n Buckets: 16384 \n(originally 1024) Batches: 1 (originally 1) Memory Usage: 513kB\n Buffers: shared hit=180\n -> Nested Loop \n(cost=0.30..432.41 rows=11 width=8) (actual time=0.029..5.794 rows=9850 \nloops=1)\n Buffers: shared hit=180\n -> Seq Scan on \ngroupesadlink gl (cost=0.00..152.50 rows=9850 width=8) (actual \ntime=0.011..1.113 rows=9850 loops=1)\n Buffers: \nshared hit=54\n -> Memoize \n(cost=0.30..0.79 rows=1 width=4) (actual time=0.000..0.000 rows=1 \nloops=9850)\n Cache Key: \ngl.idadgroupe\n Cache Mode: \nlogical\n Hits: 9808 \nMisses: 42 Evictions: 0 Overflows: 0 Memory Usage: 5kB\n Buffers: \nshared hit=126\n -> Index Scan \nusing ad_pkey on ad a_4 (cost=0.29..0.78 rows=1 width=4) (actual \ntime=0.003..0.003 rows=1 loops=42)\nIndex Cond: (idad = gl.idadgroupe)\nFilter: isgroupead\nBuffers: shared hit=126\n SubPlan 10\n -> Aggregate (cost=58477.59..58477.60 \nrows=1 width=8) (actual time=188.655..188.655 rows=1 loops=209)\n -> CTE Scan on detailcalcul tdc \n(cost=0.00..58445.12 rows=12988 width=8) (actual 
time=188.651..188.651 \nrows=0 loops=209)\n Filter: (idad = p_2.idad)\n Rows Removed by Filter: 2598337\n SubPlan 11\n -> Aggregate (cost=58477.59..58477.60 \nrows=1 width=8) (actual time=194.170..194.170 rows=1 loops=209)\n -> CTE Scan on detailcalcul tdc_1 \n(cost=0.00..58445.12 rows=12988 width=8) (actual time=194.166..194.166 \nrows=0 loops=209)\n Filter: (idad = p_2.idad)\n Rows Removed by Filter: 2598337\n -> Subquery Scan on p_3 \n(cost=133539.19..152058548.58 rows=1299 width=251) (actual \ntime=2225.769..2225.777 rows=1 loops=1)\n Buffers: shared hit=4\n -> GroupAggregate \n(cost=133539.19..133711.31 rows=1299 width=297) (actual \ntime=1864.488..1864.493 rows=1 loops=1)\n Group Key: a_5.idad, dc_2.idsoc, \ndc_2.devise, dal.is_heir\n Buffers: shared hit=4\n -> Sort (cost=133539.19..133542.44 \nrows=1299 width=257) (actual time=1864.454..1864.458 rows=1 loops=1)\n Sort Key: a_5.idad, dc_2.idsoc, \ndc_2.devise, dal.is_heir\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=4\n -> Hash Join \n(cost=123393.48..133472.01 rows=1299 width=257) (actual \ntime=1862.196..1864.432 rows=1 loops=1)\n Hash Cond: (dc_2.idad = \ndal.idaddeceased)\n Buffers: shared hit=4\n -> HashAggregate \n(cost=123384.15..129878.05 rows=259756 width=248) (actual \ntime=1861.461..1864.170 rows=1275 loops=1)\n Group Key: \ndc_2.idad, dc_2.idsoc, dc_2.devise\n Batches: 1 Memory \nUsage: 16401kB\n -> CTE Scan on \ndetailcalcul dc_2 (cost=0.00..51951.22 rows=2597561 width=248) (actual \ntime=0.001..294.781 rows=2598337 loops=1)\n -> Hash (cost=9.32..9.32 \nrows=1 width=17) (actual time=0.085..0.087 rows=1 loops=1)\n Buckets: 1024 \nBatches: 1 Memory Usage: 9kB\n Buffers: shared hit=4\n -> Nested Loop \n(cost=0.29..9.32 rows=1 width=17) (actual time=0.080..0.082 rows=1 loops=1)\n Buffers: \nshared hit=4\n -> Seq Scan \non deceasedadlinks dal (cost=0.00..1.01 rows=1 width=17) (actual \ntime=0.031..0.032 rows=1 loops=1)\nBuffers: shared hit=1\n -> Index Only \nScan using ad_pkey on ad 
a_5 (cost=0.29..8.31 rows=1 width=4) (actual \ntime=0.029..0.029 rows=1 loops=1)\nIndex Cond: (idad = dal.idadalive)\n Heap \nFetches: 1\nBuffers: shared hit=3\n SubPlan 12\n -> Aggregate (cost=58477.59..58477.60 \nrows=1 width=8) (actual time=176.146..176.147 rows=1 loops=1)\n -> CTE Scan on detailcalcul tdc_2 \n(cost=0.00..58445.12 rows=12988 width=8) (actual time=176.130..176.130 \nrows=0 loops=1)\n Filter: (idad = p_3.idad)\n Rows Removed by Filter: 2598337\n SubPlan 13\n -> Aggregate (cost=58477.59..58477.60 \nrows=1 width=8) (actual time=185.099..185.100 rows=1 loops=1)\n -> CTE Scan on detailcalcul tdc_3 \n(cost=0.00..58445.12 rows=12988 width=8) (actual time=185.084..185.084 \nrows=0 loops=1)\n Filter: (idad = p_3.idad)\n Rows Removed by Filter: 2598337\n CTE allannuaires\n -> HashAggregate (cost=4874.10..4876.86 rows=276 width=40) \n(actual time=5.365..5.479 rows=513 loops=1)\n Group Key: r_1.idad, (NULL::integer), (array_agg(DISTINCT \nlaa.idannuaire))\n Batches: 1 Memory Usage: 105kB\n Buffers: shared hit=3233\n -> Append (cost=3069.41..4872.03 rows=276 width=40) (actual \ntime=1.434..5.119 rows=513 loops=1)\n Buffers: shared hit=3233\n -> GroupAggregate (cost=3069.41..4023.14 rows=200 \nwidth=40) (actual time=1.434..4.505 rows=509 loops=1)\n Group Key: r_1.idad, NULL::integer\n Buffers: shared hit=3215\n -> Merge Join (cost=3069.41..3769.62 rows=33469 \nwidth=16) (actual time=1.380..3.651 rows=532 loops=1)\n Merge Cond: (laa.idacteur = r_1.idad)\n Buffers: shared hit=3215\n -> Index Scan using \nliensacteursannuaire_idacteur on liensacteursannuaire laa \n(cost=0.28..188.46 rows=3997 width=12) (actual time=0.012..1.606 \nrows=3960 loops=1)\n Buffers: shared hit=3215\n -> Sort (cost=3069.13..3149.98 rows=32339 \nwidth=4) (actual time=1.325..1.384 rows=747 loops=1)\n Sort Key: r_1.idad\n Sort Method: quicksort Memory: 25kB\n -> CTE Scan on result1 r_1 \n(cost=0.00..646.78 rows=32339 width=4) (actual time=0.002..1.223 \nrows=742 loops=1)\n -> GroupAggregate 
(cost=835.43..844.75 rows=76 \nwidth=40) (actual time=0.467..0.544 rows=4 loops=1)\n Group Key: NULL::integer, s_1.idsociete\n Buffers: shared hit=18\n -> Sort (cost=835.43..837.52 rows=837 width=16) \n(actual time=0.445..0.470 rows=289 loops=1)\n Sort Key: s_1.idsociete\n Sort Method: quicksort Memory: 40kB\n Buffers: shared hit=18\n -> Hash Join (cost=18.66..794.79 rows=837 \nwidth=16) (actual time=0.133..0.398 rows=289 loops=1)\n Hash Cond: (r_2.idsoc = s_1.idsociete)\n Buffers: shared hit=18\n -> CTE Scan on result1 r_2 \n(cost=0.00..646.78 rows=32339 width=4) (actual time=0.000..0.100 \nrows=742 loops=1)\n -> Hash (cost=18.60..18.60 rows=5 \nwidth=12) (actual time=0.121..0.123 rows=8 loops=1)\n Buckets: 1024 Batches: 1 \nMemory Usage: 9kB\n Buffers: shared hit=18\n -> Nested Loop \n(cost=0.29..18.60 rows=5 width=12) (actual time=0.074..0.116 rows=8 loops=1)\n Buffers: shared hit=18\n -> Seq Scan on societes \ns_1 (cost=0.00..2.76 rows=76 width=8) (actual time=0.007..0.016 rows=77 \nloops=1)\n Buffers: shared hit=2\n -> Memoize \n(cost=0.29..2.36 rows=1 width=12) (actual time=0.001..0.001 rows=0 loops=77)\n Cache Key: \ns_1.idactorsolorealm\n Cache Mode: logical\n Hits: 71 Misses: 6 \nEvictions: 0 Overflows: 0 Memory Usage: 1kB\n Buffers: shared hit=16\n -> Index Scan \nusing liensacteursannuaire_idacteur on liensacteursannuaire laa_1 \n(cost=0.28..2.35 rows=1 width=12) (actual time=0.005..0.005 rows=1 loops=6)\n Index Cond: \n(idacteur = s_1.idactorsolorealm)\n Buffers: \nshared hit=16\n -> Nested Loop Left Join (cost=4941.07..9833311.11 rows=61388000 \nwidth=1454) (actual time=117307.683..117434.084 rows=492 loops=1)\n Buffers: shared hit=71920\n -> Hash Left Join (cost=4940.78..11230.83 rows=61388 \nwidth=526) (actual time=117307.188..117323.931 rows=492 loops=1)\n Hash Cond: ((a.idad = ap.idad) AND (s.idsociete = \nap.idsociete))\n Buffers: shared hit=66286\n -> Hash Left Join (cost=3589.51..9557.27 rows=61388 \nwidth=518) (actual 
time=117289.523..117305.951 rows=492 loops=1)\n Hash Cond: (a.idclient = cl.idclient)\n Buffers: shared hit=65963\n -> Hash Left Join (cost=3588.44..9394.89 \nrows=61388 width=505) (actual time=117289.502..117305.645 rows=492 loops=1)\n Hash Cond: (r.idsoc = aa2.idsociete)\n Buffers: shared hit=65962\n -> Hash Left Join (cost=3579.47..7103.89 \nrows=44484 width=473) (actual time=117289.389..117305.239 rows=492 loops=1)\n Hash Cond: (r.idad = aa.idad)\n Buffers: shared hit=65962\n -> Hash Left Join \n(cost=3570.50..5441.27 rows=32235 width=441) (actual \ntime=117283.600..117299.104 rows=492 loops=1)\n Hash Cond: ((a.idad = x.idad) AND \n(s.idsociete = x.idsociete))\n Buffers: shared hit=62729\n -> Hash Left Join \n(cost=2268.46..3816.86 rows=32235 width=419) (actual \ntime=117262.207..117277.319 rows=490 loops=1)\n Hash Cond: ((a.idad = \nsl.idad) AND (s.idsociete = sl.idsociete))\n Buffers: shared hit=62425\n -> Hash Left Join \n(cost=2267.65..3574.29 rows=32235 width=385) (actual \ntime=117261.493..117276.208 rows=489 loops=1)\n Hash Cond: (r.idsoc = \ns.idsociete)\n Buffers: shared hit=62425\n -> Hash Left Join \n(cost=2263.94..3484.17 rows=32235 width=362) (actual \ntime=117261.404..117275.751 rows=489 loops=1)\n Hash Cond: \n(idannuaire_main(a.*) = ann.idannuaire)\n Buffers: shared \nhit=62423\n -> Hash Join \n(cost=2111.50..2843.18 rows=32235 width=1596) (actual \ntime=117259.217..117260.097 rows=489 loops=1)\n Hash \nCond: (r.idad = a.idad)\nBuffers: shared hit=59415\n -> CTE \nScan on result1 r (cost=0.00..646.78 rows=32339 width=259) (actual \ntime=117198.783..117198.965 rows=742 loops=1)\nBuffers: shared hit=58377\n -> Hash \n(cost=1515.96..1515.96 rows=47643 width=1337) (actual \ntime=60.315..60.316 rows=47648 loops=1)\nBuckets: 65536 Batches: 1 Memory Usage: 10979kB\nBuffers: shared hit=1038\n-> Seq Scan on ad a (cost=0.00..1515.96 rows=47643 width=1337) (actual \ntime=0.047..46.338 rows=47648 loops=1)\nFilter: calculatestatements\nRows Removed by 
Filter: 151\nBuffers: shared hit=1038\n -> Hash \n(cost=103.31..103.31 rows=3931 width=34) (actual time=1.792..1.792 \nrows=3937 loops=1)\nBuckets: 4096 Batches: 1 Memory Usage: 232kB\nBuffers: shared hit=64\n -> Seq \nScan on annuaire ann (cost=0.00..103.31 rows=3931 width=34) (actual \ntime=0.016..0.927 rows=3937 loops=1)\nBuffers: shared hit=64\n -> Hash \n(cost=2.76..2.76 rows=76 width=23) (actual time=0.079..0.080 rows=77 \nloops=1)\n Buckets: 1024 \nBatches: 1 Memory Usage: 13kB\n Buffers: shared \nhit=2\n -> Seq Scan on \nsocietes s (cost=0.00..2.76 rows=76 width=23) (actual time=0.025..0.052 \nrows=77 loops=1)\nBuffers: shared hit=2\n -> Hash (cost=0.46..0.46 \nrows=23 width=42) (actual time=0.703..0.703 rows=1767 loops=1)\n Buckets: 2048 \n(originally 1024) Batches: 1 (originally 1) Memory Usage: 116kB\n -> CTE Scan on \ncurrentsoldes2 sl (cost=0.00..0.46 rows=23 width=42) (actual \ntime=0.003..0.254 rows=1767 loops=1)\n -> Hash (cost=1301.70..1301.70 \nrows=23 width=30) (actual time=21.382..21.385 rows=2041 loops=1)\n Buckets: 2048 (originally \n1024) Batches: 1 (originally 1) Memory Usage: 104kB\n Buffers: shared hit=304\n -> Subquery Scan on x \n(cost=1197.97..1301.70 rows=23 width=30) (actual time=20.156..20.915 \nrows=2041 loops=1)\n Filter: (x.rang = 1)\n Buffers: shared hit=304\n -> HashAggregate \n(cost=1197.97..1244.07 rows=4610 width=50) (actual time=20.154..20.632 \nrows=2041 loops=1)\n Group Key: \nsoldes.idad, soldes.idsociete, rank() OVER (?), soldes.newbalance, \nsoldes.grossamountearnedexternally, soldes.date_closingperiod\n Batches: 1 \nMemory Usage: 473kB\n Buffers: shared \nhit=304\n -> WindowAgg \n(cost=997.16..1117.65 rows=5355 width=50) (actual time=13.213..19.195 \nrows=2041 loops=1)\n Run \nCondition: (rank() OVER (?) 
<= 1)\nBuffers: shared hit=304\n -> Sort \n(cost=997.16..1010.55 rows=5355 width=42) (actual time=13.200..14.167 \nrows=14598 loops=1)\nSort Key: soldes.idad, soldes.idsociete, \n(COALESCE(soldes.date_closingperiod, '1900-01-01'::date)) DESC\nSort Method: quicksort Memory: 1411kB\nBuffers: shared hit=304\n-> Nested Loop (cost=0.00..665.50 rows=5355 width=42) (actual \ntime=0.014..5.814 rows=14598 loops=1)\nJoin Filter: ((soldes.date_closingperiod <= cfg.p_dateau) OR \n(soldes.date_closingperiod IS NULL))\nRows Removed by Join Filter: 1468\nBuffers: shared hit=304\n-> CTE Scan on cfg (cost=0.00..0.02 rows=1 width=4) (actual \ntime=0.001..0.001 rows=1 loops=1)\n-> Seq Scan on soldes (cost=0.00..464.66 rows=16066 width=38) (actual \ntime=0.008..2.592 rows=16066 loops=1)\nBuffers: shared hit=304\n -> Hash (cost=5.52..5.52 rows=276 \nwidth=36) (actual time=5.779..5.780 rows=509 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 42kB\n Buffers: shared hit=3233\n -> CTE Scan on allannuaires aa \n(cost=0.00..5.52 rows=276 width=36) (actual time=5.368..5.665 rows=513 \nloops=1)\n Buffers: shared hit=3233\n -> Hash (cost=5.52..5.52 rows=276 width=36) \n(actual time=0.105..0.106 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 9kB\n -> CTE Scan on allannuaires aa2 \n(cost=0.00..5.52 rows=276 width=36) (actual time=0.002..0.065 rows=513 \nloops=1)\n -> Hash (cost=1.03..1.03 rows=3 width=17) (actual \ntime=0.012..0.014 rows=3 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on clients cl (cost=0.00..1.03 \nrows=3 width=17) (actual time=0.006..0.007 rows=3 loops=1)\n Buffers: shared hit=1\n -> Hash (cost=1351.26..1351.26 rows=1 width=16) (actual \ntime=17.652..17.658 rows=190 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 17kB\n Buffers: shared hit=323\n -> Subquery Scan on ap (cost=1351.23..1351.26 \nrows=1 width=16) (actual time=17.500..17.618 rows=190 loops=1)\n Buffers: shared hit=323\n -> GroupAggregate 
(cost=1351.23..1351.25 \nrows=1 width=16) (actual time=17.498..17.593 rows=190 loops=1)\n Group Key: p.idad, p.idsociete\n Buffers: shared hit=323\n -> Sort (cost=1351.23..1351.23 rows=1 \nwidth=16) (actual time=17.491..17.510 rows=190 loops=1)\n Sort Key: p.idad, p.idsociete\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=323\n -> Nested Loop \n(cost=1246.18..1351.22 rows=1 width=16) (actual time=16.185..17.447 \nrows=190 loops=1)\n Join Filter: \n(p.dateinperiod <= cfg_1.p_dateau)\n Rows Removed by Join \nFilter: 111\n Buffers: shared hit=323\n -> Hash Join \n(cost=1246.18..1351.19 rows=1 width=20) (actual time=16.174..17.303 \nrows=301 loops=1)\n Hash Cond: ((x_1.idad \n= p.idad) AND (x_1.idsociete = p.idsociete))\n Join Filter: \n(p.dateinperiod > x_1.date_closingperiod)\n Rows Removed by Join \nFilter: 1380\n Buffers: shared hit=323\n -> Subquery Scan on \nx_1 (cost=1184.58..1288.31 rows=23 width=12) (actual \ntime=15.506..16.172 rows=1922 loops=1)\n Filter: \n(x_1.rang = 1)\n Buffers: shared \nhit=304\n -> \nHashAggregate (cost=1184.58..1230.68 rows=4610 width=32) (actual \ntime=15.504..15.928 rows=1922 loops=1)\n Group \nKey: soldes_1.idad, soldes_1.idsociete, rank() OVER (?), \nsoldes_1.newbalance, soldes_1.date_closingperiod\nBatches: 1 Memory Usage: 473kB\nBuffers: shared hit=304\n -> \nWindowAgg (cost=997.16..1117.65 rows=5355 width=32) (actual \ntime=9.371..14.611 rows=1923 loops=1)\nRun Condition: (rank() OVER (?) 
<= 1)\nBuffers: shared hit=304\n-> Sort (cost=997.16..1010.55 rows=5355 width=24) (actual \ntime=9.360..10.177 rows=12988 loops=1)\nSort Key: soldes_1.idad, soldes_1.idsociete, \n(COALESCE(soldes_1.date_closingperiod, '1900-01-01'::date)) DESC\nSort Method: quicksort Memory: 1298kB\nBuffers: shared hit=304\n-> Nested Loop (cost=0.00..665.50 rows=5355 width=24) (actual \ntime=0.008..4.374 rows=12988 loops=1)\nJoin Filter: ((soldes_1.date_closingperiod < cfg_2.p_dateau) OR \n(soldes_1.date_closingperiod IS NULL))\nRows Removed by Join Filter: 3078\nBuffers: shared hit=304\n-> CTE Scan on cfg cfg_2 (cost=0.00..0.02 rows=1 width=4) (actual \ntime=0.000..0.001 rows=1 loops=1)\n-> Seq Scan on soldes soldes_1 (cost=0.00..464.66 rows=16066 width=20) \n(actual time=0.003..1.879 rows=16066 loops=1)\nBuffers: shared hit=304\n -> Hash \n(cost=36.04..36.04 rows=1704 width=20) (actual time=0.643..0.644 \nrows=1704 loops=1)\n Buckets: 2048 \nBatches: 1 Memory Usage: 103kB\n Buffers: shared \nhit=19\n -> Seq Scan on \npayments p (cost=0.00..36.04 rows=1704 width=20) (actual \ntime=0.009..0.248 rows=1704 loops=1)\nBuffers: shared hit=19\n -> CTE Scan on cfg cfg_1 \n(cost=0.00..0.02 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=301)\n -> Function Scan on calculate_net cn (cost=0.29..10.29 \nrows=1000 width=168) (actual time=0.211..0.211 rows=1 loops=492)\n Buffers: shared hit=5634\nPlanning:\n Buffers: shared hit=106\nPlanning Time: 15.825 ms\nJIT:\n Functions: 458\n Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n Timing: Generation 90.276 ms, Inlining 44.404 ms, Optimization \n4297.544 ms, Emission 2940.759 ms, Total 7372.983 ms\nExecution Time: 117642.499 ms\n\n\nAnd this is the query:\n\nEXPLAIN(ANALYZE, BUFFERS)\nWITH cfg AS (\n SELECT\n (SELECT multidevise FROM config LIMIT 1) AS p_multidevise\n ,(SELECT monnaie FROM config LIMIT 1) AS p_defaultdevise\n ,:datedu::DATE AS p_datedu\n ,:dateau::DATE AS p_dateau\n)\n\n, daz_adinroy AS (\n 
SELECT DISTINCT idad, idoeu, COALESCE(ra.controllingsoc, o.idsociete) \nAS idsociete\n FROM royaltiesad ra\n JOIN oeu o USING(idoeu)\n\nUNION\n\n SELECT DISTINCT ra.idad, o.idoeu, COALESCE(ra.controllingsoc, \na.idsociete, o.idsociete) AS idsociete\n FROM royaltiesad ra\n JOIN agreements a using(idagreement)\n JOIN groupes g USING(idagreement)\n JOIN oegroupes og USING(idgroupe)\n JOIN oeu o ON og.idoeu=o.idoeu\n)\n, daz_allad AS (\n SELECT idad, idoeu, COALESCE(a.stmt_idsociete_forced, idsociete) AS \nidsociete\n FROM daz_adinroy\n JOIN ad a USING(idad)\n WHERE NULLIF(specialsplit,0) IS NULL\n\nUNION\n\n SELECT ca.idad, air.idoeu, COALESCE(a.stmt_idsociete_forced, \nidsociete) AS idsociete\n FROM daz_adinroy air\n JOIN ad a USING(idad)\n JOIN copyrightad ca ON ca.idoeu=air.idoeu AND ca.iscontrolled and \na.idad=ca.idad\n AND ( (specialsplit=1 AND \nca.role IN ('A','C','CA','AC','AD','AR','I'))\n OR (specialsplit=2 AND \nca.role IN ('E','CE','SE','ES'))\n OR (specialsplit=3) )\n WHERE specialsplit>0\n)\n,payablead AS (\n SELECT DISTINCT idad FROM royaltiesad\n)\n,currentsoldes AS (\n SELECT DISTINCT idad,idsociete,rank() OVER (PARTITION BY \nidad,idsociete ORDER BY COALESCE(date_closingperiod,'1900-01-01') DESC ) \nAS rang, newbalance, postponed_gross_master, \nCOALESCE(laststatementnet,0) AS laststatementnet\n FROM soldes\n CROSS JOIN cfg\n -- attention : < et pas <=\n WHERE date_closingperiod<p_datedu OR date_closingperiod IS NULL\n)\n,currentsoldes2 AS (\n SELECT idad,idsociete,newbalance, postponed_gross_master, \nlaststatementnet\n FROM currentsoldes\n WHERE rang=1\n)\n,detailcalcul AS (\n SELECT q.idad\n ,ARRONDIS4(coalesce(p.taux,1)*d.montant) AS collecte\n,ARRONDIS4(coalesce(p.taux,1)*d.montant*q.baseredevance/100) AS \ncollectepondere\n ,CASE WHEN d.typedroits BETWEEN 1 AND 20 THEN \nARRONDIS4(coalesce(p.taux,1)*(q.selectedqp/100)*d.montant*q.baseredevance/100) \nELSE NULL END AS droits\n ,CASE WHEN d.typedroits BETWEEN 21 AND 22 THEN 
\nARRONDIS4(coalesce(p.taux,1)*(q.selectedqp/100)*d.montant*q.baseredevance/100) \nELSE NULL END AS droitsmaster\n ,CASE WHEN d.typedroits IN (1,9) THEN \nARRONDIS4(coalesce(p.taux,1)*(q.selectedqp/100)*d.montant*q.baseredevance/100) \nELSE NULL END AS droitsDEP\n ,CASE WHEN d.typedroits IN (2,10) THEN \nARRONDIS4(coalesce(p.taux,1)*(q.selectedqp/100)*d.montant*q.baseredevance/100) \nELSE NULL END AS droitsDRM\n ,0 AS credex\n ,0 AS avances\n ,q.idsoc\n ,CAST(CASE WHEN p_multidevise THEN d.devise ELSE \np_defaultdevise END AS VARCHAR(4)) AS devise\n FROM zDroits d\n CROSS JOIN cfg\n LEFT JOIN parites p ON UPPER(p.code)=UPPER(d.devise) AND \nUPPER(p.codedest)=UPPER(CASE WHEN p_multidevise THEN d.devise ELSE \np_defaultdevise END)\n JOIN zQuoteParts q USING(idzdroits)\n WHERE q.selectedqp>0\n\n UNION ALL\n\n -- Ajouter les crédits exceptionnels pour faire apparaître les \ndécomptes pour ceux qui n'ont pas de droits, juste des crédits ex\n SELECT cr.idad, 0, 0, 0, 0, 0, 0, cr.credex, cr.avances, \ncr.idsociete, p_defaultdevise AS devise\n FROM (\n SELECT idsociete\n ,idad\n ,COALESCE(sum(CASE WHEN NOT COALESCE(isnet,FALSE) THEN \nmontant ELSE 0 END),0) as credex\n ,COALESCE(sum(CASE WHEN isnet THEN montant ELSE 0 END),0) \nas avances\n FROM creditsex ce\n CROSS JOIN cfg\n WHERE datecredit BETWEEN p_datedu AND p_dateau\n AND COALESCE(ce.idcredextype,0)<1000\n GROUP BY idsociete,idad\n ) cr\n CROSS JOIN cfg\n\n)\n, detailcalcul2 AS (\n -- forcer un décompte sur la société XXX\n SELECT a.idad, a.forcedecomptesoc AS idsoc, p_defaultdevise AS \ndevise, 0 AS collecte, 0 AS collectepondere, 0 AS droits, 0 AS \ndroitsmaster, 0 AS droitsDEP, 0 AS droitsDRM, 0 AS credex, 0 AS avances\n FROM ad a\n CROSS JOIN cfg\n WHERE a.forcedecomptesoc IS NOT NULL\n AND a.idad NOT IN (SELECT idad FROM detailcalcul)\n\n -- UNION les AD qui ont un solde brut > 0 ou net <> 0\n UNION\n SELECT cs.idad, cs.idsociete, p_defaultdevise AS devise, 0 AS \ncollecte, 0 AS collectepondere, 0 AS droits, 0 
AS droitsmaster, 0 AS \ndroitsDEP, 0 AS droitsDRM, 0 AS credex, 0 AS avances\n FROM currentsoldes2 cs\n CROSS JOIN cfg\n WHERE (cs.newbalance>0 OR cs.laststatementnet<>0)\n AND cs.idad NOT IN (SELECT idad FROM detailcalcul)\n\n -- UNION detailcalcul pour daz in [0,2]\n-- &union1\n\n -- UNION daz_allad pour daz in [1,2]\n-- &union2\n\n)\n, detailscalcul2groupe_pre AS (\n SELECT a.idad\n ,dc.idsoc\n ,dc.devise\n ,SUM(dc.collecte) AS collecte\n ,SUM(dc.collectepondere) AS collectepondere\n ,SUM(dc.droits) AS droits\n ,SUM(dc.droitsmaster) AS droitsmaster\n ,SUM(dc.droitsDEP) AS droitsDEP\n ,SUM(dc.droitsDRM) AS droitsDRM\n ,SUM(dc.credex) AS credex\n ,SUM(dc.avances) AS avances\n FROM detailcalcul dc\n JOIN groupesadlink gl ON gl.idadmembre=dc.idad\n JOIN ad a ON a.idad=gl.idadgroupe\n WHERE a.isgroupead\n GROUP BY a.idad,idsoc, dc.devise\n)\n, detailscalcul2groupe AS (\n SELECT idad\n ,idsoc\n ,devise\n ,collecte\n ,collectepondere\n ,droits\n ,droitsmaster\n ,droitsDEP\n ,droitsDRM\n ,credex + COALESCE((SELECT SUM(credex) FROM detailcalcul \ntdc WHERE tdc.idad=p.idad),0) AS credex\n ,avances + COALESCE((SELECT SUM(avances) FROM detailcalcul \ntdc WHERE tdc.idad=p.idad),0) AS avances\n FROM detailscalcul2groupe_pre p\n)\n,detailscalcul2heritiers_pre1 AS (\n SELECT dc.idad\n ,dc.idsoc\n ,dc.devise\n ,SUM(dc.collecte) AS collecte\n ,SUM(dc.collectepondere) AS collectepondere\n ,SUM(dc.droits) as droits\n ,SUM(dc.droitsmaster) as droitsmaster\n ,SUM(dc.droitsDEP) as droitsDEP\n ,SUM(dc.droitsDRM) as droitsDRM\n ,SUM(dc.credex) AS credex\n ,SUM(dc.avances) AS avances\n FROM detailcalcul dc\n GROUP BY dc.idad, dc.idsoc, dc.devise\n\n-- &union2\n)\n,detailscalcul2heritiers_pre2 AS (\n SELECT a.idad\n ,dc.idsoc\n ,dc.devise\n ,dal.is_heir\n ,SUM(dc.collecte) AS collecte\n ,SUM(dc.collectepondere) AS collectepondere\n ,SUM( ARRONDIS4(dc.droits * COALESCE(dal.share,0)/100) ) \nas droits\n ,SUM( ARRONDIS4(dc.droitsmaster * \nCOALESCE(dal.share,0)/100) ) as droitsmaster\n ,SUM( 
ARRONDIS4(dc.droitsDEP * COALESCE(dal.share,0)/100) \n) as droitsDEP\n ,SUM( ARRONDIS4(dc.droitsDRM * COALESCE(dal.share,0)/100) \n) as droitsDRM\n ,SUM( ARRONDIS4(dc.credex * COALESCE(dal.share,0)/100) ) \nAS credex\n ,SUM( ARRONDIS4(dc.avances * COALESCE(dal.share,0)/100) ) \nAS avances\n FROM detailscalcul2heritiers_pre1 dc\n JOIN deceasedadlinks dal ON dal.idaddeceased=dc.idad\n JOIN ad a on a.idad=dal.idadalive\n GROUP BY a.idad, idsoc, dc.devise, dal.is_heir\n)\n,detailscalcul2heritiers AS (\n SELECT idad\n ,idsoc\n ,devise\n ,collecte\n ,collectepondere\n ,droits\n ,droitsmaster\n ,droitsDEP\n ,droitsDRM\n ,credex + COALESCE((SELECT SUM(credex) FROM \ndetailcalcul tdc WHERE tdc.idad=p.idad),0) AS credex\n ,avances + COALESCE((SELECT SUM(avances) FROM \ndetailcalcul tdc WHERE tdc.idad=p.idad),0) AS avances\n ,is_heir\n FROM detailscalcul2heritiers_pre2 p\n)\n, result1 AS (\n SELECT z.*, row_number() OVER() AS position\n FROM (\n SELECT dc.idad,dc.collecte, dc.collectepondere, dc.droits, \ndc.droitsmaster, dc.droitsDEP, dc.droitsDRM, dc.credex, dc.avances, \ndc.idsoc, dc.devise, false AS isgroup, false AS isheritier, false AS \ncotisationsheritier\n FROM detailcalcul2 dc\n JOIN ad a USING(idad)\n WHERE NOT COALESCE(a.isgroupead,FALSE)\n AND NOT idad IN (SELECT idadalive FROM deceasedadlinks)\n UNION\n SELECT dc.idad,dc.collecte, dc.collectepondere, dc.droits, \ndc.droitsmaster, dc.droitsDEP, dc.droitsDRM, dc.credex, dc.avances, \ndc.idsoc, dc.devise, true AS isgroup, false AS isheritier, false AS \ncotisationsheritier\n FROM detailscalcul2groupe dc\n UNION\n SELECT dc.idad,dc.collecte, dc.collectepondere, dc.droits, \ndc.droitsmaster, dc.droitsDEP, dc.droitsDRM, dc.credex, dc.avances, \ndc.idsoc, dc.devise, false AS isgroup, true AS isheritier, is_heir AS \ncotisationsheritier\n FROM detailscalcul2heritiers dc\n ) z\n)\n,lastsoldes AS (\n SELECT idad,idsociete,newbalance, date_closingperiod, \nGrossAmountEarnedExternally\n FROM (\n SELECT DISTINCT 
idad,idsociete,rank() OVER (PARTITION BY \nidad,idsociete ORDER BY COALESCE(date_closingperiod,'1900-01-01') DESC ) \nAS rang, newbalance, GrossAmountEarnedExternally, date_closingperiod\n FROM soldes\n CROSS JOIN cfg\n -- attention : <= et pas <\n WHERE date_closingperiod<=p_dateau OR date_closingperiod IS NULL\n ) x\n WHERE x.rang=1\n)\n,lastsoldesforpayments AS (\n SELECT idad,idsociete,newbalance, date_closingperiod\n FROM (\n SELECT DISTINCT idad,idsociete,rank() OVER (PARTITION BY \nidad,idsociete ORDER BY COALESCE(date_closingperiod,'1900-01-01') DESC ) \nAS rang, newbalance, date_closingperiod\n FROM soldes\n CROSS JOIN cfg\n -- attention : < et pas <= car si la période a déjà été \nclôturée, on ne voit aucun paiement\n WHERE date_closingperiod<p_dateau OR date_closingperiod IS NULL\n ) x\n WHERE rang=1\n)\n,apayments AS (\n SELECT p.idad,p.idsociete,SUM(amount) AS totpayments\n FROM payments p\n CROSS JOIN cfg\n LEFT JOIN lastsoldesforpayments ls ON ls.idad=p.idad AND \nls.idsociete=p.idsociete\n WHERE p.dateinperiod>date_closingperiod AND p.dateinperiod<=p_dateau\n GROUP BY p.idad,p.idsociete\n)\n,allannuaires AS (\n SELECT r.idad, NULL::INT AS idsociete, array_agg(DISTINCT \nidannuaire) AS idannuaires\n FROM result1 r\n JOIN LiensActeursAnnuaire laa ON laa.idacteur=r.idad\n GROUP BY 1,2\n UNION\n SELECT NULL::INT AS idad, s.idsociete, array_agg(DISTINCT \nidannuaire) AS idannuaires\n FROM result1 r\n JOIN societes s ON r.idsoc=s.idsociete\n JOIN LiensActeursAnnuaire laa ON laa.idacteur=s.idactorsolorealm\n GROUP BY 1,2\n)\nSELECT r.idad\n ,r.collecte\n ,r.collectepondere\n ,ARRONDIS(r.droits) AS droits\n ,ARRONDIS(r.droitsmaster) AS droitsmaster\n ,ARRONDIS(r.droitsDEP) AS droitsDEP\n ,ARRONDIS(r.droitsDRM) AS droitsDRM\n ,r.credex\n ,r.avances\n ,ARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) AS montantdu\n,ARRONDIS(COALESCE(sl.postponed_gross_master,0)+r.droitsmaster) AS \nmontantdumaster\n ,r.idsoc\n ,r.devise\n ,isgroup\n ,r.isheritier\n 
,(a.nom || COALESCE(' (' || NULLIF(TRIM(a.libelledecompte),'') || \n')', ''))::VARCHAR(100) AS nomad\n ,NULLIF(TRIM(COALESCE(ann.email,a.email)),'')::VARCHAR(200) AS email\n ,NULLIF(TRIM(COALESCE(ann.email,a.email)),'') IS NOT NULL AS hasemail\n ,COALESCE(s.nom,'Société indéfinie')::VARCHAR(30) AS nomsociete\n ,sl.newbalance\n ,sl.postponed_gross_master\n ,cn.*\n ,r.position\n-- ,CASE WHEN ARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) \n >= cn.paymentthreshold THEN \nARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) ELSE 0 END AS apayer\n-- ,CASE WHEN ARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) \n< cn.paymentthreshold THEN \nARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) ELSE 0 END AS \nareporter\n ,CASE WHEN ARRONDIS( cn.netpayable + \nCOALESCE(sl.laststatementnet,0) - COALESCE(ap.totpayments,0) ) >= \ncn.paymentthreshold THEN ARRONDIS( cn.netpayable + \nCOALESCE(sl.laststatementnet,0) - COALESCE(ap.totpayments,0) ) ELSE 0 \nEND AS apayer\n ,CASE WHEN ARRONDIS( cn.netpayable + \nCOALESCE(sl.laststatementnet,0) - COALESCE(ap.totpayments,0) ) < \ncn.paymentthreshold THEN \nARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex) ELSE 0 END AS \nareporter\n ,lc.date_closingperiod\n ,a.idclient\n ,cl.name AS clientname\n ,-ap.totpayments AS totpayments\n ,COALESCE(sl.laststatementnet,0) - COALESCE(ap.totpayments,0) AS \nopeningbalance\n ,COALESCE(sl.laststatementnet,0) AS laststatementnet\n ,a.ispayable\n ,r.cotisationsheritier\n ,s.idrealm\n ,ann.idannuaire\n ,COALESCE(ann.wantsenglish,FALSE) AS wantsenglish\n ,aa.idannuaires\n ,aa2.idannuaires AS idannuaires2\n ,COALESCE(ann.disablenotifications,FALSE) AS DisableNotifications\n ,ann.iscompany\nFROM result1 r\nJOIN ad a USING(idad)\nLEFT JOIN allannuaires aa USING(idad)\nLEFT JOIN allannuaires aa2 ON r.idsoc=aa2.idsociete\nLEFT JOIN annuaire ann ON a.idannuaire_main=ann.idannuaire\nLEFT JOIN societes s ON s.idsociete=r.idsoc\nLEFT JOIN currentsoldes2 sl ON sl.idad=a.idad AND 
sl.idsociete=s.idsociete\nLEFT JOIN lastsoldes lc ON lc.idad=a.idad AND lc.idsociete=s.idsociete\nLEFT JOIN \ncalculate_net(a.idad,s.idsociete,ARRONDIS(COALESCE(sl.newbalance,0)+r.droits+r.credex),r.droitsDEP,r.droitsDRM,ARRONDIS(COALESCE(sl.postponed_gross_master,0)+r.droitsmaster),lc.GrossAmountEarnedExternally,r.cotisationsheritier) \ncn ON TRUE\nLEFT JOIN clients cl ON cl.idclient=a.idclient\nLEFT JOIN apayments ap ON ap.idad=a.idad AND ap.idsociete=s.idsociete\nWHERE a.calculatestatements\nORDER BY s.nom,cl.name,a.nom;\n\n\n\n\n",
"msg_date": "Sat, 2 Dec 2023 17:50:02 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange \"actual time\" in simple CTE"
},
{
"msg_contents": "On Sat, Dec 2, 2023 at 11:50 AM Jean-Christophe Boggio <\[email protected]> wrote:\n\n> Hello,\n>\n> I am trying to optimize a complex query and while doing some explains, I\n> stumbled upon this :\n>\n> CTE cfg\n> -> Result (cost=2.02..2.03 rows=1 width=25) (actual\n> time=7167.478..7167.481 rows=1 loops=1)\n> ...\n> How can this take 7 seconds?\n>\n\n\n\n> This really looks like an artefact (maybe in relation to the JIT compiler?)\n>\n>\nExactly. The time taking to do the JIT compilations gets measured in\nnon-intuitive places in the plan. I'm guessing that that is what is going\non here, especially since the time separately reported at the end of the\nplan for JIT so closely matches this mysterious time. Just turn JIT off, I\ndoubt it doing you any good anyway.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 3 Dec 2023 22:15:27 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange \"actual time\" in simple CTE"
}
] |
[
{
"msg_contents": "We are currently on Postgres 13.9 (and will be moving to later releases).\nWe are capturing json explain plans and storing them in a database table.\nWe can tell that there are different plans for some queries, but that's a\nvery labor intensive process - we'd rather do this using SQL and comparing\nconsistent hash values for the plans. Both Oracle and SQL Server have\nconsistent hash values for query plans and that makes it easy to identify\nwhen there are multiple plans for the same query. Does that concept exist\nin later releases of Postgres (and is the value stored in the json explain\nplan)?\n\nWhile we have a pretty good idea of how to manually generate a consistent\nvalue, we don't want to reinvent the wheel. Is anyone aware of an existing\nsolution that can be called from SQL/jsonb?\n\nThanks,\nJerry",
"msg_date": "Mon, 4 Dec 2023 06:45:39 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
},
{
"msg_contents": "Jerry Brenner <[email protected]> writes:\n> Both Oracle and SQL Server have\n> consistent hash values for query plans and that makes it easy to identify\n> when there are multiple plans for the same query. Does that concept exist\n> in later releases of Postgres (and is the value stored in the json explain\n> plan)?\n\nNo, there's no support currently for obtaining a hash value that's\nassociated with a plan rather than an input query tree.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Dec 2023 09:57:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
},
{
"msg_contents": "Hi,\n\nOn Mon, Dec 04, 2023 at 06:45:39AM -0800, Jerry Brenner wrote:\n> We are currently on Postgres 13.9 (and will be moving to later releases).\n> We are capturing json explain plans and storing them in a database table.\n> We can tell that there are different plans for some queries, but that's a\n> very labor intensive process - we'd rather do this using SQL and comparing\n> consistent hash values for the plans. Both Oracle and SQL Server have\n> consistent hash values for query plans and that makes it easy to identify\n> when there are multiple plans for the same query. Does that concept exist\n> in later releases of Postgres (and is the value stored in the json explain\n> plan)?\n>\n> While we have a pretty good idea of how to manually generate a consistent\n> value, we don't want to reinvent the wheel. Is anyone aware of an existing\n> solution that can be called from SQL/jsonb?\n\nYou can look at pg_store_plans extension:\nhttps://github.com/ossc-db/pg_store_plans, it can generate a query plan hash\nand also keeps tracks of the (normalized) plans associated to each (normalized)\nquery.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 18:30:12 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
},
{
"msg_contents": "On Mon, Dec 04, 2023 at 09:57:24AM -0500, Tom Lane wrote:\n> Jerry Brenner <[email protected]> writes:\n>> Both Oracle and SQL Server have\n>> consistent hash values for query plans and that makes it easy to identify\n>> when there are multiple plans for the same query. Does that concept exist\n>> in later releases of Postgres (and is the value stored in the json explain\n>> plan)?\n> \n> No, there's no support currently for obtaining a hash value that's\n> associated with a plan rather than an input query tree.\n\nPlannerGlobal includes a no_query_jumble that gets inherited by all\nits lower-level nodes, so adding support for hashes compiled from\nthese node structures would not be that complicated. My point is that\nthe basic infrastructure is in place in the tree to be able to do\nthat, and it should not be a problem to even publish the compiled\nhashes in EXPLAIN outputs, behind an option of course.\n--\nMichael",
"msg_date": "Tue, 5 Dec 2023 12:05:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Dec 04, 2023 at 09:57:24AM -0500, Tom Lane wrote:\n>> Jerry Brenner <[email protected]> writes:\n>>> Both Oracle and SQL Server have\n>>> consistent hash values for query plans and that makes it easy to identify\n>>> when there are multiple plans for the same query. Does that concept exist\n>>> in later releases of Postgres (and is the value stored in the json explain\n>>> plan)?\n\n>> No, there's no support currently for obtaining a hash value that's\n>> associated with a plan rather than an input query tree.\n\n> PlannerGlobal includes a no_query_jumble that gets inherited by all\n> its lower-level nodes, so adding support for hashes compiled from\n> these node structures would not be that complicated. My point is that\n> the basic infrastructure is in place in the tree to be able to do\n> that, and it should not be a problem to even publish the compiled\n> hashes in EXPLAIN outputs, behind an option of course.\n\nWell, yeah, we could fairly easily activate that infrastructure for\nplans, but we haven't. More to the point, it's not clear to me that\nthat would satisfy the OP's request for \"consistent\" hash values.\nThe hashes would vary depending on object OID values, system version,\npossibly endianness, etc.\n\nI'm also wondering exactly what the OP thinks qualifies as different\nplans. Remembering the amount of fooling-around that's gone on with\nquerytree hashes to satisfy various people's ill-defined desires for\npg_stat_statements aggregation behavior, I'm not really eager to buy\ninto the same definitional morass at the plan level.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Dec 2023 22:29:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
},
{
"msg_contents": "Apologies if I'm not using the appropriate Postgres-specific terms. The\nsimplest implementation would consider only the information that is\nconsistent across systems and executions - node types, relation and index\nnames (not ids), aliases ... It would ignore bind variable values and\nexecution statistics. This would make it easy to find:\n\n - Queries with different plans (we would then investigate the causes)\n - Queries where the plans changed over time (we would then investigate\n the causes)\n - The aggregated cost of a query plan shared by multiple queries (due to\n differences in the number of items in an IN list, ...) This is useful when\n no single query in the group registers high, but the group of queries\n registers high.\n - Because we use consistent table and index names across environments,\n this would make it possible to find queries with different plans in\n different environments.\n\nIn a perfect world, the hash would include the filters, index conds, ...\nwith the constant values masked out, but I realize that's much more\ncomplicated. (Without this, we could see plans that are applying different\nnumbers of predicates against an index mapping to the same hash value, but\nit would still be a big improvement.)\n\nAs mentioned before, we are currently storing the explain plans in a\ndatabase table. We use a combination of columns and syntax when querying\nfor the long executions to display some contextual information about the\nexplain plans (does it have a Materialize node, Init Plan, Sub Plan, ...)\nBased on that little contextual information, we can see that there are\nmultiple plans for some queries. Based on manual investigation, we know\nthat there other plan differences. 
It is very expensive right now to try\nto figure out if a plan is changing over time, why some executions are more\nexpensive than others, ...\n\nThanks,\nJerry\n\nOn Mon, Dec 4, 2023 at 7:29 PM Tom Lane <[email protected]> wrote:\n\n> Michael Paquier <[email protected]> writes:\n> > On Mon, Dec 04, 2023 at 09:57:24AM -0500, Tom Lane wrote:\n> >> Jerry Brenner <[email protected]> writes:\n> >>> Both Oracle and SQL Server have\n> >>> consistent hash values for query plans and that makes it easy to\n> identify\n> >>> when there are multiple plans for the same query. Does that concept\n> exist\n> >>> in later releases of Postgres (and is the value stored in the json\n> explain\n> >>> plan)?\n>\n> >> No, there's no support currently for obtaining a hash value that's\n> >> associated with a plan rather than an input query tree.\n>\n> > PlannerGlobal includes a no_query_jumble that gets inherited by all\n> > its lower-level nodes, so adding support for hashes compiled from\n> > these node structures would not be that complicated. My point is that\n> > the basic infrastructure is in place in the tree to be able to do\n> > that, and it should not be a problem to even publish the compiled\n> > hashes in EXPLAIN outputs, behind an option of course.\n>\n> Well, yeah, we could fairly easily activate that infrastructure for\n> plans, but we haven't. More to the point, it's not clear to me that\n> that would satisfy the OP's request for \"consistent\" hash values.\n> The hashes would vary depending on object OID values, system version,\n> possibly endianness, etc.\n>\n> I'm also wondering exactly what the OP thinks qualifies as different\n> plans. Remembering the amount of fooling-around that's gone on with\n> querytree hashes to satisfy various people's ill-defined desires for\n> pg_stat_statements aggregation behavior, I'm not really eager to buy\n> into the same definitional morass at the plan level.\n>\n> regards, tom lane\n>\n>",
"msg_date": "Tue, 5 Dec 2023 06:03:24 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does Postgres have consistent identifiers (plan hash value) for\n explain plans?"
}
] |
[
{
"msg_contents": "It would be helpful if a timestamp column was added to pg_stat_statements\nto denote when a query entered the view. This would make it easier to tell\nhow frequently a query is being executed (100,000 times since a specific\ntimestamp vs 100,000 times since the execution stats were last reset.)\n\nI realize that Postgres is different from SQL Server. SQL Server has\ntimestamps for both the time that the query entered the cache and the last\nexecution. I assume that adding and maintaining a timestamp for the last\nexecution would be more difficult and expensive. Having that additional\ninformation makes it possible for us to find queries that were executed\nduring a time range that corresponds to a batch process, queries executed\nan abnormally high number of times in a short period of time, ...\n\nWe are taking hourly snapshot of pg_stat_statements and storing the\ninformation in a database table so we can analyze the database activity in\na given interval. We are calculating and storing the deltas as part of\nthat process. We have to make certain simplifying assumptions due to the\nlack of this type of timestamp. (We can live with these assumptions, but\nhaving the additional timestamp(s) would increase the value of the\ninformation.):\n\n   - If the number of executions increased since the last snapshot, then\n   use the difference as the delta. (We assume that the statement was not\n   flushed from the cache and then reloaded later in the interval.)\n   - If the number of executions remained the same since the last snapshot,\n   then the query was not executed in the interval. (We assume that the\n   statement was not flushed from the cache and then reloaded later in the\n   interval.)\n   - If the number of executions decreased since the last snapshot, then\n   the statement was flushed from the cache at some unknown point in the interval.\n\n\nThanks,\nJerry\n\nIt would be helpful if a timestamp column was added to pg_stat_statements to denote when a query entered the view. This would make it easier to tell how frequently a query is being executed (100,000 times since a specific timestamp vs 100,000 times since the execution stats were last reset.) I realize that Postgres is different from SQL Server. SQL Server has timestamps for both the time that the query entered the cache and the last execution. I assume that adding and maintaining a timestamp for the last execution would be more difficult and expensive. Having that additional information makes it possible for us to find queries that were executed during a time range that corresponds to a batch process, queries executed an abnormally high number of times in a short period of time, ...We are taking hourly snapshot of pg_stat_statements and storing the information in a database table so we can analyze the database activity in a given interval. We are calculating and storing the deltas as part of that process. We have to make certain simplifying assumptions due to the lack of this type of timestamp. (We can live with these assumptions, but having the additional timestamp(s) would increase the value of the information.):If the number of executions increased since the last snapshot, then use the difference as the delta. (We assume that the statement was not flushed from the cache and then reloaded later in the interval.)If the number of executions remained the same since the last snapshot, then the query was not executed in the interval. \n\n(We assume that the statement was not flushed from the cache and then reloaded later in the interval.)If the number of executions decreased since the last snapshot, then the statement was flushed from the cache at some unknown point in the interval.Thanks,Jerry",
"msg_date": "Tue, 5 Dec 2023 06:28:54 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Include a timestamp in future versions of pg_stat_statements when\n when a query entered the cache?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 05, 2023 at 06:28:54AM -0800, Jerry Brenner wrote:\n> It would be helpful if a timestamp column was added to pg_stat_statements\n> to denote when a query entered the view. This would make it easier to tell\n> how frequently a query is being executed (100,000 times since a specific\n> timestamp vs 100,000 times since the execution stats were last reset.)\n\nThis was actually done a few weeks ago, and will be available with pg 17. You\ncan see the 2 new timestamp counters (one for the whole record, one for the\nminmax counters only) documentation at\nhttps://www.postgresql.org/docs/devel/pgstatstatements.html\n\n\n",
"msg_date": "Wed, 6 Dec 2023 07:45:07 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Include a timestamp in future versions of pg_stat_statements\n when when a query entered the cache?"
}
] |
[
{
"msg_contents": "Is there any documentation on the semantics of $ variables in json explain\nplans for both InitPlans and SubPlans in 13?\n\nI'm trying to understand the attached json file.\n\n - It looks like $0 represents the value from the outer query block when\n the correlated subquery is evaluated\n - It looks like $1 represents the result of the subquery evaluation\n\nHere are the relevant lines from the plan. (I've attached the full plan as\na file.):\n\n \"Node Type\": \"Subquery Scan\",\n \"Parent Relationship\": \"Inner\",\n \"Parallel Aware\": false,\n \"Alias\": \"ANY_subquery\",\n \"Filter\": \"(qroot.sendorder = \\\"ANY_subquery\\\".col0)\",\n \"Plans\": [\n {\n \"Node Type\": \"Result\",\n \"Parent Relationship\": \"Subquery\",\n \"Parallel Aware\": false,\n \"Plans\": [\n {\n \"Node Type\": \"Limit\",\n \"Parent Relationship\": \"InitPlan\",\n \"Subplan Name\": \"InitPlan 1 (returns $1)\",\n \"Plans\": [\n {\n \"Node Type\": \"Index Only Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Scan Direction\": \"Forward\",\n \"Index Name\":\n\"message_u_destinatio_1kk5be278gggc\",\n \"Relation Name\": \"pc_message\",\n \"Alias\": \"qroot0\",\n \"Index Cond\": \"((destinationid = 67) AND\n(contactid = $0) AND (sendorder IS NOT NULL))\",\n\nHere's a formatted version of the query from the json file:\n\nSELECT /* ISNULL:pc_message.FrozenSetID:, KeyTable:pc_message; */ qRoot.ID\ncol0, qRoot.CreationTime col1\nFROM pc_message qRoot\nWHERE qRoot.DestinationID = $1 AND qRoot.Status = $2 AND qRoot.contactID IS\nNOT NULL AND qRoot.FrozenSetID IS NULL AND qRoot.SendOrder IN\n (\n SELECT MIN (qRoot0.SendOrder) col0\n FROM pc_message qRoot0\n WHERE qRoot0.DestinationID = $3 AND qRoot0.contactID =\nqRoot.contactID)\nORDER BY col1 ASC, col0 ASC LIMIT 100000\n\n\nThanks,\nJerry",
"msg_date": "Fri, 8 Dec 2023 15:09:52 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about semantics of $ variables in json explain plans in 13"
},
{
"msg_contents": "Jerry Brenner <[email protected]> writes:\n> Is there any documentation on the semantics of $ variables in json explain\n> plans for both InitPlans and SubPlans in 13?\n\nI don't think there's anything much in the user-facing docs, which is\nsomewhat unfortunate because it's confusing: the notation is overloaded.\n$N could be a parameter supplied from outside the query (as in your $1,\n$2 and $3 in the source text), but it could also be a parameter supplied\nfrom an outer query level to a subplan, or it could be the result value\nof an InitPlan. The numbering of outside-the-query parameters is\ndisjoint from that of the other kind.\n\n> - It looks like $0 represents the value from the outer query block when\n> the correlated subquery is evaluated\n> - It looks like $1 represents the result of the subquery evaluation\n\nYeah, I think you're right here. $0 evidently corresponds to\nqRoot.contactID from the outer plan, and the plan label itself\nshows that $1 carries the sub-select's value back out. This $1\nis unrelated to the $1 you wrote in the query text. (It looks\nlike this is a custom plan in which \"67\" was explicitly substituted\nfor your $3. Presumably $1 and $2 were replaced as well; we don't\ndo half-custom plans.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Dec 2023 20:03:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about semantics of $ variables in json explain plans in\n 13"
}
] |
[
{
"msg_contents": "We are currently on 13. We are capturing the explain plans for query\nexecutions taking 1 second or longer and storing the json files. We are\nmost of the way through implementing a home grown solution to generate a\nconsistent hash value for a query plan, so we can find queries with\nmultiple plans. I've attached 2 query plans that we've captured that\ndiffer in a seemingly strange way. (All executions are from the same exact\ncode path.) One of the plans has parameter markers in the predicates in\nthe values for \"Recheck Cond\" and \"Index Cond\", while the other does not.\n\nAny insight into why we are seeing parameter markers in the body of the\nquery plan?\n\nExamples of the parameter markers:\n \"Recheck Cond\": \"*((destinationid = $1)* AND (contactid IS\nNOT NULL) AND (status = $2))\",\n \"Index Cond\": \"*((destinationid = $1)* AND (contactid\nIS NOT NULL) AND (status = $2))\",\n\nWhat we normally see:\n \"Recheck Cond\": \"(*(destinationid = 67) *AND (contactid IS\nNOT NULL) AND (status = 1))\",\n \"Index Cond\": \"*((destinationid = 67) *AND (contactid\nIS NOT NULL) AND (status = 1))\",\n\nThe full query text:\n\nSELECT /* ISNULL:pc_message.FrozenSetID:, KeyTable:pc_message; */ qRoot.ID\ncol0, qRoot.CreationTime col1\nFROM pc_message qRoot\nWHERE qRoot.DestinationID = $1 AND qRoot.Status = $2 AND qRoot.contactID IS\nNOT NULL AND qRoot.FrozenSetID IS NULL AND qRoot.SendOrder IN\n (\n SELECT MIN (qRoot0.SendOrder) col0\n FROM pc_message qRoot0\n WHERE qRoot0.DestinationID = $3 AND qRoot0.contactID =\nqRoot.contactID)\nORDER BY col1 ASC, col0 ASC LIMIT 100000\n\n\nThanks,\nJerry",
"msg_date": "Fri, 8 Dec 2023 16:23:41 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "2 json explain plans for the same query/plan - why does one have\n constants while the other has parameter markers?"
},
{
"msg_contents": "Jerry Brenner <[email protected]> writes:\n> We are currently on 13. We are capturing the explain plans for query\n> executions taking 1 second or longer and storing the json files. We are\n> most of the way through implementing a home grown solution to generate a\n> consistent hash value for a query plan, so we can find queries with\n> multiple plans. I've attached 2 query plans that we've captured that\n> differ in a seemingly strange way. (All executions are from the same exact\n> code path.) One of the plans has parameter markers in the predicates in\n> the values for \"Recheck Cond\" and \"Index Cond\", while the other does not.\n> Any insight into why we are seeing parameter markers in the body of the\n> query plan?\n\nThe one with parameter markers is a \"generic\" plan for a parameterized\nquery. When you get a plan without parameter markers for the same\ninput query, that's a \"custom\" plan in which concrete values of the\nparameters have been substituted, possibly allowing const-simplification\nand more accurate rowcount estimates. The backend will generally try\ncustom plans a few times and then try a generic plan to see if that's\nmeaningfully slower -- if not, replanning each time is deemed to be\nwasteful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Dec 2023 19:44:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2 json explain plans for the same query/plan - why does one have\n constants while the other has parameter markers?"
},
{
"msg_contents": "Thanks for the quick response! That was very helpful!\n My impression is that almost all of the plans being captured are \"custom\",\nbut now I know that I need to look closer. We also store the execution\ntimes, so we can look at the execution order for queries that are executed\noften enough to seem like they should stay in the cache. The addition of\nthe new timestamp columns in pg_stat_statements in 17 will also help us get\na better sense of how long the query had been in the cache.\n\nOn Fri, Dec 8, 2023 at 4:44 PM Tom Lane <[email protected]> wrote:\n\n> Jerry Brenner <[email protected]> writes:\n> > We are currently on 13. We are capturing the explain plans for query\n> > executions taking 1 second or longer and storing the json files. We are\n> > most of the way through implementing a home grown solution to generate a\n> > consistent hash value for a query plan, so we can find queries with\n> > multiple plans. I've attached 2 query plans that we've captured that\n> > differ in a seemingly strange way. (All executions are from the same\n> exact\n> > code path.) One of the plans has parameter markers in the predicates in\n> > the values for \"Recheck Cond\" and \"Index Cond\", while the other does not.\n> > Any insight into why we are seeing parameter markers in the body of the\n> > query plan?\n>\n> The one with parameter markers is a \"generic\" plan for a parameterized\n> query. When you get a plan without parameter markers for the same\n> input query, that's a \"custom\" plan in which concrete values of the\n> parameters have been substituted, possibly allowing const-simplification\n> and more accurate rowcount estimates. The backend will generally try\n> custom plans a few times and then try a generic plan to see if that's\n> meaningfully slower -- if not, replanning each time is deemed to be\n> wasteful.\n>\n> regards, tom lane\n>\n>\n\nThanks for the quick response! That was very helpful! My impression is that almost all of the plans being captured are \"custom\", but now I know that I need to look closer. We also store the execution times, so we can look at the execution order for queries that are executed often enough to seem like they should stay in the cache. The addition of the new timestamp columns in pg_stat_statements in 17 will also help us get a better sense of how long the query had been in the cache.On Fri, Dec 8, 2023 at 4:44 PM Tom Lane <[email protected]> wrote:Jerry Brenner <[email protected]> writes:\n> We are currently on 13. We are capturing the explain plans for query\n> executions taking 1 second or longer and storing the json files. We are\n> most of the way through implementing a home grown solution to generate a\n> consistent hash value for a query plan, so we can find queries with\n> multiple plans. I've attached 2 query plans that we've captured that\n> differ in a seemingly strange way. (All executions are from the same exact\n> code path.) One of the plans has parameter markers in the predicates in\n> the values for \"Recheck Cond\" and \"Index Cond\", while the other does not.\n> Any insight into why we are seeing parameter markers in the body of the\n> query plan?\n\nThe one with parameter markers is a \"generic\" plan for a parameterized\nquery. When you get a plan without parameter markers for the same\ninput query, that's a \"custom\" plan in which concrete values of the\nparameters have been substituted, possibly allowing const-simplification\nand more accurate rowcount estimates. The backend will generally try\ncustom plans a few times and then try a generic plan to see if that's\nmeaningfully slower -- if not, replanning each time is deemed to be\nwasteful.\n\n            regards, tom lane",
"msg_date": "Fri, 8 Dec 2023 17:04:52 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 2 json explain plans for the same query/plan - why does one have\n constants while the other has parameter markers?"
},
{
"msg_contents": "Can you consider adding an attribute to the explain plan json in a future\nrelease (to plan?) to denote if the plan is a \"custom\" vs \"generic\" plan?\nThe use of $N variables for both parameter markers and InitPlan and SubPlan\nmakes it harder to programmatically determine the type of plan (and in our\ncase tell if 2 plans only differ by \"custom\" vs \"generic\").\n\nWe use numeric constants in our queries in a small number of cases where we\nknow that there's no potential PII, there's a small number of values and\nthat there's a high probability that the data is skewed. pc_message\ncontains messages to be sent to external systems and hence is a volatile\ntable and the data in the DestinationID column can be highly skewed. In\ntheory, could using a constant instead of a bind variable for this\npredicate help the optimizer?\n\nThanks,\nJerry\n\nOn Fri, Dec 8, 2023 at 5:04 PM Jerry Brenner <[email protected]> wrote:\n\n> Thanks for the quick response! That was very helpful!\n> My impression is that almost all of the plans being captured are\n> \"custom\", but now I know that I need to look closer. We also store the\n> execution times, so we can look at the execution order for queries that are\n> executed often enough to seem like they should stay in the cache. The\n> addition of the new timestamp columns in pg_stat_statements in 17 will also\n> help us get a better sense of how long the query had been in the cache.\n>\n> On Fri, Dec 8, 2023 at 4:44 PM Tom Lane <[email protected]> wrote:\n>\n>> Jerry Brenner <[email protected]> writes:\n>> > We are currently on 13. We are capturing the explain plans for query\n>> > executions taking 1 second or longer and storing the json files. We are\n>> > most of the way through implementing a home grown solution to generate a\n>> > consistent hash value for a query plan, so we can find queries with\n>> > multiple plans. I've attached 2 query plans that we've captured that\n>> > differ in a seemingly strange way. (All executions are from the same\n>> exact\n>> > code path.) One of the plans has parameter markers in the predicates in\n>> > the values for \"Recheck Cond\" and \"Index Cond\", while the other does\n>> not.\n>> > Any insight into why we are seeing parameter markers in the body of the\n>> > query plan?\n>>\n>> The one with parameter markers is a \"generic\" plan for a parameterized\n>> query. When you get a plan without parameter markers for the same\n>> input query, that's a \"custom\" plan in which concrete values of the\n>> parameters have been substituted, possibly allowing const-simplification\n>> and more accurate rowcount estimates. The backend will generally try\n>> custom plans a few times and then try a generic plan to see if that's\n>> meaningfully slower -- if not, replanning each time is deemed to be\n>> wasteful.\n>>\n>> regards, tom lane\n>>\n>>\n\nCan you consider adding an attribute to the explain plan json in a future release (to plan?) to denote if the plan is a \"custom\" vs \"generic\" plan? The use of $N variables for both parameter markers and InitPlan and SubPlan makes it harder to programmatically determine the type of plan (and in our case tell if 2 plans only differ by \"custom\" vs \"generic\").We use numeric constants in our queries in a small number of cases where we know that there's no potential PII, there's a small number of values and that there's a high probability that the data is skewed. pc_message contains messages to be sent to external systems and hence is a volatile table and the data in the DestinationID column can be highly skewed. In theory, could using a constant instead of a bind variable for this predicate help the optimizer? Thanks,JerryOn Fri, Dec 8, 2023 at 5:04 PM Jerry Brenner <[email protected]> wrote:Thanks for the quick response! That was very helpful! My impression is that almost all of the plans being captured are \"custom\", but now I know that I need to look closer. We also store the execution times, so we can look at the execution order for queries that are executed often enough to seem like they should stay in the cache. The addition of the new timestamp columns in pg_stat_statements in 17 will also help us get a better sense of how long the query had been in the cache.On Fri, Dec 8, 2023 at 4:44 PM Tom Lane <[email protected]> wrote:Jerry Brenner <[email protected]> writes:\n> We are currently on 13. We are capturing the explain plans for query\n> executions taking 1 second or longer and storing the json files. We are\n> most of the way through implementing a home grown solution to generate a\n> consistent hash value for a query plan, so we can find queries with\n> multiple plans. I've attached 2 query plans that we've captured that\n> differ in a seemingly strange way. (All executions are from the same exact\n> code path.) One of the plans has parameter markers in the predicates in\n> the values for \"Recheck Cond\" and \"Index Cond\", while the other does not.\n> Any insight into why we are seeing parameter markers in the body of the\n> query plan?\n\nThe one with parameter markers is a \"generic\" plan for a parameterized\nquery. When you get a plan without parameter markers for the same\ninput query, that's a \"custom\" plan in which concrete values of the\nparameters have been substituted, possibly allowing const-simplification\nand more accurate rowcount estimates. The backend will generally try\ncustom plans a few times and then try a generic plan to see if that's\nmeaningfully slower -- if not, replanning each time is deemed to be\nwasteful.\n\n            regards, tom lane",
"msg_date": "Sat, 9 Dec 2023 11:58:54 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 2 json explain plans for the same query/plan - why does one have\n constants while the other has parameter markers?"
}
] |
[
{
"msg_contents": "The attached query plan is from 11.\nWe are getting Merge Joins on both sides of the UNION. In both cases, the\nfirst node under the Merge Join returns 0 rows but the other side of the\nMerge Join (the one being sorted) is executed and that's where all of the\ntime is spent.\n\nOn the surface, I don't see any way from the attached explain plan to\ndetermine which side of the Merge Join is executed first. Some questions:\n\n - Which side gets executed first?\n - How would one tell that from the json?\n - Have there been any relevant changes to later releases to make that\n more apparent?\n - Whichever side gets executed first, is the execution of the side that\n would be second get short circuited if 0 rows are returned by the first\n side?\n\nHere's a screenshot from pgMustard.\n\n - Nodes 6 and 14 (the first node under each of the Merge Joins) each\n return 0 rows\n - Nodes 9 and 15 are the expensive sides of the Merge Joins and return\n lots of rows\n\n[image: image.png]\n\nNOTE:\n\n - The query plan in 13 is slightly different, but still includes the\n Merge Joins.\n - Replacing ANY(ARRAY(<subquery)) with IN(<subquery>) fixes the\n performance problem, but we'd still like to understand the execution\n characteristics of Merge Join\n\nThanks,\nJerry",
"msg_date": "Wed, 20 Dec 2023 06:40:48 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which side of a Merge Join gets executed first? Do both sides always\n get executed?"
},
{
"msg_contents": "\n\nLe 20/12/2023 à 15:40, Jerry Brenner a écrit :\n> The attached query plan is from 11.\n> We are getting Merge Joins on both sides of the UNION. In both cases, \n> the first node under the Merge Join returns 0 rows but the other side of \n> the Merge Join (the one being sorted) is executed and that's where all \n> of the time is spent.\n> \n> On the surface, I don't see any way from the attached explain plan to \n> determine which side of the Merge Join is executed first. Some questions:\n> \n> * Which side gets executed first?\n> * How would one tell that from the json?\n> * Have there been any relevant changes to later releases to make that\n> more apparent?\n> * Whichever side gets executed first, is the execution of the side\n> that would be second get short circuited if 0 rows are returned by\n> the first side?\n> \n> Here's a screenshot from pgMustard.\n> \n> * Nodes 6 and 14 (the first node under each of the Merge Joins) each\n> return 0 rows\n> * Nodes 9 and 15 are the expensive sides of the Merge Joins and return\n> lots of rows\n\nI think those nodes (9 and 15) are expensive because they have to filter \nout 8 millions rows in order to produce their first output row. After \nthat, they get short circuited.\n\nBest regards,\nFrédéric\n\n\n",
"msg_date": "Wed, 20 Dec 2023 19:05:40 +0100",
"msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which side of a Merge Join gets executed first? Do both sides\n always get executed?"
},
{
"msg_contents": "\n\nLe 20/12/2023 à 15:40, Jerry Brenner a écrit :\n> Whichever side gets executed first, is the execution of the side that \n> would be second get short circuited if 0 rows are returned by the first \n> side?\n\nIndeed, if 0 rows are returned from the outer relation, the scan of the \ninner relation is never executed.\n\nBest regards,\nFrédéric\n\n\n",
"msg_date": "Wed, 20 Dec 2023 19:32:47 +0100",
"msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which side of a Merge Join gets executed first? Do both sides\n always get executed?"
},
{
"msg_contents": "Thanks. Does this make sense?\n\n - There are 3 nodes under the Merge Join\n - The first node is an InitPlan, due to the ANY(ARRAY()) - that gets\n executed and finds 0 matching rows\n - The second node is the outer node in the Merge Join and that is the\n expensive node in our query plan\n - The third node is the inner node in the Merge Join and that node\n references the SubPlan generated by the first node. The IndexCond has*\n \"id = ANY($2) AND ...\"* and the comparison with the result of the\n SubPlan does not find a match, so that's where the short-circuiting happens.\n\nHere are the relevant lines from the node (12) accessing the result of the\nSubPlan:\n\n \"Plans\": [\n {\n \"Node Type\": \"Index Only Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Scan Direction\": \"Forward\",\n \"Index Name\":\n\"policyperi_u_id_1mw8mh83lyyd9\",\n \"Relation Name\": \"pc_policyperiod\",\n \"Alias\": \"qroots0\",\n \"Startup Cost\": 0.69,\n \"Total Cost\": 18.15,\n \"Plan Rows\": 10,\n \"Plan Width\": 8,\n \"Actual Startup Time\": 0.045,\n \"Actual Total Time\": 0.045,\n \"Actual Rows\": 0,\n \"Actual Loops\": 1,\n \"Index Cond\": \"(*(id = ANY ($2)) AND*\n(retired = 0) AND (temporarybranch = false))\",\n\n\nHere's the screenshot again:\n\n[image: image.png]\n\nThanks,\nJerry\n\nOn Wed, Dec 20, 2023 at 10:32 AM Frédéric Yhuel <[email protected]>\nwrote:\n\n>\n>\n> Le 20/12/2023 à 15:40, Jerry Brenner a écrit :\n> > Whichever side gets executed first, is the execution of the side that\n> > would be second get short circuited if 0 rows are returned by the first\n> > side?\n>\n> Indeed, if 0 rows are returned from the outer relation, the scan of the\n> inner relation is never executed.\n>\n> Best regards,\n> Frédéric\n>\n>",
"msg_date": "Wed, 20 Dec 2023 11:04:28 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Which side of a Merge Join gets executed first? Do both sides\n always get executed?"
},
{
"msg_contents": "\n\nLe 20/12/2023 à 20:04, Jerry Brenner a écrit :\n> Thanks. Does this make sense?\n> \n> * There are 3 nodes under the Merge Join\n> * The first node is an InitPlan, due to the ANY(ARRAY()) - that gets\n> executed and finds 0 matching rows\n> * The second node is the outer node in the Merge Join and that is the\n> expensive node in our query plan\n> * The third node is the inner node in the Merge Join and that node\n> references the SubPlan generated by the first node. The IndexCond\n> has*\"id = ANY($2) AND ...\"* and the comparison with the result of\n> the SubPlan does not find a match, so that's where the\n> short-circuiting happens.\n\nI think it does.\n\nI'm not very experienced with the customs of these mailing lists, but I \nthink the following would help to get more answers :\n\n* TEXT format of EXPLAIN is much more readable (compared to JSON)\n* A well formatted query would help\n* Screenshots aren't so great\n\nRather than a screenshot, maybe you could use one of explain.depesz.com, \nexplain.dalibo.com, or explain-postgresql.com ?\n\nBest regards,\nFrédéric\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:27:20 +0100",
"msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which side of a Merge Join gets executed first? Do both sides\n always get executed?"
}
] |
[
{
"msg_contents": "Hello great day, we have a strange case with slow query and would like some help. \n\n\n\nI've already read the article https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\n\nExplain: https://paste.depesz.com/s/PLP\n\n\n\nExplain2: https://explain-postgresql.com/archive/explain/8e4b573c5f7bcf3a0d30675a430051fd:0:2023-12-26\n\n\n\nQuery: https://paste.depesz.com/s/fd3\n\n\n\nDDL: https://paste.depesz.com/s/vBW\n\n\n\ntunning: https://paste.depesz.com/s/dXa\n\n\n\n\n\n\n\nWe have citus cluster with the following configuration: 1 master + 3 data nodes, each machine have:\n\n- 24 cores (Intel Xeon E5 2620)\n\n- 192 GB RAM\n\n- 1TB SSD\n\n\n\neach node has configured postgres settings using tuning.sql\n\n\n\nThe main Table DDL is in (ddl.sql)\n\n\n\nalso distributed are as follow:\n\n\n\nSELECT create_distributed_table('salert_post', 'id',shard_count := 72);\n\n\n\nSELECT create_distributed_table('salert_q56', 'post',\n\n    colocate_with => 'salert_post');\n\n\n\nwhen run the query (query.sql) as you can see in explain (plan4_v3.txt) citus take about 18s to run all fragments\n\nbut each fragment take at most 2s, so my questions are- why citus take this time in run all fragments?\n\n- if I tuned each postgres node efficiently why take much time to make sort and aggregate with citus results?\n\n\n\ngood night, I hope you can help me with some ideas\n\n\n\n\n\nalso we remove partitions, and test only with citus, but query took more than a minute.\n\nas a note, we not have 72 shards on the same node we have 72 in total, 24 shards each node.\n\n\n\nI think the problem was in Sort and in GroupAggregate I no have idea how speed up this in master node, because the Custom Scan (Citus Adaptive) is not too slow, the most time is consumed in master on Sort and group\n\n\n\nI hope you can help me.\nHello great day, we have a strange case with slow query and would like some help. I've already read the article https://wiki.postgresql.org/wiki/Slow_Query_QuestionsExplain: https://paste.depesz.com/s/PLPExplain2: https://explain-postgresql.com/archive/explain/8e4b573c5f7bcf3a0d30675a430051fd:0:2023-12-26Query: https://paste.depesz.com/s/fd3DDL: https://paste.depesz.com/s/vBWtunning: https://paste.depesz.com/s/dXaWe have citus cluster with the following configuration: 1 master + 3 data nodes, each machine have:- 24 cores (Intel Xeon E5 2620)- 192 GB RAM- 1TB SSDeach node has configured postgres settings using tuning.sqlThe main Table DDL is in (ddl.sql)also distributed are as follow:SELECT create_distributed_table('salert_post', 'id',shard_count := 72);SELECT create_distributed_table('salert_q56', 'post', colocate_with => 'salert_post');when run the query (query.sql) as you can see in explain (plan4_v3.txt) citus take about 18s to run all fragmentsbut each fragment take at most 2s, so my questions are- why citus take this time in run all fragments?- if I tuned each postgres node efficiently why take much time to make sort and aggregate with citus results?good night, I hope you can help me with some ideasalso we remove partitions, and test only with citus, but query took more than a minute.as a note, we not have 72 shards on the same node we have 72 in total, 24 shards each node.I think the problem was in Sort and in GroupAggregate I no have idea how speed up this in master node, because the Custom Scan (Citus Adaptive) is not too slow, the most time is consumed in master on Sort and groupI hope you can help me.",
"msg_date": "Mon, 25 Dec 2023 21:49:40 -0500",
"msg_from": "Darwin Correa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow GroupAggregate and Sort"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 12:03 PM Darwin Correa <[email protected]> wrote:\n\n>\n> when run the query (query.sql) as you can see in explain (plan4_v3.txt)\n> citus take about 18s to run all fragments\n>\n\nWhere is plan4_v3.txt? Is that hidden in some non-obvious way in one of\nyour links?\n\n\n> but each fragment take at most 2s, so my questions are- why citus take\n> this time in run all fragments?\n>\n\nI only see that one arbitrary fragment takes 2.7s, with no indication\nwhether that one is the slowest one or not. But I am not used to reading\ncitus plans.\n\n\n> also we remove partitions, and test only with citus, but query took more\n> than a minute.\n> as a note, we not have 72 shards on the same node we have 72 in total, 24\n> shards each node.\n>\n\nI thought the point of sharding was to bring more CPU and RAM to bear than\ncan feasibly be obtained in one machine. Doesn't that make 24 shards per\nmachine completely nuts?\n\n\n>\n> I think the problem was in Sort and in GroupAggregate I no have idea how\n> speed up this in master node, because the Custom Scan (Citus Adaptive) is\n> not too slow, the most time is consumed in master on Sort and group\n>\n\nYou want to know why citus is so slow here, but also say it isn't slow and\nsomething else is slow instead?\n\nI'd break this down into more manageable chunks for investigation.\nPopulate one scratch table (on one node, not a hypertable) with all 2.6\nmillion rows. See how long it takes to populate it based on the citus\nquery, and separately see how long it takes to run the aggregate query on\nthe populated scratch table.\n\nWhat version of PostgreSQL (and citus) are you using? In my hands (without\ncitus being involved), the sort includes \"users\" as the last column, to\nsupport the count(distinct users) operation. 
I don't know why yours\ndoesn't do that.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 28 Dec 2023 13:06:18 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow GroupAggregate and Sort"
},
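A minimal sketch of the experiment Jeff describes above — materializing the coordinator's input on one plain PostgreSQL node and timing the aggregate on its own. The table, column, and source names here are placeholders; the real ones are in the query.sql/ddl.sql pastes linked in the first message:

```sql
-- Step 1: materialize the ~2.6 million rows the coordinator receives,
-- on a single non-Citus node, and time how long that takes.
\timing on
CREATE TABLE scratch_rows AS
SELECT post, users, ts            -- placeholder column list
FROM   salert_q56_local_copy;     -- placeholder local copy of the data

-- Step 2: time the sort/aggregate step alone on the local table.
EXPLAIN (ANALYZE, BUFFERS)
SELECT   post, count(DISTINCT users)
FROM     scratch_rows
GROUP BY post
ORDER BY post;
```

If step 2 is slow even here, the bottleneck is plain PostgreSQL sorting/aggregation (collation, work_mem, etc.), not the Citus executor.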
{
"msg_contents": "Hello, Happy New Year! I add my responses in blue.\n\n\n\n\n\n\n\n---- El Thu, 28 Dec 2023 13:06:18 -0500, Jeff Janes <[email protected]> escribió ----\n\n\n\nOn Thu, Dec 28, 2023 at 12:03 PM Darwin Correa <mailto:[email protected]> wrote:\n\n\n\n\n\nwhen run the query (query.sql) as you can see in explain (plan4_v3.txt) citus take about 18s to run all fragments\n\n\n\n\n\n\nWhere is plan4_v3.txt? Is that hidden in some non-obvious way in one of your links?\n\n\n\n\n\n\n\n\nsorry by the wrong name, Yes The explain plan is in the link that said plan, is this\n\n\n\nhttps://explain-postgresql.com/archive/explain/8e4b573c5f7bcf3a0d30675a430051fd:0:2023-12-26 (plan updated)\n\n\n\n\n\n\n\n \n\n\n\nbut each fragment take at most 2s, so my questions are- why citus take this time in run all fragments?\n\n\n\n\n\n\nI only see that one arbitrary fragment takes 2.7s, with no indication whether that one is the slowest one or not. But I am not used to reading citus plans.\n\n\n\n\n\n\n\n\nIn the explain plan citus show one of 72 subtask and show the most slow\n\n\n\n \n\nalso we remove partitions, and test only with citus, but query took more than a minute.\n\nas a note, we not have 72 shards on the same node we have 72 in total, 24 shards each node.\n\n\n\n\n\n\nI thought the point of sharding was to bring more CPU and RAM to bear than can feasibly be obtained in one machine. 
Doesn't that make 24 shards per machine completely nuts?\n\n\n\n\n\n\n\n\nBased o citus docs the recommended shards is 2x cpu cores in my case I've tested with few shards and 1:1, 2:1 shards but always have slow query time in the last step (sorting and grouping) in máster node.\n\n\n\n\n\n\n\nI think the problem was in Sort and in GroupAggregate I no have idea how speed up this in master node, because the Custom Scan (Citus Adaptive) is not too slow, the most time is consumed in master on Sort and group\n\n\n\n\n\n\nYou want to know why citus is so slow here, but also say it isn't slow and something else is slow instead? \n\n\n\n\n\n\n\n\nI'm refering in general that this query run slow in Citus cluster, but analizing explain plan I think that the specific part of citus (Adaptive executor) is not the slow part, instead of I can show that the “postgres only part” is slow (Sort and GroupAggregate)\n\n\n\n\n\nI'd break this down into more manageable chunks for investigation. Populate one scratch table (on one node, not a hypertable) with all 2.6 million rows. See how long it takes to populate it based on the citus query, and separately see how long it takes to run the aggregate query on the populated scratch table.\n\n\n\n\n\n\n\n\n\nPopulate table based with citus query, took 1.45 seconds each fragment, I don't know how citus run all fragments in parallel but running secuential each fragment, total took 51s\n\n\n\nAfter scratch table filled sort took 32s, explain (https://explain.dalibo.com/plan/8a3h26hcc6328c11)\n\n\n\nand sort+aggregation took 34s explain (https://explain.dalibo.com/plan/c5e4d62ge87cafg4)\n\n\n\nI don't understand \"actual time\" metric, because accordind plan (citus) startup time is high in Sort step\n\n\n\n\n\n\nWhat version of PostgreSQL (and citus) are you using? In my hands (without citus being involved), the sort includes \"users\" as the last column, to support the count(distinct users) operation. 
I don't know why yours doesn't do that.\n\n\n\n\n\n\nI'm using citus 12.0 wich comes with postgreSQL 16, I upgrade to 12.1 this is the updated plan: (now took more time)\n\n\n\nhttps://explain-postgresql.com/archive/explain/3849220d3e3ff2850fe39c62f954cd32:0:2024-01-01\n\n\n\n\n\n\n\nCheers,\n\n\n\nJeff\n\n\n\n\n\n\n\n\n\n\n \n\n\nDarwin \n\n \n\n\n\n\n\n\nCorrea P. \n\n\n\n// software architect \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n\n\n\n \n\n\nVeintimilla y Leonidas Plaza \n\n\n\n0999965925 \n\n\n\nmailto:[email protected] / mailto:[email protected] \n\n\n\n // DESARROLLO E INNOVACIÓN TECNOLÓGICA \n\n\n\n \nHello, Happy New Year! I add my responses in blue.---- El Thu, 28 Dec 2023 13:06:18 -0500, Jeff Janes <[email protected]> escribió ----On Thu, Dec 28, 2023 at 12:03 PM Darwin Correa <[email protected]> wrote:when run the query (query.sql) as you can see in explain (plan4_v3.txt) citus take about 18s to run all fragmentsWhere is plan4_v3.txt? Is that hidden in some non-obvious way in one of your links?sorry by the wrong name, Yes The explain plan is in the link that said plan, is thishttps://explain-postgresql.com/archive/explain/8e4b573c5f7bcf3a0d30675a430051fd:0:2023-12-26 (plan updated) but each fragment take at most 2s, so my questions are- why citus take this time in run all fragments?I only see that one arbitrary fragment takes 2.7s, with no indication whether that one is the slowest one or not. But I am not used to reading citus plans.In the explain plan citus show one of 72 subtask and show the most slow also we remove partitions, and test only with citus, but query took more than a minute.as a note, we not have 72 shards on the same node we have 72 in total, 24 shards each node.I thought the point of sharding was to bring more CPU and RAM to bear than can feasibly be obtained in one machine. 
Doesn't that make 24 shards per machine completely nuts?Based o citus docs the recommended shards is 2x cpu cores in my case I've tested with few shards and 1:1, 2:1 shards but always have slow query time in the last step (sorting and grouping) in máster node.I think the problem was in Sort and in GroupAggregate I no have idea how speed up this in master node, because the Custom Scan (Citus Adaptive) is not too slow, the most time is consumed in master on Sort and groupYou want to know why citus is so slow here, but also say it isn't slow and something else is slow instead? I'm refering in general that this query run slow in Citus cluster, but analizing explain plan I think that the specific part of citus (Adaptive executor) is not the slow part, instead of I can show that the “postgres only part” is slow (Sort and GroupAggregate)I'd break this down into more manageable chunks for investigation. Populate one scratch table (on one node, not a hypertable) with all 2.6 million rows. See how long it takes to populate it based on the citus query, and separately see how long it takes to run the aggregate query on the populated scratch table.Populate table based with citus query, took 1.45 seconds each fragment, I don't know how citus run all fragments in parallel but running secuential each fragment, total took 51sAfter scratch table filled sort took 32s, explain (https://explain.dalibo.com/plan/8a3h26hcc6328c11)and sort+aggregation took 34s explain (https://explain.dalibo.com/plan/c5e4d62ge87cafg4)I don't understand \"actual time\" metric, because accordind plan (citus) startup time is high in Sort stepWhat version of PostgreSQL (and citus) are you using? In my hands (without citus being involved), the sort includes \"users\" as the last column, to support the count(distinct users) operation. 
I don't know why yours doesn't do that.I'm using citus 12.0 wich comes with postgreSQL 16, I upgrade to 12.1 this is the updated plan: (now took more time)https://explain-postgresql.com/archive/explain/3849220d3e3ff2850fe39c62f954cd32:0:2024-01-01Cheers,Jeff Darwin Correa P. // software architect Veintimilla y Leonidas Plaza 0999965925 [email protected] / [email protected] // DESARROLLO E INNOVACIÓN TECNOLÓGICA",
"msg_date": "Mon, 01 Jan 2024 09:57:47 -0500",
"msg_from": "Darwin Correa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow GroupAggregate and Sort"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 9:57 AM Darwin Correa <[email protected]> wrote:\n\n> Hello, Happy New Year! I add my responses in blue.\n>\n>\n>\n> ---- El Thu, 28 Dec 2023 13:06:18 -0500, *Jeff Janes\n> <[email protected] <[email protected]>>* escribió ----\n>\n> I thought the point of sharding was to bring more CPU and RAM to bear than\n> can feasibly be obtained in one machine. Doesn't that make 24 shards per\n> machine completely nuts?\n>\n>\n> Based o citus docs the recommended shards is 2x cpu cores in my case I've\n> tested with few shards and 1:1, 2:1 shards but always have slow query time\n> in the last step (sorting and grouping) in máster node.\n>\n\nThat might make sense if PostgreSQL didn't do parallelization itself. But\naccording to your plan, PostgreSQL itself tries to parallelize 4 ways\n(although fails, as it can't find any available workers) and then you have\n24 nodes all doing the same thing, all with only 12 CPU. That doesn't seem\ngood. although it now does seem unrelated to the issue at hand.\n\n\n> I'd break this down into more manageable chunks for investigation.\n> Populate one scratch table (on one node, not a hypertable) with all 2.6\n> million rows. See how long it takes to populate it based on the citus\n> query, and separately see how long it takes to run the aggregate query on\n> the populated scratch table.\n>\n>\n> After scratch table filled sort took 32s, explain (\n> https://explain.dalibo.com/plan/8a3h26hcc6328c11)\n>\n\nSo that plan shows the sort to be egregiously slow, and with no involvement\nof citus and no apparent reason for slowness. I'm thinking you have a\npathological collation being used. What is your default collation? (Your\nDDL shows that no non-default collations are in use, but doesn't indicate\nwhat the default is)\n\nCheers,\n\nJeff\n\nOn Mon, Jan 1, 2024 at 9:57 AM Darwin Correa <[email protected]> wrote:Hello, Happy New Year! 
I add my responses in blue.---- El Thu, 28 Dec 2023 13:06:18 -0500, Jeff Janes <[email protected]> escribió ----I thought the point of sharding was to bring more CPU and RAM to bear than can feasibly be obtained in one machine. Doesn't that make 24 shards per machine completely nuts?Based o citus docs the recommended shards is 2x cpu cores in my case I've tested with few shards and 1:1, 2:1 shards but always have slow query time in the last step (sorting and grouping) in máster node.That might make sense if PostgreSQL didn't do parallelization itself. But according to your plan, PostgreSQL itself tries to parallelize 4 ways (although fails, as it can't find any available workers) and then you have 24 nodes all doing the same thing, all with only 12 CPU. That doesn't seem good. although it now does seem unrelated to the issue at hand.I'd break this down into more manageable chunks for investigation. Populate one scratch table (on one node, not a hypertable) with all 2.6 million rows. See how long it takes to populate it based on the citus query, and separately see how long it takes to run the aggregate query on the populated scratch table.After scratch table filled sort took 32s, explain (https://explain.dalibo.com/plan/8a3h26hcc6328c11)So that plan shows the sort to be egregiously slow, and with no involvement of citus and no apparent reason for slowness. I'm thinking you have a pathological collation being used. What is your default collation? (Your DDL shows that no non-default collations are in use, but doesn't indicate what the default is)Cheers,Jeff",
"msg_date": "Wed, 3 Jan 2024 21:43:15 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow GroupAggregate and Sort"
},
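One way to test the "pathological collation" hypothesis Jeff raises above is to re-run the slow sort under the byte-wise "C" collation and compare timings. A sketch, using the placeholder table/column names from the scratch-table experiment:

```sql
-- Default collation (e.g. en_US.UTF-8): string comparisons go through
-- the locale library, which can be several times slower than memcmp().
EXPLAIN ANALYZE SELECT * FROM scratch_rows ORDER BY post;

-- Byte-wise "C" collation: if this run is dramatically faster, the
-- default collation is the bottleneck in the Sort node.
EXPLAIN ANALYZE SELECT * FROM scratch_rows ORDER BY post COLLATE "C";
```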
{
"msg_contents": "Hello, my answers in blue again \n\n\n\n\n\n\n\n---- El Wed, 03 Jan 2024 21:43:15 -0500, Jeff Janes <[email protected]> escribió ----\n\n\n\nOn Mon, Jan 1, 2024 at 9:57 AM Darwin Correa <mailto:[email protected]> wrote:\n\nHello, Happy New Year! I add my responses in blue.\n\n\n\n\n\n\n\n---- El Thu, 28 Dec 2023 13:06:18 -0500, Jeff Janes <mailto:[email protected]> escribió ----\n\n\n\n\nI thought the point of sharding was to bring more CPU and RAM to bear than can feasibly be obtained in one machine. Doesn't that make 24 shards per machine completely nuts?\n\n\n\n\n\n\n\n\nBased o citus docs the recommended shards is 2x cpu cores in my case I've tested with few shards and 1:1, 2:1 shards but always have slow query time in the last step (sorting and grouping) in máster node.\n\n\n\n\n\n\n\n\nThat might make sense if PostgreSQL didn't do parallelization itself. But according to your plan, PostgreSQL itself tries to parallelize 4 ways (although fails, as it can't find any available workers) and then you have 24 nodes all doing the same thing, all with only 12 CPU. That doesn't seem good. although it now does seem unrelated to the issue at hand.\n\n\n\n\n\n\n\n\nBut the coordinator (who make sort and aggr) are in separate server (each node si phisically other server) I no understand why if cooridnator not aree too busy, and I've already test with less shards, and time increment.\n\n\n\n\n\n\n\nI'd break this down into more manageable chunks for investigation. Populate one scratch table (on one node, not a hypertable) with all 2.6 million rows. See how long it takes to populate it based on the citus query, and separately see how long it takes to run the aggregate query on the populated scratch table.\n\n\n\n\n\n\n\n\n\nAfter scratch table filled sort took 32s, explain (https://explain.dalibo.com/plan/8a3h26hcc6328c11)\n\n\n\n\n\n\n\nSo that plan shows the sort to be egregiously slow, and with no involvement of citus and no apparent reason for slowness. 
I'm thinking you have a pathological collation being used. What is your default collation? (Your DDL shows that no non-default collations are in use, but doesn't indicate what the default is)\n\n\n\n\n\n\n\n\nThe collation is en_US.UFT-8, can you give more detail or which refer to \"pathological collation\" please to research about that? and the data store in this tables and this column specific are only alphanumeric charactres a-z,A-Z and numbers, nothing special\n\n\n\nCheers,\n\n\n\nJeff\n\n\n\n\n\n\n\n\n\n\n \n\n\nDarwin \n\n \n\n\n\n\n\n\nCorrea P. \n\n\n\n// software architect \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n\n\n\n \n\n\nVeintimilla y Leonidas Plaza \n\n\n\n0999965925 \n\n\n\nmailto:[email protected] / mailto:[email protected] \n\n\n\n // DESARROLLO E INNOVACIÓN TECNOLÓGICA \n\n\n\n \nHello, my answers in blue again ---- El Wed, 03 Jan 2024 21:43:15 -0500, Jeff Janes <[email protected]> escribió ----On Mon, Jan 1, 2024 at 9:57 AM Darwin Correa <[email protected]> wrote:Hello, Happy New Year! I add my responses in blue.---- El Thu, 28 Dec 2023 13:06:18 -0500, Jeff Janes <[email protected]> escribió ----I thought the point of sharding was to bring more CPU and RAM to bear than can feasibly be obtained in one machine. Doesn't that make 24 shards per machine completely nuts?Based o citus docs the recommended shards is 2x cpu cores in my case I've tested with few shards and 1:1, 2:1 shards but always have slow query time in the last step (sorting and grouping) in máster node.That might make sense if PostgreSQL didn't do parallelization itself. But according to your plan, PostgreSQL itself tries to parallelize 4 ways (although fails, as it can't find any available workers) and then you have 24 nodes all doing the same thing, all with only 12 CPU. That doesn't seem good. 
although it now does seem unrelated to the issue at hand.But the coordinator (who make sort and aggr) are in separate server (each node si phisically other server) I no understand why if cooridnator not aree too busy, and I've already test with less shards, and time increment.I'd break this down into more manageable chunks for investigation. Populate one scratch table (on one node, not a hypertable) with all 2.6 million rows. See how long it takes to populate it based on the citus query, and separately see how long it takes to run the aggregate query on the populated scratch table.After scratch table filled sort took 32s, explain (https://explain.dalibo.com/plan/8a3h26hcc6328c11)So that plan shows the sort to be egregiously slow, and with no involvement of citus and no apparent reason for slowness. I'm thinking you have a pathological collation being used. What is your default collation? (Your DDL shows that no non-default collations are in use, but doesn't indicate what the default is)The collation is en_US.UFT-8, can you give more detail or which refer to \"pathological collation\" please to research about that? and the data store in this tables and this column specific are only alphanumeric charactres a-z,A-Z and numbers, nothing specialCheers,Jeff Darwin Correa P. // software architect Veintimilla y Leonidas Plaza 0999965925 [email protected] / [email protected] // DESARROLLO E INNOVACIÓN TECNOLÓGICA",
"msg_date": "Sat, 06 Jan 2024 16:09:29 -0500",
"msg_from": "Darwin Correa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow GroupAggregate and Sort"
}
]
[
{
"msg_contents": "Hello Team,\nI observed that increasing the degree of parallel hint in the SELECT query\ndid not show performance improvements.\nBelow are the details of sample execution with EXPLAIN ANALYZE\n*PostgreSQL Version:* v15.5\n\n*Operating System details:* RHL 7.x\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nThread(s) per core: 1\nCore(s) per socket: 16\nSocket(s): 1\nNUMA node(s): 1\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 79\nModel name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz\nStepping: 1\nCPU MHz: 2294.684\nBogoMIPS: 4589.36\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 256K\nL3 cache: 51200K\nNUMA node0 CPU(s): 0-15\n\n*PostgreSQL Query:*\nSample sql executed through psql command prompt\nforce_parallel_mode=on; max_parallel_workers_per_gather=200;\nmax_parallel_workers=6\n\nexplain analyze select /*+ PARALLEL(A 6) */ ctid::varchar,\n md5(\"col1\"||'~'||\"col7\"||'~'||\"col9\"::varchar) ,\nmd5(\"id\"||'~'||\"gender\"||'~'||\"firstname\"||'~'||\"lastname\"||'~'||\"address\"||'~'||\"city\"||'~'||\"salary\"||'~'||\"pincode\"||'~'||\"sales\"||'~'||\"phone\"||'~'||\"amount\"||'~'||\"dob\"||'~'||\"starttime\"||'~'||\"timezone\"||'~'||\"status\"||'~'||\"timenow\"||'~'||\"timelater\"||'~'||\"col2\"||'~'||\"col3\"||'~'||\"col4\"||'~'||\"col5\"||'~'||\"col6\"||'~'||\"col8\"||'~'||\"col10\"||'~'||\"col11\"||'~'||\"col12\"||'~'||\"col13\"||'~'||\"col14\"||'~'||\"col15\"||'~'||\"col16\"||'~'||\"col17\"::varchar)\n, md5('@'||\"col1\"||'~'||\"col7\"||'~'||\"col9\"::varchar) from\n\"sp_qarun\".\"basic2\" A order by 2,4,3;\n\n*Output:*\nPSQL query execution with hints 6 for 1st time => 203505.402 ms\nPSQL query execution with hints 6 for 2nd time => 27920.272 ms\nPSQL query execution with hints 6 for 3rd time => 27666.770 ms\nOnly 6 workers launched, and there is no reduction in execution time even\nafter increasing the degree of 
parallel hints in select query.\n\n*Table Structure:*\ncreate table basic2(id int,gender char,firstname varchar(3000),\nlastname varchar(3000),address varchar(3000),city varchar(900),salary\nsmallint,\npincode bigint,sales numeric,phone real,amount double precision,\ndob date,starttime timestamp,timezone TIMESTAMP WITH TIME ZONE,\nstatus boolean,timenow time,timelater TIME WITH TIME ZONE,col1 int,\ncol2 char,col3 varchar(3000),col4 varchar(3000),col5 varchar(3000),\ncol6 varchar(900),col7 smallint,col8 bigint,col9 numeric,col10 real,\ncol11 double precision,col12 date,col13 timestamp,col14 TIMESTAMP WITH TIME\nZONE,\ncol15 boolean,col16 time,col17 TIME WITH TIME ZONE,primary\nkey(col1,col7,col9));\n\n*Table Data:* 1000000 rows with each row has a size of 20000.\n\nThanks,\nMohini",
"msg_date": "Wed, 27 Dec 2023 18:45:23 +0530",
"msg_from": "mohini mane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel hints in PostgreSQL with consistent perfromance"
},
{
"msg_contents": "On 12/27/23 14:15, mohini mane wrote:\n> Hello Team,\n> I observed that increasing the degree of parallel hint* *in the SELECT\n> query did not show performance improvements.\n> Below are the details of sample execution with EXPLAIN ANALYZE\n> *PostgreSQL Version:* v15.5\n> \n> *Operating System details:* RHL 7.x\n> Architecture: x86_64\n> CPU op-mode(s): 32-bit, 64-bit\n> Byte Order: Little Endian\n> CPU(s): 16\n> On-line CPU(s) list: 0-15\n> Thread(s) per core: 1\n> Core(s) per socket: 16\n> Socket(s): 1\n> NUMA node(s): 1\n> Vendor ID: GenuineIntel\n> CPU family: 6\n> Model: 79\n> Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz\n> Stepping: 1\n> CPU MHz: 2294.684\n> BogoMIPS: 4589.36\n> Hypervisor vendor: Microsoft\n> Virtualization type: full\n> L1d cache: 32K\n> L1i cache: 32K\n> L2 cache: 256K\n> L3 cache: 51200K\n> NUMA node0 CPU(s): 0-15\n> \n> *PostgreSQL Query:* \n> Sample sql executed through psql command prompt\n> force_parallel_mode=on; max_parallel_workers_per_gather=200;\n> max_parallel_workers=6\n> \n> explain analyze select /*+ PARALLEL(A 6) */ ctid::varchar,\n> md5(\"col1\"||'~'||\"col7\"||'~'||\"col9\"::varchar) ,\n> md5(\"id\"||'~'||\"gender\"||'~'||\"firstname\"||'~'||\"lastname\"||'~'||\"address\"||'~'||\"city\"||'~'||\"salary\"||'~'||\"pincode\"||'~'||\"sales\"||'~'||\"phone\"||'~'||\"amount\"||'~'||\"dob\"||'~'||\"starttime\"||'~'||\"timezone\"||'~'||\"status\"||'~'||\"timenow\"||'~'||\"timelater\"||'~'||\"col2\"||'~'||\"col3\"||'~'||\"col4\"||'~'||\"col5\"||'~'||\"col6\"||'~'||\"col8\"||'~'||\"col10\"||'~'||\"col11\"||'~'||\"col12\"||'~'||\"col13\"||'~'||\"col14\"||'~'||\"col15\"||'~'||\"col16\"||'~'||\"col17\"::varchar) , md5('@'||\"col1\"||'~'||\"col7\"||'~'||\"col9\"::varchar) from \"sp_qarun\".\"basic2\" A order by 2,4,3;\n> \n\nPostgres doesn't support hints, so the /* ... */ part of the query is\njust a comment and doesn't effect the parallelism at all. 
The main thing\ninfluencing that are the GUC values you set before.\n\n\n> *Output:*\n> PSQL query execution with hints 6 for 1st time => 203505.402 ms\n> PSQL query execution with hints 6 for 2nd time => 27920.272 ms\n> PSQL query execution with hints 6 for 3rd time => 27666.770 ms\n> Only 6 workers launched, and there is no reduction in execution time\n> even after increasing the degree of parallel hints in select query.\n> \n\nIt's unclear what exactly you changed, and what case you're comparing\nthe timing to. As I explained earlier, the hint comment has no effect.\nSo if that's what you increased, it's not surprising the timing does not\nchange.\n\nAlso, max_parallel_workers is the maximum total number of parallel\nworkers, i.e. it's the upper bound of max_parallel_workers_per_gather. So if\nyou set it to 6, there will never be more than 6 workers, no matter what\nvalue max_parallel_workers_per_gather is set to.\n\nFWIW force_parallel_mode is really meant for testing (in the context of\ndeveloping the database itself); it's hardly the thing you should do in\nany other case, like for example testing performance.\n\n\n> *Table Structure:*\n> create table basic2(id int,gender char,firstname varchar(3000),\n> lastname varchar(3000),address varchar(3000),city varchar(900),salary\n> smallint,\n> pincode bigint,sales numeric,phone real,amount double precision,\n> dob date,starttime timestamp,timezone TIMESTAMP WITH TIME ZONE,\n> status boolean,timenow time,timelater TIME WITH TIME ZONE,col1 int,\n> col2 char,col3 varchar(3000),col4 varchar(3000),col5 varchar(3000),\n> col6 varchar(900),col7 smallint,col8 bigint,col9 numeric,col10 real,\n> col11 double precision,col12 date,col13 timestamp,col14 TIMESTAMP WITH\n> TIME ZONE,\n> col15 boolean,col16 time,col17 TIME WITH TIME ZONE,primary\n> key(col1,col7,col9));\n> \n> *Table Data:* 1000000 rows with each Row has a size of 20000.\n> \n\nWithout the data we can't actually try running the query.\n\nIn general it's a good
idea to show the \"explain analyze\" output for the\ncases you're comparing. Not only that shows what the database is doing,\nit also shows timings for different parts of the query, how many workers\nwere planned / actually started etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Dec 2023 15:15:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent perfromance"
},
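To restate Tomas's point above as a sketch: without pg_hint_plan installed, the /*+ PARALLEL(A 6) */ comment is just a comment, and the worker count is governed entirely by the GUCs and the planner's own sizing rules:

```sql
-- Total parallel workers available to the whole instance:
SET max_parallel_workers = 8;
-- Per-Gather cap; raising it beyond max_parallel_workers has no effect:
SET max_parallel_workers_per_gather = 6;
-- The planner may still plan fewer workers for small tables
-- (see min_parallel_table_scan_size). Compare "Workers Planned"
-- with "Workers Launched" in the output:
EXPLAIN ANALYZE SELECT count(*) FROM basic2;
```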
{
"msg_contents": "On Wed, Dec 27, 2023 at 8:15 AM mohini mane <[email protected]>\nwrote:\n\n> Hello Team,\n> I observed that increasing the degree of parallel hint in the SELECT\n> query did not show performance improvements.\n> Below are the details of sample execution with EXPLAIN ANALYZE\n>\n\nPostgreSQL doesn't have hints, unless you are using pg_hint_plan. Which you\nshould say if you are.\n\n*Output:*\n> PSQL query execution with hints 6 for 1st time => 203505.402 ms\n> PSQL query execution with hints 6 for 2nd time => 27920.272 ms\n> PSQL query execution with hints 6 for 3rd time => 27666.770 ms\n> Only 6 workers launched, and there is no reduction in execution time even\n> after increasing the degree of parallel hints in select query.\n>\n\nAll you are showing here is the effect of caching the data in memory. You\nallude to changing the degree, but didn't show any results, or even\ndescribe what the change was. Is 6 the base from which you increased, or\nis it the result of having done the increase?\n\nCheers,\n\nJeff\n\nOn Wed, Dec 27, 2023 at 8:15 AM mohini mane <[email protected]> wrote:Hello Team,I observed that increasing the degree of parallel hint in the SELECT query did not show performance improvements.Below are the details of sample execution with EXPLAIN ANALYZEPostgreSQL doesn't have hints, unless you are using pg_hint_plan. Which you should say if you are.Output:PSQL query execution with hints 6 for 1st time => 203505.402 msPSQL query execution with hints 6 for 2nd time => 27920.272 msPSQL query execution with hints 6 for 3rd time => 27666.770 msOnly 6 workers launched, and there is no reduction in execution time even after increasing the degree of parallel hints in select query.All you are showing here is the effect of caching the data in memory. You allude to changing the degree, but didn't show any results, or even describe what the change was. 
Is 6 the base from which you increased, or is it the result of having done the increase?Cheers,Jeff",
"msg_date": "Wed, 27 Dec 2023 09:41:36 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "Thank you for your response !!\nI am experimenting with SQL query performance for SELECT queries on large\ntables and I observed that changing/increasing the degree of parallel hint\ndoesn't give the expected performance improvement.\n\nI have executed the SELECT query with 2,4 & 6 parallel degree however every\ntime only 4 workers launched & there was a slight increase in Execution\ntime as well, why there is an increase in execution time with parallel\ndegree 6 as compared to 2 or 4?\nPlease refer to the test results\n\nI am sharing the latest test results here :\n*Session variables set in psql prompt:*\n# show max_parallel_workers;\n max_parallel_workers\n----------------------\n 8\n(1 row)\n\n# show max_parallel_workers_per_gather;\n max_parallel_workers_per_gather\n---------------------------------\n 6\n(1 row)\n\n*1st time query executed with PARALLEL DEGREE 2 *\nexplain analyze select /*+* PARALLEL(A 2)* */ * from\ntest_compare_all_col_src1 A;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=10.00..45524.73 rows=949636 width=97) (actual\ntime=0.673..173.017 rows=955000 loops=1)\n Workers Planned: 4\n * Workers Launched: 4*\n -> Parallel Seq Scan on test_compare_all_col_src1 a\n (cost=0.00..44565.09 rows=237409 width=97) (actual time=0.039..51.941\nrows=191000 loops=5)\n Planning Time: 0.093 ms\n* Execution Time: 209.745 ms*\n(6 rows)\n\n*2nd time query executed with PARALLEL DEGREE 4*\nexplain analyze select /*+ *PARALLEL(A 4)* */ * from\naparopka.test_compare_all_col_src1 A;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=10.00..45524.73 rows=949636 width=97) (actual\ntime=0.459..174.771 rows=955000 loops=1)\n Workers Planned: 4\n *Workers Launched: 4*\n -> Parallel Seq Scan on 
test_compare_all_col_src1 a\n (cost=0.00..44565.09 rows=237409 width=97) (actual time=0.038..54.320\nrows=191000 loops=5)\n Planning Time: 0.073 ms\n *Execution Time: 210.170 ms*\n(6 rows)\n\n3rd time query executed with PARALLEL DEGREE 6\n\nexplain analyze select /**+ PARALLEL(A 6)* */ * from\naparopka.test_compare_all_col_src1 A;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=10.00..45524.73 rows=949636 width=97) (actual\ntime=0.560..196.586 rows=955000 loops=1)\n Workers Planned: 4\n *Workers Launched: 4*\n -> Parallel Seq Scan on test_compare_all_col_src1 a\n (cost=0.00..44565.09 rows=237409 width=97) (actual time=0.049..58.741\nrows=191000 loops=5)\n Planning Time: 0.095 ms\n *Execution Time: 235.365 ms*\n(6 rows)\n\nTable Schema :\n\n Table \"test_compare_all_col_src1\"\n Column | Type | Collation | Nullable |\nDefault | Storage | Stats target | Description\n-----------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n col_smallint | integer | | |\n | plain | |\n col_int | integer | | |\n | plain | |\n col_bigint | bigint | | not null |\n | plain | |\n col_numeric | numeric | | |\n | main | |\n col_real | real | | |\n | plain | |\n col_double | double precision | | |\n | plain | |\n col_bool | boolean | | |\n | plain | |\n col_char | character(1) | | |\n | extended | |\n col_varchar | character varying(2000) | | |\n | extended | |\n col_date | date | | |\n | plain | |\n col_time | time without time zone | | |\n | plain | |\n col_timetz | time with time zone | | |\n | plain | |\n col_timestamp | timestamp without time zone | | |\n | plain | |\n col_timestamptz | timestamp with time zone | | |\n | plain | |\nIndexes:\n \"test_compare_all_col_src1_pkey\" PRIMARY KEY, btree (col_bigint)\nReplica Identity: FULL\nAccess method: heap\n\n\n# select count(*) from 
test_compare_all_col_src1;\n count\n--------\n 955000\n(1 row)\n\nThanks,\n--Mohini\n\n\nOn Wed, 27 Dec 2023, 20:11 Jeff Janes, <[email protected]> wrote:\n\n> On Wed, Dec 27, 2023 at 8:15 AM mohini mane <[email protected]>\n> wrote:\n>\n>> Hello Team,\n>> I observed that increasing the degree of parallel hint in the SELECT\n>> query did not show performance improvements.\n>> Below are the details of sample execution with EXPLAIN ANALYZE\n>>\n>\n> PostgreSQL doesn't have hints, unless you are using pg_hint_plan. Which\n> you should say if you are.\n>\n> *Output:*\n>> PSQL query execution with hints 6 for 1st time => 203505.402 ms\n>> PSQL query execution with hints 6 for 2nd time => 27920.272 ms\n>> PSQL query execution with hints 6 for 3rd time => 27666.770 ms\n>> Only 6 workers launched, and there is no reduction in execution time even\n>> after increasing the degree of parallel hints in select query.\n>>\n>\n> All you are showing here is the effect of caching the data in memory. You\n> allude to changing the degree, but didn't show any results, or even\n> describe what the change was. 
Is 6 the base from which you increased, or\n> is it the result of having done the increase?\n>\n> Cheers,\n>\n> Jeff\n>",
"msg_date": "Thu, 28 Dec 2023 18:16:59 +0530",
"msg_from": "mohini mane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 9:47 AM mohini mane <[email protected]>\nwrote:\n\n> Thank you for your response !!\n> I am experimenting with SQL query performance for SELECT queries on large\n> tables and I observed that changing/increasing the degree of parallel hint\n> doesn't give the expected performance improvement.\n>\n\nWhy do you believe you are changing the degree of parallelism? PostgreSQL\ndoes not have parallel hints (or any hint in comments), so you are just\nchanging a comment in the queries, which changes nothing at all in the\nexecution plan.\n\nUnless you are not using vanilla PostgreSQL or you have some extension in\nplace, in which case you didn't provide enough information.\n\nBest regards,\n\n-- \nMatheus de Oliveira",
"msg_date": "Thu, 28 Dec 2023 10:12:10 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Thursday, December 28, 2023, mohini mane <[email protected]>\nwrote:\n\n> Thank you for your response !!\n> I am experimenting with SQL query performance for SELECT queries on large\n> tables and I observed that changing/increasing the degree of parallel hint\n> doesn't give the expected performance improvement.\n>\n> I have executed the SELECT query with 2,4 & 6 parallel degree however\n> every time only 4 workers launched & there was a slight increase in\n> Execution time as well, why there is an increase in execution time with\n> parallel degree 6 as compared to 2 or 4?\n>\n\nRandom environmental effects.\n\nAlso, analyzing a performance test without understanding how “buffers” are\nused is largely pointless.\n\nWhatever told you about that comment syntax is hallucinating.\n\nPlease don’t reply by top-posting. Inline reply to the comments others make\ndirectly and trim as needed. Simply restating your first email isn’t\nproductive.\n\nYou cannot enforce the number of workers used, only the maximum. That\nis your knob.\n\nDavid J.",
"msg_date": "Thu, 28 Dec 2023 06:40:00 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 7:47 AM mohini mane <[email protected]>\nwrote:\n\n> Thank you for your response !!\n> I am experimenting with SQL query performance for SELECT queries on large\n> tables and I observed that changing/increasing the degree of parallel hint\n> doesn't give the expected performance improvement.\n>\n\nBut you still haven't addressed the fact that PostgreSQL *does not have\nplanner hints*.\n\nAre you using some nonstandard extension, or nonstandard fork?\n\n\n> I have executed the SELECT query with 2,4 & 6 parallel degree however\n> every time only 4 workers launched & there was a slight increase in\n> Execution time as well,\n>\n\nAdding an ignored comment to your SQL would not be expected to do\nanything. So it is not surprising that it does not do anything about\nthe number of workers launched. It is just a comment. A note to the human\nwho is reading the code.\n\n\n> why there is an increase in execution time with parallel degree 6 as\n> compared to 2 or 4?\n>\n\nThose small changes seem to be perfectly compatible with random noise. You\nwould need to repeat them dozens of times in random order, and then do a\nstatistical test to convince me otherwise.",
"msg_date": "Thu, 28 Dec 2023 23:55:04 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 10:25 AM Jeff Janes <[email protected]> wrote:\n\n>\n>\n> On Thu, Dec 28, 2023 at 7:47 AM mohini mane <[email protected]>\n> wrote:\n>\n>> Thank you for your response !!\n>> I am experimenting with SQL query performance for SELECT queries on large\n>> tables and I observed that changing/increasing the degree of parallel hint\n>> doesn't give the expected performance improvement.\n>>\n>\n> But you still have addressed the fact that PostgreSQL *does not have\n> planner hints*.\n>\n> Are you using some nonstandard extension, or nonstandard fork?\n> * >> I am using pg_hint_plan extension to enforce the parallel execution\n> of specific table .*\n>\n * postgres=# load 'pg_hint_plan';*\n* LOAD*\n\n\n> I have executed the SELECT query with 2,4 & 6 parallel degree however\n>> every time only 4 workers launched & there was a slight increase in\n>> Execution time as well,\n>>\n>\n> Adding an ignored comment to your SQL would not be expected to do\n> anything. So it is not surprising that it does not do anything about\n> the number of workers launched. It is just a comment. 
A note to the human\n> who is reading the code.\n> * >> As I am using pg_hint_plan extension so as expected hints should not\n> get ignored by the optimizer .*\n>\n>> why there is an increase in execution time with parallel degree 6 as\n>> compared to 2 or 4?\n>>\n>\n> Those small changes seem to be perfectly compatible with random noise.\n> You would need to repeat them dozens of times in random order, and then do\n> a statistical test to convince me otherwise.\n> * >> I am expecting desired number of parallel workers should get\n> allocated as VM has sufficient vCores [16] and with needed session\n> parameters [parallel_tuple_cost=0.1,max_parallel_workers_per_gather=6,**max_parallel_workers=8\n> and I am using parallel hints like this : * */*+ PARALLEL(A 5 hard) */\n> so 5 worker processes should be launched, this is not happening]*\n>\n>>",
"msg_date": "Tue, 2 Jan 2024 20:42:39 +0530",
"msg_from": "mohini mane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 8:12 AM mohini mane <[email protected]> wrote:\n\n>\n> I have executed the SELECT query with 2,4 & 6 parallel degree however\n>>> every time only 4 workers launched & there was a slight increase in\n>>> Execution time as well,\n>>>\n>>\n>> Adding an ignored comment to your SQL would not be expected to do\n>> anything. So it is not surprising that it does not do anything about\n>> the number of workers launched. It is just a comment. A note to the human\n>> who is reading the code.\n>> * >> As I am using pg_hint_plan extension so as expected hints should not\n>> get ignored by the optimizer .*\n>>\n>\nSounds like a bug you should go tell the pg_hint_plan authors about then.\n\nDavid J.",
"msg_date": "Tue, 2 Jan 2024 09:14:37 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Tue, 2 Jan 2024, 21:45 David G. Johnston, <[email protected]>\nwrote:\n\n> On Tue, Jan 2, 2024 at 8:12 AM mohini mane <[email protected]>\n> wrote:\n>\n>>\n>> I have executed the SELECT query with 2,4 & 6 parallel degree however\n>>>> every time only 4 workers launched & there was a slight increase in\n>>>> Execution time as well,\n>>>>\n>>>\n>>> Adding an ignored comment to your SQL would not be expected to do\n>>> anything. So it is not surprising that it does not do anything about\n>>> the number of workers launched. It is just a comment. A note to the human\n>>> who is reading the code.\n>>> * >> As I am using pg_hint_plan extension so as expected hints should\n>>> not get ignored by the optimizer .*\n>>>\n>>\n> Sounds like a bug you should go tell the pg_hint_plan authors about then.\n>\n *>> I am getting same results with or without extension [in my case\nit's pg_hint_plan] still I will check with the respective team, Thanks .*\n\n>\n> David J.\n>",
"msg_date": "Tue, 2 Jan 2024 23:36:10 +0530",
"msg_from": "mohini mane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "> *1st time query executed with PARALLEL DEGREE 2 *\n> explain analyze select /*+* PARALLEL(A 2)* */ * from\n> test_compare_all_col_src1 A;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Gather (cost=10.00..45524.73 rows=949636 width=97) (actual\n> time=0.673..173.017 rows=955000 loops=1)\n> Workers Planned: 4\n> * Workers Launched: 4*\n> -> Parallel Seq Scan on test_compare_all_col_src1 a\n> (cost=0.00..44565.09 rows=237409 width=97) (actual time=0.039..51.941\n> rows=191000 loops=5)\n> Planning Time: 0.093 ms\n> * Execution Time: 209.745 ms*\n> (6 rows)\n>\n\nYour alias is not enclosed in double quotes, so it is downcased to \"a\" (as\ncan be seen from the alias printed in the plan). But pg_hint_plan hints\ndon't follow the downcasing convention, so the hint on \"A\" does not match\nthe alias \"a\", and so is ignored.\n\nCheers,\n\nJeff",
"msg_date": "Wed, 3 Jan 2024 20:43:21 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 7:13 AM Jeff Janes <[email protected]> wrote:\n\n>\n>\n>> *1st time query executed with PARALLEL DEGREE 2 *\n>> explain analyze select /*+* PARALLEL(A 2)* */ * from\n>> test_compare_all_col_src1 A;\n>>\n>> QUERY PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Gather (cost=10.00..45524.73 rows=949636 width=97) (actual\n>> time=0.673..173.017 rows=955000 loops=1)\n>> Workers Planned: 4\n>> * Workers Launched: 4*\n>> -> Parallel Seq Scan on test_compare_all_col_src1 a\n>> (cost=0.00..44565.09 rows=237409 width=97) (actual time=0.039..51.941\n>> rows=191000 loops=5)\n>> Planning Time: 0.093 ms\n>> * Execution Time: 209.745 ms*\n>> (6 rows)\n>>\n>\n> Your alias is not enclosed in double quotes, so it is downcased to \"a\" (as\n> can be seen from the alias printed in the plan). But pg_hint_plan hints\n> don't follow the downcasing convention, so the hint on \"A\" does not match\n> the alias \"a\", and so is ignored.\n> * >> Thanks Jeff for the response ! 
It worked with \"A\" alias *\n> Cheers,\n>\n> Jeff\n>\n>>",
"msg_date": "Thu, 4 Jan 2024 13:22:37 +0530",
"msg_from": "mohini mane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel hints in PostgreSQL with consistent performance"
}
] |
[
{
"msg_contents": "Hello folks!\r\n\r\nI am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance!\r\n\r\nHere’s my background – Linux RHEL 8 – PostgreSQL 12.17. –\r\n\r\nMemTotal: 263216840 kB\r\n\r\nMemFree: 3728224 kB\r\n\r\nMemAvailable: 197186864 kB\r\n\r\nBuffers: 6704 kB\r\n\r\nCached: 204995024 kB\r\n\r\nSwapCached: 19244 kB\r\n\r\n\r\nfree -m\r\n\r\n total used free shared buff/cache available\r\n\r\nMem: 257047 51860 3722 10718 201464 192644\r\n\r\nSwap: 4095 855 3240\r\n\r\nHere are a few of the settings in our postgres server:\r\n\r\nmax_connections = 300 # (change requires restart)\r\n\r\nshared_buffers = 10GB\r\n\r\ntemp_buffers = 24MB\r\n\r\nwork_mem = 2GB\r\n\r\nmaintenance_work_mem = 1GB\r\n\r\nmost everything else is set to the default.\r\n\r\nThe query is complex with several joins:\r\n\r\n\r\nSELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS collection_1_id, 
collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\r\n\r\n FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\r\n\r\n FROM granule JOIN collection ON collection.id = granule.collection_id\r\n\r\n WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid\r\n\r\n LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON 
collection_1.id = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\r\n\r\nHere’s the explain:\r\n\r\n\r\n Sort (cost=10914809.92..10914810.27 rows=141 width=996)\r\n\r\n Sort Key: granule.uuid\r\n\r\n -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996)\r\n\r\n Hash Cond: (granule.visibility_id = visibility_1.id)\r\n\r\n -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\r\n\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n\r\n -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223)\r\n\r\n Hash Cond: (file_1.id = granule_file_1.file_id)\r\n\r\n -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\r\n\r\n -> Hash (cost=365077.51..365077.51 rows=22332751 width=20)\r\n\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\r\n\r\n -> Hash (cost=96300.33..96300.33 rows=26 width=1518)\r\n\r\n -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\r\n\r\n -> Limit (cost=96092.27..96092.33 rows=26 width=1462)\r\n\r\n -> Sort (cost=96092.27..96100.47 rows=3282 width=1462)\r\n\r\n Sort Key: granule.uuid\r\n\r\n -> Nested Loop (cost=0.56..95998.73 rows=3282 width=1462)\r\n\r\n -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4)\r\n\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462)\r\n\r\n Index Cond: (collection_id = collection.id)\r\n\r\n Filter: (is_active AND (((properties #>> 
'{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text)))\r\n\r\n    -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56)\r\n\r\n    Index Cond: (id = granule.collection_id)\r\n\r\n    -> Hash (cost=1.52..1.52 rows=52 width=16)\r\n\r\n    -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\r\n\r\n\r\nHere's a bit about the tables –\r\n\r\nGranule\r\nCollection\r\nGranule_file\r\nVisibility\r\n\r\nGranule:\r\n\r\npublic | granule | table | ims_api_writer | 36 GB |\r\n\r\n\r\nims_api=# \\d+ granule\r\n\r\n Table \"public.granule\"\r\n\r\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\r\n\r\n-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\r\n\r\n collection_id | integer | | not null | | plain | |\r\n\r\n create_date | timestamp without time zone | | not null | | plain | |\r\n\r\n delete_date | timestamp without time zone | | | | plain | |\r\n\r\n geometry | geometry(Geometry,4326) | | | | main | |\r\n\r\n is_active | boolean | | | | plain | |\r\n\r\n properties | jsonb | | | | extended | |\r\n\r\n update_date | timestamp without time zone | | not null | | plain | |\r\n\r\n uuid | uuid | | not null | | plain | |\r\n\r\n visibility_id | integer | | not null | | 
plain | |\r\n\r\n geography | geography(Geometry,4326) | | | | main | |\r\n\r\n visibility_last_update_date | timestamp without time zone | | | | plain | |\r\n\r\nIndexes:\r\n\r\n \"granule_pkey\" PRIMARY KEY, btree (uuid)\r\n\r\n \"granule_is_active_idx\" btree (is_active)\r\n\r\n \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\r\n\r\n \"granule_update_date_idx\" btree (update_date)\r\n\r\n \"idx_granule_geometry\" gist (geometry)\r\n\r\n \"ix_granule_collection_id\" btree (collection_id)\r\n\r\nForeign-key constraints:\r\n\r\n \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n\r\nReferenced by:\r\n\r\n TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n\r\nTriggers:\r\n\r\n granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\r\n\r\nAccess method: heap\r\n\r\nCollection:\r\n\r\npublic | collection | table | ims_api_writer | 39 MB |\r\n\r\n\r\nims_api=# \\d collection\r\n\r\n Table \"public.collection\"\r\n\r\n Column | Type | Collation | Nullable | Default\r\n\r\n------------------------------+-----------------------------+-----------+----------+----------------------------------------\r\n\r\n id | integer | | not null | nextval('collection_id_seq'::regclass)\r\n\r\n access_constraints | text | | |\r\n\r\n additional_attributes | jsonb | | |\r\n\r\n ancillary_keywords | character varying(160)[] | | |\r\n\r\n create_date | timestamp without time zone | | not null |\r\n\r\n dataset_language | character varying(80)[] | | |\r\n\r\n dataset_progress | text | | |\r\n\r\n data_resolutions | jsonb | | 
|\r\n\r\n dataset_citation | jsonb | | |\r\n\r\n delete_date | timestamp without time zone | | |\r\n\r\n distribution | jsonb | | |\r\n\r\n doi | character varying(220) | | |\r\n\r\n entry_id | character varying(80) | | not null |\r\n\r\n entry_title | character varying(1030) | | |\r\n\r\n geometry | geometry(Geometry,4326) | | |\r\n\r\n is_active | boolean | | not null |\r\n\r\n iso_topic_categories | character varying[] | | |\r\n\r\n last_update_date | timestamp without time zone | | not null |\r\n\r\n locations | jsonb | | |\r\n\r\n long_name | character varying(1024) | | |\r\n\r\n metadata_associations | jsonb | | |\r\n\r\n metadata_dates | jsonb | | |\r\n\r\n personnel | jsonb | | |\r\n\r\n platforms | jsonb | | |\r\n\r\n processing_level_id | integer | | |\r\n\r\n product_flag | text | | |\r\n\r\n project_id | integer | | |\r\n\r\n properties | jsonb | | |\r\n\r\n quality | jsonb | | |\r\n\r\n references | character varying(12000)[] | | |\r\n\r\n related_urls | jsonb | | |\r\n\r\n summary | jsonb | | |\r\n\r\n short_name | character varying(80) | | |\r\n\r\n temporal_extents | jsonb | | |\r\n\r\n version | character varying(80) | | |\r\n\r\n use_constraints | jsonb | | |\r\n\r\n version_description | text | | |\r\n\r\n visibility_id | integer | | not null |\r\n\r\n world_date | timestamp without time zone | | |\r\n\r\n tiling_identification_system | jsonb | | |\r\n\r\n collection_data_type | text | | |\r\n\r\n standard_product | boolean | | not null | false\r\n\r\nIndexes:\r\n\r\n \"collection_pkey\" PRIMARY KEY, btree (id)\r\n\r\n \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\r\n\r\n \"idx_collection_geometry\" gist (geometry)\r\n\r\nForeign-key constraints:\r\n\r\n \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id)\r\n\r\n \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id)\r\n\r\n \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES 
visibility(id)\r\n\r\nReferenced by:\r\n\r\n TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n\r\nGranule_file:\r\n\r\n public | granule_file | table | ims_api_writer | 1108 MB |\r\n\r\n\r\n\\d granule_file\r\n\r\n Table \"public.granule_file\"\r\n\r\n Column | Type | Collation | Nullable | Default\r\n\r\n--------------+---------+-----------+----------+---------\r\n\r\n granule_uuid | uuid | | |\r\n\r\n file_id | integer | | |\r\n\r\nForeign-key constraints:\r\n\r\n \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\r\n\r\n \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n\r\n\r\n\r\nVisibility:\r\n\r\npublic | visibility | table | ims_api_writer | 40 kB |\r\n\r\n\r\n\\d visibility\r\n\r\n Table \"public.visibility\"\r\n\r\n Column | Type | Collation | Nullable | Default\r\n\r\n--------+-----------------------+-----------+----------+----------------------------------------\r\n\r\n id | integer | | not null | nextval('visibility_id_seq'::regclass)\r\n\r\n name | character varying(80) | | not null |\r\n\r\n value | integer | | not null |\r\n\r\nIndexes:\r\n\r\n \"visibility_pkey\" PRIMARY KEY, btree (id)\r\n\r\n \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\r\n\r\n \"visibility_value_key\" UNIQUE 
CONSTRAINT, btree (value)\r\n\r\nReferenced by:\r\n\r\n    TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n\r\n    TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n\r\nThanks for the help!\r\n\r\nMaria Wilson\r\n\r\nNasa/Langley Research Center\r\n\r\nHampton, Virginia USA\r\n\r\[email protected]",
"msg_date": "Wed, 27 Dec 2023 15:38:23 +0000",
"msg_from": "\"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Hi Maria, could you please run explain analyse for the problem query?\nThe ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\n\nFrits Hoogland\n\n\n\n\n> On 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\n> \n> Hello folks!\n> \n> I am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance!\n> \n> Here’s my background – Linux RHEL 8 – PostgreSQL 12.17. – \n> MemTotal: 263216840 kB\n> MemFree: 3728224 kB\n> MemAvailable: 197186864 kB\n> Buffers: 6704 kB\n> Cached: 204995024 kB\n> SwapCached: 19244 kB\n> \n> free -m\n> total used free shared buff/cache available\n> Mem: 257047 51860 3722 10718 201464 192644\n> Swap: 4095 855 3240\n> \n> Here are a few of the settings in our postgres server:\n> max_connections = 300 # (change requires restart)\n> shared_buffers = 10GB\n> temp_buffers = 24MB\n> work_mem = 2GB\n> maintenance_work_mem = 1GB\n> \n> most everything else is set to the default.\n> \n> The query is complex with several joins:\n> \n> SELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, 
anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\n> FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\n> FROM granule JOIN collection ON collection.id = granule.collection_id\n> WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < 
'2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid\n> LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\n> \n> Here’s the explain:\n> \n> Sort (cost=10914809.92..10914810.27 rows=141 width=996)\n> Sort Key: granule.uuid\n> -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996)\n> Hash Cond: (granule.visibility_id = visibility_1.id)\n> -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\n> Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\n> -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223)\n> Hash Cond: (file_1.id = granule_file_1.file_id)\n> -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\n> -> Hash (cost=365077.51..365077.51 rows=22332751 width=20)\n> -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\n> -> Hash (cost=96300.33..96300.33 rows=26 width=1518)\n> -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\n> -> Limit (cost=96092.27..96092.33 rows=26 width=1462)\n> -> Sort (cost=96092.27..96100.47 rows=3282 width=1462)\n> Sort Key: granule.uuid\n> -> Nested Loop (cost=0.56..95998.73 rows=3282 width=1462)\n> -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4)\n> Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 
'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\n> -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462)\n> Index Cond: (collection_id = collection.id)\n> Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text)))\n> -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56)\n> Index Cond: (id = granule.collection_id)\n> -> Hash (cost=1.52..1.52 rows=52 width=16)\n> -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\n> \n> \n> Heres a bit about the tables – \n> \n> Granule\n> Collection\n> Granule_file\n> Visibility\n> \n> Granule:\n> public | granule | table | ims_api_writer | 36 GB | \n> \n> ims_api=# \\d+ granule\n> Table \"public.granule\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> -----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n> collection_id | integer | | not null | | plain | | \n> create_date | timestamp without time zone | | not null | | plain | | \n> delete_date | timestamp without time zone | | | | plain | | \n> geometry | geometry(Geometry,4326) | | | | main | | \n> 
is_active | boolean | | | | plain | | \n> properties | jsonb | | | | extended | | \n> update_date | timestamp without time zone | | not null | | plain | | \n> uuid | uuid | | not null | | plain | | \n> visibility_id | integer | | not null | | plain | | \n> geography | geography(Geometry,4326) | | | | main | | \n> visibility_last_update_date | timestamp without time zone | | | | plain | | \n> Indexes:\n> \"granule_pkey\" PRIMARY KEY, btree (uuid)\n> \"granule_is_active_idx\" btree (is_active)\n> \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\n> \"granule_update_date_idx\" btree (update_date)\n> \"idx_granule_geometry\" gist (geometry)\n> \"ix_granule_collection_id\" btree (collection_id)\n> Foreign-key constraints:\n> \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n> Referenced by:\n> TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n> TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n> Triggers:\n> granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\n> Access method: heap\n> \n> Collection:\n> public | collection | table | ims_api_writer | 39 MB | \n> \n> ims_api=# \\d collection\n> Table \"public.collection\"\n> Column | Type | Collation | Nullable | Default \n> ------------------------------+-----------------------------+-----------+----------+----------------------------------------\n> id | integer | | not null | nextval('collection_id_seq'::regclass)\n> access_constraints | text | | | \n> additional_attributes | jsonb | | | \n> ancillary_keywords | character varying(160)[] | | | \n> create_date | timestamp without time zone | | not null | \n> 
dataset_language | character varying(80)[] | | | \n> dataset_progress | text | | | \n> data_resolutions | jsonb | | | \n> dataset_citation | jsonb | | | \n> delete_date | timestamp without time zone | | | \n> distribution | jsonb | | | \n> doi | character varying(220) | | | \n> entry_id | character varying(80) | | not null | \n> entry_title | character varying(1030) | | | \n> geometry | geometry(Geometry,4326) | | | \n> is_active | boolean | | not null | \n> iso_topic_categories | character varying[] | | | \n> last_update_date | timestamp without time zone | | not null | \n> locations | jsonb | | | \n> long_name | character varying(1024) | | | \n> metadata_associations | jsonb | | | \n> metadata_dates | jsonb | | | \n> personnel | jsonb | | | \n> platforms | jsonb | | | \n> processing_level_id | integer | | | \n> product_flag | text | | | \n> project_id | integer | | | \n> properties | jsonb | | | \n> quality | jsonb | | | \n> references | character varying(12000)[] | | | \n> related_urls | jsonb | | | \n> summary | jsonb | | | \n> short_name | character varying(80) | | | \n> temporal_extents | jsonb | | | \n> version | character varying(80) | | | \n> use_constraints | jsonb | | | \n> version_description | text | | | \n> visibility_id | integer | | not null | \n> world_date | timestamp without time zone | | | \n> tiling_identification_system | jsonb | | | \n> collection_data_type | text | | | \n> standard_product | boolean | | not null | false\n> Indexes:\n> \"collection_pkey\" PRIMARY KEY, btree (id)\n> \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\n> \"idx_collection_geometry\" gist (geometry)\n> Foreign-key constraints:\n> \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id)\n> \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id)\n> \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n> Referenced by:\n> TABLE 
\"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n> \n> \n> Granule_file:\n> public | granule_file | table | ims_api_writer | 1108 MB | \n> \n> \\d granule_file\n> Table \"public.granule_file\"\n> Column | Type | Collation | Nullable | Default \n> --------------+---------+-----------+----------+---------\n> granule_uuid | uuid | | | \n> file_id | integer | | | \n> Foreign-key constraints:\n> \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\n> \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n> \n> \n> Visibility:\n> public | visibility | table | ims_api_writer | 40 kB | \n> \n> \\d visibility\n> Table \"public.visibility\"\n> Column | Type | Collation | Nullable | Default \n> --------+-----------------------+-----------+----------+----------------------------------------\n> id | integer | | not null | nextval('visibility_id_seq'::regclass)\n> name | character varying(80) | | not null | \n> value | integer | | not null | \n> Indexes:\n> \"visibility_pkey\" PRIMARY KEY, btree (id)\n> \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\n> \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)\n> Referenced by:\n> TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n> 
TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n> \n> \n> \n> \n> Thanks for the help!\n> \n> Maria Wilson\n> Nasa/Langley Research Center\n> Hampton, Virginia USA\n> [email protected] <mailto:[email protected]>\n\nHi Maria, could you please run explain analyse for the problem query?The ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\nFrits Hoogland\n\nOn 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:Hello folks! I am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance! Here’s my background – Linux RHEL 8 – PostgreSQL 12.17. – MemTotal: 263216840 kBMemFree: 3728224 kBMemAvailable: 197186864 kBBuffers: 6704 kBCached: 204995024 kBSwapCached: 19244 kB free -m total used free shared buff/cache availableMem: 257047 51860 3722 10718 201464 192644Swap: 4095 855 3240 Here are a few of the settings in our postgres server:max_connections = 300 # (change requires restart)shared_buffers = 10GBtemp_buffers = 24MBwork_mem = 2GBmaintenance_work_mem = 1GB most everything else is set to the default. 
The query is complex with several joins: SELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id FROM granule JOIN collection ON collection.id = granule.collection_id WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> 
'{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid Here’s the explain: Sort (cost=10914809.92..10914810.27 rows=141 width=996) Sort Key: granule.uuid -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996) Hash Cond: (granule.visibility_id = visibility_1.id) -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725) Hash Cond: (granule_file_1.granule_uuid = granule.uuid) -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223) Hash Cond: (file_1.id = granule_file_1.file_id) -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207) -> Hash (cost=365077.51..365077.51 rows=22332751 width=20) -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20) -> Hash (cost=96300.33..96300.33 rows=26 width=1518) -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518) -> Limit (cost=96092.27..96092.33 rows=26 width=1462) -> Sort (cost=96092.27..96100.47 rows=3282 width=1462) Sort Key: granule.uuid -> 
Nested Loop (cost=0.56..95998.73 rows=3282 width=1462) -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4) Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text)) -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462) Index Cond: (collection_id = collection.id) Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text))) -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56) Index Cond: (id = granule.collection_id) -> Hash (cost=1.52..1.52 rows=52 width=16) -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16) Heres a bit about the tables – GranuleCollectionGranule_fileVisibility Granule:public | granule | table | ims_api_writer | 36 GB | ims_api=# \\d+ granule Table \"public.granule\" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description -----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+------------- collection_id | integer | | not null | | plain | | create_date | timestamp without time zone | | not null | | plain | | delete_date | timestamp without time zone | | | | plain | | 
geometry | geometry(Geometry,4326) | | | | main | | is_active | boolean | | | | plain | | properties | jsonb | | | | extended | | update_date | timestamp without time zone | | not null | | plain | | uuid | uuid | | not null | | plain | | visibility_id | integer | | not null | | plain | | geography | geography(Geometry,4326) | | | | main | | visibility_last_update_date | timestamp without time zone | | | | plain | | Indexes: \"granule_pkey\" PRIMARY KEY, btree (uuid) \"granule_is_active_idx\" btree (is_active) \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text)) \"granule_update_date_idx\" btree (update_date) \"idx_granule_geometry\" gist (geometry) \"ix_granule_collection_id\" btree (collection_id)Foreign-key constraints: \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)Referenced by: TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid) TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)Triggers: granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()Access method: heap Collection:public | collection | table | ims_api_writer | 39 MB | ims_api=# \\d collection Table \"public.collection\" Column | Type | Collation | Nullable | Default ------------------------------+-----------------------------+-----------+----------+---------------------------------------- id | integer | | not null | nextval('collection_id_seq'::regclass) access_constraints | text | | | additional_attributes | jsonb | | | ancillary_keywords | character varying(160)[] | | | create_date | timestamp without time zone | | not null | dataset_language | character varying(80)[] | | | dataset_progress | text | | | 
data_resolutions | jsonb | | | dataset_citation | jsonb | | | delete_date | timestamp without time zone | | | distribution | jsonb | | | doi | character varying(220) | | | entry_id | character varying(80) | | not null | entry_title | character varying(1030) | | | geometry | geometry(Geometry,4326) | | | is_active | boolean | | not null | iso_topic_categories | character varying[] | | | last_update_date | timestamp without time zone | | not null | locations | jsonb | | | long_name | character varying(1024) | | | metadata_associations | jsonb | | | metadata_dates | jsonb | | | personnel | jsonb | | | platforms | jsonb | | | processing_level_id | integer | | | product_flag | text | | | project_id | integer | | | properties | jsonb | | | quality | jsonb | | | references | character varying(12000)[] | | | related_urls | jsonb | | | summary | jsonb | | | short_name | character varying(80) | | | temporal_extents | jsonb | | | version | character varying(80) | | | use_constraints | jsonb | | | version_description | text | | | visibility_id | integer | | not null | world_date | timestamp without time zone | | | tiling_identification_system | jsonb | | | collection_data_type | text | | | standard_product | boolean | | not null | falseIndexes: \"collection_pkey\" PRIMARY KEY, btree (id) \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id) \"idx_collection_geometry\" gist (geometry)Foreign-key constraints: \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id) \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id) \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)Referenced by: TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES 
collection(id) TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) Granule_file: public | granule_file | table | ims_api_writer | 1108 MB | \\d granule_file Table \"public.granule_file\" Column | Type | Collation | Nullable | Default --------------+---------+-----------+----------+--------- granule_uuid | uuid | | | file_id | integer | | | Foreign-key constraints: \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id) \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid) Visibility:public | visibility | table | ims_api_writer | 40 kB | \\d visibility Table \"public.visibility\" Column | Type | Collation | Nullable | Default --------+-----------------------+-----------+----------+---------------------------------------- id | integer | | not null | nextval('visibility_id_seq'::regclass) name | character varying(80) | | not null | value | integer | | not null | Indexes: \"visibility_pkey\" PRIMARY KEY, btree (id) \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name) \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)Referenced by: TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id) TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id) Thanks for the help! Maria WilsonNasa/Langley Research CenterHampton, Virginia [email protected]",
"msg_date": "Wed, 27 Dec 2023 16:49:55 +0100",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Thanks for the reply!! Scroll down a bit – the explain is just a bit further down in the email!\r\nMaria\r\n\r\nFrom: Frits Hoogland <[email protected]>\r\nDate: Wednesday, December 27, 2023 at 10:50 AM\r\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>\r\nCc: \"[email protected]\" <[email protected]>\r\nSubject: [EXTERNAL] Re: Need help with performance tuning pg12 on linux\r\n\r\nCAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\r\n\r\n\r\nHi Maria, could you please run explain analyse for the problem query?\r\nThe ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\r\n\r\nFrits Hoogland\r\n\r\n\r\n\r\n\r\nOn 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\r\n\r\nHello folks!\r\n\r\nI am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance!\r\n\r\nHere’s my background – Linux RHEL 8 – PostgreSQL 12.17. 
–\r\nMemTotal: 263216840 kB\r\nMemFree: 3728224 kB\r\nMemAvailable: 197186864 kB\r\nBuffers: 6704 kB\r\nCached: 204995024 kB\r\nSwapCached: 19244 kB\r\n\r\nfree -m\r\n total used free shared buff/cache available\r\nMem: 257047 51860 3722 10718 201464 192644\r\nSwap: 4095 855 3240\r\n\r\nHere are a few of the settings in our postgres server:\r\nmax_connections = 300 # (change requires restart)\r\nshared_buffers = 10GB\r\ntemp_buffers = 24MB\r\nwork_mem = 2GB\r\nmaintenance_work_mem = 1GB\r\n\r\nmost everything else is set to the default.\r\n\r\nThe query is complex with several joins:\r\n\r\nSELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\r\n FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS 
granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\r\n FROM granule JOIN collection ON collection.id = granule.collection_id\r\n WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid\r\n LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\r\n\r\nHere’s the explain:\r\n\r\n Sort (cost=10914809.92..10914810.27 rows=141 width=996)\r\n Sort Key: granule.uuid\r\n -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996)\r\n Hash Cond: 
(granule.visibility_id = visibility_1.id)\r\n -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223)\r\n Hash Cond: (file_1.id = granule_file_1.file_id)\r\n -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\r\n -> Hash (cost=365077.51..365077.51 rows=22332751 width=20)\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\r\n -> Hash (cost=96300.33..96300.33 rows=26 width=1518)\r\n -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\r\n -> Limit (cost=96092.27..96092.33 rows=26 width=1462)\r\n -> Sort (cost=96092.27..96100.47 rows=3282 width=1462)\r\n Sort Key: granule.uuid\r\n -> Nested Loop (cost=0.56..95998.73 rows=3282 width=1462)\r\n -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4)\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462)\r\n Index Cond: (collection_id = collection.id)\r\n Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_d\r\nate_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_\r\ndate_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> 
'{temporal_extent,periodic_date_times,0,end_date}'::text[])\r\n < '2015-10-09T00:00:00+00:00'::text)))\r\n -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56)\r\n Index Cond: (id = granule.collection_id)\r\n -> Hash (cost=1.52..1.52 rows=52 width=16)\r\n -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\r\n\r\n\r\nHeres a bit about the tables –\r\n\r\nGranule\r\nCollection\r\nGranule_file\r\nVisibility\r\n\r\nGranule:\r\npublic | granule | table | ims_api_writer | 36 GB |\r\n\r\nims_api=# \\d+ granule\r\n Table \"public.granule\"\r\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\r\n-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\r\n collection_id | integer | | not null | | plain | |\r\n create_date | timestamp without time zone | | not null | | plain | |\r\n delete_date | timestamp without time zone | | | | plain | |\r\n geometry | geometry(Geometry,4326) | | | | main | |\r\n is_active | boolean | | | | plain | |\r\n properties | jsonb | | | | extended | |\r\n update_date | timestamp without time zone | | not null | | plain | |\r\n uuid | uuid | | not null | | plain | |\r\n visibility_id | integer | | not null | | plain | |\r\n geography | geography(Geometry,4326) | | | | main | |\r\n visibility_last_update_date | timestamp without time zone | | | | plain | |\r\nIndexes:\r\n \"granule_pkey\" PRIMARY KEY, btree (uuid)\r\n \"granule_is_active_idx\" btree (is_active)\r\n \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\r\n \"granule_update_date_idx\" btree (update_date)\r\n \"idx_granule_geometry\" gist (geometry)\r\n \"ix_granule_collection_id\" btree (collection_id)\r\nForeign-key constraints:\r\n \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n \"granule_visibility_id_fkey\" FOREIGN KEY 
(visibility_id) REFERENCES visibility(id)\r\nReferenced by:\r\n TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\nTriggers:\r\n granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\r\nAccess method: heap\r\n\r\nCollection:\r\npublic | collection | table | ims_api_writer | 39 MB |\r\n\r\nims_api=# \\d collection\r\n Table \"public.collection\"\r\n Column | Type | Collation | Nullable | Default\r\n------------------------------+-----------------------------+-----------+----------+----------------------------------------\r\n id | integer | | not null | nextval('collection_id_seq'::regclass)\r\n access_constraints | text | | |\r\n additional_attributes | jsonb | | |\r\n ancillary_keywords | character varying(160)[] | | |\r\n create_date | timestamp without time zone | | not null |\r\n dataset_language | character varying(80)[] | | |\r\n dataset_progress | text | | |\r\n data_resolutions | jsonb | | |\r\n dataset_citation | jsonb | | |\r\n delete_date | timestamp without time zone | | |\r\n distribution | jsonb | | |\r\n doi | character varying(220) | | |\r\n entry_id | character varying(80) | | not null |\r\n entry_title | character varying(1030) | | |\r\n geometry | geometry(Geometry,4326) | | |\r\n is_active | boolean | | not null |\r\n iso_topic_categories | character varying[] | | |\r\n last_update_date | timestamp without time zone | | not null |\r\n locations | jsonb | | |\r\n long_name | character varying(1024) | | |\r\n metadata_associations | jsonb | | |\r\n metadata_dates | jsonb | | |\r\n personnel | jsonb | | |\r\n platforms | jsonb | | |\r\n processing_level_id | integer | | |\r\n product_flag | text | | |\r\n project_id | integer | | |\r\n properties | jsonb | 
| |\r\n quality | jsonb | | |\r\n references | character varying(12000)[] | | |\r\n related_urls | jsonb | | |\r\n summary | jsonb | | |\r\n short_name | character varying(80) | | |\r\n temporal_extents | jsonb | | |\r\n version | character varying(80) | | |\r\n use_constraints | jsonb | | |\r\n version_description | text | | |\r\n visibility_id | integer | | not null |\r\n world_date | timestamp without time zone | | |\r\n tiling_identification_system | jsonb | | |\r\n collection_data_type | text | | |\r\n standard_product | boolean | | not null | false\r\nIndexes:\r\n \"collection_pkey\" PRIMARY KEY, btree (id)\r\n \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\r\n \"idx_collection_geometry\" gist (geometry)\r\nForeign-key constraints:\r\n \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id)\r\n \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id)\r\n \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\nReferenced by:\r\n TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n\r\nGranule_file:\r\n public | granule_file | table | ims_api_writer | 1108 MB |\r\n\r\n\\d granule_file\r\n Table \"public.granule_file\"\r\n Column | Type | Collation | Nullable 
| Default\r\n--------------+---------+-----------+----------+---------\r\n granule_uuid | uuid | | |\r\n file_id | integer | | |\r\nForeign-key constraints:\r\n \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\r\n \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n\r\n\r\nVisibility:\r\npublic | visibility | table | ims_api_writer | 40 kB |\r\n\r\n\\d visibility\r\n Table \"public.visibility\"\r\n Column | Type | Collation | Nullable | Default\r\n--------+-----------------------+-----------+----------+----------------------------------------\r\n id | integer | | not null | nextval('visibility_id_seq'::regclass)\r\n name | character varying(80) | | not null |\r\n value | integer | | not null |\r\nIndexes:\r\n \"visibility_pkey\" PRIMARY KEY, btree (id)\r\n \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\r\n \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)\r\nReferenced by:\r\n TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n\r\n\r\n\r\n\r\nThanks for the help!\r\n\r\nMaria Wilson\r\nNasa/Langley Research Center\r\nHampton, Virginia USA\r\[email protected]<mailto:[email protected]>\r\n\r\n\n\n\n\n\n\n\n\n\nThanks for the reply!! Scroll down a bit – the explain is just a bit further down in the email!\nMaria\n \n\nFrom: Frits Hoogland <[email protected]>\nDate: Wednesday, December 27, 2023 at 10:50 AM\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: [EXTERNAL] Re: Need help with performance tuning pg12 on linux\n\n\n \n\n\n\n\n\n\nCAUTION:\nThis email originated from outside of NASA. Please take care when clicking links or opening attachments. 
Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\n\n\n\n\n\n\n\n\n\nHi Maria, could you please run explain analyse for the problem query?\r\n\n\nThe ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\n\n\n \n\n\n\nFrits Hoogland\n\n\n \n\n\n \n\n\n\n\n\n\n\nOn 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\n\n \n\n\nHello folks!\n\n\n \n\n\nI am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with\r\n Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance!\n\n\n \n\n\nHere’s my background – Linux RHEL 8 – PostgreSQL 12.17. 
– \n\n\nMemTotal: 263216840\r\n kB\n\n\nMemFree: 3728224\r\n kB\n\n\nMemAvailable: 197186864\r\n kB\n\n\nBuffers: 6704\r\n kB\n\n\nCached: 204995024\r\n kB\n\n\nSwapCached: 19244\r\n kB\n\n\n \n\n\nfree -m\n\n\n total \r\n used free \r\n shared buff/cache available\n\n\nMem: 257047 \r\n 51860 3722 \r\n 10718 201464 \r\n 192644\n\n\nSwap: 4095 \r\n 855 3240\n\n\n \n\n\nHere are a few of the settings in our postgres server:\n\n\nmax_connections = 300 #\r\n (change requires restart)\n\n\nshared_buffers = 10GB\n\n\ntemp_buffers = 24MB\n\n\nwork_mem = 2GB\n\n\nmaintenance_work_mem = 1GB\n\n\n \n\n\nmost everything else is set to the default.\n\n\n \n\n\nThe query is complex with several joins:\n\n\n \n\n\nSELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date,\r\n ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date\r\n AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS\r\n collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS\r\n file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS\r\n visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\n\n\n FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date\r\n AS granule_create_date, 
granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date,\r\n granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\n\n\n FROM granule JOIN collection ON collection.id =\r\n granule.collection_id\n\n\n WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1'\r\n OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties\r\n #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times,\r\n 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00')\r\n ORDER BY granule.uuid\n\n\n LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1\r\n ON collection_1.id =\r\n anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id =\r\n granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id =\r\n anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\n\n\n \n\n\nHere’s the explain:\n\n\n \n\n\n Sort (cost=10914809.92..10914810.27\r\n rows=141 width=996)\n\n\n Sort Key: granule.uuid\n\n\n -> Hash\r\n Left Join 
(cost=740539.73..10914804.89 rows=141 width=996)\n\n\n Hash Cond: (granule.visibility_id = visibility_1.id)\n\n\n -> Hash\r\n Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\n\n\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\n\n\n -> Hash\r\n Join (cost=644236.90..10734681.93 rows=22332751 width=223)\n\n\n Hash Cond: (file_1.id =\r\n granule_file_1.file_id)\n\n\n -> Seq\r\n Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\n\n\n -> Hash (cost=365077.51..365077.51\r\n rows=22332751 width=20)\n\n\n -> Seq\r\n Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\n\n\n -> Hash (cost=96300.33..96300.33\r\n rows=26 width=1518)\n\n\n -> Nested\r\n Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\n\n\n -> Limit (cost=96092.27..96092.33\r\n rows=26 width=1462)\n\n\n -> Sort (cost=96092.27..96100.47\r\n rows=3282 width=1462)\n\n\n Sort Key: granule.uuid\n\n\n -> Nested\r\n Loop (cost=0.56..95998.73 rows=3282 width=1462)\n\n\n -> Seq\r\n Scan on collection (cost=0.00..3366.24 rows=1 width=4)\n\n\n Filter: (((entry_id)::text\r\n ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\n\n\n -> Index\r\n Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713\r\n width=1462)\n\n\n Index Cond: (collection_id\r\n = collection.id)\n\n\n Filter: (is_active AND\r\n (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_d\n\n\nate_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text))\r\n AND (((properties #>> '{temporal_extent,range_date_times,0,end_\n\n\ndate_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> 
'{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties\r\n #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[])\n\n\n < '2015-10-09T00:00:00+00:00'::text)))\n\n\n -> Index\r\n Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1\r\n width=56)\n\n\n Index Cond: (id = granule.collection_id)\n\n\n -> Hash (cost=1.52..1.52\r\n rows=52 width=16)\n\n\n -> Seq\r\n Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\n\n\n \n\n\n \n\n\nHeres a bit about the tables – \n\n\n \n\n\nGranule\n\n\nCollection\n\n\nGranule_file\n\n\nVisibility\n\n\n \n\n\nGranule:\n\n\npublic | granule |\r\n table | ims_api_writer | 36 GB | \n\n\n \n\n\nims_api=# \\d+ granule\n\n\n Table \"public.granule\"\n\n\n Column \r\n | Type \r\n | Collation | Nullable | Default | Storage |\r\n Stats target | Description \n\n\n-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n\n\n collection_id \r\n | integer | \r\n | not null | |\r\n plain | \r\n | \n\n\n create_date \r\n | timestamp without time zone | |\r\n not null | | plain \r\n | | \n\n\n delete_date \r\n | timestamp without time zone | | \r\n | |\r\n plain | \r\n | \n\n\n geometry \r\n | geometry(Geometry,4326) | \r\n | | \r\n | main | \r\n | \n\n\n is_active \r\n | boolean | \r\n | | \r\n | plain | \r\n | \n\n\n properties \r\n | jsonb | \r\n | | \r\n | extended | | \n\n\n update_date \r\n | timestamp without time zone | |\r\n not null | | plain \r\n | | \n\n\n uuid \r\n | uuid | \r\n | not null | |\r\n plain | \r\n | \n\n\n visibility_id \r\n | integer | \r\n | not null | |\r\n plain | \r\n | \n\n\n geography \r\n | geography(Geometry,4326) | \r\n | | \r\n | main | \r\n | \n\n\n visibility_last_update_date | timestamp without time zone | \r\n | | \r\n | plain | \r\n | \n\n\nIndexes:\n\n\n \"granule_pkey\" PRIMARY KEY, btree (uuid)\n\n\n \"granule_is_active_idx\" btree 
(is_active)\n\n\n \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\n\n\n \"granule_update_date_idx\" btree (update_date)\n\n\n \"idx_granule_geometry\" gist (geometry)\n\n\n \"ix_granule_collection_id\" btree (collection_id)\n\n\nForeign-key constraints:\n\n\n \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES\r\n collection(id)\n\n\n \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES\r\n visibility(id)\n\n\nReferenced by:\n\n\n TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN\r\n KEY (granule_uuid) REFERENCES granule(uuid)\n\n\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\"\r\n FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n\n\nTriggers:\n\n\n granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON\r\n granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\n\n\nAccess method: heap\n\n\n \n\n\nCollection:\n\n\npublic | collection |\r\n table | ims_api_writer | 39 MB | \n\n\n \n\n\nims_api=# \\d collection\n\n\n Table \"public.collection\"\n\n\n Column \r\n | Type \r\n | Collation | Nullable | Default \r\n \n\n\n------------------------------+-----------------------------+-----------+----------+----------------------------------------\n\n\n id \r\n | integer | \r\n | not null | nextval('collection_id_seq'::regclass)\n\n\n access_constraints \r\n | text | \r\n | | \n\n\n additional_attributes \r\n | jsonb | \r\n | | \n\n\n ancillary_keywords \r\n | character varying(160)[] | \r\n | | \n\n\n create_date \r\n | timestamp without time zone | |\r\n not null | \n\n\n dataset_language \r\n | character varying(80)[] | \r\n | | \n\n\n dataset_progress \r\n | text | \r\n | | \n\n\n data_resolutions \r\n | jsonb | \r\n | | \n\n\n dataset_citation \r\n | jsonb | \r\n | | \n\n\n delete_date \r\n | timestamp without time zone | | \r\n | \n\n\n distribution \r\n | jsonb | \r\n | | \n\n\n doi \r\n | 
character varying(220) | \r\n | | \n\n\n entry_id \r\n | character varying(80) | \r\n | not null | \n\n\n entry_title \r\n | character varying(1030) | \r\n | | \n\n\n geometry \r\n | geometry(Geometry,4326) | \r\n | | \n\n\n is_active \r\n | boolean | \r\n | not null | \n\n\n iso_topic_categories \r\n | character varying[] | \r\n | | \n\n\n last_update_date \r\n | timestamp without time zone | |\r\n not null | \n\n\n locations \r\n | jsonb | \r\n | | \n\n\n long_name \r\n | character varying(1024) | \r\n | | \n\n\n metadata_associations \r\n | jsonb | \r\n | | \n\n\n metadata_dates \r\n | jsonb | \r\n | | \n\n\n personnel \r\n | jsonb | \r\n | | \n\n\n platforms \r\n | jsonb | \r\n | | \n\n\n processing_level_id \r\n | integer | \r\n | | \n\n\n product_flag \r\n | text | \r\n | | \n\n\n project_id \r\n | integer | \r\n | | \n\n\n properties \r\n | jsonb | \r\n | | \n\n\n quality \r\n | jsonb | \r\n | | \n\n\n references \r\n | character varying(12000)[] | \r\n | | \n\n\n related_urls \r\n | jsonb | \r\n | | \n\n\n summary \r\n | jsonb | \r\n | | \n\n\n short_name \r\n | character varying(80) | \r\n | | \n\n\n temporal_extents \r\n | jsonb | \r\n | | \n\n\n version \r\n | character varying(80) | \r\n | | \n\n\n use_constraints \r\n | jsonb | \r\n | | \n\n\n version_description \r\n | text | \r\n | | \n\n\n visibility_id \r\n | integer | \r\n | not null | \n\n\n world_date \r\n | timestamp without time zone | | \r\n | \n\n\n tiling_identification_system | jsonb \r\n | | \r\n | \n\n\n collection_data_type \r\n | text | \r\n | | \n\n\n standard_product \r\n | boolean | \r\n | not null | false\n\n\nIndexes:\n\n\n \"collection_pkey\" PRIMARY KEY, btree (id)\n\n\n \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\n\n\n \"idx_collection_geometry\" gist (geometry)\n\n\nForeign-key constraints:\n\n\n \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id)\r\n REFERENCES processing_level(id)\n\n\n \"collection_project_id_fkey\" FOREIGN KEY 
(project_id) REFERENCES project(id)\n\n\n \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES\r\n visibility(id)\n\n\nReferenced by:\n\n\n TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\"\r\n FOREIGN KEY (collection_id) REFERENCES collection(id)\n\n\n TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\"\r\n FOREIGN KEY (collection_id) REFERENCES collection(id)\n\n\n TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\"\r\n FOREIGN KEY (collection_id) REFERENCES collection(id)\n\n\n TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY\r\n (collection_id) REFERENCES collection(id)\n\n\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\"\r\n FOREIGN KEY (collection_id) REFERENCES collection(id)\n\n\n \n\n\n \n\n\nGranule_file:\n\n\n public | granule_file \r\n | table | ims_api_writer | 1108 MB | \n\n\n \n\n\n\\d granule_file\n\n\n Table \"public.granule_file\"\n\n\n Column \r\n | Type |\r\n Collation | Nullable | Default \n\n\n--------------+---------+-----------+----------+---------\n\n\n granule_uuid | uuid \r\n | | \r\n | \n\n\n file_id \r\n | integer | | \r\n | \n\n\nForeign-key constraints:\n\n\n \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\n\n\n \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES\r\n granule(uuid)\n\n\n \n\n\n \n\n\nVisibility:\n\n\npublic | visibility |\r\n table | ims_api_writer | 40 kB | \n\n\n \n\n\n\\d visibility\n\n\n Table \"public.visibility\"\n\n\n Column | \r\n Type |\r\n Collation | Nullable | Default \r\n \n\n\n--------+-----------------------+-----------+----------+----------------------------------------\n\n\n id \r\n | integer | \r\n | not null | nextval('visibility_id_seq'::regclass)\n\n\n name |\r\n character varying(80) | | not null | \n\n\n value |\r\n 
integer | \r\n | not null | \n\n\nIndexes:\n\n\n \"visibility_pkey\" PRIMARY KEY, btree (id)\n\n\n \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\n\n\n \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)\n\n\nReferenced by:\n\n\n TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN\r\n KEY (visibility_id) REFERENCES visibility(id)\n\n\n TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY\r\n (visibility_id) REFERENCES visibility(id)\n\n\n \n\n\n \n\n\n \n\n\n \n\n\nThanks for the help!\n\n\n \n\n\nMaria Wilson\n\n\nNasa/Langley Research Center\n\n\nHampton, Virginia USA\n\n\[email protected]",
"msg_date": "Wed, 27 Dec 2023 16:01:14 +0000",
"msg_from": "\"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Yes, there is an explain, but that is an explain that is run without ‘analyze’ added to it.\nThis means the query is parsed and planned, and the resulting parse tree with planner assumptions is shown.\n\nIf you add ‘analyze’ to ‘explain’, the actual query is run and timed, and statistics about actual execution are shown.\n\nFrits Hoogland\n\n\n\n\n> On 27 Dec 2023, at 17:01, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\n> \n> Thanks for the reply!! Scroll down a bit – the explain is just a bit further down in the email!\n> Maria\n> \n> From: Frits Hoogland <[email protected] <mailto:[email protected]>>\n> Date: Wednesday, December 27, 2023 at 10:50 AM\n> To: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected] <mailto:[email protected]>>\n> Cc: \"[email protected] <mailto:[email protected]>\" <[email protected] <mailto:[email protected]>>\n> Subject: [EXTERNAL] Re: Need help with performance tuning pg12 on linux\n> \n> CAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\n> \n> \n> \n> Hi Maria, could you please run explain analyse for the problem query?\n> The ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\n> \n> Frits Hoogland\n> \n> \n> \n> \n> \n>> On 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\n>> \n>> Hello folks!\n>> \n>> I am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. 
Let me know if I can provide any additional information!! Thanks in advance!\n>> \n>> Here’s my background – Linux RHEL 8 – PostgreSQL 12.17. – \n>> MemTotal: 263216840 kB\n>> MemFree: 3728224 kB\n>> MemAvailable: 197186864 kB\n>> Buffers: 6704 kB\n>> Cached: 204995024 kB\n>> SwapCached: 19244 kB\n>> \n>> free -m\n>> total used free shared buff/cache available\n>> Mem: 257047 51860 3722 10718 201464 192644\n>> Swap: 4095 855 3240\n>> \n>> Here are a few of the settings in our postgres server:\n>> max_connections = 300 # (change requires restart)\n>> shared_buffers = 10GB\n>> temp_buffers = 24MB\n>> work_mem = 2GB\n>> maintenance_work_mem = 1GB\n>> \n>> most everything else is set to the default.\n>> \n>> The query is complex with several joins:\n>> \n>> SELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id <http://collection_1.id/> AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id <http://file_1.id/> AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id <http://visibility_1.id/> AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\n>> FROM 
(SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\n>> FROM granule JOIN collection ON collection.id <http://collection.id/> = granule.collection_id\n>> WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid\n>> LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id <http://collection_1.id/> = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id <http://file_1.id/> = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id <http://visibility_1.id/> = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\n>> \n>> Here’s the explain:\n>> 
\n>> Sort (cost=10914809.92..10914810.27 rows=141 width=996)\n>> Sort Key: granule.uuid\n>> -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996)\n>> Hash Cond: (granule.visibility_id = visibility_1.id <http://visibility_1.id/>)\n>> -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\n>> Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\n>> -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223)\n>> Hash Cond: (file_1.id <http://file_1.id/> = granule_file_1.file_id)\n>> -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\n>> -> Hash (cost=365077.51..365077.51 rows=22332751 width=20)\n>> -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\n>> -> Hash (cost=96300.33..96300.33 rows=26 width=1518)\n>> -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\n>> -> Limit (cost=96092.27..96092.33 rows=26 width=1462)\n>> -> Sort (cost=96092.27..96100.47 rows=3282 width=1462)\n>> Sort Key: granule.uuid\n>> -> Nested Loop (cost=0.56..95998.73 rows=3282 width=1462)\n>> -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4)\n>> Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\n>> -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462)\n>> Index Cond: (collection_id = collection.id <http://collection.id/>)\n>> Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_d\n>> ate_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_\n>> date_time}'::text[]) < 
'2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[])\n>> < '2015-10-09T00:00:00+00:00'::text)))\n>> -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56)\n>> Index Cond: (id = granule.collection_id)\n>> -> Hash (cost=1.52..1.52 rows=52 width=16)\n>> -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\n>> \n>> \n>> Heres a bit about the tables – \n>> \n>> Granule\n>> Collection\n>> Granule_file\n>> Visibility\n>> \n>> Granule:\n>> public | granule | table | ims_api_writer | 36 GB | \n>> \n>> ims_api=# \\d+ granule\n>> Table \"public.granule\"\n>> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n>> -----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n>> collection_id | integer | | not null | | plain | | \n>> create_date | timestamp without time zone | | not null | | plain | | \n>> delete_date | timestamp without time zone | | | | plain | | \n>> geometry | geometry(Geometry,4326) | | | | main | | \n>> is_active | boolean | | | | plain | | \n>> properties | jsonb | | | | extended | | \n>> update_date | timestamp without time zone | | not null | | plain | | \n>> uuid | uuid | | not null | | plain | | \n>> visibility_id | integer | | not null | | plain | | \n>> geography | geography(Geometry,4326) | | | | main | | \n>> visibility_last_update_date | timestamp without time zone | | | | plain | | \n>> Indexes:\n>> \"granule_pkey\" PRIMARY KEY, btree (uuid)\n>> \"granule_is_active_idx\" btree (is_active)\n>> \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\n>> \"granule_update_date_idx\" btree (update_date)\n>> \"idx_granule_geometry\" gist (geometry)\n>> \"ix_granule_collection_id\" 
btree (collection_id)\n>> Foreign-key constraints:\n>> \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n>> Referenced by:\n>> TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n>> TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n>> Triggers:\n>> granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\n>> Access method: heap\n>> \n>> Collection:\n>> public | collection | table | ims_api_writer | 39 MB | \n>> \n>> ims_api=# \\d collection\n>> Table \"public.collection\"\n>> Column | Type | Collation | Nullable | Default \n>> ------------------------------+-----------------------------+-----------+----------+----------------------------------------\n>> id | integer | | not null | nextval('collection_id_seq'::regclass)\n>> access_constraints | text | | | \n>> additional_attributes | jsonb | | | \n>> ancillary_keywords | character varying(160)[] | | | \n>> create_date | timestamp without time zone | | not null | \n>> dataset_language | character varying(80)[] | | | \n>> dataset_progress | text | | | \n>> data_resolutions | jsonb | | | \n>> dataset_citation | jsonb | | | \n>> delete_date | timestamp without time zone | | | \n>> distribution | jsonb | | | \n>> doi | character varying(220) | | | \n>> entry_id | character varying(80) | | not null | \n>> entry_title | character varying(1030) | | | \n>> geometry | geometry(Geometry,4326) | | | \n>> is_active | boolean | | not null | \n>> iso_topic_categories | character varying[] | | | \n>> last_update_date | timestamp without time zone | | not null | \n>> locations | jsonb | | | \n>> long_name | character varying(1024) | | | \n>> metadata_associations | jsonb | | 
| \n>> metadata_dates | jsonb | | | \n>> personnel | jsonb | | | \n>> platforms | jsonb | | | \n>> processing_level_id | integer | | | \n>> product_flag | text | | | \n>> project_id | integer | | | \n>> properties | jsonb | | | \n>> quality | jsonb | | | \n>> references | character varying(12000)[] | | | \n>> related_urls | jsonb | | | \n>> summary | jsonb | | | \n>> short_name | character varying(80) | | | \n>> temporal_extents | jsonb | | | \n>> version | character varying(80) | | | \n>> use_constraints | jsonb | | | \n>> version_description | text | | | \n>> visibility_id | integer | | not null | \n>> world_date | timestamp without time zone | | | \n>> tiling_identification_system | jsonb | | | \n>> collection_data_type | text | | | \n>> standard_product | boolean | | not null | false\n>> Indexes:\n>> \"collection_pkey\" PRIMARY KEY, btree (id)\n>> \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\n>> \"idx_collection_geometry\" gist (geometry)\n>> Foreign-key constraints:\n>> \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id)\n>> \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id)\n>> \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n>> Referenced by:\n>> TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" 
FOREIGN KEY (collection_id) REFERENCES collection(id)\n>> \n>> \n>> Granule_file:\n>> public | granule_file | table | ims_api_writer | 1108 MB | \n>> \n>> \\d granule_file\n>> Table \"public.granule_file\"\n>> Column | Type | Collation | Nullable | Default \n>> --------------+---------+-----------+----------+---------\n>> granule_uuid | uuid | | | \n>> file_id | integer | | | \n>> Foreign-key constraints:\n>> \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\n>> \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\n>> \n>> \n>> Visibility:\n>> public | visibility | table | ims_api_writer | 40 kB | \n>> \n>> \\d visibility\n>> Table \"public.visibility\"\n>> Column | Type | Collation | Nullable | Default \n>> --------+-----------------------+-----------+----------+----------------------------------------\n>> id | integer | | not null | nextval('visibility_id_seq'::regclass)\n>> name | character varying(80) | | not null | \n>> value | integer | | not null | \n>> Indexes:\n>> \"visibility_pkey\" PRIMARY KEY, btree (id)\n>> \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\n>> \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)\n>> Referenced by:\n>> TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n>> TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\n>> \n>> \n>> \n>> \n>> Thanks for the help!\n>> \n>> Maria Wilson\n>> Nasa/Langley Research Center\n>> Hampton, Virginia USA\n>> [email protected] <mailto:[email protected]>\n\nYes, there is an explain, but that is an explain that is run without ‘analyze’ added to it.This means the query is parsed and planned, and the resulting parse tree with planner assumptions is shown.If you add ‘analyze to ‘explain’, the actual query is run and timed, and statistics about actual execution are shown.\nFrits Hoogland\n\nOn 27 Dec 2023, 
at 17:01, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:Thanks for the reply!! Scroll down a bit – the explain is just a bit further down in the email!Maria From: Frits Hoogland <[email protected]>Date: Wednesday, December 27, 2023 at 10:50 AMTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>Cc: \"[email protected]\" <[email protected]>Subject: [EXTERNAL] Re: Need help with performance tuning pg12 on linux CAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.Hi Maria, could you please run explain analyse for the problem query?The ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions. Frits Hoogland On 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote: Hello folks! I am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance! Here’s my background – Linux RHEL 8 – PostgreSQL 12.17. – MemTotal: 263216840 kBMemFree: 3728224 kBMemAvailable: 197186864 kBBuffers: 6704 kBCached: 204995024 kBSwapCached: 19244 kB free -m total used free shared buff/cache availableMem: 257047 51860 3722 10718 201464 192644Swap: 4095 855 3240 Here are a few of the settings in our postgres server:max_connections = 300 # (change requires restart)shared_buffers = 10GBtemp_buffers = 24MBwork_mem = 2GBmaintenance_work_mem = 1GB most everything else is set to the default. 
The query is complex with several joins: SELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id FROM granule JOIN collection ON collection.id = granule.collection_id WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> 
'{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid Here’s the explain: Sort (cost=10914809.92..10914810.27 rows=141 width=996) Sort Key: granule.uuid -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996) Hash Cond: (granule.visibility_id = visibility_1.id) -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725) Hash Cond: (granule_file_1.granule_uuid = granule.uuid) -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223) Hash Cond: (file_1.id = granule_file_1.file_id) -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207) -> Hash (cost=365077.51..365077.51 rows=22332751 width=20) -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20) -> Hash (cost=96300.33..96300.33 rows=26 width=1518) -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518) -> Limit (cost=96092.27..96092.33 rows=26 width=1462) -> Sort (cost=96092.27..96100.47 rows=3282 width=1462) Sort Key: granule.uuid -> 
Nested Loop (cost=0.56..95998.73 rows=3282 width=1462) -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4) Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text)) -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462) Index Cond: (collection_id = collection.id) Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text))) -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56) Index Cond: (id = granule.collection_id) -> Hash (cost=1.52..1.52 rows=52 width=16) -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16) Heres a bit about the tables – GranuleCollectionGranule_fileVisibility Granule:public | granule | table | ims_api_writer | 36 GB | ims_api=# \\d+ granule Table \"public.granule\" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description -----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+------------- collection_id | integer | | not null | | plain | | create_date | timestamp without time zone | | not null | | plain | | delete_date | timestamp without time zone | | | | plain | | 
geometry | geometry(Geometry,4326) | | | | main | | is_active | boolean | | | | plain | | properties | jsonb | | | | extended | | update_date | timestamp without time zone | | not null | | plain | | uuid | uuid | | not null | | plain | | visibility_id | integer | | not null | | plain | | geography | geography(Geometry,4326) | | | | main | | visibility_last_update_date | timestamp without time zone | | | | plain | | Indexes: \"granule_pkey\" PRIMARY KEY, btree (uuid) \"granule_is_active_idx\" btree (is_active) \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text)) \"granule_update_date_idx\" btree (update_date) \"idx_granule_geometry\" gist (geometry) \"ix_granule_collection_id\" btree (collection_id)Foreign-key constraints: \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)Referenced by: TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid) TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)Triggers: granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()Access method: heap Collection:public | collection | table | ims_api_writer | 39 MB | ims_api=# \\d collection Table \"public.collection\" Column | Type | Collation | Nullable | Default ------------------------------+-----------------------------+-----------+----------+---------------------------------------- id | integer | | not null | nextval('collection_id_seq'::regclass) access_constraints | text | | | additional_attributes | jsonb | | | ancillary_keywords | character varying(160)[] | | | create_date | timestamp without time zone | | not null | dataset_language | character varying(80)[] | | | dataset_progress | text | | | 
data_resolutions | jsonb | | | dataset_citation | jsonb | | | delete_date | timestamp without time zone | | | distribution | jsonb | | | doi | character varying(220) | | | entry_id | character varying(80) | | not null | entry_title | character varying(1030) | | | geometry | geometry(Geometry,4326) | | | is_active | boolean | | not null | iso_topic_categories | character varying[] | | | last_update_date | timestamp without time zone | | not null | locations | jsonb | | | long_name | character varying(1024) | | | metadata_associations | jsonb | | | metadata_dates | jsonb | | | personnel | jsonb | | | platforms | jsonb | | | processing_level_id | integer | | | product_flag | text | | | project_id | integer | | | properties | jsonb | | | quality | jsonb | | | references | character varying(12000)[] | | | related_urls | jsonb | | | summary | jsonb | | | short_name | character varying(80) | | | temporal_extents | jsonb | | | version | character varying(80) | | | use_constraints | jsonb | | | version_description | text | | | visibility_id | integer | | not null | world_date | timestamp without time zone | | | tiling_identification_system | jsonb | | | collection_data_type | text | | | standard_product | boolean | | not null | falseIndexes: \"collection_pkey\" PRIMARY KEY, btree (id) \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id) \"idx_collection_geometry\" gist (geometry)Foreign-key constraints: \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id) \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id) \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)Referenced by: TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES 
collection(id) TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id) Granule_file: public | granule_file | table | ims_api_writer | 1108 MB | \\d granule_file Table \"public.granule_file\" Column | Type | Collation | Nullable | Default --------------+---------+-----------+----------+--------- granule_uuid | uuid | | | file_id | integer | | | Foreign-key constraints: \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id) \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid) Visibility:public | visibility | table | ims_api_writer | 40 kB | \\d visibility Table \"public.visibility\" Column | Type | Collation | Nullable | Default --------+-----------------------+-----------+----------+---------------------------------------- id | integer | | not null | nextval('visibility_id_seq'::regclass) name | character varying(80) | | not null | value | integer | | not null | Indexes: \"visibility_pkey\" PRIMARY KEY, btree (id) \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name) \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)Referenced by: TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id) TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id) Thanks for the help! Maria WilsonNasa/Langley Research CenterHampton, Virginia [email protected]",
"msg_date": "Wed, 27 Dec 2023 17:07:10 +0100",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Thanks!! See below:\r\n\r\n\r\nexplain (analyze, buffers)\r\n\r\n\r\n Sort (cost=10914852.09..10914852.45 rows=141 width=996) (actual time=46036.486..46036.540 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=784286 read=8346129\r\n\r\n -> Hash Left Join (cost=740575.40..10914847.06 rows=141 width=996) (actual time=46036.366..46036.457 rows=4 loops=1)\r\n\r\n Hash Cond: (granule.visibility_id = visibility_1.id)\r\n\r\n Buffers: shared hit=784283 read=8346129\r\n\r\n -> Hash Right Join (cost=740573.23..10914773.99 rows=141 width=1725) (actual time=46036.148..46036.208 rows=4 loops=1)\r\n\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n\r\n Buffers: shared hit=784282 read=8346129\r\n\r\n -> Hash Join (cost=644250.54..10734700.30 rows=22333224 width=223) (actual time=7864.023..44546.392 rows=22325462 loops=1)\r\n\r\n Hash Cond: (file_1.id = granule_file_1.file_id)\r\n\r\n Buffers: shared hit=780882 read=8345236\r\n\r\n -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207) (actual time=402.706..25222.525 rows=22057988 loops=1)\r\n\r\n Buffers: shared hit=639126 read=8345236\r\n\r\n -> Hash (cost=365085.24..365085.24 rows=22333224 width=20) (actual time=7288.228..7288.235 rows=22325462 loops=1)\r\n\r\n Buckets: 33554432 Batches: 1 Memory Usage: 1391822kB\r\n\r\n Buffers: shared hit=141753\r\n\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..365085.24 rows=22333224 width=20) (actual time=0.030..2151.380 rows=22325462 loops=1)\r\n\r\n Buffers: shared hit=141753\r\n\r\n -> Hash (cost=96322.36..96322.36 rows=26 width=1518) (actual time=16.631..16.672 rows=4 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n\r\n Buffers: shared hit=3400 read=893\r\n\r\n -> Nested Loop Left Join (cost=96114.18..96322.36 rows=26 width=1518) (actual time=16.605..16.658 rows=4 loops=1)\r\n\r\n Buffers: shared hit=3400 read=893\r\n\r\n -> Limit 
(cost=96113.90..96113.97 rows=26 width=1462) (actual time=16.585..16.621 rows=4 loops=1)\r\n\r\n Buffers: shared hit=3388 read=893\r\n\r\n -> Sort (cost=96113.90..96126.16 rows=4902 width=1462) (actual time=16.583..16.610 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=3388 read=893\r\n\r\n -> Nested Loop (cost=0.56..95974.19 rows=4902 width=1462) (actual time=3.805..16.585 rows=4 loops=1)\r\n\r\n Buffers: shared hit=3388 read=893\r\n\r\n -> Seq Scan on collection (cost=0.00..3341.70 rows=1 width=4) (actual time=0.670..5.734 rows=4 loops=1)\r\n\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n\r\n Rows Removed by Filter: 2481\r\n\r\n Buffers: shared hit=3292\r\n\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462) (actual time=1.342..2.705 rows=1 loops=4)\r\n\r\n Index Cond: (collection_id = collection.id)\r\n\r\n Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_d\r\n\r\nate_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_\r\n\r\ndate_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[])\r\n\r\n < '2015-10-09T00:00:00+00:00'::text)))\r\n\r\n Rows Removed by Filter: 243\r\n\r\n Buffers: shared hit=96 read=893\r\n\r\n -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..8.00 
rows=1 width=56) (actual time=0.005..0.005 rows=1 loops=4)\r\n\r\n Index Cond: (id = granule.collection_id)\r\n\r\n Buffers: shared hit=12\r\n\r\n -> Hash (cost=1.52..1.52 rows=52 width=16) (actual time=0.038..0.038 rows=52 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\r\n\r\n Buffers: shared hit=1\r\n\r\n -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16) (actual time=0.019..0.022 rows=52 loops=1)\r\n\r\n Buffers: shared hit=1\r\n\r\n Planning Time: 3.510 ms\r\n\r\n Execution Time: 46084.516 ms\r\n\r\n(52 rows)\r\n\r\n\r\n\r\n\r\nFrom: Frits Hoogland <[email protected]>\r\nDate: Wednesday, December 27, 2023 at 11:07 AM\r\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>\r\nCc: \"[email protected]\" <[email protected]>\r\nSubject: Re: [EXTERNAL] Need help with performance tuning pg12 on linux\r\n\r\nCAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\r\n\r\n\r\nYes, there is an explain, but that is an explain that is run without ‘analyze’ added to it.\r\nThis means the query is parsed and planned, and the resulting parse tree with planner assumptions is shown.\r\n\r\nIf you add ‘analyze to ‘explain’, the actual query is run and timed, and statistics about actual execution are shown.\r\n\r\nFrits Hoogland\r\n\r\n\r\n\r\n\r\nOn 27 Dec 2023, at 17:01, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\r\n\r\nThanks for the reply!! 
Scroll down a bit – the explain is just a bit further down in the email!\r\nMaria\r\n\r\nFrom: Frits Hoogland <[email protected]<mailto:[email protected]>>\r\nDate: Wednesday, December 27, 2023 at 10:50 AM\r\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]<mailto:[email protected]>>\r\nCc: \"[email protected]<mailto:[email protected]>\" <[email protected]<mailto:[email protected]>>\r\nSubject: [EXTERNAL] Re: Need help with performance tuning pg12 on linux\r\n\r\nCAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\r\n\r\n\r\n\r\nHi Maria, could you please run explain analyse for the problem query?\r\nThe ‘analyze’ addition will track actual spent time and show statistics to validate the planner’s assumptions.\r\n\r\nFrits Hoogland\r\n\r\n\r\n\r\n\r\n\r\nOn 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\r\n\r\nHello folks!\r\n\r\nI am having a complex query slowing over time increasing in duration. If anyone has a few cycles that they could lend a hand or just point me in the right direction with this – I would surely appreciate it! Fairly beefy Linux server with Postgres 12 (latest) – this particular query has been getting slower over time & seemingly slowing everything else down. The server is dedicated entirely to this particular database. Let me know if I can provide any additional information!! Thanks in advance!\r\n\r\nHere’s my background – Linux RHEL 8 – PostgreSQL 12.17. 
–\r\nMemTotal: 263216840 kB\r\nMemFree: 3728224 kB\r\nMemAvailable: 197186864 kB\r\nBuffers: 6704 kB\r\nCached: 204995024 kB\r\nSwapCached: 19244 kB\r\n\r\nfree -m\r\n total used free shared buff/cache available\r\nMem: 257047 51860 3722 10718 201464 192644\r\nSwap: 4095 855 3240\r\n\r\nHere are a few of the settings in our postgres server:\r\nmax_connections = 300 # (change requires restart)\r\nshared_buffers = 10GB\r\ntemp_buffers = 24MB\r\nwork_mem = 2GB\r\nmaintenance_work_mem = 1GB\r\n\r\nmost everything else is set to the default.\r\n\r\nThe query is complex with several joins:\r\n\r\nSELECT anon_1.granule_collection_id AS anon_1_granule_collection_id, anon_1.granule_create_date AS anon_1_granule_create_date, anon_1.granule_delete_date AS anon_1_granule_delete_date, ST_AsGeoJSON(anon_1.granule_geography) AS anon_1_granule_geography, ST_AsGeoJSON(anon_1.granule_geometry) AS anon_1_granule_geometry, anon_1.granule_is_active AS anon_1_granule_is_active, anon_1.granule_properties AS anon_1_granule_properties, anon_1.granule_update_date AS anon_1_granule_update_date, anon_1.granule_uuid AS anon_1_granule_uuid, anon_1.granule_visibility_last_update_date AS anon_1_granule_visibility_last_update_date, anon_1.granule_visibility_id AS anon_1_granule_visibility_id, collection_1.id<http://collection_1.id/> AS collection_1_id, collection_1.entry_id AS collection_1_entry_id, collection_1.short_name AS collection_1_short_name, collection_1.version AS collection_1_version, file_1.id<http://file_1.id/> AS file_1_id, file_1.location AS file_1_location, file_1.md5 AS file_1_md5, file_1.name AS file_1_name, file_1.size AS file_1_size, file_1.type AS file_1_type, visibility_1.id<http://visibility_1.id/> AS visibility_1_id, visibility_1.name AS visibility_1_name, visibility_1.value AS visibility_1_value\r\n FROM (SELECT granule.collection_id AS granule_collection_id, granule.create_date AS granule_create_date, granule.delete_date AS granule_delete_date, granule.geography AS 
granule_geography, granule.geometry AS granule_geometry, granule.is_active AS granule_is_active, granule.properties AS granule_properties, granule.update_date AS granule_update_date, granule.uuid AS granule_uuid, granule.visibility_last_update_date AS granule_visibility_last_update_date, granule.visibility_id AS granule_visibility_id\r\n FROM granule JOIN collection ON collection.id<http://collection.id/> = granule.collection_id\r\n WHERE granule.is_active = true AND (collection.entry_id LIKE 'AJAX_CO2_CH4_1' OR collection.entry_id LIKE 'AJAX_O3_1' OR collection.entry_id LIKE 'AJAX_CH2O_1' OR collection.entry_id LIKE 'AJAX_MMS_1') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, beginning_date_time}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') > '2015-10-06T23:59:59+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, start_date}') > '2015-10-06T23:59:59+00:00') AND ((granule.properties #>> '{temporal_extent, range_date_times, 0, end_date_time}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, single_date_times, 0}') < '2015-10-09T00:00:00+00:00' OR (granule.properties #>> '{temporal_extent, periodic_date_times, 0, end_date}') < '2015-10-09T00:00:00+00:00') ORDER BY granule.uuid\r\n LIMIT 26) AS anon_1 LEFT OUTER JOIN collection AS collection_1 ON collection_1.id<http://collection_1.id/> = anon_1.granule_collection_id LEFT OUTER JOIN (granule_file AS granule_file_1 JOIN file AS file_1 ON file_1.id<http://file_1.id/> = granule_file_1.file_id) ON anon_1.granule_uuid = granule_file_1.granule_uuid LEFT OUTER JOIN visibility AS visibility_1 ON visibility_1.id<http://visibility_1.id/> = anon_1.granule_visibility_id ORDER BY anon_1.granule_uuid\r\n\r\nHere’s the explain:\r\n\r\n Sort (cost=10914809.92..10914810.27 rows=141 width=996)\r\n Sort Key: granule.uuid\r\n -> Hash Left Join (cost=740539.73..10914804.89 rows=141 width=996)\r\n Hash Cond: 
(granule.visibility_id = visibility_1.id<http://visibility_1.id/>)\r\n -> Hash Right Join (cost=740537.56..10914731.81 rows=141 width=1725)\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n -> Hash Join (cost=644236.90..10734681.93 rows=22332751 width=223)\r\n Hash Cond: (file_1.id<http://file_1.id/> = granule_file_1.file_id)\r\n -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207)\r\n -> Hash (cost=365077.51..365077.51 rows=22332751 width=20)\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..365077.51 rows=22332751 width=20)\r\n -> Hash (cost=96300.33..96300.33 rows=26 width=1518)\r\n -> Nested Loop Left Join (cost=96092.55..96300.33 rows=26 width=1518)\r\n -> Limit (cost=96092.27..96092.33 rows=26 width=1462)\r\n -> Sort (cost=96092.27..96100.47 rows=3282 width=1462)\r\n Sort Key: granule.uuid\r\n -> Nested Loop (cost=0.56..95998.73 rows=3282 width=1462)\r\n -> Seq Scan on collection (cost=0.00..3366.24 rows=1 width=4)\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..92445.36 rows=18713 width=1462)\r\n Index Cond: (collection_id = collection.id<http://collection.id/>)\r\n Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_d\r\nate_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_\r\ndate_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> 
'{temporal_extent,periodic_date_times,0,end_date}'::text[])\r\n < '2015-10-09T00:00:00+00:00'::text)))\r\n -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..7.99 rows=1 width=56)\r\n Index Cond: (id = granule.collection_id)\r\n -> Hash (cost=1.52..1.52 rows=52 width=16)\r\n -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16)\r\n\r\n\r\nHeres a bit about the tables –\r\n\r\nGranule\r\nCollection\r\nGranule_file\r\nVisibility\r\n\r\nGranule:\r\npublic | granule | table | ims_api_writer | 36 GB |\r\n\r\nims_api=# \\d+ granule\r\n Table \"public.granule\"\r\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\r\n-----------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\r\n collection_id | integer | | not null | | plain | |\r\n create_date | timestamp without time zone | | not null | | plain | |\r\n delete_date | timestamp without time zone | | | | plain | |\r\n geometry | geometry(Geometry,4326) | | | | main | |\r\n is_active | boolean | | | | plain | |\r\n properties | jsonb | | | | extended | |\r\n update_date | timestamp without time zone | | not null | | plain | |\r\n uuid | uuid | | not null | | plain | |\r\n visibility_id | integer | | not null | | plain | |\r\n geography | geography(Geometry,4326) | | | | main | |\r\n visibility_last_update_date | timestamp without time zone | | | | plain | |\r\nIndexes:\r\n \"granule_pkey\" PRIMARY KEY, btree (uuid)\r\n \"granule_is_active_idx\" btree (is_active)\r\n \"granule_properties_producer_id_idx\" btree ((properties ->> 'producer_granule_id'::text))\r\n \"granule_update_date_idx\" btree (update_date)\r\n \"idx_granule_geometry\" gist (geometry)\r\n \"ix_granule_collection_id\" btree (collection_id)\r\nForeign-key constraints:\r\n \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n \"granule_visibility_id_fkey\" FOREIGN KEY 
(visibility_id) REFERENCES visibility(id)\r\nReferenced by:\r\n TABLE \"granule_file\" CONSTRAINT \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\nTriggers:\r\n granule_temporal_range_trigger AFTER INSERT OR DELETE OR UPDATE ON granule FOR EACH ROW EXECUTE FUNCTION sync_granule_temporal_range()\r\nAccess method: heap\r\n\r\nCollection:\r\npublic | collection | table | ims_api_writer | 39 MB |\r\n\r\nims_api=# \\d collection\r\n Table \"public.collection\"\r\n Column | Type | Collation | Nullable | Default\r\n------------------------------+-----------------------------+-----------+----------+----------------------------------------\r\n id | integer | | not null | nextval('collection_id_seq'::regclass)\r\n access_constraints | text | | |\r\n additional_attributes | jsonb | | |\r\n ancillary_keywords | character varying(160)[] | | |\r\n create_date | timestamp without time zone | | not null |\r\n dataset_language | character varying(80)[] | | |\r\n dataset_progress | text | | |\r\n data_resolutions | jsonb | | |\r\n dataset_citation | jsonb | | |\r\n delete_date | timestamp without time zone | | |\r\n distribution | jsonb | | |\r\n doi | character varying(220) | | |\r\n entry_id | character varying(80) | | not null |\r\n entry_title | character varying(1030) | | |\r\n geometry | geometry(Geometry,4326) | | |\r\n is_active | boolean | | not null |\r\n iso_topic_categories | character varying[] | | |\r\n last_update_date | timestamp without time zone | | not null |\r\n locations | jsonb | | |\r\n long_name | character varying(1024) | | |\r\n metadata_associations | jsonb | | |\r\n metadata_dates | jsonb | | |\r\n personnel | jsonb | | |\r\n platforms | jsonb | | |\r\n processing_level_id | integer | | |\r\n product_flag | text | | |\r\n project_id | integer | | |\r\n properties | jsonb | 
| |\r\n quality | jsonb | | |\r\n references | character varying(12000)[] | | |\r\n related_urls | jsonb | | |\r\n summary | jsonb | | |\r\n short_name | character varying(80) | | |\r\n temporal_extents | jsonb | | |\r\n version | character varying(80) | | |\r\n use_constraints | jsonb | | |\r\n version_description | text | | |\r\n visibility_id | integer | | not null |\r\n world_date | timestamp without time zone | | |\r\n tiling_identification_system | jsonb | | |\r\n collection_data_type | text | | |\r\n standard_product | boolean | | not null | false\r\nIndexes:\r\n \"collection_pkey\" PRIMARY KEY, btree (id)\r\n \"collection_entry_id_key\" UNIQUE CONSTRAINT, btree (entry_id)\r\n \"idx_collection_geometry\" gist (geometry)\r\nForeign-key constraints:\r\n \"collection_processing_level_id_fkey\" FOREIGN KEY (processing_level_id) REFERENCES processing_level(id)\r\n \"collection_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES project(id)\r\n \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\nReferenced by:\r\n TABLE \"collection_organization\" CONSTRAINT \"collection_organization_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"collection_science_keyword\" CONSTRAINT \"collection_science_keyword_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"collection_spatial_processing_hint\" CONSTRAINT \"collection_spatial_processing_hint_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"granule\" CONSTRAINT \"granule_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n TABLE \"granule_temporal_range\" CONSTRAINT \"granule_temporal_range_collection_id_fkey\" FOREIGN KEY (collection_id) REFERENCES collection(id)\r\n\r\n\r\nGranule_file:\r\n public | granule_file | table | ims_api_writer | 1108 MB |\r\n\r\n\\d granule_file\r\n Table \"public.granule_file\"\r\n Column | Type | Collation | Nullable 
| Default\r\n--------------+---------+-----------+----------+---------\r\n granule_uuid | uuid | | |\r\n file_id | integer | | |\r\nForeign-key constraints:\r\n \"granule_file_file_id_fkey\" FOREIGN KEY (file_id) REFERENCES file(id)\r\n \"granule_file_granule_uuid_fkey\" FOREIGN KEY (granule_uuid) REFERENCES granule(uuid)\r\n\r\n\r\nVisibility:\r\npublic | visibility | table | ims_api_writer | 40 kB |\r\n\r\n\\d visibility\r\n Table \"public.visibility\"\r\n Column | Type | Collation | Nullable | Default\r\n--------+-----------------------+-----------+----------+----------------------------------------\r\n id | integer | | not null | nextval('visibility_id_seq'::regclass)\r\n name | character varying(80) | | not null |\r\n value | integer | | not null |\r\nIndexes:\r\n \"visibility_pkey\" PRIMARY KEY, btree (id)\r\n \"visibility_name_key\" UNIQUE CONSTRAINT, btree (name)\r\n \"visibility_value_key\" UNIQUE CONSTRAINT, btree (value)\r\nReferenced by:\r\n TABLE \"collection\" CONSTRAINT \"collection_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n TABLE \"granule\" CONSTRAINT \"granule_visibility_id_fkey\" FOREIGN KEY (visibility_id) REFERENCES visibility(id)\r\n\r\n\r\n\r\n\r\nThanks for the help!\r\n\r\nMaria Wilson\r\nNasa/Langley Research Center\r\nHampton, Virginia USA\r\[email protected]<mailto:[email protected]>",
"msg_date": "Wed, 27 Dec 2023 16:15:23 +0000",
"msg_from": "\"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": ">\n> -> Hash Join (cost=644250.54..10734700.30 rows=22333224\n> width=223) (actual time=7864.023..44546.392 rows=22325462 loops=1)\n> Hash Cond: (file_1.id = granule_file_1.file_id)\n> Buffers: shared hit=780882 read=8345236\n> -> Seq Scan on file file_1 (cost=0.00..9205050.88\n> rows=22068888 width=207) (actual time=402.706..25222.525 rows=22057988\n> loops=1)\n> Buffers: shared hit=639126 read=8345236\n> -> Hash (cost=365085.24..365085.24 rows=22333224\n> width=20) (actual time=7288.228..7288.235 rows=22325462 loops=1)\n> Buckets: 33554432 Batches: 1 Memory Usage:\n> 1391822kB\n> Buffers: shared hit=141753\n> -> Seq Scan on granule_file granule_file_1 (cost=0.00..365085.24\n> rows=22333224 width=20) (actual time=0.030..2151.380 rows=22325462 loops=1)\n> Buffers: shared hit=141753\n\n\nThis part above is the most expensive so far, and taking a look at your\n`granule_file` table on the first message, it has no indexes nor\nconstraints, which certainly looks like a mistake. I'd start optimizing\nthis, you could add an index on it, but seems that you need a primary key\non both columns of this (junction?) table:\n\n ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\n\nThere are certainly more things to optimize on this query, but I prefer\ndoing one thing at a time. Could you try with the PK and send the EXPLAIN\nANALYZE of the query again after that?\n\nBest regards,\nMatheus de Oliveira",
"msg_date": "Wed, 27 Dec 2023 13:36:27 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Thanks for the reply!! Having some issues due to nulls…. Any other thoughts?\r\n\r\n\r\ni=# ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\r\n\r\nERROR: column \"granule_uuid\" contains null values\r\n\r\n\r\nFrom: Matheus de Oliveira <[email protected]>\r\nDate: Wednesday, December 27, 2023 at 11:36 AM\r\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>\r\nCc: Frits Hoogland <[email protected]>, \"[email protected]\" <[email protected]>\r\nSubject: Re: [EXTERNAL] Need help with performance tuning pg12 on linux\r\n\r\nCAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\r\n\r\n\r\n -> Hash Join (cost=644250.54..10734700.30 rows=22333224 width=223) (actual time=7864.023..44546.392 rows=22325462 loops=1)\r\n Hash Cond: (file_1.id<http://file_1.id/> = granule_file_1.file_id)\r\n Buffers: shared hit=780882 read=8345236\r\n -> Seq Scan on file file_1 (cost=0.00..9205050.88 rows=22068888 width=207) (actual time=402.706..25222.525 rows=22057988 loops=1)\r\n Buffers: shared hit=639126 read=8345236\r\n -> Hash (cost=365085.24..365085.24 rows=22333224 width=20) (actual time=7288.228..7288.235 rows=22325462 loops=1)\r\n Buckets: 33554432 Batches: 1 Memory Usage: 1391822kB\r\n Buffers: shared hit=141753\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..365085.24 rows=22333224 width=20) (actual time=0.030..2151.380 rows=22325462 loops=1)\r\n Buffers: shared hit=141753\r\n\r\nThis part above is the most expensive so far, and taking a look at your `granule_file` table on the first message, it has no indexes nor constraints, which certainly looks like a mistake. I'd start optimizing this, you could add an index on it, but seems that you need a primary key on both columns of this (junction?) table:\r\n\r\n ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\r\n\r\nThere are certainly more things to optimize on this query, but I prefer doing one thing at a time. Could you try with the PK and send the EXPLAIN ANALYZE of the query again after that?\r\n\r\nBest regards,\r\nMatheus de Oliveira",
"msg_date": "Wed, 27 Dec 2023 17:10:59 +0000",
"msg_from": "\"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 14:11, Wilson, Maria Louise\n(LARC-E301)[RSES] <[email protected]> wrote:\n\n> Thanks for the reply!! Having some issues due to nulls…. Any other\n> thoughts?\n>\n>\n>\n> i=# ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\n>\n> ERROR: column \"granule_uuid\" contains null values\n>\nWell, uuid is a bad datatype for primary keys.\nIf possible in the long run, consider replacing them with bigint.\n\nCan you try an index:\nCREATE INDEX granule_file_file_id_key ON granule_file USING btree(file_id);\n\nAlthough granule_file has an index as a foreign key, it seems to me that it\nis not being considered.\n\nMy 2cents.\n\nBest regards,\nRanier Vilela",
"msg_date": "Wed, 27 Dec 2023 14:23:33 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "Here is the before analyze :\r\n\r\n\r\nSort (cost=3208325.03..3208325.33 rows=117 width=997) (actual time=56683.259..56683.307 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=40 read=795724, temp read=630171 written=630171\r\n\r\n -> Hash Left Join (cost=1844145.13..3208321.02 rows=117 width=997) (actual time=56683.080..56683.184 rows=4 loops=1)\r\n\r\n Hash Cond: (granule.visibility_id = visibility_1.id)\r\n\r\n Buffers: shared hit=37 read=795724, temp read=630171 written=630171\r\n\r\n -> Hash Right Join (cost=1844142.96..3208260.02 rows=117 width=1678) (actual time=56682.840..56682.891 rows=4 loops=1)\r\n\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n\r\n Buffers: shared hit=36 read=795724, temp read=630171 written=630171\r\n\r\n -> Hash Join (cost=1752547.97..3034700.90 rows=21856786 width=224) (actual time=21966.799..55011.964 rows=21855206 loops=1)\r\n\r\n Hash Cond: (granule_file_1.file_id = file_1.id)\r\n\r\n Buffers: shared hit=2 read=794153, temp read=630171 written=630171\r\n\r\n -> Seq Scan on granule_file granule_file_1 (cost=0.00..357270.86 rows=21856786 width=20) (actual time=0.334..3267.188 rows=21855206 loops=1)\r\n\r\n Buffers: shared read=138703\r\n\r\n -> Hash (cost=871329.32..871329.32 rows=21587732 width=208) (actual time=13425.791..13425.795 rows=21587732 loops=1)\r\n\r\n Buckets: 8388608 Batches: 8 Memory Usage: 710896kB\r\n\r\n Buffers: shared hit=2 read=655450, temp written=537221\r\n\r\n -> Seq Scan on file file_1 (cost=0.00..871329.32 rows=21587732 width=208) (actual time=0.277..5520.726 rows=21587732 loops=1)\r\n\r\n Buffers: shared hit=2 read=655450\r\n\r\n -> Hash (cost=91594.67..91594.67 rows=26 width=1470) (actual time=189.702..189.736 rows=4 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n\r\n Buffers: shared hit=34 read=1571\r\n\r\n -> Nested Loop Left Join (cost=91434.88..91594.67 rows=26 width=1470) (actual 
time=189.653..189.704 rows=4 loops=1)\r\n\r\n Buffers: shared hit=34 read=1571\r\n\r\n -> Limit (cost=91434.60..91434.67 rows=26 width=1414) (actual time=189.444..189.473 rows=4 loops=1)\r\n\r\n Buffers: shared hit=23 read=1570\r\n\r\n -> Sort (cost=91434.60..91446.86 rows=4903 width=1414) (actual time=189.441..189.462 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=23 read=1570\r\n\r\n -> Nested Loop (cost=0.56..91294.86 rows=4903 width=1414) (actual time=22.534..189.403 rows=4 loops=1)\r\n\r\n Buffers: shared hit=23 read=1570\r\n\r\n -> Seq Scan on collection (cost=0.00..653.62 rows=1 width=4) (actual time=3.706..14.783 rows=4 loops=1)\r\n\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n\r\n Rows Removed by Filter: 2477\r\n\r\n Buffers: shared hit=2 read=602\r\n\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..90455.52 rows=18572 width=1414) (actual time=21.662..43.645 rows=1 loops=4)\r\n\r\n Index Cond: (collection_id = collection.id)\r\n\r\n Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}\r\n\r\n'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '\r\n\r\n2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text\r\n\r\n)))\r\n\r\n Rows Removed by Filter: 243\r\n\r\n Buffers: 
shared hit=21 read=968\r\n\r\n -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..6.14 rows=1 width=56) (actual time=0.052..0.052 rows=1 loops=4)\r\n\r\n Index Cond: (id = granule.collection_id)\r\n\r\n Buffers: shared hit=11 read=1\r\n\r\n -> Hash (cost=1.52..1.52 rows=52 width=16) (actual time=0.054..0.054 rows=52 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\r\n\r\n Buffers: shared hit=1\r\n\r\n -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16) (actual time=0.032..0.036 rows=52 loops=1)\r\n\r\n Buffers: shared hit=1\r\n\r\n Planning Time: 14.580 ms\r\n\r\n Execution Time: 56764.259 ms\r\n\r\n(52 rows)\r\n\r\n\r\nThen added the index:\r\n\r\nCREATE INDEX granule_file_file_id_key ON granule_file USING btree(file_id);\r\n\r\nCREATE INDEX\r\n\r\n\r\nvacuum (analyze, verbose) granule_file;\r\n\r\n\r\n& heres the new analyze:\r\n\r\n\r\n Sort (cost=3208262.52..3208262.79 rows=105 width=997) (actual time=64720.414..64720.435 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=140349 read=655418, temp read=630171 written=630171\r\n\r\n -> Hash Left Join (cost=1844145.13..3208259.00 rows=105 width=997) (actual time=64720.258..64720.325 rows=4 loops=1)\r\n\r\n Hash Cond: (granule.visibility_id = visibility_1.id)\r\n\r\n Buffers: shared hit=140346 read=655418, temp read=630171 written=630171\r\n\r\n -> Hash Right Join (cost=1844142.96..3208204.03 rows=105 width=1678) (actual time=64720.083..64720.105 rows=4 loops=1)\r\n\r\n Hash Cond: (granule_file_1.granule_uuid = granule.uuid)\r\n\r\n Buffers: shared hit=140345 read=655418, temp read=630171 written=630171\r\n\r\n -> Hash Join (cost=1752547.97..3034652.34 rows=21854840 width=224) (actual time=11945.807..63203.012 rows=21855206 loops=1)\r\n\r\n Hash Cond: (granule_file_1.file_id = file_1.id)\r\n\r\n Buffers: shared hit=138740 read=655418, temp read=630171 written=630171\r\n\r\n -> Seq Scan on 
granule_file granule_file_1 (cost=0.00..357251.40 rows=21854840 width=20) (actual time=0.017..3103.893 rows=21855206 loops=1)\r\n\r\n Buffers: shared hit=138703\r\n\r\n -> Hash (cost=871329.32..871329.32 rows=21587732 width=208) (actual time=11891.143..11891.146 rows=21587732 loops=1)\r\n\r\n Buckets: 8388608 Batches: 8 Memory Usage: 710896kB\r\n\r\n Buffers: shared hit=34 read=655418, temp written=537221\r\n\r\n -> Seq Scan on file file_1 (cost=0.00..871329.32 rows=21587732 width=208) (actual time=0.081..3996.438 rows=21587732 loops=1)\r\n\r\n Buffers: shared hit=34 read=655418\r\n\r\n -> Hash (cost=91594.67..91594.67 rows=26 width=1470) (actual time=19.728..19.740 rows=4 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n\r\n Buffers: shared hit=1605\r\n\r\n -> Nested Loop Left Join (cost=91434.88..91594.67 rows=26 width=1470) (actual time=19.684..19.708 rows=4 loops=1)\r\n\r\n Buffers: shared hit=1605\r\n\r\n -> Limit (cost=91434.60..91434.67 rows=26 width=1414) (actual time=19.650..19.660 rows=4 loops=1)\r\n\r\n Buffers: shared hit=1593\r\n\r\n -> Sort (cost=91434.60..91446.86 rows=4903 width=1414) (actual time=19.648..19.656 rows=4 loops=1)\r\n\r\n Sort Key: granule.uuid\r\n\r\n Sort Method: quicksort Memory: 32kB\r\n\r\n Buffers: shared hit=1593\r\n\r\n -> Nested Loop (cost=0.56..91294.86 rows=4903 width=1414) (actual time=2.765..19.609 rows=4 loops=1)\r\n\r\n Buffers: shared hit=1593\r\n\r\n -> Seq Scan on collection (cost=0.00..653.62 rows=1 width=4) (actual time=1.789..8.057 rows=4 loops=1)\r\n\r\n Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))\r\n\r\n Rows Removed by Filter: 2477\r\n\r\n Buffers: shared hit=604\r\n\r\n -> Index Scan using ix_granule_collection_id on granule (cost=0.56..90455.52 rows=18572 width=1414) (actual time=1.311..2.881 rows=1 loops=4)\r\n\r\n Index Cond: (collection_id = 
collection.id)\r\n\r\n Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}\r\n\r\n'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '\r\n\r\n2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text\r\n\r\n)))\r\n\r\n Rows Removed by Filter: 243\r\n\r\n Buffers: shared hit=989\r\n\r\n -> Index Scan using collection_pkey on collection collection_1 (cost=0.28..6.14 rows=1 width=56) (actual time=0.008..0.008 rows=1 loops=4)\r\n\r\n Index Cond: (id = granule.collection_id)\r\n\r\n Buffers: shared hit=12\r\n\r\n -> Hash (cost=1.52..1.52 rows=52 width=16) (actual time=0.045..0.045 rows=52 loops=1)\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\r\n\r\n Buffers: shared hit=1\r\n\r\n -> Seq Scan on visibility visibility_1 (cost=0.00..1.52 rows=52 width=16) (actual time=0.026..0.029 rows=52 loops=1)\r\n\r\n Buffers: shared hit=1\r\n\r\n Planning Time: 7.354 ms\r\n\r\n Execution Time: 64789.927 ms\r\n\r\n(52 rows)\r\n\r\n\r\nFrom: Ranier Vilela <[email protected]>\r\nDate: Wednesday, December 27, 2023 at 12:23 PM\r\nTo: \"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>\r\nCc: Matheus de Oliveira <[email protected]>, Frits Hoogland <[email protected]>, \"[email protected]\" <[email protected]>\r\nSubject: Re: [EXTERNAL] Need help with performance tuning pg12 on linux\r\n\r\nCAUTION: This email originated from outside of NASA. Please take care when clicking links or opening attachments. 
Use the \"Report Message\" button to report suspicious messages to the NASA SOC.\r\n\r\n\r\nOn Wed, Dec 27, 2023 at 2:11 PM, Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\r\nThanks for the reply!! Having some issues due to nulls…. Any other thoughts?\r\n\r\n\r\ni=# ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\r\n\r\nERROR: column \"granule_uuid\" contains null values\r\nWell, uuid is a bad datatype for primary keys.\r\nIf possible in the long run, consider replacing them with bigint.\r\n\r\nCan you try an index:\r\nCREATE INDEX granule_file_file_id_key ON granule_file USING btree(file_id);\r\n\r\nAlthough granule_file has an index as a foreign key, it seems to me that it is not being considered.\r\n\r\nMy 2cents.\r\n\r\nBest regards,\r\nRanier Vilela",
"msg_date": "Wed, 27 Dec 2023 17:52:25 +0000",
"msg_from": "\"Wilson, Maria Louise (LARC-E301)[RSES]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 2:11 PM Wilson, Maria Louise (LARC-E301)[RSES] <[email protected]> wrote:\n\n> Thanks for the reply!! Having some issues due to nulls…. Any other\n> thoughts?\n>\n>\n>\n> i=# ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);\n>\n> ERROR: column \"granule_uuid\" contains null values\n>\n>\n>\n\nSeems like an odd design for your table. Check if those rows with null\nvalue make any sense for your design.\n\nIn any case, for performance, you can try a plain index:\n\n    CREATE INDEX ON granule_file (granule_uuid);\n\nSince you are filtering for granule_uuid first, an index starting with this\ncolumn seems to make more sense (that is why I made the PK starting with it\nbefore).\n\nA composite index is not really necessary, but could help if you get an\nindex-only scan, if you wanna try:\n\n    CREATE INDEX ON granule_file (granule_uuid, file_id);\n\nBest regards,\n-- \nMatheus de Oliveira",
"msg_date": "Wed, 27 Dec 2023 15:05:26 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 2:23 PM Ranier Vilela <[email protected]> wrote:\n\n> ...\n>\n> Although granule_file has an index as a foreign key, it seems to me that\n> it is not being considered.\n>\n\nYou seem to be mistaken here, a foreign key does not automatically create\nan index on the columns, you need to do it by yourself if you really want that.\n\nBest regards,\n-- \nMatheus de Oliveira",
"msg_date": "Wed, 27 Dec 2023 15:06:45 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Need help with performance tuning pg12 on linux"
}
]
[
{
"msg_contents": "We are currently on 13.9. For each of the questions, I'd also like to know\nif anything has changed in that area in later releases.\nNOTE: We are capturing all explain plans via auto_explain and storing them\nin a database table. One of our longer term goals is to build the\nrelationship between queries and indexes, so we can tell where each of the\nindexes is used and how it is used (or not used). In the Index Only Scan\nexample below, we think that there are other queries that use the same\nindex AND also access JobID and MostRecentModel, but we want to verify that.\n\n*Any node type accessing an index or table*\n\n - It looks like \"Output\" includes more than just the columns with\n predicates and/or being accessed or returned in other nodes. *Has any\n thought been given to adding an additional attribute listing the columns\n that are actually used?* (While it's possible to do this after getting\n the explain plan, it seems like that information would be available\n internally in Postgres.)\n\n\n*Index Only Scan*\n\n - *Is it safe to assume that the columns listed with \"Output\" in an\n Index Only Scan node are the key columns, in order? 
* That's what we've\n   observed, but I wanted to check if it was safe to make that assumption.\n   - NOTE: IMHO, this is a case where showing all of the key columns,\n   instead of just the ones that are used, is helpful because the person\n   analyzing the query plan doesn't necessarily have direct access to the\n   database schema.\n   - In this example, policyperi_u_id_1mw8mh83lyyd9 is on\n   pc_policyperiod(ID, Retired, JobID, PolicyID, TemporaryBranch,\n   MostRecentModel)\n   - NOTE: PolicyID is referenced in a node above the Index Only Scan, but\n   neither JobID nor MostRecentModel are.\n\n              \"Plans\": [\n                {\n                  \"Node Type\": \"Index Only Scan\",\n                  \"Parent Relationship\": \"Outer\",\n                  \"Parallel Aware\": false,\n                  \"Scan Direction\": \"Forward\",\n                  \"Index Name\":\n\"policyperi_u_id_1mw8mh83lyyd9\",\n                  \"Relation Name\": \"pc_policyperiod\",\n                  \"Schema\": \"public\",\n                  \"Alias\": \"qroots0_1\",\n                  \"Startup Cost\": 0.57,\n                  \"Total Cost\": 15.90,\n                  \"Plan Rows\": 10,\n                  \"Plan Width\": 8,\n                  \"Actual Startup Time\": 0.234,\n                  \"Actual Total Time\": 1.223,\n                  \"Actual Rows\": 203,\n                  \"Actual Loops\": 1,\n                  * \"Output\": [\"qroots0_1.id\", \"qroots0_1.retired\", \"qroots0_1.jobid\",\n\"qroots0_1.policyid\", \"qroots0_1.temporarybranch\",\n\"qroots0_1.mostrecentmodel\"],*\n                  \"Index Cond\": \"((qroots0_1.id = ANY ($4))\nAND (qroots0_1.retired = 0) AND (qroots0_1.temporarybranch = false))\",\n\n*Index Scan*\n\n   - *Is it safe to assume that the columns listed are all of the columns\n   in the table?* (The table has too many columns to verify.)\n\n                {\n                  \"Node Type\": \"Index Scan\",\n                  \"Parent Relationship\": \"Inner\",\n                  \"Parallel Aware\": false,\n                  \"Scan Direction\": \"Forward\",\n                  \"Index Name\": \"ppperf10\",\n                  \"Relation Name\": \"pc_policyperiod\",\n                  \"Schema\": \"public\",\n                  \"Alias\": \"groot_1\",\n                  \"Startup Cost\": 485987.94,\n                  \"Total Cost\": 485990.69,\n                  \"Plan Rows\": 1,\n                  \"Plan Width\": 16,\n                  \"Actual Startup Time\": 5.710,\n                  \"Actual Total Time\":
5.710,\n \"Actual Rows\": 0,\n \"Actual Loops\": 117,\n* \"Output\": [\"groot_1.paymentinstrument_wmic\",\n\"groot_1.cipminretainedpremium_wmic\", \"groot_1.pendingreindex\",\n\"groot_1.locked\", \"groot_1.editeffectivedate\", \"groot_1.invoicingmethod\",\n\"groot_1.archivestate\", \"groot_1.archiveschemainfo\",\n\"groot_1.prioraddressfk_ext\", \"groot_1.locationautonumberseq\",\n\"groot_1.csioid_ext\", \"groot_1.updatetime\",\n\"groot_1.multiproddiscapplied_wmic\", \"groot_1.paymentdesc_wmic\",\n\"groot_1.id <http://groot_1.id>\", \"groot_1.singlecheckingpatterncode\",\n\"groot_1.billingmethod\", \"groot_1.fleetdiscount_wmic\",\n\"groot_1.createuserid\", \"groot_1.cp_auditwrapuplblty_wmic\",\n\"groot_1.totalcostourshare\", \"groot_1.allowgapsbefore\",\n\"groot_1.quoteidentifier\", \"groot_1.quotehidden\", \"groot_1.orphaned\",\n\"groot_1.beanversion\", \"groot_1.packagediscount_wmic\",\n\"groot_1.billtoescrow_wmic\", \"groot_1.insurerdenieddetail_wmic\",\n\"groot_1.isprimarypayerremoved_wmic\", \"groot_1.branchname\",\n\"groot_1.updateuserid\", \"groot_1.cancellationdate\",\n\"groot_1.temporarybranch\", \"groot_1.segment\", \"groot_1.primaryinsuredname\",\n\"groot_1.archivedentitypurgedate\", \"groot_1.showtaxexemption_wmic\",\n\"groot_1.vestinginformation_wmic\", \"groot_1.depositoverridepct\",\n\"groot_1.policytermid\", \"groot_1.othercurrentcarrier_wmic\",\n\"groot_1.periodstart\", \"groot_1.livestockclaimscount_wmic\",\n\"groot_1.selectedtermtype\", \"groot_1.claimsystemqueried_wmic\",\n\"groot_1.publicid\", \"groot_1.cpprogramdetails_wmic\",\n\"groot_1.commission_wmic\", \"groot_1.altbillingaccountnumber\",\n\"groot_1.writtendate\", \"groot_1.totalcostrpt\", \"groot_1.totalcostrpt_cur\",\n\"groot_1.ecollectanddistributedisc_wmic\",\n\"groot_1.suppressdocdistribution_wmic\", \"groot_1.mostrecentmodel\",\n\"groot_1.buildingclaimscount_wmic\", \"groot_1.ignorestatusforrequote_wmic\",\n\"groot_1.fleetdiscountvalue_wmic\", 
\"groot_1.taxexemptionreason_wmic\",\n\"groot_1.docpreferredlanguage_wmic\", \"groot_1.allocationofremainder\",\n\"groot_1.overridebillingallocation\", \"groot_1.currentcarrier_wmic\",\n\"groot_1.subscription_wmic\", \"groot_1.renewalsafterdefaulttrig_wmic\",\n\"groot_1.archivefailuredetailsid\", \"groot_1.modeldate\",\n\"groot_1.leadpolicynumber_wmic\", \"groot_1.brokerquotedpremium_wmic\",\n\"groot_1.invoicestreamcode\", \"groot_1.frozensetid\",\n\"groot_1.taxsurchargesrpt_cur\", \"groot_1.modelnumberindex\",\n\"groot_1.basestate\", \"groot_1.machineryclaimscount_wmic\",\n\"groot_1.quotedate_wmic\", \"groot_1.firstinsurance_wmic\",\n\"groot_1.minimumpremium_wmic\", \"groot_1.mostrecentmodelindex\",\n\"groot_1.archivepartition\", \"groot_1.taxexemptionnumber_wmic\",\n\"groot_1.termtype_wmic\", \"groot_1.subscriptionourrole_wmic\",\n\"groot_1.depositcollected\", \"groot_1.cp_auditbrannualrevenue_wmic\",\n\"groot_1.failedooseevaluation\", \"groot_1.branchnumber\",\n\"groot_1.transactioncostrpt\", \"groot_1.depositcollected_cur\",\n\"groot_1.busopsdesc_wmic\", \"groot_1.transactioncostrpt_cur\",\n\"groot_1.cipcommisionpercentage_wmic\", \"groot_1.basedonid\",\n\"groot_1.archivedate\", \"groot_1.billimmediatelypercentage\",\n\"groot_1.suppressformdistribution_wmic\",\n\"groot_1.quotecloneoriginalperiod\", \"groot_1.depositamount\",\n\"groot_1.periodend\", \"groot_1.preferredcoveragecurrency\",\n\"groot_1.waivebrokerfees_wmic\", \"groot_1.preferredsettlementcurrency\",\n\"groot_1.transactioncostrptci_ext_amt\", \"groot_1.persistency_wmic\",\n\"groot_1.wasperiodquotedbeforeclosed\", \"groot_1.maturedriverdiscount_wmic\",\n\"groot_1.basedondate\", \"groot_1.totalpremiumrpt\",\n\"groot_1.totalpremiumrpt_cur\", \"groot_1.fullyretainedpremium_wmic\",\n\"groot_1.nameofprincipals_wmic\", \"groot_1.validreinsurance\",\n\"groot_1.seriescheckingpatterncode\", \"groot_1.taxexemption_wmic\",\n\"groot_1.donotdestroy\", \"groot_1.pnicontactdenorm\", 
\"groot_1.editlocked\",\n\"groot_1.quotematuritylevel\", \"groot_1.rateasofdate\", \"groot_1.jobid\",\n\"groot_1.multiproddiscpolicy_wmic\", \"groot_1.uwcompany\",\n\"groot_1.estimatedpremium\", \"groot_1.addfollowupnotes_wmic\",\n\"groot_1.periodid\", \"groot_1.estimatedpremium_cur\",\n\"groot_1.insurerdenied_wmic\", \"groot_1.assignedrisk\",\n\"groot_1.transactionpremiumrpt\", \"groot_1.sourceofbusiness_wmic\",\n\"groot_1.currentinceptiondate_wmic\", \"groot_1.excludereason\",\n\"groot_1.accountorgtype_wmic\", \"groot_1.specialhandling\",\n\"groot_1.temporaryclonestatus\", \"groot_1.transactionpremiumrpt_cur\",\n\"groot_1.checknumber_wmic\", \"groot_1.isconsent_wmic\",\n\"groot_1.certificatenumber\", \"groot_1.cipyearsofexperience_wmic\",\n\"groot_1.archivefailureid\", \"groot_1.totalcostourshare_cur\",\n\"groot_1.failedoosevalidation\", \"groot_1.retired\",\n\"groot_1.personalinsuranceprogram\", \"groot_1.quotenumber_wmic\",\n\"groot_1.preempted\", \"groot_1.futureperiods\",\n\"groot_1.primaryinsurednamedenorm\", \"groot_1.brokerclientid_wmic\",\n\"groot_1.modelnumber\", \"groot_1.cipolicytype_ext\", \"groot_1.termnumber\",\n\"groot_1.waivedepositchange\", \"groot_1.producercodeofrecordid\",\n\"groot_1.cp_auditbdlyinjurypropdmg_wmic\", \"groot_1.cp_renewalcount_wmic\",\n\"groot_1.createtime\", \"groot_1.industrycode\",\n\"groot_1.cipminretainedamount_wmic\", \"groot_1.describesourceofbus_wmic\",\n\"groot_1.policyid\", \"groot_1.followupaltaccnum_wmic\",\n\"groot_1.excludedfromarchive\", \"groot_1.followbillmethod_wmic\",\n\"groot_1.csioagencyid_ext\", \"groot_1.currentpolicynumber_wmic\",\n\"groot_1.taxsurchargesrpt\", \"groot_1.currentexpdate_wmic\",\n\"groot_1.otherorgtypedescription_wmic\", \"groot_1.overrideprequal_wmic\",\n\"groot_1.yearbusinessstarted_wmic\", \"groot_1.quoteclonesequencenumber\",\n\"groot_1.lockingcolumn\", \"groot_1.refundcalcmethod\", \"groot_1.status\",\n\"groot_1.totalpremiumcostourshare\", 
\"groot_1.transactioncostrptci_ext_cur\",\n\"groot_1.totalpremiumcostourshare_cur\", \"groot_1.depositamount_cur\",\n\"groot_1.commissionoverride_wmic\", \"groot_1.policynumber\",\n\"groot_1.worksetuid\", \"groot_1.appeventsyncstatus\",\n\"groot_1.isfacagreementadded_ext\", \"groot_1.bp_lockactive_ext\"]*,\n \"Index Cond\": \"((groot_1.mostrecentmodel =\ntrue) AND (groot_1.temporarybranch = false) AND (groot_1.retired = 0) AND\n(groot_1.policyid = qroots0_1.policyid))\",\n\n\nThanks,\nJerry\n\nWe are currently on 13.9. For each of the questions, I'd also like to know if anything has changed in that area in later releases.NOTE: We are capturing all explain plans via auto_explain and storing them in a database table. One of our longer term goals is to build the relationship between queries and indexes, so we can tell where each of the indexes is used and how it is used (or not used). In the Index Only Scan example below, we think that there are other queries that use the same index AND also access JobID and MostRecentModel, but we want to verify that.Any node type accessing an index or tableIt looks like \"Output\" includes more than just the columns with predicates and/or being accessed or returned in other nodes. Has any thought been given to adding an additional attribute listing the columns that are actually used? (While it's possible to do this after getting the explain plan, it seems like that information would be available internally in Postgres.)Index Only ScanIs it safe to assume that the columns listed with \"Output\" in an Index Only Scan node are the key columns, in order? That's what we've observed, but I wanted to check if it was safe to make that assumption. 
NOTE: IMHO, this is a case where showing all of the key columns, instead of just the ones that are used, is helpful because the person analyzing the query plan doesn't necessarily have direct access to the database schema.In this example, policyperi_u_id_1mw8mh83lyyd9 is on pc_policyperiod(ID, Retired, JobID, PolicyID, TemporaryBranch, MostRecentModel)NOTE: PolicyID is referenced in a node above the Index Only Scan, but neither JobID nor MostRecentModel are. \"Plans\": [ { \"Node Type\": \"Index Only Scan\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": false, \"Scan Direction\": \"Forward\", \"Index Name\": \"policyperi_u_id_1mw8mh83lyyd9\", \"Relation Name\": \"pc_policyperiod\", \"Schema\": \"public\", \"Alias\": \"qroots0_1\", \"Startup Cost\": 0.57, \"Total Cost\": 15.90, \"Plan Rows\": 10, \"Plan Width\": 8, \"Actual Startup Time\": 0.234, \"Actual Total Time\": 1.223, \"Actual Rows\": 203, \"Actual Loops\": 1, \"Output\": [\"qroots0_1.id\", \"qroots0_1.retired\", \"qroots0_1.jobid\", \"qroots0_1.policyid\", \"qroots0_1.temporarybranch\", \"qroots0_1.mostrecentmodel\"], \"Index Cond\": \"((qroots0_1.id = ANY ($4)) AND (qroots0_1.retired = 0) AND (qroots0_1.temporarybranch = false))\",Index ScanIs it safe to assume that the columns listed are all of the columns in the table? (The table has too many columns to verify.) 
{ \"Node Type\": \"Index Scan\", \"Parent Relationship\": \"Inner\", \"Parallel Aware\": false, \"Scan Direction\": \"Forward\", \"Index Name\": \"ppperf10\", \"Relation Name\": \"pc_policyperiod\", \"Schema\": \"public\", \"Alias\": \"groot_1\", \"Startup Cost\": 485987.94, \"Total Cost\": 485990.69, \"Plan Rows\": 1, \"Plan Width\": 16, \"Actual Startup Time\": 5.710, \"Actual Total Time\": 5.710, \"Actual Rows\": 0, \"Actual Loops\": 117, \"Output\": [\"groot_1.paymentinstrument_wmic\", \"groot_1.cipminretainedpremium_wmic\", \"groot_1.pendingreindex\", \"groot_1.locked\", \"groot_1.editeffectivedate\", \"groot_1.invoicingmethod\", \"groot_1.archivestate\", \"groot_1.archiveschemainfo\", \"groot_1.prioraddressfk_ext\", \"groot_1.locationautonumberseq\", \"groot_1.csioid_ext\", \"groot_1.updatetime\", \"groot_1.multiproddiscapplied_wmic\", \"groot_1.paymentdesc_wmic\", \"groot_1.id\", \"groot_1.singlecheckingpatterncode\", \"groot_1.billingmethod\", \"groot_1.fleetdiscount_wmic\", \"groot_1.createuserid\", \"groot_1.cp_auditwrapuplblty_wmic\", \"groot_1.totalcostourshare\", \"groot_1.allowgapsbefore\", \"groot_1.quoteidentifier\", \"groot_1.quotehidden\", \"groot_1.orphaned\", \"groot_1.beanversion\", \"groot_1.packagediscount_wmic\", \"groot_1.billtoescrow_wmic\", \"groot_1.insurerdenieddetail_wmic\", \"groot_1.isprimarypayerremoved_wmic\", \"groot_1.branchname\", \"groot_1.updateuserid\", \"groot_1.cancellationdate\", \"groot_1.temporarybranch\", \"groot_1.segment\", \"groot_1.primaryinsuredname\", \"groot_1.archivedentitypurgedate\", \"groot_1.showtaxexemption_wmic\", \"groot_1.vestinginformation_wmic\", \"groot_1.depositoverridepct\", \"groot_1.policytermid\", \"groot_1.othercurrentcarrier_wmic\", \"groot_1.periodstart\", \"groot_1.livestockclaimscount_wmic\", \"groot_1.selectedtermtype\", \"groot_1.claimsystemqueried_wmic\", \"groot_1.publicid\", \"groot_1.cpprogramdetails_wmic\", \"groot_1.commission_wmic\", \"groot_1.altbillingaccountnumber\", 
\"groot_1.writtendate\", \"groot_1.totalcostrpt\", \"groot_1.totalcostrpt_cur\", \"groot_1.ecollectanddistributedisc_wmic\", \"groot_1.suppressdocdistribution_wmic\", \"groot_1.mostrecentmodel\", \"groot_1.buildingclaimscount_wmic\", \"groot_1.ignorestatusforrequote_wmic\", \"groot_1.fleetdiscountvalue_wmic\", \"groot_1.taxexemptionreason_wmic\", \"groot_1.docpreferredlanguage_wmic\", \"groot_1.allocationofremainder\", \"groot_1.overridebillingallocation\", \"groot_1.currentcarrier_wmic\", \"groot_1.subscription_wmic\", \"groot_1.renewalsafterdefaulttrig_wmic\", \"groot_1.archivefailuredetailsid\", \"groot_1.modeldate\", \"groot_1.leadpolicynumber_wmic\", \"groot_1.brokerquotedpremium_wmic\", \"groot_1.invoicestreamcode\", \"groot_1.frozensetid\", \"groot_1.taxsurchargesrpt_cur\", \"groot_1.modelnumberindex\", \"groot_1.basestate\", \"groot_1.machineryclaimscount_wmic\", \"groot_1.quotedate_wmic\", \"groot_1.firstinsurance_wmic\", \"groot_1.minimumpremium_wmic\", \"groot_1.mostrecentmodelindex\", \"groot_1.archivepartition\", \"groot_1.taxexemptionnumber_wmic\", \"groot_1.termtype_wmic\", \"groot_1.subscriptionourrole_wmic\", \"groot_1.depositcollected\", \"groot_1.cp_auditbrannualrevenue_wmic\", \"groot_1.failedooseevaluation\", \"groot_1.branchnumber\", \"groot_1.transactioncostrpt\", \"groot_1.depositcollected_cur\", \"groot_1.busopsdesc_wmic\", \"groot_1.transactioncostrpt_cur\", \"groot_1.cipcommisionpercentage_wmic\", \"groot_1.basedonid\", \"groot_1.archivedate\", \"groot_1.billimmediatelypercentage\", \"groot_1.suppressformdistribution_wmic\", \"groot_1.quotecloneoriginalperiod\", \"groot_1.depositamount\", \"groot_1.periodend\", \"groot_1.preferredcoveragecurrency\", \"groot_1.waivebrokerfees_wmic\", \"groot_1.preferredsettlementcurrency\", \"groot_1.transactioncostrptci_ext_amt\", \"groot_1.persistency_wmic\", \"groot_1.wasperiodquotedbeforeclosed\", \"groot_1.maturedriverdiscount_wmic\", \"groot_1.basedondate\", \"groot_1.totalpremiumrpt\", 
\"groot_1.totalpremiumrpt_cur\", \"groot_1.fullyretainedpremium_wmic\", \"groot_1.nameofprincipals_wmic\", \"groot_1.validreinsurance\", \"groot_1.seriescheckingpatterncode\", \"groot_1.taxexemption_wmic\", \"groot_1.donotdestroy\", \"groot_1.pnicontactdenorm\", \"groot_1.editlocked\", \"groot_1.quotematuritylevel\", \"groot_1.rateasofdate\", \"groot_1.jobid\", \"groot_1.multiproddiscpolicy_wmic\", \"groot_1.uwcompany\", \"groot_1.estimatedpremium\", \"groot_1.addfollowupnotes_wmic\", \"groot_1.periodid\", \"groot_1.estimatedpremium_cur\", \"groot_1.insurerdenied_wmic\", \"groot_1.assignedrisk\", \"groot_1.transactionpremiumrpt\", \"groot_1.sourceofbusiness_wmic\", \"groot_1.currentinceptiondate_wmic\", \"groot_1.excludereason\", \"groot_1.accountorgtype_wmic\", \"groot_1.specialhandling\", \"groot_1.temporaryclonestatus\", \"groot_1.transactionpremiumrpt_cur\", \"groot_1.checknumber_wmic\", \"groot_1.isconsent_wmic\", \"groot_1.certificatenumber\", \"groot_1.cipyearsofexperience_wmic\", \"groot_1.archivefailureid\", \"groot_1.totalcostourshare_cur\", \"groot_1.failedoosevalidation\", \"groot_1.retired\", \"groot_1.personalinsuranceprogram\", \"groot_1.quotenumber_wmic\", \"groot_1.preempted\", \"groot_1.futureperiods\", \"groot_1.primaryinsurednamedenorm\", \"groot_1.brokerclientid_wmic\", \"groot_1.modelnumber\", \"groot_1.cipolicytype_ext\", \"groot_1.termnumber\", \"groot_1.waivedepositchange\", \"groot_1.producercodeofrecordid\", \"groot_1.cp_auditbdlyinjurypropdmg_wmic\", \"groot_1.cp_renewalcount_wmic\", \"groot_1.createtime\", \"groot_1.industrycode\", \"groot_1.cipminretainedamount_wmic\", \"groot_1.describesourceofbus_wmic\", \"groot_1.policyid\", \"groot_1.followupaltaccnum_wmic\", \"groot_1.excludedfromarchive\", \"groot_1.followbillmethod_wmic\", \"groot_1.csioagencyid_ext\", \"groot_1.currentpolicynumber_wmic\", \"groot_1.taxsurchargesrpt\", \"groot_1.currentexpdate_wmic\", \"groot_1.otherorgtypedescription_wmic\", \"groot_1.overrideprequal_wmic\", 
\"groot_1.yearbusinessstarted_wmic\", \"groot_1.quoteclonesequencenumber\", \"groot_1.lockingcolumn\", \"groot_1.refundcalcmethod\", \"groot_1.status\", \"groot_1.totalpremiumcostourshare\", \"groot_1.transactioncostrptci_ext_cur\", \"groot_1.totalpremiumcostourshare_cur\", \"groot_1.depositamount_cur\", \"groot_1.commissionoverride_wmic\", \"groot_1.policynumber\", \"groot_1.worksetuid\", \"groot_1.appeventsyncstatus\", \"groot_1.isfacagreementadded_ext\", \"groot_1.bp_lockactive_ext\"], \"Index Cond\": \"((groot_1.mostrecentmodel = true) AND (groot_1.temporarybranch = false) AND (groot_1.retired = 0) AND (groot_1.policyid = qroots0_1.policyid))\",Thanks,Jerry",
"msg_date": "Tue, 2 Jan 2024 10:28:31 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions about \"Output\" in EXPLAIN ANALYZE VERBOSE"
},
{
    "msg_contents": "On Tue, Jan 2, 2024 at 1:29 PM Jerry Brenner <[email protected]> wrote:\n\n> We are currently on 13.9.\n>\n\nWhy not just use the latest minor release, 13.13? For security reasons,\nthat is the only minor release of v13 you should be using anyway. I think\nit is a bit much to hope that people will spend their time for free\nresearching obsolete minor releases.\n\n\n> *Any node type accessing an index or table*\n>\n> - It looks like \"Output\" includes more than just the columns with\n> predicates and/or being accessed or returned in other nodes.\n>\n> Not in my hands. For SELECTs it just lists the columns that are needed.\nYour example is hard to follow because it appears to be just snippets of a\nplan, with no example of the query to which it belongs.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 2 Jan 2024 14:15:52 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about \"Output\" in EXPLAIN ANALYZE VERBOSE"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Tue, Jan 2, 2024 at 1:29 PM Jerry Brenner <[email protected]> wrote:\n>> - It looks like \"Output\" includes more than just the columns with\n>> predicates and/or being accessed or returned in other nodes.\n\n> Not in my hands. For SELECTs it just lists the columns that are needed.\n\nIt depends. The planner may choose to tell a non-top-level scan node\nto return all columns, in hopes of saving a tuple projection step at\nruntime. That's heuristic and depends on a number of factors, so you\nshouldn't count on it happening or not happening.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jan 2024 14:23:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about \"Output\" in EXPLAIN ANALYZE VERBOSE"
},
{
"msg_contents": "Tom - Thanks for the response. I guess what I am really looking for is a\nsimple way to find all of the columns referenced from a given instance of a\ntable or index from the json file, although it would be even better if it\nwas easy to differentiate between the columns that came from the index vs\nthose that could only come from the table. (We may not have direct access\nto the database, the indexing may have changed since the plan was captured,\n...) I can see that all of the values in \"Index Cond\" and \"Filter\" for the\ngiven \"Alias\" are relevant, but it's unclear what portion of the values in\n\"Output\" are relevant. Some instances of \"Output\" contain a superset of\nthe values in \"Index Cond\" and \"Filter\", including columns that are not\nreferenced in the query (there are a total of 187 columns in\npc_policyperiod and instances in the plan were all of them show up in\n\"Output\" but only 7 of them are actually referenced), others contain a\nmutually exclusive set of values, ... It would be helpful if there was an\nattribute that contained that information.\n\nI now see that groot2.ID is the only column listed in \"Output\" from\npc_policy here and it doesn't show up anywhere else in the plan. It's\nactually used in the evaluation of \"(hashed SubPlan 3)\", but I had to look\nat the SQL to figure that out.\nNOTE: policy_n_producerco_3e8i0ojsyckhx is an index on\npc_policy(producercodeofserviceid, retired). 
It makes sense for reasons\nbeyond this query to add id as the last key column to the index.\n\n \"Filter\": \"((NOT groot_1.assignedrisk) AND\n((groot_1.producercodeofrecordid = '10791'::bigint) OR *(hashed SubPlan 3)*\n))\",\n \"Rows Removed by Filter\": 0,\n \"Shared Hit Blocks\": 549472,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"I/O Read Time\": 0.000,\n \"I/O Write Time\": 0.000,\n \"Plans\": [\n {\n \"Node Type\": \"Index Scan\",\n \"Parent Relationship\": \"SubPlan\",\n \"Subplan Name\": \"SubPlan 3\",\n \"Parallel Aware\": false,\n \"Scan Direction\": \"Forward\",\n \"Index Name\":\n\"policy_n_producerco_3e8i0ojsyckhx\",\n \"Relation Name\": \"pc_policy\",\n \"Schema\": \"public\",\n \"Alias\": \"groot2\",\n \"Startup Cost\": 0.56,\n \"Total Cost\": 484540.46,\n \"Plan Rows\": 578767,\n \"Plan Width\": 8,\n \"Actual Startup Time\": 0.035,\n \"Actual Total Time\": 490.349,\n \"Actual Rows\": 546045,\n \"Actual Loops\": 1,\n *\"Output\": [\"groot2.id\n<http://groot2.id>\"],*\n \"Index Cond\":\n\"((groot2.producercodeofserviceid = '10791'::bigint) AND (groot2.retired =\n0))\",\n\n\nHere's the SQL:\n\nSELECT COUNT(*)\nFROM (\n SELECT *\n FROM (\n SELECT /* ISNULL:pc_policycontactrole.EffectiveDate:,\nISNULL:pc_policycontactrole.ExpirationDate:,\npc:gw.webservice.pc.pc5000.policysearch.PolicySearchAPI#findPolicies_WMIC;\n*/ gRoot.ID col0\n FROM pc_policyperiod gRoot\n WHERE gRoot.AssignedRisk = $1 AND gRoot.MostRecentModel =\n$2 AND gRoot.PolicyID IN\n (\n SELECT qRoots0.PolicyID col0\n FROM pc_policyperiod qRoots0\n WHERE qRoots0.ID = ANY (ARRAY\n (\n SELECT qRoots1.BranchID col0\n FROM pc_policycontactrole qRoots1\n WHERE qRoots1.Subtype = $3 AND\nqRoots1.ContactDenorm IN\n (\n SELECT qRoots2.ID col0\n FROM pc_contact qRoots2\n WHERE 
qRoots2.FirstNameDenorm =\nLOWER ($4) AND qRoots2.LastNameDenorm = LOWER ($5) AND qRoots2.Retired = 0)\n AND ( ( ( (qRoots1.EffectiveDate <> qRoots1.ExpirationDate) OR\n(qRoots1.EffectiveDate IS NULL) OR (qRoots1.ExpirationDate IS NULL))))))\nAND qRoots0.Retired = 0 AND qRoots0.TemporaryBranch = false)\nAND gRoot.Retired = 0 AND gRoot.TemporaryBranch = false AND ( ( (\n(gRoot.ProducerCodeOfRecordID = $6) OR (gRoot.PolicyID IN\n (\n SELECT gRoot3.ID col0\n FROM pc_policy gRoot3\n WHERE\ngRoot3.ProducerCodeOfServiceID = $7 AND gRoot3.Retired = 0)))))\n\n UNION\n\n SELECT /* ISNULL:pc_policycontactrole.EffectiveDate:,\nISNULL:pc_policycontactrole.ExpirationDate:,\npc:gw.webservice.pc.pc5000.policysearch.PolicySearchAPI#findPolicies_WMIC;\n*/ gRoot.ID col0\n FROM pc_policyperiod gRoot\n WHERE gRoot.AssignedRisk = $8 AND gRoot.MostRecentModel =\n$9 AND gRoot.PolicyID IN\n (\n SELECT qRoots0.PolicyID col0\n FROM pc_policyperiod qRoots0\n WHERE qRoots0.ID = ANY (ARRAY\n (\n SELECT qRoots1.BranchID col0\n FROM pc_policycontactrole qRoots1\n WHERE qRoots1.Subtype = $10 AND\nqRoots1.FirstNameInternalDenorm = LOWER ($11) AND\nqRoots1.LastNameInternalDenorm = LOWER ($12)\n AND ( ( ( (qRoots1.EffectiveDate <>\nqRoots1.ExpirationDate) OR (qRoots1.EffectiveDate IS NULL) OR\n(qRoots1.ExpirationDate IS NULL))))))\nAND qRoots0.Retired = 0 AND qRoots0.TemporaryBranch = false) AND\ngRoot.Retired = 0 AND gRoot.TemporaryBranch = false AND ( ( (\n(gRoot.ProducerCodeOfRecordID = $13) OR (gRoot.PolicyID IN\n (\n SELECT gRoot2.ID col0\n FROM pc_policy gRoot2\n WHERE\ngRoot2.ProducerCodeOfServiceID = $14 AND gRoot2.Retired = 0)))))) a\n FETCH FIRST 301 ROWS ONLY) countTable\n\nI've attached the full json because it is too big to paste (and \"Output\"\ndoesn't show up in the text output of the tools that I've looked at).\n\nThanks,\nJerry\n\nOn Tue, Jan 2, 2024 at 11:23 AM Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > On Tue, Jan 2, 2024 at 1:29 PM Jerry 
Brenner <[email protected]>\n> wrote:\n> >> - It looks like \"Output\" includes more than just the columns with\n> >> predicates and/or being accessed or returned in other nodes.\n>\n> > Not in my hands. For SELECTs it just lists the columns that are needed.\n>\n> It depends. The planner may choose to tell a non-top-level scan node\n> to return all columns, in hopes of saving a tuple projection step at\n> runtime. That's heuristic and depends on a number of factors, so you\n> shouldn't count on it happening or not happening.\n>\n> regards, tom lane\n>\n>",
"msg_date": "Tue, 2 Jan 2024 13:27:23 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about \"Output\" in EXPLAIN ANALYZE VERBOSE"
}
] |
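A recurring ask in the thread above is extracting, per table or index alias, the set of columns a captured JSON plan actually references. A minimal sketch of that walk is below (Python; the node shape follows EXPLAIN (FORMAT JSON), but the toy `plan` dict and the regex-based predicate matching are simplifications — and, as Tom Lane notes in the thread, "Output" on non-top-level scans can list every table column, so treat the result as an overapproximation of real usage):

```python
import re

def referenced_columns(node, acc=None):
    """Recursively collect column names per relation alias from one
    EXPLAIN (FORMAT JSON) plan node and its children."""
    if acc is None:
        acc = {}
    alias = node.get("Alias")
    if alias:
        cols = acc.setdefault(alias, set())
        # Columns projected by the node (may be a superset of real usage).
        for expr in node.get("Output", []):
            if expr.startswith(alias + "."):
                cols.add(expr.split(".", 1)[1])
        # Columns referenced in predicate strings, e.g. "((qroots0_1.retired = 0))".
        for key in ("Index Cond", "Filter", "Recheck Cond"):
            for m in re.finditer(re.escape(alias) + r"\.(\w+)", node.get(key, "")):
                cols.add(m.group(1))
    for child in node.get("Plans", []):
        referenced_columns(child, acc)
    return acc

# Abridged stand-in for the Index Only Scan node shown in the thread.
plan = {
    "Node Type": "Index Only Scan",
    "Alias": "qroots0_1",
    "Output": ["qroots0_1.id", "qroots0_1.policyid"],
    "Index Cond": "((qroots0_1.retired = 0) AND (qroots0_1.temporarybranch = false))",
}
cols = referenced_columns(plan)
assert cols["qroots0_1"] == {"id", "policyid", "retired", "temporarybranch"}
```

Distinguishing index-provided columns from heap-only ones would still require the index definition itself, which — as the thread observes — a plain Index Scan node in the plan does not carry.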
[
{
    "msg_contents": "Hi,\n\nrunning postgresql 15.5 I was recently surprised postgresql didn't\nperform an optimization which I thought would be easy to apply.\nso in this case I don't have an actual performance problem but I am\nrather curious if this is a limitation in postgresql or whether there is\na semantic difference in the two queries below.\n\nrunning the following query results in a full sort (caused by lead\nover order by) as the ts > '2024-01-04' selection doesn't seem to be\napplied to the CTE but only later:\n\nwith cte as (select ts, lead(ts, 1) over (order by ts) as ts2 from smartmeter)\nselect ts, ts2 from cte where ts > '2024-01-04' and extract(epoch\nfrom ts2) - extract(epoch from ts) > 9;\n\n--------\n Subquery Scan on cte (cost=1116514.38..1419735.26 rows=253 width=16)\n(actual time=117487.536..117999.668 rows=10 loops=1)\n Filter: ((cte.ts > '2024-01-04 00:00:00+00'::timestamp with time\nzone) AND ((EXTRACT(epoch FROM cte.ts2) - EXTRACT(epoch FROM cte.ts))\n> '9'::numeric))\n Rows Removed by Filter: 7580259\n -> WindowAgg (cost=1116514.38..1249173.52 rows=7580522 width=16)\n(actual time=67016.787..114141.495 rows=7580269 loops=1)\n -> Sort (cost=1116514.38..1135465.69 rows=7580522 width=8)\n(actual time=67016.685..81802.822 rows=7580269 loops=1)\n Sort Key: smartmeter.ts\n Sort Method: external merge Disk: 89024kB\n -> Seq Scan on smartmeter (cost=0.00..146651.22\nrows=7580522 width=8) (actual time=7.251..56715.002 rows=7580269\nloops=1)\n Planning Time: 0.502 ms\n Execution Time: 118100.528 ms\n\n\nwhereas if ts > '2024-01-04' is already filtered in the CTE the query\nperforms a lot better:\n\nwith cte as (select ts, lead(ts, 1) over (order by ts) as ts2 from\nsmartmeter where ts > '2024-01-04')\nselect ts, ts2 from cte where extract(epoch from ts2) - extract(epoch\nfrom ts) > 9;\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on 
cte (cost=74905.42..74933.84 rows=253 width=16)\n(actual time=334.654..804.286 rows=10 loops=1)\n Filter: ((EXTRACT(epoch FROM cte.ts2) - EXTRACT(epoch FROM cte.ts))\n> '9'::numeric)\n Rows Removed by Filter: 57021\n -> WindowAgg (cost=74905.42..74918.68 rows=758 width=16) (actual\ntime=263.950..550.566 rows=57031 loops=1)\n -> Sort (cost=74905.42..74907.31 rows=758 width=8) (actual\ntime=263.893..295.188 rows=57031 loops=1)\n Sort Key: smartmeter.ts\n Sort Method: quicksort Memory: 1537kB\n -> Bitmap Heap Scan on smartmeter\n(cost=16.37..74869.16 rows=758 width=8) (actual time=170.485..243.591\nrows=57031 loops=1)\n Recheck Cond: (ts > '2024-01-04\n00:00:00+00'::timestamp with time zone)\n Rows Removed by Index Recheck: 141090\n Heap Blocks: lossy=1854\n -> Bitmap Index Scan on smartmeter_ts_idx\n(cost=0.00..16.18 rows=76345 width=0) (actual time=1.142..1.144\nrows=18540 loops=1)\n Index Cond: (ts > '2024-01-04\n00:00:00+00'::timestamp with time zone)\n Planning Time: 0.565 ms\n Execution Time: 804.474 ms\n(15 rows)\n\nThanks a lot, Clemens\n\n\nThe DDL of the table in question is:\n\nCREATE TABLE public.smartmeter (\n leistungsfaktor real,\n momentanleistung integer,\n spannungl1 real,\n spannungl2 real,\n spannungl3 real,\n stroml1 real,\n stroml2 real,\n stroml3 real,\n wirkenergien real,\n wirkenergiep real,\n ts timestamp with time zone NOT NULL\n);\nCREATE INDEX smartmeter_ts_idx ON public.smartmeter USING brin (ts);\n\n\n",
"msg_date": "Sun, 7 Jan 2024 08:37:17 +0100",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Selection not \"pushed down into\" CTE"
},
{
"msg_contents": "Clemens Eisserer <[email protected]> writes:\n> running postgresql 15.5 I was recently surpised postgresql didn't\n> perform an optimization which I thought would be easy to apply.\n\nIt is not.\n\n> running the following query results in a full sort (caused by lead\n> over order by) as the ts > '2024-01-04' selection doesn't seem to be\n> applied to the CTE but only later:\n\n> with cte as (select ts, lead(ts, 1) over (order by ts) as ts2 from smartmeter)\n> select ts, ts2 from cte where ts > '2024-01-04' and extract(epoch\n> from ts2) - extract(epoch from ts) > 9;\n\nThe ts restriction is not pushed down because of the rules in\nallpaths.c:\n\n * 4. If the subquery has any window functions, we must not push down quals\n * that reference any output columns that are not listed in all the subquery's\n * window PARTITION BY clauses. We can push down quals that use only\n * partitioning columns because they should succeed or fail identically for\n * every row of any one window partition, and totally excluding some\n * partitions will not change a window function's results for remaining\n * partitions. (Again, this also requires nonvolatile quals, but\n * subquery_is_pushdown_safe handles that.)\n\nTo conclude that it'd be safe with this particular window function\nrequires deep knowledge of that function's semantics, which the\nplanner has not got.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jan 2024 11:55:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Selection not \"pushed down into\" CTE"
},
{
    "msg_contents": "Hi Tom,\n\nThanks for the detailed explanation of what is preventing the\noptimization and the view behind the scenes.\n\nBest regards, Clemens",
"msg_date": "Thu, 11 Jan 2024 18:48:35 +0100",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Selection not \"pushed down into\" CTE"
}
] |
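The pushdown rule Tom quotes above exists because filtering before versus after a window function can genuinely change the answer whenever the qual touches a column outside the PARTITION BY. A small sketch with Python's bundled sqlite3 (window functions need SQLite 3.25+; the `t` table and its rows are invented for illustration) makes the hazard concrete:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, ts INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# Qual evaluated after the window function: LEAD() saw every row,
# so ts=10 pairs with its true successor ts=20.
outside = con.execute("""
    WITH cte AS (SELECT id, ts, LEAD(ts) OVER (ORDER BY ts) AS ts2 FROM t)
    SELECT ts, ts2 FROM cte WHERE id <> 2 ORDER BY ts
""").fetchall()

# Qual pushed into the CTE: row id=2 is gone before LEAD() runs,
# so ts=10 now pairs with ts=30 -- a different result set.
inside = con.execute("""
    WITH cte AS (SELECT id, ts, LEAD(ts) OVER (ORDER BY ts) AS ts2
                 FROM t WHERE id <> 2)
    SELECT ts, ts2 FROM cte ORDER BY ts
""").fetchall()

assert outside == [(10, 20), (30, None)]
assert inside == [(10, 30), (30, None)]
```

Clemens's qual on ts, the ORDER BY column itself, happens to be safe for LEAD() — it only trims one end of the ordering — but proving that requires LEAD()-specific semantics the planner does not model, which is why rewriting the filter into the CTE by hand is the practical fix.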
[
{
"msg_contents": "We are on 13.9.\nI'm wondering why a sort is required for this query, as the index should be\nproviding the required ordering to satisfy the ORDER BY clause. Does it\nhave to do with the IS NULL predicate on the leading key column in the\nindex?\n\nThere's an index, job_u_closedate_g9cdc6ghupib, on pc_job(CloseDate,\nRetired, Subtype, CreateTime, ID). All columns have ASC sort order and\nNULLs LAST.\n\n - pc_job is the probe table in a hash join\n - There are IS NULL and equality predicates on the 3 leading columns in\n the index and the last 2 key columns (CreateTime, ID) are the ordering\n columns in the query\n - So, the Index Scan of job_u_closedate_g9cdc6ghupib is returning the\n rows in the sorted order\n - NOTE: The sort is cheap, but I'm investigating this because \"CloseDate\n IS NULL\" is very selective and without forcing the index the optimizer is\n choosing a different sort avert index that does not include CloseDate and\n hence a lot of time is spent filtering out rows on that predicate against\n the heap.\n\nHere's the query\n\nSELECT /* ISNULL:pc_job.CloseDate:, KeyTable:pc_job; */ gRoot.ID col0,\ngRoot.Subtype col1, gRoot.CreateTime col2\nFROM pc_job gRoot INNER JOIN pc_policy policy_0\n ON policy_0.ID = gRoot.PolicyID\nWHERE gRoot.Subtype = 7 AND gRoot.CloseDate IS NULL\n AND gRoot.Retired = 0\n AND policy_0.ProducerCodeOfServiceID IN\n\n (248,1092,1848,74101,103158,103159,117402,122618,129215,132420,135261,137719)\n AND policy_0.Retired = 0\nORDER BY col2 ASC, col0 ASC LIMIT 10\n\nHere's the query plan:\n\nLimit (cost=107826.77..107826.79 rows=10 width=20) (actual\ntime=13149.872..13149.877 rows=10 loops=1)\n Buffers: shared hit=2756 read=40121\n I/O Timings: read=105917.908\n -> Sort (cost=107826.77..107827.72 rows=381 width=20) (actual\ntime=13149.871..13149.874 rows=10 loops=1)\n Sort Key: groot.createtime, groot.id\n Sort Method: top-N heapsort Memory: 25kB\n Buffers: shared hit=2756 read=40121\n I/O Timings: 
read=105917.908\n -> Hash Join (cost=15632.51..107818.53 rows=381 width=20)\n(actual time=578.511..13149.658 rows=144 loops=1)\n Buffers: shared hit=2750 read=40121\n I/O Timings: read=105917.908\n -> Index Scan using job_u_closedate_g9cdc6ghupib on\npc_job groot (cost=0.56..91783.14 rows=153696 width=28) (actual\ntime=3.864..12562.568 rows=75558 loops=1)\n Index Cond: ((groot.closedate IS NULL) AND\n(groot.retired = 0) AND (groot.subtype = 7))\n Buffers: shared hit=2721 read=27934\n I/O Timings: read=58781.220\n -> Hash (cost=15427.92..15427.92 rows=16322 width=8)\n(actual time=543.298..543.299 rows=13016 loops=1)\n Buffers: shared hit=29 read=12187\n I/O Timings: read=47136.688\n -> Index Scan using\npolicy_n_producerco_3e8i0ojsyckhx on pc_policy policy_0\n(cost=0.43..15427.92 rows=16322 width=8) (actual time=6.149..540.501\nrows=13016 loops=1)\n Index Cond:\n((policy_0.producercodeofserviceid = ANY\n('{248,1092,1848,74101,103158,103159,117402,122618,129215,132420,135261,137719}'::bigint[]))\nAND (policy_0.retired = 0))\n Buffers: shared hit=29 read=12187\n I/O Timings: read=47136.688\nPlanning time: 0.538 ms\nExecution time: 13150.301 ms\n\nThanks,\n\nJerry",
"msg_date": "Wed, 17 Jan 2024 06:39:06 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
    "msg_subject": "Why is a sort required for this query? (IS NULL predicate on leading key column)"
},
{
"msg_contents": "Apologies for not including this in the original email. The other\nindex, job_u_createtime_2cy0wgyqpani8,\nis on pc_job(CreateTime, Retired, Subtype, ID). The optimizer chooses\nNested Loop when choosing that index, vs Hash Join when choosing the index\nin the first plan that I posted. It seems like the choice of the Hash Join\nin the 1st plan that I posted is collateral damage from the seemingly\nunnecessary need to do the sort.\n\nHere's the plan without forcing the index:\n\nLimit (cost=1.00..52692.73 rows=10 width=20) (actual\ntime=55219.289..87704.704 rows=10 loops=1)\n Buffers: shared hit=9579294 read=328583\n I/O Timings: read=1157740.299\n -> Nested Loop (cost=1.00..2007555.82 rows=381 width=20) (actual\ntime=55219.288..87704.695 rows=10 loops=1)\n Buffers: shared hit=9579294 read=328583\n I/O Timings: read=1157740.299\n -> Index Scan using job_u_createtime_2cy0wgyqpani8 on pc_job groot\n (cost=0.56..1800117.94 rows=153696 width=28) (actual\ntime=102.075..79470.670 rows=5650 loops=1)\n Index Cond: ((groot.retired = 0) AND (groot.subtype = 7))\n Filter: (groot.closedate IS NULL)\n Rows Removed by Filter: 14994857\n Buffers: shared hit=9563981 read=321566\n I/O Timings: read=1149579.949\n -> Index Scan using pc_policy_pk on pc_policy policy_0\n (cost=0.43..1.35 rows=1 width=8) (actual time=1.456..1.456 rows=0\nloops=5650)\n Index Cond: (policy_0.id = groot.policyid)\n Filter: ((policy_0.retired = 0) AND\n(policy_0.producercodeofserviceid = ANY\n('{248,1092,1848,74101,103158,103159,117402,122618,129215,132420,135261,137719}'::bigint[])))\n Rows Removed by Filter: 1\n Buffers: shared hit=15313 read=7017\n I/O Timings: read=8160.350\nPlanning time: 2.209 ms\nExecution time: 87705.116 ms\n\nThanks,\nJerry\n\nOn Wed, Jan 17, 2024 at 6:39 AM Jerry Brenner <[email protected]>\nwrote:\n\n> We are on 13.9.\n> I'm wondering why a sort is required for this query, as the index should\n> be providing the required ordering to satisfy the ORDER BY clause. 
Does it\n> have to do with the IS NULL predicate on the leading key column in the\n> index?\n>\n> There's an index, job_u_closedate_g9cdc6ghupib, on pc_job(CloseDate,\n> Retired, Subtype, CreateTime, ID). All columns have ASC sort order and\n> NULLs LAST.\n>\n> - pc_job is the probe table in a hash join\n> - There are IS NULL and equality predicates on the 3 leading columns\n> in the index and the last 2 key columns (CreateTime, ID) are the ordering\n> columns in the query\n> - So, the Index Scan of job_u_closedate_g9cdc6ghupib is returning the\n> rows in the sorted order\n> - NOTE: The sort is cheap, but I'm investigating this because\n> \"CloseDate IS NULL\" is very selective and without forcing the index the\n> optimizer is choosing a different sort avert index that does not include\n> CloseDate and hence a lot of time is spent filtering out rows on that\n> predicate against the heap.\n>\n> Here's the query\n>\n> SELECT /* ISNULL:pc_job.CloseDate:, KeyTable:pc_job; */ gRoot.ID col0,\n> gRoot.Subtype col1, gRoot.CreateTime col2\n> FROM pc_job gRoot INNER JOIN pc_policy policy_0\n> ON policy_0.ID = gRoot.PolicyID\n> WHERE gRoot.Subtype = 7 AND gRoot.CloseDate IS NULL\n> AND gRoot.Retired = 0\n> AND policy_0.ProducerCodeOfServiceID IN\n>\n> (248,1092,1848,74101,103158,103159,117402,122618,129215,132420,135261,137719)\n> AND policy_0.Retired = 0\n> ORDER BY col2 ASC, col0 ASC LIMIT 10\n>\n> Here's the query plan:\n>\n> Limit (cost=107826.77..107826.79 rows=10 width=20) (actual time=13149.872..13149.877 rows=10 loops=1)\n> Buffers: shared hit=2756 read=40121\n> I/O Timings: read=105917.908\n> -> Sort (cost=107826.77..107827.72 rows=381 width=20) (actual time=13149.871..13149.874 rows=10 loops=1)\n> Sort Key: groot.createtime, groot.id\n> Sort Method: top-N heapsort Memory: 25kB\n> Buffers: shared hit=2756 read=40121\n> I/O Timings: read=105917.908\n> -> Hash Join (cost=15632.51..107818.53 rows=381 width=20) (actual time=578.511..13149.658 rows=144 loops=1)\n> 
Buffers: shared hit=2750 read=40121\n> I/O Timings: read=105917.908\n> -> Index Scan using job_u_closedate_g9cdc6ghupib on pc_job groot (cost=0.56..91783.14 rows=153696 width=28) (actual time=3.864..12562.568 rows=75558 loops=1)\n> Index Cond: ((groot.closedate IS NULL) AND (groot.retired = 0) AND (groot.subtype = 7))\n> Buffers: shared hit=2721 read=27934\n> I/O Timings: read=58781.220\n> -> Hash (cost=15427.92..15427.92 rows=16322 width=8) (actual time=543.298..543.299 rows=13016 loops=1)\n> Buffers: shared hit=29 read=12187\n> I/O Timings: read=47136.688\n> -> Index Scan using policy_n_producerco_3e8i0ojsyckhx on pc_policy policy_0 (cost=0.43..15427.92 rows=16322 width=8) (actual time=6.149..540.501 rows=13016 loops=1)\n> Index Cond: ((policy_0.producercodeofserviceid = ANY ('{248,1092,1848,74101,103158,103159,117402,122618,129215,132420,135261,137719}'::bigint[])) AND (policy_0.retired = 0))\n> Buffers: shared hit=29 read=12187\n> I/O Timings: read=47136.688\n> Planning time: 0.538 ms\n> Execution time: 13150.301 ms\n>\n> Thanks,\n>\n> Jerry\n>\n>\n>",
"msg_date": "Wed, 17 Jan 2024 06:48:54 -0800",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is a sort required for this query? (IS NULL predicate on\n leading key column)"
},
{
"msg_contents": "Jerry Brenner <[email protected]> writes:\n> I'm wondering why a sort is required for this query, as the index should be\n> providing the required ordering to satisfy the ORDER BY clause. Does it\n> have to do with the IS NULL predicate on the leading key column in the\n> index?\n\nIS NULL is not seen as an equality condition, no. It's pretty much\nof a hack that makes it an indexable condition at all, and we don't\nreally do any advanced optimization with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jan 2024 10:11:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is a sort required for this query? (IS NULL predicate on\n leading key column)"
}
] |
[
{
"msg_contents": "Hello,\n\nThis EXPLAIN ANALYZE tells me the actual time was 11.xxx ms but the \nfinal Execution time says 493.xxx ms (this last one is true : about 1/2 \nsecond).\n\nI would like to optimize this query but with this inconsistency, it will \nbe difficult. This query is really a function so I added the \"params\" \nCTE to do the tests.\n\nThanks for your help,\n\nJC\n\nPS: PostgreSQL 14.10 (Ubuntu 14.10-1.pgdg23.10+1) on \nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0, 64-bit\n\nand JIT is disabled\n\n\nNested Loop Left Join (cost=36411048684.11..36411051002.22 rows=54228 \nwidth=116) (actual time=11.609..11.620 rows=0 loops=1)\n Join Filter: (u_g_internalvalidation.idinternalvalidation IS NOT NULL)\n CTE params\n -> Result (cost=0.00..0.01 rows=1 width=41) (actual \ntime=0.001..0.001 rows=1 loops=1)\n CTE u\n -> Nested Loop (cost=0.25..12.77 rows=250 width=368) (never executed)\n -> CTE Scan on params (cost=0.00..0.02 rows=1 width=41) \n(never executed)\n -> Function Scan on get_user (cost=0.25..10.25 rows=250 \nwidth=327) (never executed)\n Filter: ((NOT isgroup) AND (NOT istemplate))\n CTE internalvalidation\n -> Hash Join (cost=13988.95..14336.16 rows=436 width=42) (actual \ntime=11.601..11.606 rows=0 loops=1)\n Hash Cond: (files.idfile = i_1.idfile)\n -> HashAggregate (cost=13929.87..14039.58 rows=10971 \nwidth=4) (actual time=11.480..11.527 rows=243 loops=1)\n Group Key: files.idfile\n Batches: 1 Memory Usage: 417kB\n -> Append (cost=0.00..13902.45 rows=10971 width=4) \n(actual time=6.593..11.423 rows=243 loops=1)\n -> Nested Loop (cost=0.00..10139.01 rows=10766 \nwidth=4) (actual time=5.143..5.145 rows=0 loops=1)\n Join Filter: ((files.idfile = \n(unnest(params_2.fn_idfile))) OR ((params_1.fn_idrealm = \nfiles.idrealminitial) AND params_1.fn_canseesubfiles))\n -> Seq Scan on files (cost=0.00..2985.74 \nrows=40874 width=8) (actual time=0.002..2.396 rows=40766 loops=1)\n -> Materialize (cost=0.00..0.35 rows=10 \nwidth=9) 
(actual time=0.000..0.000 rows=0 loops=40766)\n -> Nested Loop (cost=0.00..0.30 \nrows=10 width=9) (actual time=0.003..0.003 rows=0 loops=1)\n -> CTE Scan on params \nparams_1 (cost=0.00..0.02 rows=1 width=5) (actual time=0.002..0.003 \nrows=0 loops=1)\n Filter: (fn_idfile IS NOT \nNULL)\n Rows Removed by Filter: 1\n -> ProjectSet (cost=0.00..0.08 \nrows=10 width=4) (never executed)\n -> CTE Scan on params \nparams_2 (cost=0.00..0.02 rows=1 width=32) (never executed)\n -> Nested Loop (cost=0.00..3598.87 rows=205 \nwidth=4) (actual time=1.448..6.267 rows=243 loops=1)\n Join Filter: (((params_3.fn_idfile IS NULL) \nAND (files_1.idrealm = params_3.fn_idrealm)) OR ((files_1.idrealminitial \n= params_3.fn_idrealm) AND params_3.fn_canseesubfiles))\n Rows Removed by Join Filter: 40523\n -> CTE Scan on params params_3 \n(cost=0.00..0.02 rows=1 width=37) (actual time=0.001..0.001 rows=1 loops=1)\n Filter: ((fn_idfile IS NULL) OR \nfn_canseesubfiles)\n -> Seq Scan on files files_1 \n(cost=0.00..2985.74 rows=40874 width=12) (actual time=0.002..1.946 \nrows=40766 loops=1)\n -> Hash (cost=58.94..58.94 rows=11 width=42) (actual \ntime=0.059..0.060 rows=6 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Nested Loop Left Join (cost=0.29..58.94 rows=11 \nwidth=42) (actual time=0.031..0.055 rows=6 loops=1)\n -> Seq Scan on internalvalidations i_1 \n(cost=0.00..9.81 rows=11 width=26) (actual time=0.012..0.033 rows=6 loops=1)\n Filter: isvisible\n Rows Removed by Filter: 453\n -> Memoize (cost=0.29..7.58 rows=1 width=20) \n(actual time=0.003..0.003 rows=1 loops=6)\n Cache Key: i_1.idclaimer\n Cache Mode: logical\n Hits: 5 Misses: 1 Evictions: 0 Overflows: \n0 Memory Usage: 1kB\n -> Index Scan using users_pkey on users \nusers_1 (cost=0.28..7.57 rows=1 width=20) (actual time=0.012..0.013 \nrows=1 loops=1)\n Index Cond: (iduser = i_1.idclaimer)\n CTE u_g_internalvalidation_without_u_status\n -> Recursive Union (cost=23.65..7914563.16 rows=220606936 \nwidth=19) (never 
executed)\n -> Nested Loop (cost=23.65..9280.10 rows=1936 width=19) \n(never executed)\n Join Filter: (uhi.iduser = gu.iduser)\n -> Nested Loop (cost=23.40..80.70 rows=733 width=15) \n(never executed)\n -> Hash Join (cost=23.11..42.43 rows=733 \nwidth=10) (never executed)\n Hash Cond: \n(internalvalidation.idinternalvalidation = uhi.idinternalvalidation)\n -> CTE Scan on internalvalidation \n(cost=0.00..8.72 rows=436 width=4) (never executed)\n -> Hash (cost=13.05..13.05 rows=805 \nwidth=10) (never executed)\n -> Seq Scan on \nusers_has_internalvalidations uhi (cost=0.00..13.05 rows=805 width=10) \n(never executed)\n -> Memoize (cost=0.29..1.67 rows=1 width=5) \n(never executed)\n Cache Key: uhi.iduser\n Cache Mode: logical\n -> Index Scan using users_pkey on users v \n(cost=0.28..1.66 rows=1 width=5) (never executed)\n Index Cond: (iduser = uhi.iduser)\n -> Function Scan on get_user gu (cost=0.25..12.75 \nrows=4 width=5) (never executed)\n Filter: ((canvalidate OR v.isgroup) AND (v.iduser \n= iduser))\n -> Nested Loop (cost=750.89..349314.43 rows=22060500 \nwidth=19) (never executed)\n -> Merge Join (cost=750.63..1200.96 rows=29414 \nwidth=18) (never executed)\n Merge Cond: (uhg.idgroup = ugiwus.idusergroup)\n -> Sort (cost=126.96..131.52 rows=1823 width=8) \n(never executed)\n Sort Key: uhg.idgroup\n -> Seq Scan on users_has_groups uhg \n(cost=0.00..28.23 rows=1823 width=8) (never executed)\n -> Sort (cost=623.67..631.74 rows=3227 \nwidth=18) (never executed)\n Sort Key: ugiwus.idusergroup\n -> WorkTable Scan on \nu_g_internalvalidation_without_u_status ugiwus (cost=0.00..435.60 \nrows=3227 width=18) (never executed)\n Filter: (isgroup AND (level < 25))\n -> Memoize (cost=0.26..10.26 rows=750 width=5) (never \nexecuted)\n Cache Key: uhg.iduser\n Cache Mode: binary\n -> Function Scan on get_user gu_1 \n(cost=0.25..10.25 rows=750 width=5) (never executed)\n Filter: (canvalidate OR isgroup)\n -> Hash Join (cost=36403119772.02..36403120996.38 rows=436 \nwidth=120) 
(actual time=11.609..11.611 rows=0 loops=1)\n Hash Cond: (ug1.idinternalvalidation = i.idinternalvalidation)\n -> Merge Left Join (cost=36403119757.85..36403120966.35 \nrows=200 width=75) (never executed)\n Merge Cond: (ug1.idinternalvalidation = \nu_g_internalvalidation.idinternalvalidation)\n -> GroupAggregate (cost=36397604572.80..36397605775.80 \nrows=200 width=70) (never executed)\n Group Key: ug1.idinternalvalidation\n -> Sort (cost=36397604572.80..36397604772.80 \nrows=80000 width=27) (never executed)\n Sort Key: ug1.idinternalvalidation\n -> Hash Left Join \n(cost=36397596247.48..36397598057.72 rows=80000 width=27) (never executed)\n Hash Cond: (ug1.idusergroup = users.iduser)\n -> HashAggregate \n(cost=36397595908.85..36397596708.85 rows=80000 width=43) (never executed)\n Group Key: \nug1.idinternalvalidation, ug1.idusergroup, ug1.atleastonemustvalidate\n -> Merge Left Join \n(cost=77330317.34..21265069958.96 rows=1210602075991 width=13) (never \nexecuted)\n Merge Cond: \n((ug1.idinternalvalidation = ug2.idinternalvalidation) AND (ug1.iduser = \nug2.iduser))\n -> Sort \n(cost=38754984.35..39306501.69 rows=220606936 width=15) (never executed)\n Sort Key: \nug1.idinternalvalidation, ug1.iduser\n -> CTE Scan on \nu_g_internalvalidation_without_u_status ug1 (cost=0.00..4412138.72 \nrows=220606936 width=15) (never executed)\n -> Materialize \n(cost=38575332.99..39672852.50 rows=219503901 width=10) (never executed)\n -> Sort \n(cost=38575332.99..39124092.74 rows=219503901 width=10) (never executed)\n Sort Key: \nug2.idinternalvalidation, ug2.iduser\n -> CTE Scan on \nu_g_internalvalidation_without_u_status ug2 (cost=0.00..4412138.72 \nrows=219503901 width=10) (never executed)\nFilter: (hasvalidate IS NOT NULL)\n -> Hash (cost=289.95..289.95 \nrows=3895 width=20) (never executed)\n -> Seq Scan on users \n(cost=0.00..289.95 rows=3895 width=20) (never executed)\n -> Sort (cost=5515185.04..5515185.54 rows=200 width=5) \n(never executed)\n Sort Key: 
u_g_internalvalidation.idinternalvalidation\n -> Subquery Scan on u_g_internalvalidation \n(cost=5515173.40..5515177.40 rows=200 width=5) (never executed)\n -> HashAggregate \n(cost=5515173.40..5515175.40 rows=200 width=5) (never executed)\n Group Key: ugi.idinternalvalidation\n -> CTE Scan on \nu_g_internalvalidation_without_u_status ugi (cost=0.00..4412138.72 \nrows=220606936 width=5) (never executed)\n -> Hash (cost=8.72..8.72 rows=436 width=49) (actual \ntime=11.603..11.603 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> CTE Scan on internalvalidation i (cost=0.00..8.72 \nrows=436 width=49) (actual time=11.603..11.603 rows=0 loops=1)\n -> CTE Scan on u (cost=0.00..5.00 rows=125 width=4) (never executed)\n Filter: canvalidate\nPlanning Time: 1.037 ms\nExecution Time: 493.815 ms\n\n\nAnd here is the query in case it can help :\n\n\nexplain analyze\nWITH RECURSIVE\n params AS (\n SELECT\n 4 AS fn_iduser\n , 1 AS fn_idrealm\n , NULL::INT[] AS fn_idfile\n , TRUE AS fn_canseesubfiles\n ),\n r AS (\n SELECT realms.*\n FROM realms\n CROSS JOIN params\n WHERE idrealm=fn_idrealm\n)\n,u AS (\n SELECT *\n FROM PARAMS\n CROSS JOIN get_user(fn_iduser)\n WHERE NOT isgroup\n AND NOT istemplate\n)\n,allfiles AS (\n SELECT files.idfile --idfile devrait suffire\n FROM params\n CROSS JOIN (\n SELECT UNNEST(fn_idfile) as idfile FROM params\n ) AS unnestedfiles\n JOIN files ON files.idfile = unnestedfiles.idfile OR (\n -- see sub files\n fn_idrealm = files.idrealminitial\n AND\n fn_canseesubfiles\n )\n WHERE fn_idfile IS NOT NULL\n\n UNION\n\n SELECT files.idfile --idfile devrait suffire\n FROM files\n CROSS JOIN params\n WHERE fn_idfile IS NULL\n AND files.idrealm = fn_idrealm OR ( files.idrealminitial = \nfn_idrealm AND fn_canseesubfiles )\n)\n,internalvalidation as (\n SELECT i.*, users.name\n FROM allfiles pfu\n JOIN internalvalidations i ON i.idfile = pfu.idfile AND i.isvisible\n LEFT JOIN users ON users.iduser = 
i.idclaimer\n)\n,u_g_internalvalidation_without_u_status AS (\n SELECT uhi.idinternalvalidation\n ,uhi.iduser as idusergroup\n ,uhi.hasvalidate\n ,uhi.atleastonemustvalidate\n ,v.iduser\n ,v.isgroup\n , 0 level\n FROM internalvalidation\n JOIN users_has_internalvalidations uhi ON uhi.idinternalvalidation \n= internalvalidation.idinternalvalidation\n JOIN users v on uhi.iduser = v.iduser\n JOIN get_user(v.iduser) gu ON gu.iduser = v.iduser AND ( \ngu.canvalidate OR v.isgroup )\n\n UNION\n\n SELECT\n ugiwus.idinternalvalidation\n ,ugiwus.iduser as idusergroup\n ,ugiwus.hasvalidate\n ,ugiwus.atleastonemustvalidate\n ,gu.iduser\n ,gu.isgroup\n , ugiwus.level + 1\n FROM u_g_internalvalidation_without_u_status ugiwus\n JOIN users_has_groups uhg ON uhg.idgroup = ugiwus.idusergroup AND \nugiwus.isgroup\n JOIN get_user(uhg.iduser) gu ON ( gu.canvalidate OR gu.isgroup )\n WHERE ugiwus.level < 25\n)\n-- SELECT * FROM u_g_internalvalidation_without_u_status;\n,u_g_internalvalidation AS (\n SELECT ugi.idinternalvalidation, \nbool_or(COALESCE(ugi.hasvalidate,false)) currentuserhasvalidate--, \nbool_or( ugi.iduser = u.iduser ) /* à quoi sert ce champ ? */\n FROM u_g_internalvalidation_without_u_status ugi\n-- CROSS JOIN u /* et donc à quoi sert cette table ? 
*/\n-- WHERE u.iduser = ugi.iduser\n GROUP BY ugi.idinternalvalidation\n)\n,regroup_users as (\n SELECT -- regroup users by group etc ...\n ug1.idinternalvalidation\n ,ug1.idusergroup\n ,BOOL_OR( COALESCE( ug2.hasvalidate, ug1.hasvalidate,false)) \nFILTER ( WHERE NOT ug1.isgroup OR ug2.isgroup ) AS atleastoneuserhasvalidate\n ,BOOL_AND( COALESCE( ug2.hasvalidate, ug1.hasvalidate,false)) \nFILTER ( WHERE NOT ug1.isgroup OR ug2.isgroup ) AS everyuserhasvalidate\n ,ARRAY_AGG( DISTINCT ug1.iduser) AS iduser\n ,ug1.atleastonemustvalidate\n FROM u_g_internalvalidation_without_u_status ug1\n LEFT JOIN u_g_internalvalidation_without_u_status ug2 ON \nug2.idinternalvalidation = ug1.idinternalvalidation AND ug2.iduser = \nug1.iduser AND ug2.hasvalidate IS NOT NULL\n GROUP BY \nug1.idinternalvalidation,ug1.idusergroup,ug1.atleastonemustvalidate\n)\n,regroup_internvalvalidations AS (\n SELECT\n r.idinternalvalidation\n ,ARRAY_AGG(r.idusergroup) AS iduser\n ,ARRAY_AGG(users.name) AS name\n ,BOOL_OR(v.validate) AS atleastoneuserhasvalidate\n ,BOOL_AND(v.validate) AS everyusershasvalidate\n FROM regroup_users r\n LEFT JOIN users ON users.iduser = r.idusergroup\n CROSS JOIN LATERAL (\n SELECT CASE\n WHEN r.atleastonemustvalidate THEN r.atleastoneuserhasvalidate\n ELSE r.everyuserhasvalidate\n END AS validate\n ) AS v\n GROUP BY r.idinternalvalidation\n)\nSELECT\n ri.idinternalvalidation\n ,ri.iduser\n ,ri.name\n ,ri.atleastoneuserhasvalidate\n ,ri.everyusershasvalidate\n ,i.ttl\n ,i.idfile\n ,i.idclaimer\n ,i.name AS claimername\n ,CASE WHEN i.atleastonemustvalidate THEN \nri.atleastoneuserhasvalidate ELSE ri.everyusershasvalidate END AS \ninternalvalidationisdone\n ,CASE WHEN u.iduser IS NULL THEN NULL::BOOLEAN ELSE \nu_g_internalvalidation.currentuserhasvalidate END AS currentuserhasvalidate\nFROM regroup_internvalvalidations ri\nJOIN internalvalidation i ON i.idinternalvalidation = \nri.idinternalvalidation\nLEFT JOIN u_g_internalvalidation ON 
\nu_g_internalvalidation.idinternalvalidation = ri.idinternalvalidation\nLEFT JOIN u ON u_g_internalvalidation.idinternalvalidation IS NOT NULL \nAND u.canvalidate\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 07:48:52 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "Hello,\n\nNo answer to my previous email, here is a simpler query with the same \nproblem: explain says actual time between 1.093→1.388 but final \nexecution time says 132.880ms?!?\n\nThanks for your help,\n\nexplain analyze\n WITH RECURSIVE u AS (\n SELECT idrealm, canseesubrealm\n FROM get_user(256)\n )\n ,realm_list as (\n -- get the tree view of all visible realms by the user\n SELECT u.idrealm, 0 AS level\n FROM u\n\n UNION\n\n SELECT realms.idrealm, rl.level+1\n FROM realms\n JOIN realm_list rl ON rl.idrealm = realms.idrealmparent\n CROSS JOIN u\n WHERE u.canseesubrealm\n AND rl.level < 20\n )\n SELECT\n r.idrealm\n ,r.idrealmparent\n ,r.name\n ,r.istemplate\n ,r.mustvalidate\n ,r.filesdirectory\n ,r.dbname\n ,r.iconrealm\n ,r.dbhost\n ,r.email\n ,r.urlrealm\n ,r.dbport\n ,r.dbpassword\n ,r.dbloginname\n ,r.dbowner\n ,rl.level\n ,r.pricetag\n FROM realm_list rl\n JOIN realms r ON r.idrealm=rl.idrealm;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nHash Join (cost=520303.68..854752.18 rows=13123940 width=258) (actual \ntime=1.093..1.388 rows=167 loops=1)\n Hash Cond: (rl.idrealm = r.idrealm)\n CTE u\n -> Function Scan on get_user (cost=0.25..10.25 rows=1000 width=5) \n(actual time=0.996..0.997 rows=1 loops=1)\n CTE realm_list\n -> Recursive Union (cost=0.00..520273.42 rows=14746000 width=8) \n(actual time=1.000..1.228 rows=167 loops=1)\n -> CTE Scan on u (cost=0.00..20.00 rows=1000 width=8) \n(actual time=0.998..0.999 rows=1 loops=1)\n -> Nested Loop (cost=266.66..22533.34 rows=1474500 width=8) \n(actual time=0.014..0.038 rows=42 loops=4)\n -> CTE Scan on u u_1 (cost=0.00..20.00 rows=500 \nwidth=0) (actual time=0.000..0.001 rows=1 loops=4)\n Filter: canseesubrealm\n -> Materialize (cost=266.66..403.22 rows=2949 \nwidth=8) (actual time=0.012..0.034 rows=42 loops=4)\n -> Hash Join (cost=266.66..388.47 rows=2949 
\nwidth=8) (actual time=0.012..0.026 rows=42 loops=4)\n Hash Cond: (realms.idrealmparent = \nrl_1.idrealm)\n -> Seq Scan on realms (cost=0.00..17.78 \nrows=178 width=8) (actual time=0.000..0.009 rows=167 loops=4)\n -> Hash (cost=225.00..225.00 rows=3333 \nwidth=8) (actual time=0.006..0.006 rows=42 loops=4)\n Buckets: 4096 Batches: 1 Memory \nUsage: 38kB\n -> WorkTable Scan on realm_list rl_1 \n (cost=0.00..225.00 rows=3333 width=8) (actual time=0.000..0.003 \nrows=42 loops=4)\n Filter: (level < 20)\n -> CTE Scan on realm_list rl (cost=0.00..294920.00 rows=14746000 \nwidth=8) (actual time=1.001..1.254 rows=167 loops=1)\n -> Hash (cost=17.78..17.78 rows=178 width=254) (actual \ntime=0.085..0.085 rows=167 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n -> Seq Scan on realms r (cost=0.00..17.78 rows=178 width=254) \n(actual time=0.006..0.043 rows=167 loops=1)\nPlanning Time: 0.457 ms\nExecution Time: 132.880 ms\n(24 lignes)\n\nTemps : 134,292 ms\n\n\n\n# select version();\n version\n------------------------------------------------------------------------------------------------------------------------------- \n\nPostgreSQL 14.10 (Ubuntu 14.10-1.pgdg23.10+1) on x86_64-pc-linux-gnu, \ncompiled by gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0, 64-bit",
"msg_date": "Tue, 23 Jan 2024 08:45:09 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "On Tue, 23 Jan 2024 at 20:45, Jean-Christophe Boggio\n<[email protected]> wrote:\n> explain says actual time between 1.093→1.388 but final execution time says 132.880ms?!?\n\nThe 1.388 indicates the total time spent in that node starting from\njust before the node was executed for the first time up until the node\nreturned no more tuples.\n\nThe 132.88 ms is the total time spent to start up the plan, which\nincludes doing things like obtaining locks (if not already obtained\nduring planning), opening files and allocating memory for various\nthings needed for the plan. After the top-level node returns NULL it\nthen will print out the plan before shutting the plan down. This\nshutdown time is also included in the 132.88 ms.\n\nI don't know where exactly the extra time is spent with your query,\nbut it must be in one of the operations that I mentioned which takes\nplace either before the top node is first executed or after the top\nnode returns NULL.\n\nIf you're using psql, if you do \\timing on, how long does EXPLAIN take\nwithout ANALYZE? That also goes through executor startup and shutdown.\nIt just skips the running the executor part.\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Jan 2024 23:41:34 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "Hello David,\n\nThanks for your answer.\n\nLe 23/01/2024 à 11:41, David Rowley a écrit :\n> If you're using psql, if you do \\timing on, how long does EXPLAIN take\n> without ANALYZE? That also goes through executor startup and shutdown.\n\nYou are absolutely correct : the EXPLAIN without ANALYZE gives about the \nsame results. Also, minimizing the amount of workmem in postgresql.conf \nchanges drastically the timings. So that means memory allocation is \neating up a lot of time _PER_QUERY_ ?\n\nSince we have quite some RAM on our machines, I dedicated as much as \npossible to workmem (initially I was allocating 1GB) but this looks \nquite counterproductive (I didn't think that memory was allocated every \ntime, I thought it was \"available\" for the current query but not \nnecessarily used). Is this an issue specific to that version of \nPostgreSQL? (I guess no) Or can this be hardware-related? Or OS-related \n(both systems on which I have done tests are running Ubuntu, I will try \non Debian)?\n\nThanks again for your inputs.\n\nBest,\n\nJC",
"msg_date": "Thu, 25 Jan 2024 14:31:17 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 02:31, Jean-Christophe Boggio\n<[email protected]> wrote:\n> You are absolutely correct : the EXPLAIN without ANALYZE gives about the same results. Also, minimizing the amount of workmem in postgresql.conf changes drastically the timings. So that means memory allocation is eating up a lot of time _PER_QUERY_ ?\n\nWe do reuse pallocs to create memory context, but only for I believe\n1k and 8k blocks. That likely allows most small allocations in the\nexecutor to be done without malloc. Speaking in vague terms as I\ndon't have the exact numbers to hand, but larger allocations will go\ndirectly to malloc.\n\nThere was a bug fixed in [1] that did cause behaviour like this, but\nyou seem to be on 14.10 which will have that fix. Also, the 2nd plan\nyou sent has no Memoize nodes.\n\nI do wonder now if it was a bad idea to make Memoize build the hash\ntable on plan startup rather than delaying that until we fetch the\nfirst tuple. I see Hash Join only builds its table during executor\nrun.\n\n> Since we have quite some RAM on our machines, I dedicated as much as possible to workmem (initially I was allocating 1GB) but this looks quite counterproductive (I didn't think that memory was allocated every time, I thought it was \"available\" for the current query but not necessarily used). Is this an issue specific to that version of PostgreSQL? (I guess no) Or can this be hardware-related? Or OS-related (both systems on which I have done tests are running Ubuntu, I will try on Debian)?\n\nIt would be good to narrow down which plan node is causing this. Can\nyou try disabling various planner enable_* GUCs before running EXPLAIN\n(SUMMARY ON) <your query> with \\timing on and see if you can find\nwhich enable_* GUC causes the EXPLAIN to run more quickly? Just watch\nout for variations in the timing of \"Planning Time:\". 
You're still\nlooking for a large portion of time not accounted for by planning\ntime.\n\nI'd start with:\n\nSET enable_memoize=0;\nEXPLAIN (SUMMARY ON) <your query>;\nRESET enable_memoize;\n\nSET enable_hashjoin=0;\nEXPLAIN (SUMMARY ON) <your query>;\nRESET enable_hashjoin;\n\nThe following will show others that you could try.\nselect name,setting from pg_settings where name like 'enable%';\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e731ed12aa\n\n\n",
"msg_date": "Fri, 26 Jan 2024 10:11:46 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I do wonder now if it was a bad idea to make Memoize build the hash\n> table on plan startup rather than delaying that until we fetch the\n> first tuple. I see Hash Join only builds its table during executor\n> run.\n\nOuch! If it really does that, yes it's a bad idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:32:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "David,\n\n> It would be good to narrow down which plan node is causing this. Can\n> you try disabling various planner enable_* GUCs before running EXPLAIN\n> (SUMMARY ON) <your query> with \\timing on and see if you can find\n> which enable_* GUC causes the EXPLAIN to run more quickly? Just watch\n> out for variations in the timing of \"Planning Time:\". You're still\n> looking for a large portion of time not accounted for by planning\n> time.\nI put the original values for work_mem and temp_buffers back to 1GB \n(don't know if that made a difference in the results).\n\nExecution time is consistent at ~135ms\n\nHere are the results for planning time, disabling each planner method :\n\nenable_async_append 0.454ms *slowest\nenable_bitmapscan 0.221ms\nenable_gathermerge 0.176ms\nenable_hashagg 0.229ms\nenable_hashjoin 0.127ms\nenable_incremental_sort 0.143ms\nenable_indexonlyscan 0.147ms\nenable_indexscan 0.200ms\nenable_material 0.138ms\nenable_memoize 0.152ms\nenable_mergejoin 0.122ms*fastest\nenable_nestloop 0.136ms\nenable_parallel_append 0.147ms\nenable_parallel_hash 0.245ms\nenable_partition_pruning 0.162ms\nenable_seqscan 0.137ms\nenable_sort 0.143ms\nenable_tidscan 0.164ms\n\nHope this helps.\n\nThanks,\n\nJC\n\n\n\n\n",
"msg_date": "Fri, 26 Jan 2024 04:22:53 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "Hello,\n\nIn case it might be useful, I made some more tests.\n\nOn my dev computer (a notebook) I installed:\n\nPostgreSQL 15.5 (Ubuntu 15.5-1.pgdg23.10+1) on x86_64-pc-linux-gnu, \ncompiled by gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0, 64-bit\n\nand\n\nPostgreSQL 16.1 (Ubuntu 16.1-1.pgdg23.10+1) on x86_64-pc-linux-gnu, \ncompiled by gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0, 64-bit\n\nI adjusted work_mem to 1GB and disabled JIT, restored the same DB, did \nVACUUM ANALYZE and ran the query several times to lower I/O interference.\n\nExecution time is about the same on PG 14, 15 and 16, around ~120ms\n\nI noticed that planning time, whatever the version, is very variable \nbetween executions (ranging from 0.120ms to 0.400ms), probably due to \nother programs activity and <1ms measurements imprecision. So the \nresults I gave you in my previous email are probably irrelevant.\n\n\nOn our production server, which is running PG 14.10 on Debian 11, same \nwork_mem, execution time is ~45ms but planning time is much more \nconsistent at ~0.110ms\n\nInterestingly though, lowering work_mem to 32MB gives 22ms execution \ntime but planning time rises consistently at ~0.7ms\n\nOn my notebook with work_mem=32MB, execution time is also 22ms but \nplanning time is lower at ~0.4ms (?!?)\n\n\nLet me know if I can do anything to provide you with more useful \nbenchmark. The DB is still very small so it is easy to do tests.\n\nJC\n\n\n\n\n",
"msg_date": "Fri, 26 Jan 2024 05:22:58 +0100",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 17:23, Jean-Christophe Boggio\n<[email protected]> wrote:\n> Let me know if I can do anything to provide you with more useful\n> benchmark. The DB is still very small so it is easy to do tests.\n\nWhat I was looking to find out was if there was some enable_* GUC that\nyou could turn off that would make the unaccounted time that you were\ncomplaining about go away.\n\nBecause it seems when you reduce work_mem this unaccounted for time\ngoes away, it makes me think that some executor node is allocating a\nload of memory during executor startup. I was hoping to find out\nwhich node is the offending one by the process of elimination.\n\nAre you still seeing this unaccounted for time with your notebook?\ni.e. the relatively high \"Execution Time\" reported by EXPLAIN ANALYZE\nand low total actual execution time on the plan's top-level node.\n\nI probably didn't need to mention the planning time as it seems\nunlikely that disabling an enable* GUC would result in increased\nplanning time. However, it does not seem impossible that that *could*\nhappen.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Jan 2024 17:46:19 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I don't understand that EXPLAIN PLAN timings"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have the following table:\n\nCREATE TABLE IF NOT EXISTS public.shortened_url\n(\n id character varying(12) COLLATE pg_catalog.\"default\" NOT NULL,\n created_at timestamp without time zone,\n expires_at timestamp without time zone,\n url text COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT shortened_url_pkey PRIMARY KEY (id)\n)\n\nThe table contains only the following index on PRIMARY KEY:\n\nCREATE UNIQUE INDEX IF NOT EXISTS shortened_url_pkey\n ON public.shortened_url USING btree\n (id COLLATE pg_catalog.\"default\" ASC NULLS LAST)\n TABLESPACE pg_default;\n\nThis table has approximately 5 million rows of expired URLs (expires_at <\nnow()), and 5 thousand rows of non-expired URLs (expires_at > now())\n\nI deleted all expired URLs with this query:\n\nDELETE FROM shortened_url WHERE expires_at < now().\n\nThen, I tried to query the table for expired URLs:\n\nSELECT * FROM shortened_url WHERE expires_at < now();\n\nThis query was very slow. It took around 1-2 minutes to run, while it had\nto fetch only 5000 rows (the non-expired URLs, since the other ones were\ndeleted).\n\nAfter that, I tried to run VACUUM ANALYZE and REINDEX to the table.\nThe query was still slow.\n\nFinally, I ran VACUUM FULL and re-executed the query. Only then, it started\nrunning fast (1-2 seconds).\n\nDo you have observed a similar behavior with VACUUM ANALYZE / VACUUM FULL\nand why this can happen?\nIs this because data is compacted after VACUUM FULL and sequential disk\nreads are faster?\nShouldn't VACUUM ANALYZE reclaim the disk space and make the query run fast?\nIs this because RDS might do some magic? 
Is it something I am missing?\n\n*Additional details*\nPostgreSQL version: 14.7 on db.t3.micro RDS\nPG configuration: Default of RDS\n\nKind Regards,\nPavlos\n\nHi all,I have the following table:CREATE TABLE IF NOT EXISTS public.shortened_url( id character varying(12) COLLATE pg_catalog.\"default\" NOT NULL, created_at timestamp without time zone, expires_at timestamp without time zone, url text COLLATE pg_catalog.\"default\" NOT NULL, CONSTRAINT shortened_url_pkey PRIMARY KEY (id))The table contains only the following index on PRIMARY KEY:CREATE UNIQUE INDEX IF NOT EXISTS shortened_url_pkey ON public.shortened_url USING btree (id COLLATE pg_catalog.\"default\" ASC NULLS LAST) TABLESPACE pg_default;This table has approximately 5 million rows of expired URLs (expires_at < now()), and 5 thousand rows of non-expired URLs (expires_at > now())I deleted all expired URLs with this query:DELETE FROM shortened_url WHERE expires_at < now().Then, I tried to query the table for expired URLs:SELECT * FROM shortened_url WHERE expires_at < now();This query was very slow. It took around 1-2 minutes to run, while it had to fetch only 5000 rows (the non-expired URLs, since the other ones were deleted).After that, I tried to run VACUUM ANALYZE and REINDEX to the table.The query was still slow.Finally, I ran VACUUM FULL and re-executed the query. Only then, it started running fast (1-2 seconds).Do you have observed a similar behavior with VACUUM ANALYZE / VACUUM FULL and why this can happen? Is this because data is compacted after VACUUM FULL and sequential disk reads are faster? Shouldn't VACUUM ANALYZE reclaim the disk space and make the query run fast?Is this because RDS might do some magic? Is it something I am missing?Additional detailsPostgreSQL version: 14.7 on db.t3.micro RDSPG configuration: Default of RDSKind Regards,Pavlos",
"msg_date": "Tue, 30 Jan 2024 11:40:19 +0200",
"msg_from": "Pavlos Kallis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query in table where many rows were deleted. VACUUM FULL fixes\n it"
},
{
"msg_contents": "On Tue, 2024-01-30 at 11:40 +0200, Pavlos Kallis wrote:\n> I have the following table:\n> \n> CREATE TABLE IF NOT EXISTS public.shortened_url\n> (\n> id character varying(12) COLLATE pg_catalog.\"default\" NOT NULL,\n> created_at timestamp without time zone,\n> expires_at timestamp without time zone,\n> url text COLLATE pg_catalog.\"default\" NOT NULL,\n> CONSTRAINT shortened_url_pkey PRIMARY KEY (id)\n> )\n> \n> The table contains only the following index on PRIMARY KEY:\n> \n> CREATE UNIQUE INDEX IF NOT EXISTS shortened_url_pkey\n> ON public.shortened_url USING btree\n> (id COLLATE pg_catalog.\"default\" ASC NULLS LAST)\n> TABLESPACE pg_default;\n> \n> This table has approximately 5 million rows of expired URLs (expires_at < now()), and 5 thousand rows of non-expired URLs (expires_at > now())\n> \n> I deleted all expired URLs with this query:\n> \n> DELETE FROM shortened_url WHERE expires_at < now().\n> \n> Then, I tried to query the table for expired URLs:\n> \n> SELECT * FROM shortened_url WHERE expires_at < now();\n> \n> This query was very slow. It took around 1-2 minutes to run, while it had to fetch only 5000 rows (the non-expired URLs, since the other ones were deleted).\n> \n> After that, I tried to run VACUUM ANALYZE and REINDEX to the table.\n> The query was still slow.\n> \n> Finally, I ran VACUUM FULL and re-executed the query. Only then, it started running fast (1-2 seconds).\n> \n> Do you have observed a similar behavior with VACUUM ANALYZE / VACUUM FULL and why this can happen? \n> Is this because data is compacted after VACUUM FULL and sequential disk reads are faster? \n> Shouldn't VACUUM ANALYZE reclaim the disk space and make the query run fast?\n> Is this because RDS might do some magic? Is it something I am missing?\n\nThere are too many unknowns here. 
Please enable \"track_io_timing\" and send us\nthe output of EXPLAIN (ANALYZE, BUFFERS) for the slow statements.\n\nOne theory could be that there was a long running transaction or something else\nthat prevented VACUUM from cleaning up. For that, the output of\n\"VACUUM (VERBOSE) shortened_url\" would be interesting.\n\n> Additional details\n> PostgreSQL version: 14.7 on db.t3.micro RDS\n> PG configuration: Default of RDS\n\nWe can only speak about real PostgreSQL...\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:37:09 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in table where many rows were deleted. VACUUM FULL\n fixes it"
},
{
"msg_contents": "\n\n> On Jan 30, 2024, at 4:40 AM, Pavlos Kallis <[email protected]> wrote:\n> \n> Shouldn't VACUUM ANALYZE reclaim the disk space?\n\nHi Pavlos,\nThe short answer to this is “no”. That’s an important difference between VACUUM (also known as “plain” VACUUM) and VACUUM FULL. In some special cases plain VACUUM can reclaim disk space, but I think both the circumstances under which it can do so and the amount it can reclaim are pretty limited. An oversimplified but \"mostly correct\" way to think about it is that plain VACUUM can't reclaim disk space, whereas VACUUM FULL can.\n\nThis is covered in the 4th paragraph of the doc of the VACUUM command --\nhttps://www.postgresql.org/docs/current/sql-vacuum.html\n\nSo in your case those 5m rows that you deleted were probably still clogging up your table until you ran VACUUM FULL.\n\n\nHope this helps\nPhilip\n\n",
"msg_date": "Tue, 30 Jan 2024 15:08:40 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in table where many rows were deleted. VACUUM FULL\n fixes it"
},
{
"msg_contents": "On Wed, 31 Jan 2024 at 09:09, Philip Semanchuk\n<[email protected]> wrote:\n> So in your case those 5m rows that you deleted were probably still clogging up your table until you ran VACUUM FULL.\n\nIt seems more likely to me that the VACUUM removed the rows and just\nleft empty pages in the table. Since there's no index on expires_at,\nthe only way to answer that query is to Seq Scan and Seq Scan will\nneed to process those empty pages. While that processing is very fast\nif the page's item pointers array is empty, it could still be slow if\nthe page needs to be read from disk. Laurenz's request for the explain\n(analyze, buffers) output with track_io_timing on will help confirm\nthis.\n\nIf it is just reading empty pages that's causing this issue then\nadding that missing index would improve the situation after running\njust plain VACUUM each time there's a bulk delete.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 Jan 2024 09:38:36 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in table where many rows were deleted. VACUUM FULL\n fixes it"
},
{
"msg_contents": "Hi Pavlos\n\nThis is my understanding of why you were not able to run the query fast\nenough after the vacuum analyze. This is possibly what would have happened:\n\n\n 1. The relation has 5 million expired URLs and 5 thousand non-expired\n URLs\n 2. Assuming that the table only has 5 million and 5 thousand tuples,\n once you delete the expired ones, there will be an autovacuum triggered.\n “If the number of tuples obsoleted since the last VACUUM exceeds the\n “vacuum threshold”, the table is vacuumed“ -\n https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM\n ; As the Analyze threshold will also be exceeded, that would also have been\n run by autovacuum alongside.\n 3. The status of this autovacuum (if it is running or blocked), could\n have been checked in the pg_stat_activity.\n 4. Note, autovacuum does not trigger to clean up the dead tuples if it\n is disabled for the relation (or in the postgresql.conf file). However, if\n you would have taken transaction IDs to the threshold of\n autovacuum_freeze_max_age, autovacuum would trigger to FREEZE transaction\n IDs even if disabled.\n 5. As you stated its a t3.micro instance, they have limited resources,\n so it could be that the autovacuum was slow running (again, this can be\n checked in pg_stat_activity).\n 6. Given that you manually ran a VACUUM ANALYZE and it did not make the\n query faster, could be due to internal fragmentation. You are right, Vacuum\n does not release the space back to the operating system in most cases. This\n statement is the documentation that can clarify this for you :\n “The standard form of VACUUM removes dead row versions in tables and\n indexes and marks the space available for future reuse. However, it will\n not return the space to the operating system, except in the special case\n where one or more pages at the end of a table become entirely free and an\n exclusive table lock can be easily obtained. 
In contrast, VACUUM FULL\n actively compacts tables by writing a complete new version of the table\n file with no dead space. This minimizes the size of the table, but can take\n a long time. It also requires extra disk space for the new copy of the\n table, until the operation completes.”\n https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY\n 7. This basically means that once you ran a VACUUM FULL, it might have\n actually shrunk the table quite significantly, which made the query to be\n much faster.\n 8. You could have compared the size of the table before and after the\n VACUUM FULL to understand this better.\n\n\nJust a few suggestion for doing bulk removal of data :\n\n - It would be worth looking into pg_repack for such bulk deletes rather\n than vacuum full as the former does not take an exclusive lock for the\n entire duration of the operation - https://reorg.github.io/pg_repack/ .\n However, you will still need double the space of the table, as it also\n recreates the table.\n - Another way of doing bulk removal of data would be to do a CTAS (\n https://www.postgresql.org/docs/14/sql-createtableas.html) to a new\n table with the live data (in your case the 5 thousand tuples), and then\n dropping the old table (which means no dead tuples). 
You might need a\n trigger in between to make sure all the live data during use is transferred\n to the new table.\n - You might want to look into partitioning and drop the partitions once\n the URLs in that particular partition are no longer needed (Like URLs older\n than 6 months).\n\n\n\nKind Regards\nDivya Sharma\n\n\nOn Tue, Jan 30, 2024 at 8:38 PM David Rowley <[email protected]> wrote:\n\n> On Wed, 31 Jan 2024 at 09:09, Philip Semanchuk\n> <[email protected]> wrote:\n> > So in your case those 5m rows that you deleted were probably still\n> clogging up your table until you ran VACUUM FULL.\n>\n> It seems more likely to me that the VACUUM removed the rows and just\n> left empty pages in the table. Since there's no index on expires_at,\n> the only way to answer that query is to Seq Scan and Seq Scan will\n> need to process those empty pages. While that processing is very fast\n> if the page's item pointers array is empty, it could still be slow if\n> the page needs to be read from disk. Laurenz's request for the explain\n> (analyze, buffers) output with track_io_timing on will help confirm\n> this.\n>\n> If it is just reading empty pages that's causing this issue then\n> adding that missing index would improve the situation after running\n> just plain VACUUM each time there's a bulk delete.\n>\n> David\n>\n>\n>\n\nHi PavlosThis is my understanding of why you were not able to run the query fast enough after the vacuum analyze. This is possibly what would have happened:The relation has 5 million expired URLs and 5 thousand non-expired URLsAssuming that the table only has 5 million and 5 thousand tuples, once you delete the expired ones, there will be an autovacuum triggered. “If the number of tuples obsoleted since the last VACUUM exceeds the “vacuum threshold”, the table is vacuumed“ - https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM ; As the Analyze threshold will also be exceeded, that would also have been run by autovacuum alongside. 
The status of this autovacuum (if it is running or blocked), could have been checked in the pg_stat_activity.Note, autovacuum does not trigger to clean up the dead tuples if it is disabled for the relation (or in the postgresql.conf file). However, if you would have taken transaction IDs to the threshold of autovacuum_freeze_max_age, autovacuum would trigger to FREEZE transaction IDs even if disabled.As you stated its a t3.micro instance, they have limited resources, so it could be that the autovacuum was slow running (again, this can be checked in pg_stat_activity). Given that you manually ran a VACUUM ANALYZE and it did not make the query faster, could be due to internal fragmentation. You are right, Vacuum does not release the space back to the operating system in most cases. This statement is the documentation that can clarify this for you :“The standard form of VACUUM removes dead row versions in tables and indexes and marks the space available for future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained. In contrast, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.” https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERYThis basically means that once you ran a VACUUM FULL, it might have actually shrunk the table quite significantly, which made the query to be much faster. 
You could have compared the size of the table before and after the VACUUM FULL to understand this better.Just a few suggestion for doing bulk removal of data :It would be worth looking into pg_repack for such bulk deletes rather than vacuum full as the former does not take an exclusive lock for the entire duration of the operation - https://reorg.github.io/pg_repack/ . However, you will still need double the space of the table, as it also recreates the table.Another way of doing bulk removal of data would be to do a CTAS (https://www.postgresql.org/docs/14/sql-createtableas.html) to a new table with the live data (in your case the 5 thousand tuples), and then dropping the old table (which means no dead tuples). You might need a trigger in between to make sure all the live data during use is transferred to the new table. You might want to look into partitioning and drop the partitions once the URLs in that particular partition are no longer needed (Like URLs older than 6 months).Kind RegardsDivya SharmaOn Tue, Jan 30, 2024 at 8:38 PM David Rowley <[email protected]> wrote:On Wed, 31 Jan 2024 at 09:09, Philip Semanchuk\n<[email protected]> wrote:\n> So in your case those 5m rows that you deleted were probably still clogging up your table until you ran VACUUM FULL.\n\nIt seems more likely to me that the VACUUM removed the rows and just\nleft empty pages in the table. Since there's no index on expires_at,\nthe only way to answer that query is to Seq Scan and Seq Scan will\nneed to process those empty pages. While that processing is very fast\nif the page's item pointers array is empty, it could still be slow if\nthe page needs to be read from disk. Laurenz's request for the explain\n(analyze, buffers) output with track_io_timing on will help confirm\nthis.\n\nIf it is just reading empty pages that's causing this issue then\nadding that missing index would improve the situation after running\njust plain VACUUM each time there's a bulk delete.\n\nDavid",
"msg_date": "Wed, 31 Jan 2024 15:44:52 +0000",
"msg_from": "Divya Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in table where many rows were deleted. VACUUM FULL\n fixes it"
}
] |
[
{
"msg_contents": "Hi,\nWe want to work with PostgreSQL in our new project. I need your opinion on\nthe best way to create a database.\n\nDescription of our Project:\nIt will be in Client/Server Architecture. Windows Application users will\naccess the server as clients and they are all in different locations. There\nwill be a simple ERP system that will perform CRUD transactions and report\nthem.\nWe are considering connecting to the Embarcadero Firedac dataset. We can\nalso connect clients with PosgreRestAPI.\nOur number of clients can be between 5k-20K.\nWe have a maximum of 200 tables consisting of invoice, order, customer,\nbank and stock information. I can create a second Postgre SQL for reporting\nif necessary.\n\n\nQuestion 1 :\nShould we install PostgreSQL on Windows server operating system or Linux\noperating system?\n2:\nIs it correct to open a field named client_id for each table, for example\nthe customer table, and use this field in CRUD operations to host the same\nsingle customer table for all users?\n3:\nCreate a separate table for each User? (result: 5000 users x 200 Tables =\n1,000,000 tables)\n4:\nCreate a database per user? (result: 5000 databases)\n5:\nIs each user a separate schema? (result: 5000 schemas)\n\nCan you share your ideas with me?\nThank you.\n\nHi,We want to work with PostgreSQL in our new project. I need your opinion on the best way to create a database.Description of our Project:It will be in Client/Server Architecture. Windows Application users will access the server as clients and they are all in different locations. There will be a simple ERP system that will perform CRUD transactions and report them.We are considering connecting to the Embarcadero Firedac dataset. We can also connect clients with PosgreRestAPI.Our number of clients can be between 5k-20K.We have a maximum of 200 tables consisting of invoice, order, customer, bank and stock information. I can create a second Postgre SQL for reporting if necessary. 
Question 1 :Should we install PostgreSQL on Windows server operating system or Linux operating system?2:Is it correct to open a field named client_id for each table, for example the customer table, and use this field in CRUD operations to host the same single customer table for all users?3:Create a separate table for each User? (result: 5000 users x 200 Tables = 1,000,000 tables)4:Create a database per user? (result: 5000 databases)5:Is each user a separate schema? (result: 5000 schemas)Can you share your ideas with me?Thank you.",
"msg_date": "Wed, 31 Jan 2024 14:18:23 +0300",
"msg_from": "Mehmet COKCEVIK <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance"
},
{
"msg_contents": "Hi Mehmet,\n\nOn Wed, 31 Jan 2024 at 13:33, Mehmet COKCEVIK <[email protected]>\nwrote:\n\n> Hi,\n> We want to work with PostgreSQL in our new project. I need your opinion on\n> the best way to create a database.\n>\n\nFirst of all, congratulations on your decision to use PostgreSQL for your\nnew project. :)\n\n\n> Description of our Project:\n> It will be in Client/Server Architecture. Windows Application users will\n> access the server as clients and they are all in different locations. There\n> will be a simple ERP system that will perform CRUD transactions and report\n> them.\n>\n\nI hope you are not thinking of keeping business logic on the application\nside and querying the database from different locations. If you treat the\ndatabase as a regular application's database and run multiple DML's for\neach request through the internet, performance of the application will be\nhorrible due to latency between the application and the database. In case\nyou plan to use such a model, the best approach would be to decrease the\nnumber of queries as much as possible, and achieve multiple operations by a\nsingle request, instead of reading from multiple tables, doing some\ncalculations, writing back something to the database etc. I would move the\nlogic to the database side as much as possible and do function/procedure\ncalls, or have an application nearby the database and make clients'\napplications interact with it. So, the business logic would still be in an\napplication and close to the database.\n\n\n> We are considering connecting to the Embarcadero Firedac dataset. We can\n> also connect clients with PosgreRestAPI.\n> Our number of clients can be between 5k-20K.\n> We have a maximum of 200 tables consisting of invoice, order, customer,\n> bank and stock information. I can create a second Postgre SQL for reporting\n> if necessary.\n>\n\nThis is an interesting point. 
Because, if you plan to have 20k clients, you\nshould also be planning high availability, backups, replications etc.\nServing 20k clients with a standalone server would not be something I would\nlike to involve :)\n\n\n> Question 1 :\n> Should we install PostgreSQL on Windows server operating system or Linux\n> operating system?\n>\n\nMy personal opinion, this is not even a question. The answer is and will\nalways be Linux for me :D\nHowever, the actual question is what is the cost of managing a Linux server\nfor you. If you are not familiar with Linux, if you don't have any\nexperience with linux, and if you don't have a company or budget to\nhire/work with you on this who is a professional linux or PostgreSQL admin,\ngoing with Windows is a much more sensible option for you even though it is\nnot the best OS or not the best performing option for PostgreSQL.\n\n\n> 2:\n> Is it correct to open a field named client_id for each table, for example\n> the customer table, and use this field in CRUD operations to host the same\n> single customer table for all users?\n>\n\nIt depends on the data size and your project's isolation/security\nrequirements. You may also consider partitioning and row level security\nfeatures of PostgreSQL. There is not a single recipe that is good for all\nmulti-tenancy needs. :)\n\n\n> 3:\n> Create a separate table for each User? (result: 5000 users x 200 Tables =\n> 1,000,000 tables)\n> 4:\n> Create a database per user? (result: 5000 databases)\n> 5:\n> Is each user a separate schema? (result: 5000 schemas)\n>\n> Can you share your ideas with me?\n> Thank you.\n>\n\nBest regards.\nSamed YILDIRIM",
"msg_date": "Wed, 31 Jan 2024 14:10:29 +0200",
"msg_from": "Samed YILDIRIM <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance"
},
{
"msg_contents": "Hi,\n\nI have run a test with pgbench against two cloud vendors (settings,\nparameters almost the same).\nBoth Postgres (or whatever they do internally when they call it a\nPostgres offering, NOT Aurora or so :-) )\n\nI have got a strange result: cloud vendor 1 is performing almost\neverywhere better in matter of read and write, except in the init phase\nof pgbench, where it took almost double the time.\n\n> pgbench -i -IdtGvp -s 3000 \"${PG_DATABASE}\"\n> pgbench -c 50 -j 10 -P 60 -r -T 3600 \"${PG_DATABASE}\"\n\n| Metric | cloud vendor 1 (small) | cloud vendor 1 (large) | cloud vendor 2 (small) | cloud vendor 2 (large) |\n|----------------------------------|------------------------|------------------------|-------------------|-------------------|\n| **Initialization Time** | 60m52.932s | 3h0m8.97s | 32m7.154s | 5h14m16s |\n| **Benchmark Duration** | 3600s (1 hour) | 3600s (1 hour) | 3600s (1 hour) | 3600s (1 hour) |\n| **Transactions per Second** | 399.460720 | 9833.737455 | 326.551036 | 3314.363264 |\n| **Latency Average (ms)** | 125.124 | 6.507 | 153.106 | 19.309 |\n| **Latency StdDev (ms)** | 154.483 | 44.403 | 59.522 | 4.015 |\n| **Initial Connection Time (ms)** | 557.696 | 174.318 | 1688.474 | 651.087 |\n| **Transactions Processed** | 1,438,437 | 35,400,215 | 1,175,081 | 11,929,631 |\n\n| Statement (ms) | cloud vendor 1 (small) | cloud vendor 1 (large) | cloud vendor 2 (small) | cloud vendor 2 (large) |\n|-----------------------------|------------------------|------------------------|-------------------|-------------------|\n| BEGIN | 8.599 | 0.545 | 9.008 | 1.398 |\n| UPDATE pgbench_accounts | 38.648 | 2.031 | 27.124 | 4.722 |\n| SELECT pgbench_accounts | 12.332 | 0.676 | 17.922 | 1.798 |\n| UPDATE pgbench_tellers | 17.275 | 0.853 | 20.843 | 1.831 |\n| UPDATE pgbench_branches | 18.478 | 0.862 | 21.941 | 1.743 |\n| INSERT INTO pgbench_history | 16.613 | 0.827 | 18.710 | 1.501 |\n| END | 13.177 | 0.708 | 37.553 | 6.317 |\n\nOf course no one knows the magic some cloud vendors are doing\nunderneath, but does anyone have some ideas what the reason could be, or\nhow I could do better testing to find this out?\n\nCheers\n\nDirk",
"msg_date": "Thu, 1 Feb 2024 10:23:17 +0100",
"msg_from": "Dirk Krautschick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Weird performance differences between cloud vendors"
},
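As a quick consistency check on the numbers reported above (a sketch, not part of the original thread): for pgbench's default TPC-B-like script, the reported "latency average" should be approximately the sum of the seven per-statement latencies that `-r` prints. The values below are copied straight from the tables in the message:

```python
# Cross-check the pgbench results: the "latency average" for the default
# TPC-B-like script should roughly equal the sum of the per-statement
# latencies (BEGIN .. END). All numbers are taken from the tables above.
per_statement_ms = {
    "vendor1_small": [8.599, 38.648, 12.332, 17.275, 18.478, 16.613, 13.177],
    "vendor1_large": [0.545, 2.031, 0.676, 0.853, 0.862, 0.827, 0.708],
    "vendor2_small": [9.008, 27.124, 17.922, 20.843, 21.941, 18.710, 37.553],
    "vendor2_large": [1.398, 4.722, 1.798, 1.831, 1.743, 1.501, 6.317],
}
reported_latency_ms = {
    "vendor1_small": 125.124,
    "vendor1_large": 6.507,
    "vendor2_small": 153.106,
    "vendor2_large": 19.309,
}

def consistent(run: str) -> bool:
    # allow a small slack for rounding in the pgbench report
    return abs(sum(per_statement_ms[run]) - reported_latency_ms[run]) < 0.05

for run in per_statement_ms:
    print(run, round(sum(per_statement_ms[run]), 3), consistent(run))
```

All four runs check out, which suggests the per-statement breakdown is trustworthy and the init-phase anomaly is separate from steady-state latency.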
{
"msg_contents": "On Thu, 2024-02-01 at 10:23 +0100, Dirk Krautschick wrote:\n> I have run a test with pgbench against two cloud vendors (settings, parameters almost the same).\n> Both Postgres (or whatever they do internally when they call it as Postgres offering, NOT Aurora or so :-) )\n> \n> I have got a strange result that cloud vendor 1 is performing almost everywhere better in matter of\n> read and write but except in the init phase of pgbench it took almost double the time.\n\nNobody except those vendors could tell you for certain, but perhaps on the one\nsystem the initial data load is fast, because you have not yet exceeded your I/O quota,\nand then I/O is throttled.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 01 Feb 2024 12:07:35 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird performance differences between cloud vendors"
}
] |
[
{
"msg_contents": "Hi,\n   We have a PostgreSQL v14.8 server, and clients use PostgreSQL JDBC connections. Today, our server sees a lot of \"SubtransBuffer\" and \"SubtransSLRU\" wait events. Could you help direct me to the possible cause and how to resolve these waits?\n\nThanks,\n\nJames",
"msg_date": "Thu, 1 Feb 2024 11:50:57 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "On Thu, 2024-02-01 at 11:50 +0000, James Pang (chaolpan) wrote:\n> We have a Postgresqlv14.8 server, client use Postgresql JDBC connections, today,\n> our server see a lot of “SubtransBuffer” and “SubtransSLRU” wait_event.\n> Could you help direct me what’s the possible cause and how to resolve this waits ?\n\nToday, the only feasible solution is not to create more than 64 subtransactions\n(savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\n\nDon't use extensions or the JDBC driver option to simulate statement level rollback,\nthat is the road to hell.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 01 Feb 2024 13:41:53 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge SubtransSLRU and SubtransBuffer wait_event"
},
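Laurenz's warning about simulated statement-level rollback can be made concrete with a toy model (illustrative Python, not PostgreSQL source; the 64-entry per-backend subxid cache, `PGPROC_MAX_CACHED_SUBXIDS`, is the only real constant assumed here): wrapping every statement in its own savepoint creates one subtransaction per statement, so any transaction with more than 64 of them overflows the cache and pushes lookups to the shared pg_subtrans SLRU.

```python
# Toy model: count the subtransactions created when a driver simulates
# statement-level rollback by issuing SAVEPOINT/RELEASE around every
# statement of a transaction.
PGPROC_MAX_CACHED_SUBXIDS = 64  # per-backend subxid cache size in PostgreSQL

def subxacts_for_batch(n_statements: int) -> int:
    # one SAVEPOINT (= one subtransaction) per statement
    return n_statements

def overflows_cache(n_statements: int) -> bool:
    # once the cache overflows, other backends must resolve this
    # transaction's subxids through the shared pg_subtrans SLRU
    return subxacts_for_batch(n_statements) > PGPROC_MAX_CACHED_SUBXIDS

print(overflows_cache(50))   # a short batch still fits the cache: False
print(overflows_cache(500))  # a typical batch job overflows it: True
```

That is why the "road to hell" shows up not at 64 nested levels, but with perfectly flat transactions that simply run many savepoint-wrapped statements.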
{
"msg_contents": "> Today, the only feasible solution is not to create more than 64 subtransactions (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\r\n>\r\n> Don't use extensions or the JDBC driver option to simulate statement level rollback, that is the road to hell.\r\n\r\nYou mean extensions to simulate a subtransaction like pg_background? For the JDBC driver option to simulate statement-level rollback, could you share more details?\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Thursday, February 1, 2024 8:42 PM\r\nTo: James Pang (chaolpan) <[email protected]>; [email protected]\r\nSubject: Re: huge SubtransSLRU and SubtransBuffer wait_event\r\n\r\nOn Thu, 2024-02-01 at 11:50 +0000, James Pang (chaolpan) wrote:\r\n> We have a Postgresqlv14.8 server, client use Postgresql JDBC \r\n> connections, today, our server see a lot of “SubtransBuffer” and “SubtransSLRU” wait_event.\r\n> Could you help direct me what’s the possible cause and how to resolve this waits ?\r\n\r\nToday, the only feasible solution is not to create more than 64 subtransactions (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\r\n\r\nDon't use extensions or the JDBC driver option to simulate statement level rollback, that is the road to hell.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Thu, 1 Feb 2024 15:34:15 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "   Our case is: 1) we use a PL/pgSQL procedure1 --> procedure2 (update table xxxx; commit); 2) an application JDBC client calls procedure1 (it's a long-running job; sometimes it can last > 1 hour). During this time window, other PostgreSQL JDBC clients (100-200) come in at the same time, and then we quickly see MultiXactOffset and SubtransSLRU waits increase very quickly.\r\n   Is it possible to increase the Subtrans SLRU buffer size? PL/pgSQL proc1 --> procedure2 (updates table): it uses a subtransaction in procedure2, right?\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: James Pang (chaolpan) \r\nSent: Thursday, February 1, 2024 11:34 PM\r\nTo: Laurenz Albe <[email protected]>; [email protected]\r\nSubject: RE: huge SubtransSLRU and SubtransBuffer wait_event\r\n\r\n> Today, the only feasible solution is not to create more than 64 subtransactions (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\r\n>\r\n> Don't use extensions or the JDBC driver option to simulate statement level rollback, that is the road to hell.\r\n\r\nYou mean extensions to simulate a subtransaction like pg_background? For the JDBC driver option to simulate statement-level rollback, could you share more details?\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]>\r\nSent: Thursday, February 1, 2024 8:42 PM\r\nTo: James Pang (chaolpan) <[email protected]>; [email protected]\r\nSubject: Re: huge SubtransSLRU and SubtransBuffer wait_event\r\n\r\nOn Thu, 2024-02-01 at 11:50 +0000, James Pang (chaolpan) wrote:\r\n> We have a Postgresqlv14.8 server, client use Postgresql JDBC \r\n> connections, today, our server see a lot of “SubtransBuffer” and “SubtransSLRU” wait_event.\r\n> Could you help direct me what’s the possible cause and how to resolve this waits ?\r\n\r\nToday, the only feasible solution is not to create more than 64 subtransactions (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\r\n\r\nDon't use extensions or the JDBC driver option to simulate statement level rollback, that is the road to hell.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 2 Feb 2024 06:47:47 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "On 2024-Feb-02, James Pang (chaolpan) wrote:\n\n> Possible to increase Subtrans SLRU buffer size ?\n\nNot at present -- you need to recompile after changing\nNUM_SUBTRANS_BUFFERS in src/include/access/subtrans.h,\nNUM_MULTIXACTOFFSET_BUFFERS and NUM_MULTIXACTMEMBER_BUFFERS in\nsrc/include/access/multixact.h.\n\nThere's pending work to let these be configurable in version 17.\n\n> Our case is 1) we use PL/PGSQL procedure1-->procedure2 (update\n> table xxxx;commit); 2) application JDBC client call procedure1\n> (it's a long running job, sometimes it could last > 1hours).\n> During this time window, other Postgresql JDBC clients (100-200)\n> coming in in same time , then quickly see MultiXactoffset and\n> SubtransSLRU increased very quickly. \n> PL/PGSQL proc1--> procedure2(updates table) it use substransation in\n> procedure2 ,right? \n\nIf your functions/procedures use EXCEPTION clauses, that would create\nsubtransactions also.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Fri, 2 Feb 2024 09:12:48 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": ">From: James Pang (chaolpan) <[email protected]>\r\n>Sent: Friday, February 2, 2024 7:47 AM\r\n>To: Laurenz Albe <[email protected]>; [email protected]\r\n>Subject: RE: huge SubtransSLRU and SubtransBuffer wait_event\r\n>\r\n> Our case is 1) we use PL/PGSQL procedure1-->procedure2 (update table xxxx;commit); 2) application JDBC client call procedure1 (it's a long running job, sometimes it could last > 1hours). During this time window, other Postgresql JDBC clients (100-200) coming in in same time , then quickly see MultiXactoffset and SubtransSLRU increased very quickly.\r\n\r\nHi\r\n\r\nWe had the same problem here https://gitlab.com/nibioopensource/resolve-overlap-and-gap . Here we can have more than 50 threads pushing millions of rows into common tables, with one single final Postgis Topology structure as a final step. We also need to run try/catch. The code is wrapped into functions and procedures and called from psql.\r\n\r\nJust to test, we tried compiling with a higher number of subtrans locks, and that just made this problem appear a little bit later.\r\n\r\nFor us the solution was to save temporary results in arrays, like this https://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/commit/679bea2b4b1ba4c9e84923b65c62c32c3aed6c21#a22cbe80eb0e36ea21e4f8036e0a4109b2ff2379_611_617 . The clue is to do as much work as possible without involving any common data structures, for instance by using arrays to hold temp results and not using a shared final table before it's really needed.\r\n\r\nThen, at a final step, we insert all prepared data into a final common data structure, where we also try to avoid try/catch when possible. The system can then run with very high CPU load for 99% of the work, and only at the very end do we start to involve the common database structure.\r\n\r\nAnother thing to avoid locks is to let each thread work on its own data as much as possible. This means breaking up the input, sorting out what is unique data for this thread, and postponing the common data to a later stage. When working with Postgis Topology, for instance, we actually split the data to be sure that no two threads work on the same area, and then at a later stage another thread pushes the shared data/areas into the final data structure.\r\n\r\nThese steps seem to have solved this problem for us, which started out here https://postgrespro.com/list/thread-id/2478202\r\n\r\nLars",
"msg_date": "Fri, 2 Feb 2024 09:26:49 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge SubtransSLRU and SubtransBuffer wait_event"
},
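Lars's workaround can be sketched in a language-neutral way (illustrative Python only; the function names are invented for the sketch, and PostgreSQL itself is not involved): each worker accumulates its results locally and touches the shared structure once at the end, instead of running one exception-guarded insert (one subtransaction) per row.

```python
# Illustrative sketch of the batching pattern described above: per-row
# work inside EXCEPTION blocks would create one subtransaction per row,
# while staging into a local array and doing one guarded insert at the
# end creates at most one.

def per_row_subxacts(rows) -> int:
    # one BEGIN..EXCEPTION..END (one subtransaction) per row
    return len(rows)

def batched_subxacts(rows) -> int:
    staged = [r for r in rows if r is not None]  # local work, no shared state
    # single guarded insert of the whole batch at the end
    return 1 if staged else 0

rows = list(range(1_000_000))
print(per_row_subxacts(rows))  # one subtransaction per row
print(batched_subxacts(rows))  # a single subtransaction for the batch
```

The subtransaction count per transaction drops from the row count to one, which keeps every backend under the 64-entry subxid cache and off the shared SLRU.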
{
"msg_contents": "On Thu, Feb 1, 2024 at 04:42 Laurenz Albe <[email protected]> wrote:\n\n> Today, the only feasible solution is not to create more than 64\n> subtransactions\n> (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\n\n\nSometimes, a single subtransaction is enough to experience a bad\nSubtransSLRU spike:\nhttps://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful#problem-4-subtrans-slru-overflow\n\nI think 64+ nesting level is quite rare, but this kind of problem that hits\nyou when you have high XID growth (lots of writes) + long-running\ntransaction is quite easy to bump into. Or this case involving\nMultiXactIDs:\nhttps://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/\n\nNik",
"msg_date": "Fri, 2 Feb 2024 02:04:02 -0800",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "On Fri, 2024-02-02 at 02:04 -0800, Nikolay Samokhvalov wrote:\n> On Thu, Feb 1, 2024 at 04:42 Laurenz Albe <[email protected]> wrote:\n> > Today, the only feasible solution is not to create more than 64 subtransactions\n> > (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\n> \n> I think 64+ nesting level is quite rare\n\nIt doesn't have to be 64 *nested* subtransactions. This is enough:\n\nCREATE TABLE tab (x integer);\n\nDO\n$$DECLARE\n i integer;\nBEGIN\n FOR i IN 1..70 LOOP\n BEGIN\n INSERT INTO tab VALUES (i);\n EXCEPTION\n WHEN unique_violation THEN\n NULL; -- ignore\n END;\n END LOOP;\nEND;$$;\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 02 Feb 2024 11:31:19 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge SubtransSLRU and SubtransBuffer wait_event"
},
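A rough model of why the 64 limit in the DO block above matters (illustrative Python, explicitly NOT PostgreSQL's actual implementation; the `Backend` class and lookup function are invented for the sketch): while a backend's subxid set fits its 64-entry cache, other sessions can resolve a subxid from shared memory; once it overflows, the backend is marked "suboverflowed" and lookups may have to go through the shared pg_subtrans SLRU, which is where the SubtransSLRU/SubtransBuffer waits come from.

```python
# Toy model of the per-backend subxid cache and the pg_subtrans
# fallback (NOT PostgreSQL source code).
CACHE_SIZE = 64  # mirrors PGPROC_MAX_CACHED_SUBXIDS

class Backend:
    def __init__(self):
        self.cached_subxids = []
        self.suboverflowed = False

    def assign_subxid(self, xid):
        if len(self.cached_subxids) < CACHE_SIZE:
            self.cached_subxids.append(xid)
        else:
            # cache full: mark overflowed; the xid is only recorded
            # in pg_subtrans, not in shared-memory per-backend state
            self.suboverflowed = True

slru_lookups = 0

def xid_in_progress(backend, xid) -> bool:
    """How another session resolves a subxid in this toy model."""
    global slru_lookups
    if xid in backend.cached_subxids:
        return True               # cheap shared-memory hit
    if backend.suboverflowed:
        slru_lookups += 1         # must consult the pg_subtrans SLRU
        return True               # (toy answer; real code maps subxid -> top xid)
    return False

b = Backend()
for xid in range(70):             # 70 EXCEPTION blocks, as in the DO block above
    b.assign_subxid(xid)

xid_in_progress(b, 69)            # one of the overflowed subxids
print(b.suboverflowed, slru_lookups)
```

Under concurrency every visibility check against the overflowed backend can take that SLRU path, so 100-200 parallel sessions turn a single misbehaving transaction into cluster-wide SubtransSLRU contention.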
{
"msg_contents": "   We finally identified the cause: a PL/pgSQL procedure proc1 (for 1…5000 loop call proc2()); proc2 (begin ..exception..end). At the same time, more than 200 sessions came in within milliseconds and ran the same query during the long-running \"call proc1\" transaction. The code change, plus cutting down the number of parallel sessions running the same query at the same time, helped a lot.\r\n\r\n   Thanks all.\r\n\r\nJames\r\n\r\nFrom: Nikolay Samokhvalov <[email protected]>\r\nSent: Friday, February 2, 2024 6:04 PM\r\nTo: Laurenz Albe <[email protected]>; [email protected]\r\nSubject: Re: huge SubtransSLRU and SubtransBuffer wait_event\r\n\r\nOn Thu, Feb 1, 2024 at 04:42 Laurenz Albe <[email protected]> wrote:\r\n> Today, the only feasible solution is not to create more than 64 subtransactions\r\n> (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\r\n\r\nSometimes, a single subtransaction is enough to experience a bad SubtransSLRU spike:\r\nhttps://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful#problem-4-subtrans-slru-overflow\r\n\r\nI think 64+ nesting level is quite rare, but this kind of problem that hits you when you have high XID growth (lots of writes) + long-running transaction is quite easy to bump into. Or this case involving MultiXactIDs:\r\nhttps://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/\r\n\r\nNik",
"msg_date": "Tue, 6 Feb 2024 06:59:11 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "From this link, it looks like a \"configurable buffer pool and partitioning the SLRU lock\" is on the plan, maybe for v18/v19:\nhttps://www.postgresql.org/message-id/[email protected]\n\nJames\n\n> *From:* James Pang (chaolpan)\n> *Sent:* Tuesday, February 6, 2024 2:59 PM\n> *To:* Nikolay Samokhvalov <[email protected]>; Laurenz Albe <\n> [email protected]>; [email protected]\n> *Subject:* RE: huge SubtransSLRU and SubtransBuffer wait_event\n>\n>    We finally identified the cause, a pl/pgsql procedure proc1 (for\n> 1…5000 loop call proc2()); proc2 (begin ..exception..end); at the same\n> time, more than 200 sessions coming in milliseconds and do same query\n> during the “call proc1 long running transaction”. The code change and\n> cutdown the parallel sessions count doing same query at the same time help\n> a lot.\n>\n>   Thanks all.\n>\n> James\n>\n> *From:* Nikolay Samokhvalov <[email protected]>\n> *Sent:* Friday, February 2, 2024 6:04 PM\n> *To:* Laurenz Albe <[email protected]>;\n> [email protected]\n> *Subject:* Re: huge SubtransSLRU and SubtransBuffer wait_event\n>\n> On Thu, Feb 1, 2024 at 04:42 Laurenz Albe <[email protected]>\n> wrote:\n>\n> Today, the only feasible solution is not to create more than 64\n> subtransactions\n> (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.\n>\n> Sometimes, a single subtransaction is enough to experience a bad\n> SubtransSLRU spike:\n>\n> https://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful#problem-4-subtrans-slru-overflow\n>\n> I think 64+ nesting level is quite rare, but this kind of problem that\n> hits you when you have high XID growth (lots of writes) + long-running\n> transaction is quite easy to bump into. Or this case involving\n> MultiXactIDs:\n>\n> https://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/\n>\n> Nik\n>",
"msg_date": "Mon, 26 Feb 2024 14:01:57 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "> Possible to increase Subtrans SLRU buffer size ?\n\nNot at present -- you need to recompile after changing NUM_SUBTRANS_BUFFERS\nin src/include/access/subtrans.h, NUM_MULTIXACTOFFSET_BUFFERS and\nNUM_MULTIXACTMEMBER_BUFFERS in src/include/access/multixact.h.\n\none question:\n we need to increase all SLRU buffers together , MULTIXACT, XACT,\nSubtrans, COMMIT TS , for example, got all of them doubled based on\nexisting size ? or only increase Subtrans , or Subtrans and multixact ?\n\nThanks,\n\nJames\n\nJames Pang (chaolpan) <[email protected]> 於 2024年3月1日週五 下午2:45寫道:\n\n>\n>\n> -----Original Message-----\n> From: Alvaro Herrera <[email protected]>\n> Sent: Friday, February 2, 2024 4:13 PM\n> To: James Pang (chaolpan) <[email protected]>\n> Cc: Laurenz Albe <[email protected]>;\n> [email protected]\n> Subject: Re: huge SubtransSLRU and SubtransBuffer wait_event\n>\n> On 2024-Feb-02, James Pang (chaolpan) wrote:\n>\n> > Possible to increase Subtrans SLRU buffer size ?\n>\n> Not at present -- you need to recompile after changing\n> NUM_SUBTRANS_BUFFERS in src/include/access/subtrans.h,\n> NUM_MULTIXACTOFFSET_BUFFERS and NUM_MULTIXACTMEMBER_BUFFERS in\n> src/include/access/multixact.h.\n>\n> There's pending work to let these be configurable in version 17.\n>\n> > Our case is 1) we use PL/PGSQL procedure1-->procedure2 (update\n> > table xxxx;commit); 2) application JDBC client call procedure1\n> > (it's a long running job, sometimes it could last > 1hours).\n> > During this time window, other Postgresql JDBC clients (100-200)\n> > coming in in same time , then quickly see MultiXactoffset and\n> > SubtransSLRU increased very quickly.\n> > PL/PGSQL proc1--> procedure2(updates table) it use substransation in\n> > procedure2 ,right?\n>\n> If your functions/procedures use EXCEPTION clauses, that would create\n> subtransactions also.\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"No deja de ser humillante para una 
persona de ingenio saber que no hay\n> tonto que no le pueda enseñar algo.\" (Jean B. Say)\n>\n",
"msg_date": "Fri, 1 Mar 2024 14:56:38 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "On 2024-Mar-01, James Pang wrote:\n\n> one question:\n> we need to increase all SLRU buffers together , MULTIXACT, XACT,\n> Subtrans, COMMIT TS , for example, got all of them doubled based on\n> existing size ?\n\nNo need.\n\n> or only increase Subtrans , or Subtrans and multixact ?\n\nJust increase the sizes for the ones that are causing you pain. You can\nhave a look at pg_stat_slru for some metrics that might be useful in\ndetermining which are those.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 1 Mar 2024 08:35:41 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: huge SubtransSLRU and SubtransBuffer wait_event"
},
{
"msg_contents": "Hi Alvaro ,\n looks like Xact slru buffer use a different way to control size, do we\nneed to increase Xact and how to increase that ? we plan to increase to\n20 times size of existing buffers, any side impact to 20 times increase\nthese subtrans ?\n----------------+-------------+-------------+-----------+--------------+-------------+---------+-----------+-------------------------------\n CommitTs | 1284048 | 387594150 | 54530 | 1305858 |\n 0 | 0 | 5 | 2024-01-19 05:01:38.900698+00\n MultiXactMember | 30252 | 23852620477 | 48555852 | 26106 |\n 0 | 127 | 0 | 2024-01-19 05:01:38.900698+00\n MultiXactOffset | 10638 | 23865848376 | 18434993 | 9375 |\n 127 | 127 | 5 | 2024-01-19 05:01:38.900698+00\n Notify | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00\n Serial | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00\n Subtrans | 513486 | 12127027243 | 153119082 | 431238 |\n 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00\n Xact | 32107 | 22450403108 | 72043892 | 18064 |\n 0 | 0 | 3 | 2024-01-19 05:01:38.900698+00\n other | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00\n(8 rows)\n\nThanks,\n\nJames\n\nAlvaro Herrera <[email protected]> 於 2024年3月1日週五 下午3:35寫道:\n\n> On 2024-Mar-01, James Pang wrote:\n>\n> > one question:\n> > we need to increase all SLRU buffers together , MULTIXACT, XACT,\n> > Subtrans, COMMIT TS , for example, got all of them doubled based on\n> > existing size ?\n>\n> No need.\n>\n> > or only increase Subtrans , or Subtrans and multixact ?\n>\n> Just increase the sizes for the ones that are causing you pain. You can\n> have a look at pg_stat_slru for some metrics that might be useful in\n> determining which are those.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>\n\nHi Alvaro , looks like Xact slru buffer use a different way to control size, do we need to increase Xact and how to increase that ? 
we plan to increase to 20 times size of existing buffers, any side impact to 20 times increase these subtrans ? ----------------+-------------+-------------+-----------+--------------+-------------+---------+-----------+------------------------------- CommitTs | 1284048 | 387594150 | 54530 | 1305858 | 0 | 0 | 5 | 2024-01-19 05:01:38.900698+00 MultiXactMember | 30252 | 23852620477 | 48555852 | 26106 | 0 | 127 | 0 | 2024-01-19 05:01:38.900698+00 MultiXactOffset | 10638 | 23865848376 | 18434993 | 9375 | 127 | 127 | 5 | 2024-01-19 05:01:38.900698+00 Notify | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00 Serial | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00 Subtrans | 513486 | 12127027243 | 153119082 | 431238 | 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00 Xact | 32107 | 22450403108 | 72043892 | 18064 | 0 | 0 | 3 | 2024-01-19 05:01:38.900698+00 other | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2024-01-19 05:01:38.900698+00(8 rows)Thanks,James Alvaro Herrera <[email protected]> 於 2024年3月1日週五 下午3:35寫道:On 2024-Mar-01, James Pang wrote:\n\n> one question:\n> we need to increase all SLRU buffers together , MULTIXACT, XACT,\n> Subtrans, COMMIT TS , for example, got all of them doubled based on\n> existing size ?\n\nNo need.\n\n> or only increase Subtrans , or Subtrans and multixact ?\n\nJust increase the sizes for the ones that are causing you pain. You can\nhave a look at pg_stat_slru for some metrics that might be useful in\ndetermining which are those.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 15 Mar 2024 10:03:47 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: huge SubtransSLRU and SubtransBuffer wait_event"
}
] |
[
{
"msg_contents": "Subject: Memory Growth Issue in \"Backend\" after Creating and Executing\nMultiple \"Named Prepared Statements\" with Different Names and Executing\nDISCARD ALL Finally.\n\nProduct: PostgreSQL 14\n\nDear Technical Support Team,\n\nWe reach out to you to report an issue related to memory growth in\nPostgreSQL backend processes when running many Prepared Statements with\ndifferent names, even though the \"DISCARD ALL\" command is executed at the\nend of the program execution.\n\nWe understand that while Prepared Statements are executed and maintained in\nthe session, memory may grow since various objects need to be stored in the\nsession, such as the parsed query, execution plans, etc.\n\nHowever, what we don't understand is why, when the DISCARD ALL command is\neventually executed, memory is not freed at all.\n\nCould you please provide us with a more detailed explanation of this\nbehavior? Additionally, we would like to know if there is any other\nspecific action or configuration that we can perform to address this issue\nand ensure that backend memory is reduced after executing many \"Named\nPrepared Statements\".\n\nWe appreciate your attention and look forward to your guidance and\nsuggestions for resolving this problem.\n\nWe have attached a small C program with libpq that demonstrates this issue,\nalong with the program's output and the execution of the \"ps aux\" program.\n\nBest regards,\n\nDaniel Blanch Bataller\nHoplasoftware DBA\n\nprepared_statement.c program\n============================\n\n/*\n * prepared_statement.c\n * This program demonstrates the backend memory growth using a large number\n * of prepared statements, as expected.\n * But surprisingly, after executing DISCARD ALL; memory is not recovered\nat all.\n *\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <libpq-fe.h>\n#include <unistd.h>\n\n#define ITERATIONS 50000\n#define PRINT_TIMES 5000\n\n#define HOST \"localhost\"\n#define PORT \"9999\"\n#define DB 
\"test\"\n#define USER \"test\"\n#define PASS \"test\"\n\nint main() {\n\n // Connect to the database\n const char *conninfo = \"host=\" HOST \" port=\" PORT \" dbname=\" DB \"\nuser=\" USER \" password=\" PASS \"\";\n printf(\"Connecting to %s\\n\", conninfo);\n PGconn *conn = PQconnectdb(conninfo);\n\n // Check connection result\n if (PQstatus(conn) != CONNECTION_OK) {\n fprintf(stderr, \"Connection error: %s\\n\", PQerrorMessage(conn));\n PQfinish(conn);\n exit(1);\n }\n\n // Get backend PID\n printf(\"Getting backend PID \\n\");\n PGresult *result = PQexec(conn, \"SELECT pg_backend_pid();\");\n\n // Check result status\n if (PQresultStatus(result) != PGRES_TUPLES_OK) {\n fprintf(stderr, \"Error executing query: %s\\n\",\nPQerrorMessage(conn));\n PQclear(result);\n PQfinish(conn);\n exit(EXIT_FAILURE);\n }\n\n // Get result\n char *pid = PQgetvalue(result, 0, 0);\n printf(\"Backend PID: %s\\n\", pid);\n\n // Main loop\n printf(\"Excecuting %d PreparedStatements\\n\", ITERATIONS);\n for (int i = 0; i <= ITERATIONS; i++) {\n\n // Prepare \"Prepared Statement\"\n char stmt_name[50];\n sprintf(stmt_name, \"ps_%d\", i);\n const char *query = \"SELECT 1 WHERE 1 = $1\";\n if (i % PRINT_TIMES == 0) printf(\"Executing PreparedStatement\n'%s'\\n\", stmt_name);\n PGresult *prepare_result = PQprepare(conn, stmt_name, query, 1,\nNULL);\n\n if (PQresultStatus(prepare_result) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"Error preparing the PreparedStatement: %s\\n\",\nPQresultErrorMessage(prepare_result));\n PQclear(prepare_result);\n PQfinish(conn);\n exit(1);\n }\n\n // Preprared Statement parameters\n const char *paramValues[] = {\"1\"};\n\n // Execute Prepared Statement\n PGresult *res = PQexecPrepared(conn, stmt_name, 1, paramValues,\nNULL, NULL, 0);\n\n // Check Prepared Statement execution result\n if (PQresultStatus(res) != PGRES_TUPLES_OK) {\n fprintf(stderr, \"Error executing query: %s\\n\",\nPQresultErrorMessage(res));\n PQclear(res);\n PQfinish(conn);\n exit(1);\n 
}\n\n // Get results\n int numRows = PQntuples(res);\n int numCols = PQnfields(res);\n\n for (int i = 0; i < numRows; i++) {\n for (int j = 0; j < numCols; j++) {\n PQgetvalue(res, i, j); // Do nothing\n }\n }\n\n // Free Result\n PQclear(res);\n }\n\n // Close Connection\n PQfinish(conn);\n\n return 0;\n}\n\n./prepared_statement output:\n============================\nConnecting to host=localhost port=9999 dbname=test user=test password=test\nGetting backend PID\nBackend PID: 40690\nExcecuting 50000 PreparedStatements\nExecuting PreparedStatement 'ps_0'\nExecuting PreparedStatement 'ps_5000'\nExecuting PreparedStatement 'ps_10000'\nExecuting PreparedStatement 'ps_15000'\nExecuting PreparedStatement 'ps_20000'\nExecuting PreparedStatement 'ps_25000'\nExecuting PreparedStatement 'ps_30000'\nExecuting PreparedStatement 'ps_35000'\nExecuting PreparedStatement 'ps_40000'\nExecuting PreparedStatement 'ps_45000'\nExecuting PreparedStatement 'ps_50000'\n\nPostgres log:\n=============\n2024-02-01 11:19:16.240 CET [40690] test@test LOG: ejecutar ps_49999:\nSELECT 1 WHERE 1 = $1\n2024-02-01 11:19:16.240 CET [40690] test@test DETALLE: parámetros: $1 = '1'\n2024-02-01 11:19:16.243 CET [40690] test@test LOG: ejecutar ps_50000:\nSELECT 1 WHERE 1 = $1\n2024-02-01 11:19:16.243 CET [40690] test@test DETALLE: parámetros: $1 = '1'\n2024-02-01 11:19:16.243 CET [40690] test@test LOG: sentencia: DISCARD ALL\n\nPs aux | grep 40690:\n====================\n$ ps aux | grep 40690\npostgres 40690 5.8 1.4 481204 226024 ? 
Ss 11:18 0:04\npostgres: 14/main: test test 127.0.0.1(39254) idle",
"msg_date": "Thu, 1 Feb 2024 14:40:20 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory growth using many named prepared statements, in spite of using\n DISCARD ALL afterwards."
},
{
"msg_contents": "On 01/02/2024 15:40, Daniel Blanch Bataller wrote:\n> We have attached a small C program with libpq that demonstrates this \n> issue, along with the program's output and the execution of the \"ps aux\" \n> program.\n\nThere is no DISCARD ALL command in the test program you included. I can \nsee the DISCARD ALL in the log output, however. Perhaps you included a \nwrong version of the test program?\n\nIn any case, it's notoriously hard to measure memory usage of backend \nprocesses correctly. The resident size displayed by tools like 'ps' and \n'top' includes shared memory, too, for example.\n\nI'd recommend that you run the test much longer, and observe the memory \nusage for a much longer period of time. I would expect it to eventually \nstabilize at some reasonable level.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 1 Feb 2024 15:51:43 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth using many named prepared statements, in spite of\n using DISCARD ALL afterwards."
},
{
"msg_contents": "Hi Heikki!\n\nI made some modifications as you requested:\n\nI have modified the program, now DISCARD ALL is issued within the program,\nat the end.\nI have added all headers to ps aux output so anyone can see the memory\ngrowth I am refering to.\nI now connect directly to postgrres,\nI run now 500000 prepared statements.\n\nI hope it's clearer now.\n\nThank you very much for your tips.\n\nSubject: Memory Growth Issue in \"Backend\" after Creating and Executing\nMultiple \"Named Prepared Statements\" with Different Names and Executing\nDISCARD ALL Finally.\n\nProduct: PostgreSQL 14\n\nDear Technical Support Team,\n\nWe reach out to you to report an issue related to memory growth in\nPostgreSQL backend processes when running many Prepared Statements with\ndifferent names, even though the \"DISCARD ALL\" command is executed at the\nend of the program execution.\n\nWe understand that while Prepared Statements are executed and maintained in\nthe session, memory may grow since various objects need to be stored in the\nsession, such as the parsed query, execution plans, etc.\n\nHowever, what we don't understand is why, when the DISCARD ALL command is\neventually executed, memory is not freed at all.\n\nCould you please provide us with a more detailed explanation of this\nbehavior? 
Additionally, we would like to know if there is any other\nspecific action or configuration that we can perform to address this issue\nand ensure that backend memory is reduced after executing many \"Named\nPrepared Statements\".\n\nWe appreciate your attention and look forward to your guidance and\nsuggestions for resolving this problem.\n\nWe have attached a small C program with libpq that demonstrates this issue,\nalong with the program's output and the execution of the \"ps aux\" program.\n\nBest regards,\n\nDaniel Blanch Bataller\nHoplasoftware DBA\n\nprepared_statement.c program\n============================\n\n/*\n * prepared_statement.c\n * This program demonstrates the backend memory growth using a large number\n * of prepared statements, as expected.\n * But surprisingly, after executing DISCARD ALL; memory is not recovered\nat all.\n *\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <libpq-fe.h>\n#include <unistd.h>\n\n#define ITERATIONS 500000\n#define PRINT_TIMES 5000\n#define SLEEP_AFTER_DISCARD_ALL 60\n\n#define HOST \"localhost\"\n#define PORT \"5432\"\n#define DB \"test\"\n#define USER \"test\"\n#define PASS \"test\"\n\nint main() {\n\n // Connect to the database\n const char *conninfo = \"host=\" HOST \" port=\" PORT \" dbname=\" DB \"\nuser=\" USER \" password=\" PASS \"\";\n printf(\"Connecting to %s\\n\", conninfo);\n PGconn *conn = PQconnectdb(conninfo);\n\n // Check connection result\n if (PQstatus(conn) != CONNECTION_OK) {\n fprintf(stderr, \"Connection error: %s\\n\", PQerrorMessage(conn));\n PQfinish(conn);\n exit(1);\n }\n\n // Get backend PID\n printf(\"Getting backend PID \\n\");\n PGresult *result = PQexec(conn, \"SELECT pg_backend_pid();\");\n\n // Check result status\n if (PQresultStatus(result) != PGRES_TUPLES_OK) {\n fprintf(stderr, \"Error executing query: %s\\n\",\nPQerrorMessage(conn));\n PQclear(result);\n PQfinish(conn);\n exit(EXIT_FAILURE);\n }\n\n // Get result\n char *pid = PQgetvalue(result, 0, 0);\n 
printf(\"Backend PID: %s\\n\", pid);\n\n // Main loop\n printf(\"Excecuting %d PreparedStatements\\n\", ITERATIONS);\n for (int i = 0; i <= ITERATIONS; i++) {\n\n // Prepare \"Prepared Statement\"\n char stmt_name[50];\n sprintf(stmt_name, \"ps_%d\", i);\n const char *query = \"SELECT 1 WHERE 1 = $1\";\n if (i % PRINT_TIMES == 0) printf(\"Executing PreparedStatement\n'%s'\\n\", stmt_name);\n PGresult *prepare_result = PQprepare(conn, stmt_name, query, 1,\nNULL);\n\n if (PQresultStatus(prepare_result) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"Error preparing the PreparedStatement: %s\\n\",\nPQresultErrorMessage(prepare_result));\n PQclear(prepare_result);\n PQfinish(conn);\n exit(1);\n }\n\n // Preprared Statement parameters\n const char *paramValues[] = {\"1\"};\n\n // Execute Prepared Statement\n PGresult *res = PQexecPrepared(conn, stmt_name, 1, paramValues,\nNULL, NULL, 0);\n\n // Check Prepared Statement execution result\n if (PQresultStatus(res) != PGRES_TUPLES_OK) {\n fprintf(stderr, \"Error executing query: %s\\n\",\nPQresultErrorMessage(res));\n PQclear(res);\n PQfinish(conn);\n exit(1);\n }\n\n // Get results\n int numRows = PQntuples(res);\n int numCols = PQnfields(res);\n\n for (int i = 0; i < numRows; i++) {\n for (int j = 0; j < numCols; j++) {\n PQgetvalue(res, i, j); // Do nothing\n }\n }\n\n // Free Result\n PQclear(res);\n }\n\n // Execute discard all\n printf(\"Executing DISCARD ALL;\\n\");\n PGresult *discard_result = PQexec(conn, \"DISCARD ALL;\");\n\n // Check result status\n if (PQresultStatus(discard_result) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"Error executing command: %s\\n\",\nPQerrorMessage(conn));\n PQclear(discard_result);\n PQfinish(conn);\n exit(EXIT_FAILURE);\n }\n\n // Wait to check backend growth\n printf(\"Waiting %d seconds, now its time to check backend growth!\\n\",\nSLEEP_AFTER_DISCARD_ALL);\n sleep(SLEEP_AFTER_DISCARD_ALL);\n\n // Close Connection\n PQfinish(conn);\n\n return 0;\n}\n\nprogram 
output:\n===============\nConnecting to host=localhost port=5432 dbname=test user=test password=test\nGetting backend PID\nBackend PID: 6423\nExcecuting 500000 PreparedStatements\nExecuting PreparedStatement 'ps_0'\nExecuting PreparedStatement 'ps_5000'\n\n...\n\nExecuting PreparedStatement 'ps_495000'\nExecuting PreparedStatement 'ps_500000'\nExecuting DISCARD ALL;\nWaiting 60 seconds, now its time to check backend growth!\n\nPostgres log:\n=============\n2024-02-02 08:29:22.554 CET [6423] test@test LOG: ejecutar ps_499999:\nSELECT 1 WHERE 1 = $1\n2024-02-02 08:29:22.554 CET [6423] test@test DETALLE: parámetros: $1 = '1'\n2024-02-02 08:29:22.554 CET [6423] test@test LOG: ejecutar ps_500000:\nSELECT 1 WHERE 1 = $1\n2024-02-02 08:29:22.554 CET [6423] test@test DETALLE: parámetros: $1 = '1'\n2024-02-02 08:29:22.554 CET [6423] test@test LOG: sentencia: DISCARD ALL;\n\nPs aux output (memory growth):\n==============================\n$ date; ps aux | head -n 1; ps aux | grep 6423\nvie 02 feb 2024 08:29:24 CET\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\npostgres 6423 59.1 13.1 2358692 2105708 ? Ss 08:28 0:27\npostgres: 14/main: test test 127.0.0.1(46084) idle\n\n\nEl jue, 1 feb 2024 a las 14:51, Heikki Linnakangas (<[email protected]>)\nescribió:\n\n> On 01/02/2024 15:40, Daniel Blanch Bataller wrote:\n> > We have attached a small C program with libpq that demonstrates this\n> > issue, along with the program's output and the execution of the \"ps aux\"\n> > program.\n>\n> There is no DISCARD ALL command in the test program you included. I can\n> see the DISCARD ALL in the log output, however. Perhaps you included a\n> wrong version of the test program?\n>\n> In any case, it's notoriously hard to measure memory usage of backend\n> processes correctly. 
The resident size displayed by tools like 'ps' and\n> 'top' includes shared memory, too, for example.\n>\n> I'd recommend that you run the test much longer, and observe the memory\n> usage for a much longer period of time. I would expect it to eventually\n> stabilize at some reasonable level.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n",
"msg_date": "Fri, 2 Feb 2024 08:45:47 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth using many named prepared statements, in spite of\n using DISCARD ALL afterwards."
}
] |
[
{
"msg_contents": "Hi all,\n\nI have performance issue for a pretty simple request in a PostgreSQL server\n14.10\n\n* Request\n\nSELECT p.id_parcelle\nFROM private.parcelles p\nWHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'))\n;\n\n* Table definition (extract)\n\n Table « private.parcelles »\n Colonne | Type | Collationnement |\nNULL-able | Par défaut\n-----------------------+-----------------------------+-----------------+-----------+------------\n id | integer | |\n |\n geom | geometry(MultiPolygon,2154) | |\n |\n fid | bigint | |\n |\n id_parcelle | character varying(14) | |\nnot null |\n insee_col | character varying(5) | |\n |\n nom_col | character varying | |\n |\n section | character varying(2) | |\n |\n numero | character varying(4) | |\n |\n contenance | bigint | |\n |\n epci_nom | character varying | |\n |\n dep | character varying | |\n |\n dep_nom | character varying | |\n |\nIndex :\n \"foncier_pkey\" PRIMARY KEY, btree (id_parcelle)\n \"idx_extension_eol_parcelle\" btree (extension_eol)\n \"idx_lien_hubspot_parcelels\" btree (lien_hubspot)\n \"idx_reg_parcelle\" btree (reg)\n \"idx_type_ener_parcelles\" btree (type_d_energie)\n \"parcelles_dep_idx\" btree (dep)\n \"parcelles_id_parcelle_idx\" btree (id_parcelle)\n \"parcelles_inseecol_idx\" btree (insee_col)\n \"parcelles_object_id_idx\" btree (hs_object_id)\n \"parcelles_pipelinestage_idx\" btree (hs_pipeline_stage)\n \"parcelles_synctohubspot_idx\" btree (synctohubspot)\n \"sidx_foncier_geom\" gist (geom)\n\n-> First comment, the primary Key should be on id (integer) and not on\nid_parcelle (a text code)\n\n\n* Statistiques\n\nlizmap_synerdev_carto=# SELECT * FROM pg_stat_all_tables WHERE schemaname =\n'private' AND relname = 'parcelles';\n-[ RECORD 1 ]-------+------------------------------\nrelid | 2364725\nschemaname | private\nrelname | parcelles\nseq_scan | 1891\nseq_tup_read | 552509679\nidx_scan | 19144304\nidx_tup_fetch | 38926631\nn_tup_ins | 3\nn_tup_upd | 3073182\nn_tup_del | 
0\nn_tup_hot_upd | 2996591\nn_live_tup | 92876681\nn_dead_tup | 1836882\nn_mod_since_analyze | 769313\nn_ins_since_vacuum | 3\nlast_vacuum |\nlast_autovacuum |\nlast_analyze | 2024-02-08 15:33:14.008286+01\nlast_autoanalyze |\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 1\nautoanalyze_count | 0\n\n* Plan :\nhttps://explain.dalibo.com/plan/47391e3g8c2589cf#plan/node/2\n\nIt seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\" (text\nfield), even if the corresponding number of lines for this WHERE clause is\na small subset of the entire data:\napprox 6M against 80M in total\n\nThanks in advance for any hint regarding this cumbersome query.\n\nRegards\nKimaidou\n",
"msg_date": "Fri, 9 Feb 2024 15:14:31 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple JOIN on heavy table not using expected index"
},
{
"msg_contents": "can you share result for:\n\n*explain analyze* SELECT p.id_parcelle\nFROM private.parcelles p\nWHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'))\n;\n\nOn Fri, 9 Feb 2024 at 17:14, kimaidou <[email protected]> wrote:\n\n> Hi all,\n>\n> I have performance issue for a pretty simple request in a PostgreSQL\n> server 14.10\n>\n> * Request\n>\n> SELECT p.id_parcelle\n> FROM private.parcelles p\n> WHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'))\n> ;\n>\n> * Table definition (extract)\n>\n> Table « private.parcelles »\n> Colonne | Type | Collationnement |\n> NULL-able | Par défaut\n>\n> -----------------------+-----------------------------+-----------------+-----------+------------\n> id | integer | |\n> |\n> geom | geometry(MultiPolygon,2154) | |\n> |\n> fid | bigint | |\n> |\n> id_parcelle | character varying(14) | |\n> not null |\n> insee_col | character varying(5) | |\n> |\n> nom_col | character varying | |\n> |\n> section | character varying(2) | |\n> |\n> numero | character varying(4) | |\n> |\n> contenance | bigint | |\n> |\n> epci_nom | character varying | |\n> |\n> dep | character varying | |\n> |\n> dep_nom | character varying | |\n> |\n> Index :\n> \"foncier_pkey\" PRIMARY KEY, btree (id_parcelle)\n> \"idx_extension_eol_parcelle\" btree (extension_eol)\n> \"idx_lien_hubspot_parcelels\" btree (lien_hubspot)\n> \"idx_reg_parcelle\" btree (reg)\n> \"idx_type_ener_parcelles\" btree (type_d_energie)\n> \"parcelles_dep_idx\" btree (dep)\n> \"parcelles_id_parcelle_idx\" btree (id_parcelle)\n> \"parcelles_inseecol_idx\" btree (insee_col)\n> \"parcelles_object_id_idx\" btree (hs_object_id)\n> \"parcelles_pipelinestage_idx\" btree (hs_pipeline_stage)\n> \"parcelles_synctohubspot_idx\" btree (synctohubspot)\n> \"sidx_foncier_geom\" gist (geom)\n>\n> -> First comment, the primary Key should be on id (integer) and not on\n> id_parcelle (a text code)\n>\n>\n> * Statistiques\n>\n> lizmap_synerdev_carto=# SELECT * FROM pg_stat_all_tables WHERE 
schemaname\n> = 'private' AND relname = 'parcelles';\n> -[ RECORD 1 ]-------+------------------------------\n> relid | 2364725\n> schemaname | private\n> relname | parcelles\n> seq_scan | 1891\n> seq_tup_read | 552509679\n> idx_scan | 19144304\n> idx_tup_fetch | 38926631\n> n_tup_ins | 3\n> n_tup_upd | 3073182\n> n_tup_del | 0\n> n_tup_hot_upd | 2996591\n> n_live_tup | 92876681\n> n_dead_tup | 1836882\n> n_mod_since_analyze | 769313\n> n_ins_since_vacuum | 3\n> last_vacuum |\n> last_autovacuum |\n> last_analyze | 2024-02-08 15:33:14.008286+01\n> last_autoanalyze |\n> vacuum_count | 0\n> autovacuum_count | 0\n> analyze_count | 1\n> autoanalyze_count | 0\n>\n> * Plan :\n> https://explain.dalibo.com/plan/47391e3g8c2589cf#plan/node/2\n>\n> It seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\"\n> (text field), even if the corresponding number of lines for this WHERE\n> clause is a smal subset of the entire data:\n> approx 6M against 80M in total\n>\n> Thanks in advance for any hint regarding this cumbersome query.\n>\n> Regards\n> Kimaidou\n>\n>\n>\n\n-- \nhttps://www.burcinyazici.com\n",
"msg_date": "Fri, 9 Feb 2024 17:19:54 +0300",
"msg_from": "=?UTF-8?B?QnVyw6dpbiBZYXrEsWPEsQ==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple JOIN on heavy table not using expected index"
},
{
"msg_contents": "The query plan is visible here :\nhttps://explain.dalibo.com/plan/50a719h92hde6950\n\nRegards\n\nLe vendredi 9 février 2024, Burçin Yazıcı <[email protected]> a écrit :\n\n> can you share result for:\n>\n> *explain analyze* SELECT p.id_parcelle\n> FROM private.parcelles p\n> WHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'))\n> ;\n>\n> On Fri, 9 Feb 2024 at 17:14, kimaidou <[email protected]> wrote:\n>\n>> Hi all,\n>>\n>> I have performance issue for a pretty simple request in a PostgreSQL\n>> server 14.10\n>>\n>> * Request\n>>\n>> SELECT p.id_parcelle\n>> FROM private.parcelles p\n>> WHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'))\n>> ;\n>>\n>> * Table definition (extract)\n>>\n>> Table « private.parcelles »\n>> Colonne | Type | Collationnement |\n>> NULL-able | Par défaut\n>> -----------------------+-----------------------------+------\n>> -----------+-----------+------------\n>> id | integer | |\n>> |\n>> geom | geometry(MultiPolygon,2154) | |\n>> |\n>> fid | bigint | |\n>> |\n>> id_parcelle | character varying(14) | |\n>> not null |\n>> insee_col | character varying(5) | |\n>> |\n>> nom_col | character varying | |\n>> |\n>> section | character varying(2) | |\n>> |\n>> numero | character varying(4) | |\n>> |\n>> contenance | bigint | |\n>> |\n>> epci_nom | character varying | |\n>> |\n>> dep | character varying | |\n>> |\n>> dep_nom | character varying | |\n>> |\n>> Index :\n>> \"foncier_pkey\" PRIMARY KEY, btree (id_parcelle)\n>> \"idx_extension_eol_parcelle\" btree (extension_eol)\n>> \"idx_lien_hubspot_parcelels\" btree (lien_hubspot)\n>> \"idx_reg_parcelle\" btree (reg)\n>> \"idx_type_ener_parcelles\" btree (type_d_energie)\n>> \"parcelles_dep_idx\" btree (dep)\n>> \"parcelles_id_parcelle_idx\" btree (id_parcelle)\n>> \"parcelles_inseecol_idx\" btree (insee_col)\n>> \"parcelles_object_id_idx\" btree (hs_object_id)\n>> \"parcelles_pipelinestage_idx\" btree (hs_pipeline_stage)\n>> \"parcelles_synctohubspot_idx\" btree 
(synctohubspot)\n>> \"sidx_foncier_geom\" gist (geom)\n>>\n>> -> First comment, the primary Key should be on id (integer) and not on\n>> id_parcelle (a text code)\n>>\n>>\n>> * Statistiques\n>>\n>> lizmap_synerdev_carto=# SELECT * FROM pg_stat_all_tables WHERE schemaname\n>> = 'private' AND relname = 'parcelles';\n>> -[ RECORD 1 ]-------+------------------------------\n>> relid | 2364725\n>> schemaname | private\n>> relname | parcelles\n>> seq_scan | 1891\n>> seq_tup_read | 552509679\n>> idx_scan | 19144304\n>> idx_tup_fetch | 38926631\n>> n_tup_ins | 3\n>> n_tup_upd | 3073182\n>> n_tup_del | 0\n>> n_tup_hot_upd | 2996591\n>> n_live_tup | 92876681\n>> n_dead_tup | 1836882\n>> n_mod_since_analyze | 769313\n>> n_ins_since_vacuum | 3\n>> last_vacuum |\n>> last_autovacuum |\n>> last_analyze | 2024-02-08 15:33:14.008286+01\n>> last_autoanalyze |\n>> vacuum_count | 0\n>> autovacuum_count | 0\n>> analyze_count | 1\n>> autoanalyze_count | 0\n>>\n>> * Plan :\n>> https://explain.dalibo.com/plan/47391e3g8c2589cf#plan/node/2\n>>\n>> It seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\"\n>> (text field), even if the corresponding number of lines for this WHERE\n>> clause is a smal subset of the entire data:\n>> approx 6M against 80M in total\n>>\n>> Thanks in advance for any hint regarding this cumbersome query.\n>>\n>> Regards\n>> Kimaidou\n>>\n>>\n>>\n>\n> --\n> https://www.burcinyazici.com\n>\n\nThe query plan is visible here :https://explain.dalibo.com/plan/50a719h92hde6950RegardsLe vendredi 9 février 2024, Burçin Yazıcı <[email protected]> a écrit :can you share result for:explain analyze SELECT p.id_parcelleFROM private.parcelles pWHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'));On Fri, 9 Feb 2024 at 17:14, kimaidou <[email protected]> wrote:Hi all,I have performance issue for a pretty simple request in a PostgreSQL server 14.10* RequestSELECT p.id_parcelleFROM private.parcelles p WHERE (p.dep IN ( '08', '10', '54', '57', '67', '68'));* Table 
definition (extract) Table « private.parcelles » Colonne | Type | Collationnement | NULL-able | Par défaut -----------------------+-----------------------------+-----------------+-----------+------------ id | integer | | | geom | geometry(MultiPolygon,2154) | | | fid | bigint | | | id_parcelle | character varying(14) | | not null | insee_col | character varying(5) | | | nom_col | character varying | | | section | character varying(2) | | | numero | character varying(4) | | | contenance | bigint | | | epci_nom | character varying | | | dep | character varying | | | dep_nom | character varying | | | Index : \"foncier_pkey\" PRIMARY KEY, btree (id_parcelle) \"idx_extension_eol_parcelle\" btree (extension_eol) \"idx_lien_hubspot_parcelels\" btree (lien_hubspot) \"idx_reg_parcelle\" btree (reg) \"idx_type_ener_parcelles\" btree (type_d_energie) \"parcelles_dep_idx\" btree (dep) \"parcelles_id_parcelle_idx\" btree (id_parcelle) \"parcelles_inseecol_idx\" btree (insee_col) \"parcelles_object_id_idx\" btree (hs_object_id) \"parcelles_pipelinestage_idx\" btree (hs_pipeline_stage) \"parcelles_synctohubspot_idx\" btree (synctohubspot) \"sidx_foncier_geom\" gist (geom)-> First comment, the primary Key should be on id (integer) and not on id_parcelle (a text code)* Statistiqueslizmap_synerdev_carto=# SELECT * FROM pg_stat_all_tables WHERE schemaname = 'private' AND relname = 'parcelles';-[ RECORD 1 ]-------+------------------------------relid | 2364725schemaname | privaterelname | parcellesseq_scan | 1891seq_tup_read | 552509679idx_scan | 19144304idx_tup_fetch | 38926631n_tup_ins | 3n_tup_upd | 3073182n_tup_del | 0n_tup_hot_upd | 2996591n_live_tup | 92876681n_dead_tup | 1836882n_mod_since_analyze | 769313n_ins_since_vacuum | 3last_vacuum | last_autovacuum | last_analyze | 2024-02-08 15:33:14.008286+01last_autoanalyze | vacuum_count | 0autovacuum_count | 0analyze_count | 1autoanalyze_count | 0* Plan : https://explain.dalibo.com/plan/47391e3g8c2589cf#plan/node/2It seems 
PostgreSQL does not use the index parcelles_dep_idx on \"dep\" (text field), even if the corresponding number of lines for this WHERE clause is a smal subset of the entire data:approx 6M against 80M in totalThanks in advance for any hint regarding this cumbersome query.RegardsKimaidou\n\n-- https://www.burcinyazici.com",
"msg_date": "Fri, 9 Feb 2024 16:07:13 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple JOIN on heavy table not using expected index"
},
{
"msg_contents": "kimaidou <[email protected]> writes:\n> It seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\" (text\n> field), even if the corresponding number of lines for this WHERE clause is\n> a smal subset of the entire data:\n> approx 6M against 80M in total\n\n6M out of 80M rows is not a \"small subset\". Typically I'd expect\nthe planner to use an index-based scan for up to 1 or 2 percent of\nthe table. Beyond that, you're going to be touching most pages\nof the table anyway.\n\nYou can try reducing random_page_cost to favor indexscans, but\nyou might not find that the query gets any faster.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Feb 2024 10:12:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple JOIN on heavy table not using expected index"
},
{
"msg_contents": "Tom, thanks a lot for your suggestion.\nIndeed, setting random_page_cost to 2 instead of 4 improves this query a\nlot !\n\nSee the new plan :\nhttps://explain.dalibo.com/plan/h924389529e11244\n\n30 seconds VS 17 minutes before\n\nCheers\nMichaël\n\nLe vendredi 9 février 2024, Tom Lane <[email protected]> a écrit :\n\n> kimaidou <[email protected]> writes:\n> > It seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\"\n> (text\n> > field), even if the corresponding number of lines for this WHERE clause\n> is\n> > a smal subset of the entire data:\n> > approx 6M against 80M in total\n>\n> 6M out of 80M rows is not a \"small subset\". Typically I'd expect\n> the planner to use an index-based scan for up to 1 or 2 percent of\n> the table. Beyond that, you're going to be touching most pages\n> of the table anyway.\n>\n> You can try reducing random_page_cost to favor indexscans, but\n> you might not find that the query gets any faster.\n>\n> regards, tom lane\n>\n\nTom, thanks a lot for your suggestion.Indeed, setting random_page_cost to 2 instead of 4 improves this query a lot !See the new plan :https://explain.dalibo.com/plan/h924389529e1124430 seconds VS 17 minutes beforeCheersMichaëlLe vendredi 9 février 2024, Tom Lane <[email protected]> a écrit :kimaidou <[email protected]> writes:\n> It seems PostgreSQL does not use the index parcelles_dep_idx on \"dep\" (text\n> field), even if the corresponding number of lines for this WHERE clause is\n> a smal subset of the entire data:\n> approx 6M against 80M in total\n\n6M out of 80M rows is not a \"small subset\". Typically I'd expect\nthe planner to use an index-based scan for up to 1 or 2 percent of\nthe table. Beyond that, you're going to be touching most pages\nof the table anyway.\n\nYou can try reducing random_page_cost to favor indexscans, but\nyou might not find that the query gets any faster.\n\n regards, tom lane",
"msg_date": "Fri, 9 Feb 2024 16:44:00 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple JOIN on heavy table not using expected index"
}
] |
[
{
"msg_contents": "Hello,\n\n \n\nPostgreSQL doesn't use 'Index Only Scan' if there is an expression in index.\n\n \n\nThe documentation says that PostgreSQL's planner considers a query to be\npotentially executable by index-only scan only when all columns needed by\nthe query are available from the index. \n\nI think an example on\nhttps://www.postgresql.org/docs/16/indexes-index-only-scans.html :\n\n \n\nSELECT f(x) FROM tab WHERE f(x) < 1;\n\n \n\nis a bit confusing. Even the following query does not use 'Index Only Scan'\n\n \n\nSELECT 1 FROM tab WHERE f(x) < 1;\n\n \n\nDemonstration:\n\n---------------------------\n\ndrop table if exists test;\n\n \n\ncreate table test(s text);\n\ncreate index ix_test_upper on test (upper(s));\n\ncreate index ix_test_normal on test (s);\n\n \n\ninsert into test (s)\n\nselect 'Item' || t.i\n\nfrom pg_catalog.generate_series(1, 100000, 1) t(i);\n\n \n\nanalyze verbose \"test\";\n\n \n\nexplain select 1 from test where s = 'Item123';\n\nexplain select 1 from test where upper(s) = upper('Item123');\n\n--------------------------\n\nQuery plan 1:\n\nIndex Only Scan using ix_test_normal on test (cost=0.42..8.44 rows=1\nwidth=4)\n\n Index Cond: (s = 'Item123'::text)\n\n \n\nQuery plan 2 (SHOULD BE 'Index Only Scan'):\n\nIndex Scan using ix_test_upper on test (cost=0.42..8.44 rows=1 width=4)\n\n Index Cond: (upper(s) = 'ITEM123'::text)\n\n------------------------ \n\n \n\nIf I add 's' as included column to ix_test_upper the plan does use 'Index\nOnly Scan'. That looks strange to me: there is no 's' in SELECT-clause, only\nin WHERE-clause in the form of 'upper(s)' and this is why ix_test_upper is\nchoosen by the planner.\n\n \n\nThanks,\n\nPavel\n\n\nHello, PostgreSQL doesn't use 'Index Only Scan' if there is an expression in index. The documentation says that PostgreSQL's planner considers a query to be potentially executable by index-only scan only when all columns needed by the query are available from the index. 
I think an example on https://www.postgresql.org/docs/16/indexes-index-only-scans.html : SELECT f(x) FROM tab WHERE f(x) < 1; is a bit confusing. Even the following query does not use 'Index Only Scan' SELECT 1 FROM tab WHERE f(x) < 1; Demonstration:---------------------------drop table if exists test; create table test(s text);create index ix_test_upper on test (upper(s));create index ix_test_normal on test (s); insert into test (s)select 'Item' || t.ifrom pg_catalog.generate_series(1, 100000, 1) t(i); analyze verbose \"test\"; explain select 1 from test where s = 'Item123';explain select 1 from test where upper(s) = upper('Item123');--------------------------Query plan 1:Index Only Scan using ix_test_normal on test (cost=0.42..8.44 rows=1 width=4) Index Cond: (s = 'Item123'::text) Query plan 2 (SHOULD BE 'Index Only Scan'):Index Scan using ix_test_upper on test (cost=0.42..8.44 rows=1 width=4) Index Cond: (upper(s) = 'ITEM123'::text)------------------------ If I add 's' as included column to ix_test_upper the plan does use 'Index Only Scan'. That looks strange to me: there is no 's' in SELECT-clause, only in WHERE-clause in the form of 'upper(s)' and this is why ix_test_upper is choosen by the planner. Thanks,Pavel",
"msg_date": "Thu, 15 Feb 2024 17:37:45 +0300",
"msg_from": "\"Pavel Kulakov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL doesn't use index-only scan if there is an expression in\n index"
},
{
"msg_contents": "On Thu, 2024-02-15 at 17:37 +0300, Pavel Kulakov wrote:\n> Hello,\n> \n> PostgreSQL doesn't use 'Index Only Scan' if there is an expression in index.\n> \n> The documentation says that PostgreSQL's planner considers a query to be potentially\n> executable by index-only scan only when all columns needed by the query are available from the index. \n> I think an example on https://www.postgresql.org/docs/16/indexes-index-only-scans.html :\n> \n> SELECT f(x) FROM tab WHERE f(x) < 1;\n> \n> is a bit confusing. Even the following query does not use 'Index Only Scan'\n> \n> SELECT 1 FROM tab WHERE f(x) < 1;\n> \n> Demonstration:\n> ---------------------------\n> drop table if exists test;\n> \n> create table test(s text);\n> create index ix_test_upper on test (upper(s));\n> create index ix_test_normal on test (s);\n> \n> insert into test (s)\n> select 'Item' || t.i\n> from pg_catalog.generate_series(1, 100000, 1) t(i);\n> \n> analyze verbose \"test\";\n> \n> explain select 1 from test where s = 'Item123';\n> explain select 1 from test where upper(s) = upper('Item123');\n> --------------------------\n> Query plan 1:\n> Index Only Scan using ix_test_normal on test (cost=0.42..8.44 rows=1 width=4)\n> Index Cond: (s = 'Item123'::text)\n> \n> Query plan 2 (SHOULD BE 'Index Only Scan'):\n> Index Scan using ix_test_upper on test (cost=0.42..8.44 rows=1 width=4)\n> Index Cond: (upper(s) = 'ITEM123'::text)\n> ------------------------ \n> \n> If I add 's' as included column to ix_test_upper the plan does use 'Index Only Scan'.\n> That looks strange to me: there is no 's' in SELECT-clause, only in WHERE-clause in\n> the form of 'upper(s)' and this is why ix_test_upper is choosen by the planner.\n\nYou need to create the index like this:\n\n CREATE INDEX ix_test_upper ON test (upper(s)) INCLUDE (s);\n\nSee https://www.postgresql.org/docs/current/indexes-index-only-scans.html:\n\n \"In principle, index-only scans can be used with expression indexes.\n For example, 
given an index on f(x) where x is a table column, it\n should be possible to execute\n\n SELECT f(x) FROM tab WHERE f(x) < 1;\n\n as an index-only scan; and this is very attractive if f() is an\n expensive-to-compute function. However, PostgreSQL's planner is currently\n not very smart about such cases. It considers a query to be potentially\n executable by index-only scan only when all columns needed by the query\n are available from the index. In this example, x is not needed except in\n the context f(x), but the planner does not notice that and concludes that\n an index-only scan is not possible. If an index-only scan seems sufficiently\n worthwhile, this can be worked around by adding x as an included column,\n for example\n\n CREATE INDEX tab_f_x ON tab (f(x)) INCLUDE (x);\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 15 Feb 2024 16:01:07 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL doesn't use index-only scan if there is an\n expression in index"
}
] |
[
{
"msg_contents": "Hi\n\nWe have a master code block which starts small, tiny operations that create a table and inserts data into that table in many threads.\n\nNothing is done the master code, we follow an Orchestration pattern , where master just sends a message about what to do and that is done in other database connections not related connections used by master code.\n\nIn the master code I add sleep after the CRUD operations are done to make it easier to test. The test table will not change in the rest of this master code (in real life it happens more in the master code off course) .\n\nThen we start testing VACUUM and very simple SQL testing in another window.\n\nWe can now show we have performance of \"3343.794 ms\" and not \"0.123 ms\", which is what we get when we are able to remove dead rows and run a new analyze.\n\nThe problem is that as long as the master code is active, we cannot remove alle dead rows and that what seems to be killing the performance.\n\nWith active I mean in hanging on pg_sleep and remember that this master has not created the test table or inserted any data in this test table it self.\n\nIs the expected behavior ?\n\nIs possible to around this problem in any way ?\n\nIn this note you find a detailed description and a simple standalone test script https://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/issues/67#note_1779300212\n\nI have tested on \"PostgreSQL 14.10 (Homebrew) on aarch64-apple-darwin23.0.0, compiled by Apple clang version 15.0.0 (clang-1500.0.40.1), 64-bit\" and \"PostgreSQL 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\"\n\nThanks .\n\nLars\n\n\n\n\n\n\n\n\nHi\n\n\nWe have a master code block which starts small, tiny operations\n that create a table and inserts data into that table in many threads. 
\n\n\nNothing is done the master code, we follow\n an Orchestration pattern , where master just sends a message about what to do and that is done in other database connections not related connections used by master code.\n\n\nIn the master code I add sleep after the\n CRUD operations are done to make it easier to test. The test table will not change in the rest of this master code (in real life it happens more in the master code off course) .\n\n\nThen we start testing VACUUM and very simple\n SQL testing in another window.\n\n\nWe can now show we have performance of\n \"3343.794 ms\" and not \"0.123 ms\", which is what we get when we are able to remove dead rows and run a new analyze.\n\n\nThe problem is that as long as the master\n code is active, we cannot remove alle dead rows and that what seems to be killing the performance.\n\n\nWith active I mean in hanging on pg_sleep\n and remember that this master has not created the test table or inserted any data in this test table it self.\n\n\nIs the expected behavior ?\n\n\nIs possible to around this problem in any\n way ?\n\n\nIn this note you find a detailed description\n and a simple standalone test script \nhttps://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/issues/67#note_1779300212\n\n\nI have tested on \"PostgreSQL 14.10 (Homebrew) on aarch64-apple-darwin23.0.0, compiled\n by Apple clang version 15.0.0 (clang-1500.0.40.1), 64-bit\" and \"PostgreSQL 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\"\n\n\nThanks .\n\n\n\nLars",
"msg_date": "Mon, 19 Feb 2024 16:14:25 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"not related\" code blocks for removal of dead rows when using vacuum\n and this kills the performance"
},
{
"msg_contents": "On Mon, 2024-02-19 at 16:14 +0000, Lars Aksel Opsahl wrote:\n> Then we start testing VACUUM and very simple SQL testing in another window.\n> \n> We can now show we have performance of \"3343.794 ms\" and not \"0.123 ms\", which\n> is what we get when we are able to remove dead rows and run a new analyze.\n> \n> The problem is that as long as the master code is active, we cannot remove\n> alle dead rows and that what seems to be killing the performance.\n> \n> With active I mean in hanging on pg_sleep and remember that this master has\n> not created the test table or inserted any data in this test table it self.\n> \n> Is the expected behavior ?\n\nIt is not entirely clear what you are doing, but it seems like you are holding\na database transaction open, and yes, then it is expected behavior that\nVACUUM cannot clean up dead rows in the table.\n\nMake sure that your database transactions are short.\nDon't use table or row locks to synchronize application threads.\nWhat you could use to synchronize your application threads are advisory locks,\nthey are not tied to a database transaction.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 19 Feb 2024 18:46:19 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"not related\" code blocks for removal of dead rows when using\n vacuum and this kills the performance"
},
{
    "msg_contents": "From: Laurenz Albe <[email protected]>\n>\n>It is not entirely clear what you are doing, but it seems like you are holding\n>a database transaction open, and yes, then it is expected behavior that\n>VACUUM cannot clean up dead rows in the table.\n>\n>Make sure that your database transactions are short.\n>Don't use table or row locks to synchronize application threads.\n>What you could use to synchronize your application threads are advisory locks,\n>they are not tied to a database transaction.\n>\n\nHi\n\nThe details are here: https://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/issues/67#note_1779300212, and\nhere is also a ref. to the test script that shows the problem: https://gitlab.com/nibioopensource/resolve-overlap-and-gap/uploads/9a0988b50f05386ec9d91d6600bb03ec/test_issue_67.sql\n\nI am not taking any locks; I just do plain CRUD operations.\n\nThe key is that the master code is not creating any tables or inserting rows; that is done by many short operations, as you suggested.\n\nBut even if the master code is not doing any operations against the test table, it blocks removal of dead rows.\n\nIf this is expected behavior, it means that any long-running transaction will block removal of dead rows for all visible tables in the database, and that seems like a problem or weakness of PostgreSQL.\n\nWhile writing this I was thinking that maybe I can get around the problem by making the table not visible to the master code, but that makes it very complicated for me.\n\nThanks.\n\nLars",
"msg_date": "Mon, 19 Feb 2024 18:36:55 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"not related\" code blocks for removal of dead rows when using\n vacuum and this kills the performance"
},
{
    "msg_contents": "From: Lars Aksel Opsahl <[email protected]>\n>From: Laurenz Albe <[email protected]>\n>>\n>>It is not entirely clear what you are doing, but it seems like you are holding\n>>a database transaction open, and yes, then it is expected behavior that\n>>VACUUM cannot clean up dead rows in the table.\n>>\n>>Make sure that your database transactions are short.\n>>Don't use table or row locks to synchronize application threads.\n>>What you could use to synchronize your application threads are advisory locks,\n>>they are not tied to a database transaction.\n>>\n>\n>Hi\n>\n>The details are here at https://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/issues/67#note_1779300212 and\n>here is also a ref. to this test script that shows problem https://gitlab.com/nibioopensource/resolve-overlap-and-gap/uploads/9a0988b50f05386ec9d91d6600bb03ec/test_issue_67.sql\n>\n>I am not doing any locks I just do plain CRUD operations .\n>\n>The key is that the master code is not creating any table or insert rows that is done by many short operations as you suggested.\n>\n>But even if the master code is not doing any operations against the test table it's blocking removal of dead rows.\n>\n>If this expected behavior, it's means that any long running transactions will block for removal of any dead rows for all visible tables in the database and that seems like problem or weakness of Postgresql.\n>\n>While writing this I now was thinking maybe I can get around problem by not making the table not visible by the master code but that makes it very complicated for mee.\n>\n>Thanks.\n>\n>Lars\n\nHi\n\nI now tested running the master (Orchestration) code as user joe.\n\nIn the master code I connect back as user lop, create the test table test_null, and insert data in many tiny operations.\n\nUser joe, who has the long-running operation, does not know anything about table test_null and does not have any grants on that table.\n\nThe table test_null is not granted to public either.\n\nThe problem is the same: the long-running transaction belonging to joe will kill the performance on a table which user joe does not have any access to or know anything about.\n\nIf this is expected behavior, it means that any user on the database who writes a long-running SQL statement that does not even insert any data can kill performance for any other user in the database.\n\nSo applications like QGIS, which seem to keep connections open for a while, can then also kill the performance for any other user in the database.\n\nHaving PostgreSQL work like this also makes it very difficult to debug performance issues, because a problem may just be a side effect of an unrelated SQL statement.\n\nSo I hope this is not the case and that I have done something wrong, or that there are some parameters that can be adjusted to get around this problem.\n\nThanks\n\nLars",
"msg_date": "Tue, 20 Feb 2024 05:46:39 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"not related\" code blocks for removal of dead rows when using\n vacuum and this kills the performance"
},
{
"msg_contents": "On Tue, 2024-02-20 at 05:46 +0000, Lars Aksel Opsahl wrote:\n> If this is expected behavior it means that any user on the database that writes\n> a long running sql that does not even insert any data can kill performance for\n> any other user in the database.\n\nYes, that is the case. A long running query will hold a snapshot, and no data\nvisible in that snapshot can be deleted.\n\nThat can cause bloat, which can impact performance.\n\n> So applications like QGIS who seems to keep open connections for a while can\n> then also kill the performance for any other user in the data.\n\nNo, that is not a problem. Keeping *connections* open is a good thing. It is\nkeeping data modifying transactions, cursors or long-running queries open\nthat constitutes a problem.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 20 Feb 2024 08:29:14 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"not related\" code blocks for removal of dead rows when using\n vacuum and this kills the performance"
},
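The snapshot rule stated in the reply above (a long-running query holds a snapshot, and no data visible in that snapshot can be deleted, no matter which tables the query touches) can be sketched as a toy model. This is a hypothetical simplification with made-up transaction IDs, not PostgreSQL's actual visibility code:

```python
# Minimal model of why a long-running snapshot blocks dead-row removal.
# A row version deleted by transaction `deleted_xid` stays "recently dead"
# as long as some open snapshot could still see it.

def vacuum_horizon(open_snapshot_xmins, next_xid):
    """Oldest transaction ID that might still be visible to an open snapshot."""
    return min(open_snapshot_xmins, default=next_xid)

def removable(deleted_xid, open_snapshot_xmins, next_xid):
    """True if VACUUM may reclaim a row version deleted by deleted_xid."""
    return deleted_xid < vacuum_horizon(open_snapshot_xmins, next_xid)

# The "master" session opened its snapshot at xid 100 and is sleeping.
# Worker sessions deleted rows at xid 150; VACUUM cannot reclaim them yet.
assert not removable(150, open_snapshot_xmins=[100], next_xid=200)

# Once the master commits, its snapshot goes away and the rows are reclaimable.
assert removable(150, open_snapshot_xmins=[], next_xid=200)
```

The key point mirrored here is that the horizon is computed across the whole database, which is why it does not matter that the long-running session never touched the test table.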
{
    "msg_contents": "From: Laurenz Albe <[email protected]>\r\nSent: Tuesday, February 20, 2024 8:29 AM\r\n>Re: \"not related\" code blocks for removal of dead rows when using vacuum and this kills the performance\r\n>Laurenz Albe <[email protected]>\r\n>Lars Aksel Opsahl;\r\n>[email protected]\r\n>On Tue, 2024-02-20 at 05:46 +0000, Lars Aksel Opsahl wrote:\r\n>> If this is expected behavior it means that any user on the database that writes\r\n>> a long running sql that does not even insert any data can kill performance for\r\n>> any other user in the database.\r\n>\r\n>Yes, that is the case. A long running query will hold a snapshot, and no data\r\n>visible in that snapshot can be deleted.\r\n>\r\n>That can cause bloat, which can impact performance.\r\n>\r\n\r\nHi\r\n\r\nThanks for the chat; it seems I finally found a solution that works for this test code.\r\n\r\nAdding commits, like here /uploads/031b350bc1f65752b013ee4ae5ae64a3/test_issue_67_with_commit.sql, to the master code, even when there is nothing to commit, seems to solve the problem. That makes sense based on what you say, because then the master code gets a new visible snapshot and releases the old snapshot.\r\n\r\nThe reason why I like to use psql as the master/Orchestration code, and not C/Python/Bash and so on, is to make it simpler to use/code and test.\r\n\r\nLars",
"msg_date": "Tue, 20 Feb 2024 10:46:15 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"not related\" code blocks for removal of dead rows when using\n vacuum and this kills the performance"
},
{
    "msg_contents": "I've found multiple postings out there saying you can query pg_stat_all_indexes and look at idx_scan to know if an index has been used by queries. I want to be 100% sure I can rely on that table/column to know if an index has never been used.\n\nI queried that table for a specific index and idx_scan is 0. I queried pg_statio_all_indexes and can see idx_blks_read and idx_blks_hit have numbers in there. If the index is not being used, then what is causing idx_blks_read and idx_blks_hit to increase over time? I'm wondering if those increase due to DML on the table. Could anyone please confirm that I can rely on pg_stat_all_indexes.idx_scan to know if queries are using an index, and that the increases over time in idx_blks_read and idx_blks_hit in pg_statio_all_indexes would be from DML (or possibly vacuum or other things)?\n\nThanks in advance.\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html",
"msg_date": "Wed, 7 Aug 2024 17:06:19 +0000",
"msg_from": "\"Dirschel, Steve\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres index usage"
},
{
    "msg_contents": "Didn't mention: this is Aurora Postgres version 14.6, if that matters for my question. Thanks\n\nFrom: Dirschel, Steve <[email protected]>\nSent: Wednesday, August 7, 2024 12:06 PM\nTo: [email protected]\nSubject: Postgres index usage\n\nI've found multiple postings out there saying you can query pg_stat_all_indexes and look at idx_scan to know if an index has been used by queries. I want to be 100% sure I can rely on that table/column to know if an index has never been used.\n\nI queried that table for a specific index and idx_scan is 0. I queried pg_statio_all_indexes and can see idx_blks_read and idx_blks_hit have numbers in there. If the index is not being used then what it causing idx_blks_read and idx_blks_hit to increase over time? I'm wondering if those increase due to DML on the table. Could anyone please confirm I can rely on pg_stat_all_index.idx_scan to know if queries are using an index and the increases over time in idx_blks_read and idx_blks_hit in pg_statio_all_indexes would be from DML (or possibly vacuum or other things)?\n\nThanks in advance.\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html",
"msg_date": "Wed, 7 Aug 2024 17:23:35 +0000",
"msg_from": "\"Dirschel, Steve\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Postgres index usage"
},
{
"msg_contents": "\"Dirschel, Steve\" <[email protected]> writes:\n> I queried that table for a specific index and idx_scan is 0. I\n> queried pg_statio_all_indexes and can see idx_blks_read and\n> idx_blks_hit have numbers in there. If the index is not being used\n> then what it causing idx_blks_read and idx_blks_hit to increase over\n> time? I'm wondering if those increase due to DML on the table.\n\nYes, I think that's the case: index updates will cause the per-block\ncounters to advance, but only an index search will increment idx_scan.\n\nI'd recommend testing this theory for yourself in an idle database,\nthough. It's not impossible that Aurora works differently from\ncommunity PG.\n\nAnother thing to keep in mind is that in versions before PG 15,\nthe statistics subsystem is (by design) unreliable and might sometimes\nmiss events under load. This effect isn't big enough to invalidate\na conclusion that an index with idx_scan = 0 isn't being used, but\nit's something to keep in mind when running small tests that are\nonly expected to record a few events.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2024 13:36:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres index usage"
},
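The distinction drawn in the reply above (index maintenance during DML advances the per-block counters, while only an index search increments idx_scan) can be made concrete with a toy counter model. This is an illustrative sketch, not the actual PostgreSQL statistics collector:

```python
# Toy model of pg_stat_all_indexes.idx_scan vs. the pg_statio_all_indexes
# block counters: block traffic happens on both searches and maintenance,
# but only a search counts as a scan.
class IndexStats:
    def __init__(self):
        self.idx_scan = 0       # incremented only by index searches
        self.idx_blks_read = 0  # incremented by any block access (cache miss)
        self.idx_blks_hit = 0   # incremented by any block access (cache hit)

    def _touch_blocks(self, reads, hits):
        self.idx_blks_read += reads
        self.idx_blks_hit += hits

    def dml_update(self):
        # INSERT/UPDATE/DELETE maintains the index: block traffic, no scan.
        self._touch_blocks(reads=1, hits=2)

    def index_search(self):
        self.idx_scan += 1
        self._touch_blocks(reads=0, hits=3)

s = IndexStats()
for _ in range(1000):
    s.dml_update()
# A write-busy index shows block activity yet has never served a query.
assert s.idx_scan == 0 and s.idx_blks_read > 0
```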
{
    "msg_contents": "On Wed, Aug 7, 2024 at 1:06 PM Dirschel, Steve <\[email protected]> wrote:\n\n> I’ve found multiple postings out there saying you can query\n> pg_stat_all_indexes and look at idx_scan to know if an index has been used\n> by queries. I want to be 100% sure I can rely on that table/column to know\n> if an index has never been used.\n>\n\nAlso make sure you check pg_stat_all_indexes on your replicas as well. Each\nhas their own independent idx_scan counters. So while your primary is not\nusing a particular index, one or more of your replicas might be.\n\nCheers,\nGreg",
"msg_date": "Wed, 7 Aug 2024 14:18:46 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres index usage"
}
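The point in the last reply means an "unused" verdict has to aggregate idx_scan across the primary and every replica, each of which keeps its own counters. A small sketch of that check (the node names and counts are made up):

```python
# An index is only a drop candidate if idx_scan is zero on *every* node,
# since each primary/replica keeps independent pg_stat_all_indexes counters.
def unused_everywhere(idx_scan_by_node):
    return all(count == 0 for count in idx_scan_by_node.values())

stats = {"primary": 0, "replica1": 0, "replica2": 42}
assert not unused_everywhere(stats)  # a replica is still using the index

assert unused_everywhere({"primary": 0, "replica1": 0})
```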
]
[
{
    "msg_contents": "Hi experts,\n we have a SQL statement from PostgreSQL JDBC; the primary key is based on\n(bigint, varchar, bigint), but in the SQL plan the parameters are converted to ::numeric, so\nthe plan uses only the one \"varchar\" key column and applies the other 2 bigint keys\nas filters. What is the cause of that?\n\n Table \"test.xxxxxx\"\n Column | Type | Collation | Nullable | Default\n------------------+--------------------------------+-----------+----------+---------\n xxxid | bigint | | not null |\n paramname | character varying(512) | | not null |\n paramvalue | character varying(1536) | | |\n sssid | bigint | | not null |\n createtime | timestamp(0) without time zone | | |\n lastmodifiedtime | timestamp(0) without time zone | | |\n mmmuuid | character varying(32) | | |\nIndexes:\n \"pk_xxxxxx\" PRIMARY KEY, btree (xxxid, paramname, sssid)\n \"idx_xxxxxx_mmmuuid\" btree (sssid, mmmuuid, paramname)\n\n\nSET extra_float_digits = 3\n\nduration: 7086.014 ms plan:\n Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID,\nCREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID =\n$1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )\n Index Scan using pk_xxxxxx on test.xxxxxx (cost=0.57..2065259.09\nrows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)\n Output: xxxid, paramname, paramvalue, sssid, createtime,\nlastmodifiedtime, mmmuuid\n Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text) <<<\nonly one key is used instead of all primary key columns\n Filter: (((xxxxxx.xxxid)::numeric = '18174044'::numeric) AND\n((xxxxxx.sssid)::numeric = '253352'::numeric)) <<< it's bigint but\nconverted to numeric\n Buffers: shared read=1063470\n I/O Timings: read=4402.029\n\nIt's from JDBC; we saw this JDBC driver set extra_float_digits = 3\nbefore running the SQL. Does that make the planner convert bigint to numeric?\n\nThanks,\n\nJames",
"msg_date": "Fri, 23 Feb 2024 15:20:40 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "sql statement not using all primary key values and poor performance"
},
{
    "msg_contents": "Hi experts,\n we have a SQL statement from PostgreSQL JDBC; the primary key is based on\n(bigint, varchar, bigint), but in the SQL plan the parameters are converted to ::numeric, so\nthe plan uses only the one \"varchar\" key column and applies the other 2 bigint keys\nas filters. What is the cause of that?\n\n Table \"test.xxxxxx\"\n Column | Type | Collation | Nullable | Default\n------------------+--------------------------------+-----------+----------+---------\n xxxid | bigint | | not null |\n paramname | character varying(512) | | not null |\n paramvalue | character varying(1536) | | |\n sssid | bigint | | not null |\n createtime | timestamp(0) without time zone | | |\n lastmodifiedtime | timestamp(0) without time zone | | |\n mmmuuid | character varying(32) | | |\nIndexes:\n \"pk_xxxxxx\" PRIMARY KEY, btree (xxxid, paramname, sssid)\n \"idx_xxxxxx_mmmuuid\" btree (sssid, mmmuuid, paramname)\n\n\nSET extra_float_digits = 3\n\nduration: 7086.014 ms plan:\n Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID,\nCREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID =\n$1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )\n Index Scan using pk_xxxxxx on test.xxxxxx (cost=0.57..2065259.09\nrows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)\n Output: xxxid, paramname, paramvalue, sssid, createtime,\nlastmodifiedtime, mmmuuid\n Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text) <<<\nonly one key is used instead of all primary key columns\n Filter: (((xxxxxx.xxxid)::numeric = '18174044'::numeric) AND\n((xxxxxx.sssid)::numeric = '253352'::numeric)) <<< it's bigint but\nconverted to numeric\n Buffers: shared read=1063470\n I/O Timings: read=4402.029\n\nIt's from JDBC; we saw this JDBC driver set extra_float_digits = 3\nbefore running the SQL. Does that make the planner convert bigint to numeric? This is PostgreSQL 14.10.\nHow can we avoid this conversion and make the planner use all primary key columns?\n\nThanks,\n\nJames\n\nJames Pang <[email protected]> wrote on Fri, Feb 23, 2024 at 3:20 PM:\n\n> Hi experts,\n> we have a SQL from Postgresql JDBC, primary is based on\n> (bigint,varchar2,bigint), but from sql plan, it convert to ::numeric so\n> the plan just use one \"varchar\" key column and use the other 2 bigint keys\n> as filters. what's the cause about that ?\n>\n> Table \"test.xxxxxx\"\n> Column | Type | Collation | Nullable | Default\n> ------------------+--------------------------------+-----------+----------+---------\n> xxxid | bigint | | not null |\n> paramname | character varying(512) | | not null |\n> paramvalue | character varying(1536) | | |\n> sssid | bigint | | not null |\n> createtime | timestamp(0) without time zone | | |\n> lastmodifiedtime | timestamp(0) without time zone | | |\n> mmmuuid | character varying(32) | | |\n> Indexes:\n> \"pk_xxxxxx\" PRIMARY KEY, btree (xxxid, paramname, sssid)\n> \"idx_xxxxxx_mmmuuid\" btree (sssid, mmmuuid, paramname)\n>\n>\n> SET extra_float_digits = 3\n>\n>\n> duration: 7086.014 ms plan:\n> Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID,\n> CREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID =\n> $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )\n> Index Scan using pk_xxxxxx on test.xxxxxx (cost=0.57..2065259.09\n> rows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)\n> Output: confid, paramname, paramvalue, sssid, createtime,\n> lastmodifiedtime, mmmuuid\n> Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text)\n> <<< just use only one key instead all primary keys.\n> Filter: (((xxxxxx.xxxid)::numeric = '18174044'::numeric) AND\n> ((xxxxxx.sssid)::numeric = '253352'::numeric)) <<< it's bigint but\n> converted to numeric\n> Buffers: shared read=1063470\n> I/O Timings: read=4402.029\n>\n> it's from JDBC, we saw this JDBC driver try to set extra_float_digits = 3\n> before running the SQL ,does that make planner to convert bigint to numeric\n> ?\n>\n> Thanks,\n>\n> James\n>",
"msg_date": "Fri, 23 Feb 2024 15:25:36 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sql statement not using all primary key values and poor\n performance"
},
{
"msg_contents": "On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:\n> we have a SQL from Postgresql JDBC, primary is based on (bigint,varchar2,bigint),\n> but from sql plan, it convert to ::numeric so the plan just use one \"varchar\"\n> key column and use the other 2 bigint keys as filters. what's the cause about that ? \n> \n> Table \"test.xxxxxx\"\n> Column | Type | Collation | Nullable | Default\n> ------------------+--------------------------------+-----------+----------+---------\n> xxxid | bigint | | not null |\n> paramname | character varying(512) | | not null |\n> paramvalue | character varying(1536) | | |\n> sssid | bigint | | not null |\n> createtime | timestamp(0) without time zone | | |\n> lastmodifiedtime | timestamp(0) without time zone | | |\n> mmmuuid | character varying(32) | | |\n> Indexes:\n> \"pk_xxxxxx\" PRIMARY KEY, btree (xxxid, paramname, sssid)\n> \"idx_xxxxxx_mmmuuid\" btree (sssid, mmmuuid, paramname)\n> \n> SET extra_float_digits = 3\n> \n> duration: 7086.014 ms plan:\n> Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID, CREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID = $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )\n> Index Scan using pk_xxxxxx on test.xxxxxx (cost=0.57..2065259.09 rows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)\n> Output: confid, paramname, paramvalue, sssid, createtime, lastmodifiedtime, mmmuuid\n> Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text) <<< just use only one key instead all primary keys.\n> Filter: (((xxxxxx.xxxid)::numeric = '18174044'::numeric) AND ((xxxxxx.sssid)::numeric = '253352'::numeric)) <<< it's bigint but converted to numeric \n> Buffers: shared read=1063470\n> I/O Timings: read=4402.029\n> \n> it's from JDBC, we saw this JDBC driver try to set extra_float_digits = 3 before\n> running the SQL ,does that make planner to convert bigint to numeric ?\n\nSetting \"extra_float_digits\" is just something the JDBC driver does so as 
to\nnot lose precision with \"real\" and \"double precision\" values on old versions\nof PostgreSQL.\n\nThe problem is that you bind the query parameters with the wrong data types.\nDon't use \"setBigDecimal()\", but \"setLong()\" if you want to bind a \"bigint\".\nAn alternative is \"setObject()\" with \"targetSqlType\" set to \"Types.BIGINT\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:17:34 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql statement not using all primary key values and poor\n performance"
},
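The binding fix Laurenz describes can be sketched in JDBC as follows. This is a hedged illustration, not the vendor's actual code: the table and column names are taken from the thread, the `fetch` helper and an open `Connection` are assumed.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

class BindTypes {
    // Placeholders let the driver send typed parameters; the bigint key
    // columns must be bound as long, not as BigDecimal (which the driver
    // sends as numeric and defeats the index on (xxxid, paramname, sssid)).
    static final String QUERY =
        "SELECT xxxid, paramname, paramvalue, sssid "
      + "FROM test.xxxxxx "
      + "WHERE xxxid = ? AND paramname = ? AND sssid = ?";

    static ResultSet fetch(Connection conn, long xxxid, String paramName, long sssid)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(QUERY);
        ps.setLong(1, xxxid);     // bound as int8 -> all three key columns usable
        ps.setString(2, paramName);
        ps.setLong(3, sssid);     // NOT ps.setBigDecimal(3, ...)
        // Equivalent alternative when only setObject() is available:
        // ps.setObject(1, xxxid, Types.BIGINT);
        return ps.executeQuery();
    }
}
```

Running `fetch` requires a live database, so only the query text is checked here.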
{
"msg_contents": "it's a third-party vendor application, not easy to change their code.\nis it possible to 1) in Postgresql JDBC driver connection, set\nplan_cache_mode=force_custom_plan\n or 2) some other parameters can workaround this issue?\n\nThanks,\n\nJames\n\nLaurenz Albe <[email protected]> 於 2024年2月23日週五 下午5:17寫道:\n\n> On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:\n> > we have a SQL from Postgresql JDBC, primary is based on\n> (bigint,varchar2,bigint),\n> > but from sql plan, it convert to ::numeric so the plan just use one\n> \"varchar\"\n> > key column and use the other 2 bigint keys as filters. what's the cause\n> about that ?\n> >\n> > Table \"test.xxxxxx\"\n> > Column | Type | Collation |\n> Nullable | Default\n> >\n> ------------------+--------------------------------+-----------+----------+---------\n> > xxxid | bigint | | not\n> null |\n> > paramname | character varying(512) | | not\n> null |\n> > paramvalue | character varying(1536) | |\n> |\n> > sssid | bigint | | not\n> null |\n> > createtime | timestamp(0) without time zone | |\n> |\n> > lastmodifiedtime | timestamp(0) without time zone | |\n> |\n> > mmmuuid | character varying(32) | |\n> |\n> > Indexes:\n> > \"pk_xxxxxx\" PRIMARY KEY, btree (xxxid, paramname, sssid)\n> > \"idx_xxxxxx_mmmuuid\" btree (sssid, mmmuuid, paramname)\n> >\n> > SET extra_float_digits = 3\n> >\n> > duration: 7086.014 ms plan:\n> > Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID,\n> CREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID =\n> $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )\n> > Index Scan using pk_xxxxxx on test.xxxxxx\n> (cost=0.57..2065259.09 rows=1 width=86) (actual time=7086.010..7086.011\n> rows=0 loops=1)\n> > Output: confid, paramname, paramvalue, sssid, createtime,\n> lastmodifiedtime, mmmuuid\n> > Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text)\n> <<< just use only one key instead all primary keys.\n> > Filter: (((xxxxxx.xxxid)::numeric = 
'18174044'::numeric) AND\n> ((xxxxxx.sssid)::numeric = '253352'::numeric)) <<< it's bigint but\n> converted to numeric\n> > Buffers: shared read=1063470\n> > I/O Timings: read=4402.029\n> >\n> > it's from JDBC, we saw this JDBC driver try to set extra_float_digits =\n> 3 before\n> > running the SQL ,does that make planner to convert bigint to numeric ?\n>\n> Setting \"extra_float_digits\" is just something the JDBC driver does so as\n> to\n> not lose precision with \"real\" and \"double precision\" values on old\n> versions\n> of PostgreSQL.\n>\n> The problem is that you bind the query parameters with the wrong data\n> types.\n> Don't use \"setBigDecimal()\", but \"setLong()\" if you want to bind a\n> \"bigint\".\n> An alternative is \"setObject()\" with \"targetSqlType\" set to \"Types.BIGINT\".\n>\n> Yours,\n> Laurenz Albe\n>",
"msg_date": "Fri, 23 Feb 2024 18:21:05 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sql statement not using all primary key values and poor\n performance"
},
{
"msg_contents": "On Fri, 2024-02-23 at 18:21 +0800, James Pang wrote:\n> it's a third-party vendor application, not easy to change their code.\n\nThen the application is broken, and you should make the vendor fix it.\n\n> is it possible to 1) in Postgresql JDBC driver connection, set\n> plan_cache_mode=force_custom_plan or 2) some other parameters can workaround this issue?\n\nYou can set \"prepareThreshold\" to 0 to keep the JDBC driver from using\nprepared statements in PostgreSQL. I am not sure if that is enough to\nfix the problem.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 23 Feb 2024 12:48:55 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sql statement not using all primary key values and poor\n performance"
}
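The driver-side workaround Laurenz mentions can be applied without touching application code, via the pgJDBC connection URL or `Properties`. A minimal sketch; the host, database, and credential values are placeholders, and whether this fully fixes the plan is, as noted in the thread, not guaranteed:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

class NoServerPrepare {
    // prepareThreshold=0 keeps pgJDBC from promoting statements to named
    // server-side prepared statements, so executions are planned with the
    // actual parameter values.
    static final String URL =
        "jdbc:postgresql://dbhost:5432/mydb?prepareThreshold=0";

    static Connection connect(String user, String password) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // The same option can be set via Properties instead of the URL:
        props.setProperty("prepareThreshold", "0");
        // James's other idea (a session GUC) could be tried through the
        // driver's "options" startup parameter -- an assumption, verify
        // against the pgJDBC docs for your driver version:
        // props.setProperty("options", "-c plan_cache_mode=force_custom_plan");
        return DriverManager.getConnection(URL, props);
    }
}
```

Connecting requires a live server, so only the URL is checked here.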
]
[
{
"msg_contents": "Dear pgsqlers,\n\nI'm trying to optimize simple queries on two tables (tenders & items) with\na couple million records. Besides the resulting records, the app also\ndisplays the count of total results. Doing count() takes as much time as\nthe other query (which can be 30+ secs), so it's an obvious target for\noptimization. I'm already caching count() results for the most common\nconditions (country & year) in a material table, which practically halves\nresponse time. The tables are updated sparingly, and only with bulk\nCOPYs. Now I'm looking for ways to optimize queries with other conditions.\n\nReading around, seems many people are still using this 2005 snippet\n<https://www.postgresql.org/message-id/[email protected]>\nto obtain the row count estimate from Explain:\n\nCREATE FUNCTION count_estimate(query text) RETURNS integer AS $$DECLARE\n rec record;\n rows integer;BEGIN\n FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP\n rows := substring(rec.\"QUERY PLAN\" FROM ' rows=([[:digit:]]+)');\n EXIT WHEN rows IS NOT NULL;\n END LOOP;\n RETURN rows;END;$$ LANGUAGE plpgsql VOLATILE STRICT;\n\nIs this still the current best practice? Any tips to increase precision?\nCurrently it can estimate the actual number of rows for over *or* under a\nmillion, as seen on the sample queries (1,955,297 instead of 1,001,200;\n162,080 instead of 1,292,010).\n\nAny other tips to improve the query are welcome, of course. There's a big\ndisparity between the two sample queries plans even though only the\nfiltered country changes.\n\nI already raised default_statistics_target up to 2k (the planner wasn't\nusing indexes at all with low values). Gotta get it even higher? 
These are\nmy custom settings:\n\nshared_buffers = 256MB # min 128kB\nwork_mem = 128MB # min 64kB\nmaintenance_work_mem = 254MB # min 1MB\neffective_cache_size = 2GB\ndefault_statistics_target = 2000\nrandom_page_cost = 1.0 # same scale as above\n\nSample query:\n\nExplain Analyze\nSelect * from tenders inner join items on transaction_id =\ntender_transaction_id\nwhere country = 'Colombia'\nand \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\nQUERY PLAN\nGather (cost=253837.99..1506524.32 rows=1955297 width=823) (actual\ntime=51433.592..63239.809 rows=1001200 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Join (cost=252837.99..1309994.62 rows=814707\nwidth=823) (actual time=51361.920..61729.142 rows=333733 loops=3)\n Hash Cond: (items.tender_transaction_id = tenders.transaction_id)\n -> Parallel Seq Scan on items (cost=0.00..1048540.46 rows=3282346\nwidth=522) (actual time=1.689..56887.108 rows=2621681 loops=3)\n -> Parallel Hash (cost=247919.56..247919.56 rows=393475\nwidth=301) (actual time=2137.473..2137.476 rows=333733 loops=3)\n Buckets: 1048576 Batches: 1 Memory Usage: 219936kB\n -> Parallel Bitmap Heap Scan on tenders\n (cost=16925.75..247919.56 rows=393475 width=301) (actual\ntime=385.315..908.865 rows=333733 loops=3)\n Recheck Cond: ((country = 'Colombia'::text) AND (date\n>= '2023-01-01'::date) AND (date < '2024-01-01'::date))\n Heap Blocks: exact=24350\n -> Bitmap Index Scan on tenders_country_and_date_index\n (cost=0.00..16689.67 rows=944339 width=0) (actual time=423.213..423.214\nrows=1001200 loops=1)\n Index Cond: ((country = 'Colombia'::text) AND\n(date >= '2023-01-01'::date) AND (date < '2024-01-01'::date))\nPlanning Time: 12.784 ms\nJIT:\nFunctions: 33\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\nTiming: Generation 14.675 ms, Inlining 383.349 ms, Optimization 1023.521\nms, Emission 651.442 ms, Total 2072.987 ms\nExecution Time: 63378.033 ms\n\nExplain Analyze\nSelect * from tenders inner 
join items on transaction_id =\ntender_transaction_id\nwhere country = 'Mexico'\nand \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\nQUERY PLAN\nGather (cost=1000.99..414258.70 rows=162080 width=823) (actual\ntime=52.538..7006.128 rows=1292010 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop (cost=0.99..397050.70 rows=67533 width=823) (actual\ntime=40.211..4087.081 rows=430670 loops=3)\n -> Parallel Index Scan using tenders_country_and_date_index on\ntenders (cost=0.43..45299.83 rows=32616 width=301) (actual\ntime=4.376..59.760 rows=1218 loops=3)\n Index Cond: ((country = 'Mexico'::text) AND (date >=\n'2023-01-01'::date) AND (date < '2024-01-01'::date))\n -> Index Scan using items_tender_transaction_id_index on items\n (cost=0.56..10.67 rows=11 width=522) (actual time=0.321..3.035 rows=353\nloops=3655)\n Index Cond: (tender_transaction_id = tenders.transaction_id)\nPlanning Time: 7.808 ms\nJIT:\nFunctions: 27\nOptions: Inlining false, Optimization false, Expressions true, Deforming\ntrue\nTiming: Generation 17.785 ms, Inlining 0.000 ms, Optimization 5.080 ms,\nEmission 93.274 ms, Total 116.138 ms\nExecution Time: 7239.427 ms\n\nThanks in advance!",
"msg_date": "Mon, 26 Feb 2024 18:25:19 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing count(), but Explain estimates wildly off"
},
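The 2005 snippet's technique — scan EXPLAIN's text output for the first `rows=` figure, which belongs to the plan's root node — can equally be done on the client side. A sketch in Java; `RowEstimate` and `estimate` are illustrative names, and the caller is assumed to feed it the newline-joined rows of `EXPLAIN <query>`:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RowEstimate {
    private static final Pattern ROWS = Pattern.compile(" rows=(\\d+)");

    // Mirrors the plpgsql loop: walk the EXPLAIN text line by line and
    // return the first "rows=" value found (the top plan node's estimate).
    static long estimate(String explainOutput) {
        for (String line : explainOutput.split("\n")) {
            Matcher m = ROWS.matcher(line);
            if (m.find()) {
                return Long.parseLong(m.group(1));
            }
        }
        throw new IllegalArgumentException("no rows= found in plan");
    }
}
```

Note the leading space in the pattern, which keeps it from matching inside other tokens.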
{
"msg_contents": "Is your transaction id more or less monotonic according to the date? If so,\nsomething like the next can help:\n\nwith tenders_filtered as (select * from tenders where country = 'Mexico'\nand \"date\" >= '2023-01-01' and \"date\" < '2024-01-01')\nSelect * from tenders_filtered inner join items on transaction_id =\ntender_transaction_id\nwhere tender_transaction_id between (select min(transaction_id) from\ntenders_filtered) and (select max(transaction_id) from tenders_filtered)\n\nThis assumes you have an index on items(tender_transaction_id) and it would\nbe able to select a small subset (less than say 5%) of the table.\nIf your transaction_id is not monotonic, you can consider having something\nmonotonic or even additional denormalized field(s) with country and/or date\nto your items.\n\nAnother option is to use a windowing function to get the count, e.g.\nSelect *,count(*) OVER () as cnt from tenders inner join items on\ntransaction_id = tender_transaction_id\nwhere country = 'Colombia'\nand \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\n\nThis would at least save you from doing a second call.\n\nпн, 26 лют. 2024 р. о 16:26 Chema <[email protected]> пише:\n\n> Dear pgsqlers,\n>\n> I'm trying to optimize simple queries on two tables (tenders & items) with\n> a couple million records. Besides the resulting records, the app also\n> displays the count of total results. Doing count() takes as much time as\n> the other query (which can be 30+ secs), so it's an obvious target for\n> optimization. I'm already caching count() results for the most common\n> conditions (country & year) in a material table, which practically halves\n> response time. The tables are updated sparingly, and only with bulk\n> COPYs. 
Now I'm looking for ways to optimize queries with other conditions.\n>\n> Reading around, seems many people are still using this 2005 snippet\n> <https://www.postgresql.org/message-id/[email protected]>\n> to obtain the row count estimate from Explain:\n>\n> CREATE FUNCTION count_estimate(query text) RETURNS integer AS $$DECLARE\n> rec record;\n> rows integer;BEGIN\n> FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP\n> rows := substring(rec.\"QUERY PLAN\" FROM ' rows=([[:digit:]]+)');\n> EXIT WHEN rows IS NOT NULL;\n> END LOOP;\n> RETURN rows;END;$$ LANGUAGE plpgsql VOLATILE STRICT;\n>\n> Is this still the current best practice? Any tips to increase precision?\n> Currently it can estimate the actual number of rows for over *or* under a\n> million, as seen on the sample queries (1,955,297 instead of 1,001,200;\n> 162,080 instead of 1,292,010).\n>\n> Any other tips to improve the query are welcome, of course. There's a big\n> disparity between the two sample queries plans even though only the\n> filtered country changes.\n>\n> I already raised default_statistics_target up to 2k (the planner wasn't\n> using indexes at all with low values). Gotta get it even higher? 
These are\n> my custom settings:\n>\n> shared_buffers = 256MB # min 128kB\n> work_mem = 128MB # min 64kB\n> maintenance_work_mem = 254MB # min 1MB\n> effective_cache_size = 2GB\n> default_statistics_target = 2000\n> random_page_cost = 1.0 # same scale as above\n>\n> Sample query:\n>\n> Explain Analyze\n> Select * from tenders inner join items on transaction_id =\n> tender_transaction_id\n> where country = 'Colombia'\n> and \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\n> QUERY PLAN\n> Gather (cost=253837.99..1506524.32 rows=1955297 width=823) (actual\n> time=51433.592..63239.809 rows=1001200 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Hash Join (cost=252837.99..1309994.62 rows=814707\n> width=823) (actual time=51361.920..61729.142 rows=333733 loops=3)\n> Hash Cond: (items.tender_transaction_id = tenders.transaction_id)\n> -> Parallel Seq Scan on items (cost=0.00..1048540.46\n> rows=3282346 width=522) (actual time=1.689..56887.108 rows=2621681 loops=3)\n> -> Parallel Hash (cost=247919.56..247919.56 rows=393475\n> width=301) (actual time=2137.473..2137.476 rows=333733 loops=3)\n> Buckets: 1048576 Batches: 1 Memory Usage: 219936kB\n> -> Parallel Bitmap Heap Scan on tenders\n> (cost=16925.75..247919.56 rows=393475 width=301) (actual\n> time=385.315..908.865 rows=333733 loops=3)\n> Recheck Cond: ((country = 'Colombia'::text) AND (date\n> >= '2023-01-01'::date) AND (date < '2024-01-01'::date))\n> Heap Blocks: exact=24350\n> -> Bitmap Index Scan on\n> tenders_country_and_date_index (cost=0.00..16689.67 rows=944339 width=0)\n> (actual time=423.213..423.214 rows=1001200 loops=1)\n> Index Cond: ((country = 'Colombia'::text) AND\n> (date >= '2023-01-01'::date) AND (date < '2024-01-01'::date))\n> Planning Time: 12.784 ms\n> JIT:\n> Functions: 33\n> Options: Inlining true, Optimization true, Expressions true, Deforming true\n> Timing: Generation 14.675 ms, Inlining 383.349 ms, Optimization 1023.521\n> ms, Emission 651.442 ms, Total 2072.987 ms\n> 
Execution Time: 63378.033 ms\n>\n> Explain Analyze\n> Select * from tenders inner join items on transaction_id =\n> tender_transaction_id\n> where country = 'Mexico'\n> and \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\n> QUERY PLAN\n> Gather (cost=1000.99..414258.70 rows=162080 width=823) (actual\n> time=52.538..7006.128 rows=1292010 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Nested Loop (cost=0.99..397050.70 rows=67533 width=823) (actual\n> time=40.211..4087.081 rows=430670 loops=3)\n> -> Parallel Index Scan using tenders_country_and_date_index on\n> tenders (cost=0.43..45299.83 rows=32616 width=301) (actual\n> time=4.376..59.760 rows=1218 loops=3)\n> Index Cond: ((country = 'Mexico'::text) AND (date >=\n> '2023-01-01'::date) AND (date < '2024-01-01'::date))\n> -> Index Scan using items_tender_transaction_id_index on items\n> (cost=0.56..10.67 rows=11 width=522) (actual time=0.321..3.035 rows=353\n> loops=3655)\n> Index Cond: (tender_transaction_id = tenders.transaction_id)\n> Planning Time: 7.808 ms\n> JIT:\n> Functions: 27\n> Options: Inlining false, Optimization false, Expressions true, Deforming\n> true\n> Timing: Generation 17.785 ms, Inlining 0.000 ms, Optimization 5.080 ms,\n> Emission 93.274 ms, Total 116.138 ms\n> Execution Time: 7239.427 ms\n>\n> Thanks in advance!\n>",
"msg_date": "Mon, 26 Feb 2024 20:36:13 -0800",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "On Mon, 2024-02-26 at 18:25 -0600, Chema wrote:\n> I'm trying to optimize simple queries on two tables (tenders & items) with a couple\n> million records. Besides the resulting records, the app also displays the count of\n> total results. Doing count() takes as much time as the other query (which can be\n> 30+ secs), so it's an obvious target for optimization.\n> \n> Reading around, seems many people are still using this 2005 snippet to obtain the\n> row count estimate from Explain:\n\nI recommend using FORMAT JSON and extracting the top row count from that. It is\nsimpler and less error-prone.\n\n> Is this still the current best practice? Any tips to increase precision?\n> Currently it can estimate the actual number of rows for over or under a million,\n> as seen on the sample queries (1,955,297 instead of 1,001,200; 162,080 instead\n> of 1,292,010).\n\nLooking at the samples you provided, I get the impression that the statistics for\nthe table are quite outdated. That will affect the estimates. Try running ANALYZE\nand see if that improves the estimates.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 27 Feb 2024 08:40:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "Hi Chema,\n\nOn 2024-Feb-26, Chema wrote:\n\n> Dear pgsqlers,\n> \n> I'm trying to optimize simple queries on two tables (tenders & items) with\n> a couple million records. Besides the resulting records, the app also\n> displays the count of total results. Doing count() takes as much time as\n> the other query (which can be 30+ secs), so it's an obvious target for\n> optimization. I'm already caching count() results for the most common\n> conditions (country & year) in a material table, which practically halves\n> response time. The tables are updated sparingly, and only with bulk\n> COPYs. Now I'm looking for ways to optimize queries with other conditions.\n\nIt sounds like this approach might serve your purposes:\nhttps://www.postgresql.eu/events/pgconfeu2023/schedule/session/4762-counting-things-at-the-speed-of-light-with-roaring-bitmaps/\n\n> I already raised default_statistics_target up to 2k (the planner wasn't\n> using indexes at all with low values). Gotta get it even higher? These are\n> my custom settings:\n\nI would recommend to put the default_statistics_target back to its\noriginal value and modify the value with ALTER TABLE .. SET STATISTICS\nonly for columns that need it, only on tables that need it; then ANALYZE\neverything. The planner gets too slow if you have too many stats for\neverything.\n\n> shared_buffers = 256MB # min 128kB\n\nThis sounds far too low, unless your server is a Raspberry Pi or\nsomething. See \"explain (buffers, analyze)\" of your queries to see how\nmuch buffer traffic is happening for them.\n\n> Functions: 33\n> Options: Inlining true, Optimization true, Expressions true, Deforming true\n> Timing: Generation 14.675 ms, Inlining 383.349 ms, Optimization 1023.521\n> ms, Emission 651.442 ms, Total 2072.987 ms\n> Execution Time: 63378.033 ms\n\nAlso maybe experiment with turning JIT off. Sometimes it brings no\nbenefit and slows down execution pointlessly. 
Here you spent two\nseconds JIT-compiling the query; were they worth it?\n\nCheers\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 14:11:42 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": ">\n> > Reading around, seems many people are still using this 2005 snippet to\n> obtain the\n> > row count estimate from Explain:\n>\n> I recommend using FORMAT JSON and extracting the top row count from that.\n> It is\n> simpler and less error-prone.\n>\nGood tip, thanks Laurenz!\n\n>\n> > Is this still the current best practice? Any tips to increase precision?\n> > Currently it can estimate the actual number of rows for over or under a\n> million,\n> > as seen on the sample queries (1,955,297 instead of 1,001,200; 162,080\n> instead\n> > of 1,292,010).\n>\n> Looking at the samples you provided, I get the impression that the\n> statistics for\n> the table are quite outdated. That will affect the estimates. Try\n> running ANALYZE\n> and see if that improves the estimates.\n>\n>\nNo major changes after doing Analyze, and also Vacuum Analyze. Seems\nsomething is seriously off. I pimped my config thanks to Alvaro's\nprompting, set default statistics = 500 (suggested for warehouse dbs) but\nraised pertinent columns from 2,000 to 5,000 (will play with disabling JIT\nor raising cost later):\n\nshared_buffers = 2GB # ~0.25 * RAM, dedicated cache, hard\nallocation (requires restart)\neffective_cache_size = 6GB # 0.5-0.75 RAM (free -h: free + cache +\nshared_buffers)\nwork_mem = 128MB # RAM * 0.25 / max_connections.\nmaintenance_work_mem = 512MB\ndefault_statistics_target = 500 # def 100, higher to make planner use\nindexes in big warehouse tables.\nrandom_page_cost = 1.1 # Random reads in SSD cost almost as\nlittle as sequential ones\n\nAnalyzed again (1.5M samples instead of 600k):\n\"tenders\": scanned 216632 of 216632 pages, containing 3815567 live rows and\n0 dead rows; 1500000 rows in sample, 3815567 estimated total rows\n\"items\": scanned 995023 of 995023 pages, containing 7865043 live rows and 0\ndead rows; 1500000 rows in sample, 7865043 estimated total rows\n\nbut same deal:\n\n-- After config pimp 1,959,657 instead of 1,001,200 45,341.654 
ms\n\nGather (cost=247031.70..1479393.82 rows=1959657 width=824) (actual time\n=8464.691..45257.435 rows=1001200 loops=1)\n\nWorkers Planned: 2\n\nWorkers Launched: 2\n\n-> Parallel Hash Join (cost=246031.70..1282428.12 rows=816524 width=824) (\nactual time=8413.057..44614.153 rows=333733 loops=3)\n\nHash Cond: (pricescope_items.tender_transaction_id = pricescope_tenders.\ntransaction_id)\n\n-> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01 rows=3277101\nwidth=522) (actual time=0.753..41654.507 rows=2621681 loops=3)\n\n-> Parallel Hash (cost=241080.20..241080.20 rows=396120 width=302) (actual\ntime=995.247..995.250 rows=333733 loops=3)\n\nBuckets: 1048576 Batches: 1 Memory Usage: 219904kB\n\n-> Parallel Bitmap Heap Scan on pricescope_tenders (cost=17516.10..241080.20\nrows=396120 width=302) (actual time=162.898..321.472 rows=333733 loops=3)\n\nRecheck Cond: ((country = 'Colombia'::text) AND (date >= '2023-01-01'::date)\nAND (date < '2024-01-01'::date))\n\nHeap Blocks: exact=34722\n\n-> Bitmap Index Scan on pricescope_tenders_country_and_date_index (cost\n=0.00..17278.43 rows=950688 width=0) (actual time=186.536..186.537 rows=\n1001200 loops=1)\n\nIndex Cond: ((country = 'Colombia'::text) AND (date >= '2023-01-01'::date)\nAND (date < '2024-01-01'::date))\n\nPlanning Time: 11.310 ms\n\nJIT:\n\nFunctions: 33\n\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\n\nTiming: Generation 8.608 ms, Inlining 213.375 ms, Optimization 557.351 ms,\nEmission 417.568 ms, Total 1196.902 ms\n\nExecution Time: 45341.654 ms\n\n\nBUT if I force the planner to ignore 'country' statistics:\n\n-- Subselect country to hide constant from planner, so it doesn't use\nstatistics\n\nExplain Analyze\n\nSelect * from pricescope_tenders inner join pricescope_items on\ntransaction_id = tender_transaction_id\n\nwhere country = (select 'Colombia')\n\nand \"date\" >= '2023-01-01' and \"date\" < '2024-01-01'\n\n;\n\nThen I get the same plan as if I filter for 
Mexico, with a similar run\ntime:\n\n-- Colombia in subselect 428,623 instead of 1,001,200 6674.860 ms\n\nGather (cost=1001.00..570980.73 rows=428623 width=824) (actual time\n=166.785..6600.673 rows=1001200 loops=1)\n\nWorkers Planned: 2\n\nParams Evaluated: $0\n\nWorkers Launched: 2\n\nInitPlan 1 (returns $0)\n\n-> Result (cost=0.00..0.01 rows=1 width=32) (actual time=166.031..166.033\nrows=1 loops=1)\n\n-> Nested Loop (cost=0.99..527118.42 rows=178593 width=824) (actual time\n=200.511..5921.585 rows=333733 loops=3)\n\n-> Parallel Index Scan using pricescope_tenders_country_and_date_index on\npricescope_tenders (cost=0.43..104391.64 rows=86641 width=302) (actual time\n=200.388..400.882 rows=333733 loops=3)\n\nIndex Cond: ((country = $0) AND (date >= '2023-01-01'::date) AND (date <\n'2024-01-01'::date))\n\n-> Index Scan using pricescope_items_tender_transaction_id_index on\npricescope_items (cost=0.56..4.83 rows=5 width=522) (actual time=0.016..\n0.016 rows=1 loops=1001200)\n\nIndex Cond: (tender_transaction_id = pricescope_tenders.transaction_id)\n\nPlanning Time: 7.372 ms\n\nJIT:\n\nFunctions: 31\n\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\n\nTiming: Generation 6.981 ms, Inlining 209.470 ms, Optimization 308.123 ms,\nEmission 248.176 ms, Total 772.750 ms\n\nExecution Time: 6674.860 ms\n\nSo runtime is now decent; stats are still way off by -670k, tho I guess\nthat's better than +1M.\n\n1. Any tips to fix stats?\n2. Or a better way of making the planner go for index scans for country?\n\nThanks again!",
"msg_date": "Thu, 29 Feb 2024 17:15:42 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "On Thu, 2024-02-29 at 17:15 -0600, Chema wrote:\n> No major changes after doing Analyze, and also Vacuum Analyze.\n\nIndeed.\n\nThis caught my attention:\n\n> -> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01 rows=3277101 width=522) (actual time=0.753..41654.507 rows=2621681 loops=3)\n\nWhy does it take over 41 seconds to read a table with less than\n3 million rows? Are the rows so large? Is the table bloated?\nWhat is the size of the table as measured with pg_relation_size()\nand pg_table_size()?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 01 Mar 2024 09:57:13 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": ">\n> > -> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01\n> rows=3277101 width=522) (actual time=0.753..41654.507 rows=2621681 loops=3)\n>\n> Why does it take over 41 seconds to read a table with less than\n> 3 million rows? Are the rows so large? Is the tabe bloated?\n> What is the size of the table as measured with pg_relation_size()\n> and pg_table_size()?\n\nThere's one JSON column in each table with a couple fields, and a column\nwith long texts in Items.\n\n-- pg_table_size, pg_relation_size, pg_indexes_size, rows\nnametable_sizerelation_sizeindex_sizerow_estimate\ntenders 1,775,222,784\n1,630,461,952 3,815,567\nitems 8,158,773,248\n6,052,470,784 7,865,043\ncheck_postgres gave a 1.4 bloat score to tenders, 1.9 to items. I had a\nduplicate index on transaction_id (one hand made, other from the unique\nconstraint) and other text column indexes with 0.3-0.5 bloat scores. After\nVacuum Full Analyze; sizes are greatly reduced, specially Items:\n\n-- pg_table_size, pg_relation_size, pg_indexes_size, rows\nnametable_sizerelation_sizeindex_sizerow_estimate\ntenders 1,203,445,760 1,203,421,184 500,482,048 3,815,567\nitems 4,436,189,184 4,430,790,656 2,326,118,400 7,865,043\n\nThere were a couple mass deletions which probably caused the bloating.\nAutovacuum is on defaults, but I guess it doesn't take care of that.\nStill, performance seems about the same.\n\nThe planner is now using an Index Scan for Colombia without the subselect\nhack, but subselect takes ~200ms less in avg, so might as well keep doing\nit.\n\nRow estimate is still +1M so still can't use that, but at least now it\ntakes less than 10s to get the exact count with all countries.\n\n> -> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01 rows=3277101 width=522) (actual time=0.753..41654.507 rows=2621681 loops=3)\n\nWhy does it take over 41 seconds to read a table with less than\n3 million rows? Are the rows so large? 
Is the tabe bloated?\nWhat is the size of the table as measured with pg_relation_size()\nand pg_table_size()?There's one JSON column in each table with a couple fields, and a column with long texts in Items.-- pg_table_size, pg_relation_size, pg_indexes_size, rowsnametable_sizerelation_sizeindex_sizerow_estimatetenders1,775,222,7841,630,461,9523,815,567items8,158,773,2486,052,470,7847,865,043check_postgres gave a 1.4 bloat score to tenders, 1.9 to items. I had a duplicate index on transaction_id (one hand made, other from the unique constraint) and other text column indexes with 0.3-0.5 bloat scores. After Vacuum Full Analyze; sizes are greatly reduced, specially Items:-- pg_table_size, pg_relation_size, pg_indexes_size, rowsnametable_sizerelation_sizeindex_sizerow_estimatetenders1,203,445,7601,203,421,184500,482,0483,815,567\nitems4,436,189,1844,430,790,6562,326,118,4007,865,043There were a couple mass deletions which probably caused the bloating. \n\nAutovacuum is on defaults, but I guess it doesn't take care of that. Still, performance seems about the same.The planner is now using an Index Scan for Colombia without the subselect hack, but subselect takes ~200ms less in avg, so might as well keep doing it.Row estimate is still +1M so still can't use that, but at least now it takes less than 10s to get the exact count with all countries.",
"msg_date": "Mon, 4 Mar 2024 13:13:56 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 2:14 PM Chema <[email protected]> wrote:\n\n> There's one JSON column in each table with a couple fields, and a column\n> with long texts in Items.\n\nand earlier indicated the query was:\n\n> Select * from tenders inner join items\n\n\nYou do not want to do a \"select star\" on both tables unless you 100% need\nevery single column and plan to actively do something with it. Especially\ntrue for large text and json columns. Also, use jsonb not json.\n\nCheers,\nGreg",
"msg_date": "Mon, 4 Mar 2024 20:50:00 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 7:50 p.m., Greg Sabino Mullane (\[email protected]) wrote:\n\n> On Mon, Mar 4, 2024 at 2:14 PM Chema <[email protected]> wrote:\n>\n>> There's one JSON column in each table with a couple fields, and a column\n>> with long texts in Items.\n>\n> and earlier indicated the query was:\n>\n>> Select * from tenders inner join items\n>\n>\n> You do not want to do a \"select star\" on both tables unless you 100% need\n> every single column and plan to actively do something with it. Especially\n> true for large text and json columns. Also, use jsonb not json.\n>\nTuples aren't really that long on average (300 bytes for Tenders, twice as\nmuch for Items). In any case, the Select * was to be used with Explain to\nobtain an estimated row count instantly from stats, as described in my\nfirst email, but even raising stats to 5k in relevant columns has not\nimproved the planner's estimates, which are off by almost 1M, and there's\nbeen no suggestion of what could cause that.\n\nGooglin' once again, though, this SO answer\n<https://stackoverflow.com/a/7943283/564148> implies that that might\nactually be the norm for anything but the simplest queries:\n\nDepending on the complexity of your query, this number may become less and\nless accurate. 
In fact, in my application, as we added joins and complex\nconditions, it became so inaccurate it was completely worthless, even to\nknow how within a power of 100 how many rows we'd have returned, so we had\nto abandon that strategy.\n\n\nBut if your query is simple enough that Pg can predict within some\nreasonable margin of error how many rows it will return, it may work for\nyou.",
"msg_date": "Tue, 5 Mar 2024 08:00:00 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": "> columns has not improved the planner's estimates, which are off by almost\n> 1M, and there's been no suggestion of what could cause that.\n\nYou are asking a lot of the planner - how would it know that the average\nnumber of items is much higher for ids derived indirectly from \"Mexico\"\nversus ids derived from \"Columbia\"?\n\nOne thing you could try just as a general performance gain is index-only\nscans, by creating an index like this:\n\ncreate index tenders_date_country_id on tenders (country, \"date\") include\n(transaction_id);\n\n>> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01\nrows=3277101 width=522)\n>> (actual time=0.753..41654.507 rows=2621681 loops=3)\n> Why does it take over 41 seconds to read a table with less than 3 million\nrows?\n\nGood question. I still maintain it's because you are doing a 'select star'\non large, toasted rows.\n\nI made two tables of the same approximate number of rows, and ran the\nquery. It returned a hash join containing:\n\n-> Parallel Seq Scan on items (cost=0.00..69602.93 rows=3375592 width=8)\n (actual time=0.015..185.414 rows=2700407 loops=3)\n\nThen I boosted the width by a lot by adding some filled text columns, and\nit returned the same number of rows, but much slower:\n\n-> Parallel Seq Scan on items (cost=0.00..1729664.15 rows=3370715\nwidth=1562)\n (actual time=0.027..36693.986 rows=2700407 loops=3)\n\nA second run with everything in cache was better, but still an order of\nmagnitude worse the small row:\n\n-> Parallel Seq Scan on items (cost=0.00..1729664.15 rows=3370715\nwidth=1562)\n (actual time=0.063..1565.486 rows=2700407 loops=3)\n\nBest of all was a \"SELECT 1\" which switched the entire plan to a much\nfaster merge join, resulting in:\n\n-> Parallel Index Only Scan using items_tender_transaction_id_index on\nitems (cost=0.43..101367.60 rows=3372717 width=4)\n (actual time=0.087..244.878 rows=2700407 loops=3)\n\nYours will be different, as I cannot exactly duplicate your 
schema or data\ndistribution, but give \"SELECT 1\" a try. This was on Postgres 16, FWIW,\nwith a default_statistics_target of 100.\n\nCheers,\nGreg\n\n> columns has not improved the planner's estimates, which are off by almost > 1M, and there's been no suggestion of what could cause that.You are asking a lot of the planner - how would it know that the average number of items is much higher for ids derived indirectly from \"Mexico\" versus ids derived from \"Columbia\"?One thing you could try just as a general performance gain is index-only scans, by creating an index like this:create index tenders_date_country_id on tenders (country, \"date\") include (transaction_id);>> Parallel Seq Scan on pricescope_items (cost=0.00..1027794.01 rows=3277101 width=522) >> (actual time=0.753..41654.507 rows=2621681 loops=3)> Why does it take over 41 seconds to read a table with less than 3 million rows?Good question. I still maintain it's because you are doing a 'select star' on large, toasted rows.I made two tables of the same approximate number of rows, and ran the query. 
It returned a hash join containing: -> Parallel Seq Scan on items (cost=0.00..69602.93 rows=3375592 width=8) (actual time=0.015..185.414 rows=2700407 loops=3)Then I boosted the width by a lot by adding some filled text columns, and it returned the same number of rows, but much slower:-> Parallel Seq Scan on items (cost=0.00..1729664.15 rows=3370715 width=1562) (actual time=0.027..36693.986 rows=2700407 loops=3)A second run with everything in cache was better, but still an order of magnitude worse the small row:-> Parallel Seq Scan on items (cost=0.00..1729664.15 rows=3370715 width=1562) (actual time=0.063..1565.486 rows=2700407 loops=3) Best of all was a \"SELECT 1\" which switched the entire plan to a much faster merge join, resulting in:-> Parallel Index Only Scan using items_tender_transaction_id_index on items (cost=0.43..101367.60 rows=3372717 width=4) (actual time=0.087..244.878 rows=2700407 loops=3)Yours will be different, as I cannot exactly duplicate your schema or data distribution, but give \"SELECT 1\" a try. This was on Postgres 16, FWIW, with a default_statistics_target of 100.Cheers,Greg",
"msg_date": "Tue, 5 Mar 2024 12:04:17 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
},
{
"msg_contents": ">\n> Yours will be different, as I cannot exactly duplicate your schema or data\n> distribution, but give \"SELECT 1\" a try. This was on Postgres 16, FWIW,\n> with a default_statistics_target of 100.\n>\n\nSelect 1 produces a sequential scan, like Select * did before Vacuum Full.\nBut if I force an index scan with the subquery hack, there's a significant\nimprovement over Select *. Row estimate is still -50%|200%, so seems it's\nonly accurate for very simple queries indeed. In conclusion, I'll just\nkeep on count(*)ing with the subquery hack. Funny thing, Select 1 is\nslightly faster than Select count(*), so I'm tempted to do Select count(*)\n From (Select 1...) As iluffsubqueries. xD\n\n(pg_roaringbitmap <https://github.com/ChenHuajun/pg_roaringbitmap> looks\ngreat, but I expect it works with fixed categories, while I have several\nfull text search columns)\n\n-- With previous country,date index\n query x 100 | avg | min |\n q1 | median | q3 | max\n------------------------+--------------------+-------------------+-------------------+--------------------+-------------------+--------------------\n Count Colombia | 9093.918731212616 | 6334.060907363892 |\n7366.191983222961 | 9154.448866844177 | 10276.342272758484 |\n13520.153999328613\n Subquery Colombia | 7926.021897792816 | 5926.224946975708 |\n7000.077307224274 | 7531.211018562317 | 8828.327298164368 |\n 11380.73992729187\n Sel* Colombia | 8694.387829303741 | 6963.425874710083 |\n8149.151265621185 | 8704.618453979492 | 9153.236508369446 |\n11787.146806716919\n Sel* Subquery Colombia | 8622.495520114899 | 6959.257125854492 |\n8179.068505764008 | 8765.061974525452 | 9159.55775976181 |\n 10187.61420249939\n Sel1 Colombia | 22717.704384326935 | 8666.495084762573 |\n22885.42276620865 | 23949.790477752686 | 24966.21882915497 |\n30625.644207000732\n Sel1 Subquery Colombia | 7529.951772689819 | 6241.269111633301 |\n7127.403438091278 | 7577.62348651886 | 7866.843640804291 |\n8954.48899269104\n ;\n\n-- 
After including transaction_id in country,date index\n query x 20 | avg | min |\n q1 | median | q3 | max\n------------------------+--------------------+--------------------+-------------------+--------------------+--------------------+--------------------\n Count Colombia | 10326.94479227066 | 7079.586982727051 |\n8091.441631317139 | 10685.971021652222 | 11660.240888595581 |\n16219.580888748169\n Subquery Colombia | 8345.360279083252 | 6759.0179443359375 |\n7150.483548641205 | 7609.055519104004 | 8118.529975414276 |\n15819.210052490234\n Sel* Colombia | 9401.683914661407 | 8350.785970687866 |\n8727.016389369965 | 9171.823978424072 | 9705.730974674225 |\n12684.055089950562\n Sel* Subquery Colombia | 10874.297595024109 | 7996.103048324585 |\n9317.362785339355 | 10767.66049861908 | 12130.92851638794 |\n14003.422021865845\n Sel1 Colombia | 14704.787838459015 | 7033.560991287231 |\n8938.009798526764 | 11308.07101726532 | 21711.08090877533 |\n25156.877994537354\n Sel1 Subquery Colombia | 7128.487503528595 | 5076.292991638184 |\n5678.286790847778 | 6925.720572471619 | 8272.867858409882 |\n11430.468082427979\n\n query x 100 | avg | min |\nq1 | median | q3 | max\n------------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------\n Count Colombia | 8165.0702357292175 | 5923.334121704102 |\n 6800.160050392151 | 7435.7980489730835 | 9075.710475444794 |\n13613.409042358398\n Subquery Colombia | 7299.517266750336 | 5389.672040939331 |\n 6362.253367900848 | 6781.42237663269 | 7978.189289569855 |\n11542.781829833984\n Sel* Colombia | 14157.406282424927 | 8775.223016738892 |\n 13062.03180551529 | 14233.824968338013 | 15513.144373893738 |\n 19184.97586250305\n Sel* Subquery Colombia | 13438.675961494446 | 10216.159105300903 |\n12183.876752853394 | 13196.363925933838 | 14356.310486793518 |\n20111.860036849976\n Sel1 Colombia | 13753.776743412018 | 7020.914793014526 |\n7893.3587074279785 | 
9101.168870925903 | 22971.67855501175 |\n26913.809061050415\n Sel1 Subquery Colombia | 6757.480027675629 | 5529.844045639038 |\n 6212.466478347778 | 6777.510046958923 | 7212.876975536346 |\n 8500.235080718994",
"msg_date": "Wed, 6 Mar 2024 08:00:00 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing count(), but Explain estimates wildly off"
}
] |
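All the count(*) variants compared in the thread above return the same number; only the plan differs. The following is a minimal, hypothetical sketch (using Python's bundled SQLite rather than Postgres, so it illustrates only the query shapes being compared, not the planner behavior discussed):

```python
import sqlite3

# Hypothetical toy table standing in for the transactions table in the
# thread; SQLite's planner differs from Postgres, so this only shows that
# the plain count and the "subquery hack" are equivalent in result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, country TEXT, date TEXT)")
rows = [(i, "Colombia" if i % 3 == 0 else "Peru", "2024-01-01") for i in range(9)]
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)

plain = conn.execute(
    "SELECT count(*) FROM transactions WHERE country = 'Colombia'"
).fetchone()[0]

# The "Select count(*) From (Select 1 ...)" pattern from the thread.
subq = conn.execute(
    "SELECT count(*) FROM (SELECT 1 FROM transactions WHERE country = 'Colombia')"
).fetchone()[0]

print(plain, subq)  # both count the same rows
```

Whether the subquery form is faster depends entirely on which access path the planner picks for the inner query, which is what the benchmark tables above are measuring.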
[
{
"msg_contents": "Postgresql 14.8, Redhat8. looks like have to create extend statistics\non indexed and joined columns to make join filters pushed down to secondary\nindex scan in nestloop, and the shared buffer hits show big difference.\n\nis it expected ?\n\n\n SELECT ....\n FROM\n mtgxxxxxxxx a LEFT OUTER JOIN mtgxxxxxxxext\nb ON a.sssid = b.sssid and a.MMMUUID = b.MMMUUID and a.uuid = b.uuid\n WHERE\n a.SSSID=$1\n AND a.MMMUID=$2\n ORDER BY a.XXXX asc\n offset 300 rows\n FETCH FIRST 51 ROWS ONLY\n\n\nexplain (analyze,buffers) slowsql1(...)\n\n1. with default, join filters just after nestloop,\nLimit (cost=5.61..5.62 rows=1 width=1169) (actual time=2249.443..2249.454\nrows=51 loops=1)\n Buffers: shared hit=3864917\n -> Sort (cost=5.61..5.61 rows=1 width=1169) (actual\ntime=2249.404..2249.438 rows=351 loops=1)\n Sort Key: a.email\n Sort Method: top-N heapsort Memory: 174kB\n Buffers: shared hit=3864917\n -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1169)\n(actual time=1.335..2246.971 rows=2142 loops=1)\n Join Filter: ((a.siteid = b.siteid) AND ((a.mtguuid)::text =\n(b.mtguuid)::text) AND (a.uuid = b.uuid))\n Rows Removed by Join Filter: 4586022\n Buffers: shared hit=3864917\n -> Index Scan using idx_mmmcfattlist_mmmuuid_f on\nmtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1093) (actual\ntime=0.026..5.318 rows\n=2142 loops=1)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR\n((usertype)::text = 'Testlist'::text))\n Buffers: shared hit=2891\n -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on\nmtgxxxxxxxext b (cost=0.56..2.78 rows=1 width=133) (actual\ntime=0.016..0.698\nrows=2142 loops=2142)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Buffers: shared hit=3862026 <<< here huge\nshared hits.\n Planning Time: 0.033 ms\n Execution Time: 2249.527 ms\n\n\n\n create statistics mtgxxxxxxext_sssid_mmmuuid(dependencies,ndistinct) on\nsssid, mmmuuid from mtgxxxxxxxext.\n analyze 
mtgxxxxxxxext.\n\n\n\n 2. join filters pushed down to secondary index scan, and reduce a lot of\nshared blks access.\n\n -------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------\n Limit (cost=5.61..5.62 rows=1 width=1245) (actual time=12.371..12.380\nrows=51 loops=1)\n Buffers: shared hit=12865\n -> Sort (cost=5.61..5.61 rows=1 width=1245) (actual\ntime=12.333..12.364 rows=351 loops=1)\n Sort Key: a.email\n Sort Method: top-N heapsort Memory: 174kB\n Buffers: shared hit=12865\n -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1245)\n(actual time=0.042..10.819 rows=2142 loops=1)\n Buffers: shared hit=12865\n -> Index Scan using idx_mmmcfattlist_mmmuuid_f on\nmtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1169) (actual time=0.025..2.492\n rows=2142 loops=1)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR\n((usertype)::text = 'Testlist'::text))\n Buffers: shared hit=2891\n -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on\nmtgxxxxxxxext b (cost=0.56..2.79 rows=1 width=133) (actual time=0.003..0\n.003 rows=1 loops=2142)\n Index Cond: ((sssid = a.sssid) AND (sssid = $1) AND\n((mmmuuid)::text = (a.mmmuuid)::text) AND ((mmmuuid)::text = ($2)::text)\nAND (uui\nd = a.uuid))\n Buffers: shared hit=10710 <<< here much less shared\nhits , and Index Cond automatically added sssid = a.sssid ,mmuuid=a.mmmuuid.\n Planning Time: 0.021 ms\n Execution Time: 12.451 ms\n(17 rows)\n\nThanks,\n\nJames\n\n Postgresql 14.8, Redhat8. looks like have to create extend statistics on indexed and joined columns to make join filters pushed down to secondary index scan in nestloop, and the shared buffer hits show big difference. is it expected ? SELECT .... 
FROM mtgxxxxxxxx a LEFT OUTER JOIN mtgxxxxxxxext b ON a.sssid = b.sssid and a.MMMUUID = b.MMMUUID and a.uuid = b.uuid WHERE a.SSSID=$1 AND a.MMMUID=$2 ORDER BY a.XXXX asc offset 300 rows FETCH FIRST 51 ROWS ONLYexplain (analyze,buffers) slowsql1(...)1. with default, join filters just after nestloop, Limit (cost=5.61..5.62 rows=1 width=1169) (actual time=2249.443..2249.454 rows=51 loops=1) Buffers: shared hit=3864917 -> Sort (cost=5.61..5.61 rows=1 width=1169) (actual time=2249.404..2249.438 rows=351 loops=1) Sort Key: a.email Sort Method: top-N heapsort Memory: 174kB Buffers: shared hit=3864917 -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1169) (actual time=1.335..2246.971 rows=2142 loops=1) Join Filter: ((a.siteid = b.siteid) AND ((a.mtguuid)::text = (b.mtguuid)::text) AND (a.uuid = b.uuid)) Rows Removed by Join Filter: 4586022 Buffers: shared hit=3864917 -> Index Scan using idx_mmmcfattlist_mmmuuid_f on mtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1093) (actual time=0.026..5.318 rows=2142 loops=1) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR ((usertype)::text = 'Testlist'::text)) Buffers: shared hit=2891 -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on mtgxxxxxxxext b (cost=0.56..2.78 rows=1 width=133) (actual time=0.016..0.698rows=2142 loops=2142) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Buffers: shared hit=3862026 <<< here huge shared hits. Planning Time: 0.033 ms Execution Time: 2249.527 ms create statistics mtgxxxxxxext_sssid_mmmuuid(dependencies,ndistinct) on sssid, mmmuuid from mtgxxxxxxxext. analyze mtgxxxxxxxext. 2. join filters pushed down to secondary index scan, and reduce a lot of shared blks access. 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=5.61..5.62 rows=1 width=1245) (actual time=12.371..12.380 rows=51 loops=1) Buffers: shared hit=12865 -> Sort (cost=5.61..5.61 rows=1 width=1245) (actual time=12.333..12.364 rows=351 loops=1) Sort Key: a.email Sort Method: top-N heapsort Memory: 174kB Buffers: shared hit=12865 -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1245) (actual time=0.042..10.819 rows=2142 loops=1) Buffers: shared hit=12865 -> Index Scan using idx_mmmcfattlist_mmmuuid_f on mtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1169) (actual time=0.025..2.492 rows=2142 loops=1) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR ((usertype)::text = 'Testlist'::text)) Buffers: shared hit=2891 -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on mtgxxxxxxxext b (cost=0.56..2.79 rows=1 width=133) (actual time=0.003..0.003 rows=1 loops=2142) Index Cond: ((sssid = a.sssid) AND (sssid = $1) AND ((mmmuuid)::text = (a.mmmuuid)::text) AND ((mmmuuid)::text = ($2)::text) AND (uuid = a.uuid)) Buffers: shared hit=10710 <<< here much less shared hits , and Index Cond automatically added sssid = a.sssid ,mmuuid=a.mmmuuid. Planning Time: 0.021 ms Execution Time: 12.451 ms(17 rows)Thanks,James",
"msg_date": "Tue, 27 Feb 2024 21:54:48 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "extend statistics help reduce index scan a lot of shared buffer hits."
},
{
"msg_contents": "Postgresql 14.8, Redhat8. looks like have to create extend statistics\non indexed and joined columns to make join filters pushed down to secondary\nindex scan in nestloop, and the shared buffer hits show big difference.\n\nis it expected ?\n\n\n SELECT ....\n FROM\n mtgxxxxxxxx a LEFT OUTER JOIN mtgxxxxxxxext\nb ON a.sssid = b.sssid and a.MMMUUID = b.MMMUUID and a.uuid = b.uuid\n WHERE\n a.SSSID=$1\n AND a.MMMUID=$2\n ORDER BY a.XXXX asc\n offset 300 rows\n FETCH FIRST 51 ROWS ONLY\n\n\nexplain (analyze,buffers) slowsql1(...)\n\n1. with default, join filters just after nestloop,\nLimit (cost=5.61..5.62 rows=1 width=1169) (actual time=2249.443..2249.454\nrows=51 loops=1)\n Buffers: shared hit=3864917\n -> Sort (cost=5.61..5.61 rows=1 width=1169) (actual\ntime=2249.404..2249.438 rows=351 loops=1)\n Sort Key: a.email\n Sort Method: top-N heapsort Memory: 174kB\n Buffers: shared hit=3864917\n -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1169)\n(actual time=1.335..2246.971 rows=2142 loops=1)\n Join Filter: ((a.sssid = b.sssid) AND ((a.mmmuuid)::text =\n(b.mmmuuid)::text) AND (a.uuid = b.uuid))\n Rows Removed by Join Filter: 4586022\n Buffers: shared hit=3864917\n -> Index Scan using idx_mmmcfattlist_mmmuuid_f on\nmtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1093) (actual\ntime=0.026..5.318 rows\n=2142 loops=1)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR\n((usertype)::text = 'Testlist'::text))\n Buffers: shared hit=2891\n -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on\nmtgxxxxxxxext b (cost=0.56..2.78 rows=1 width=133) (actual\ntime=0.016..0.698\nrows=2142 loops=2142)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Buffers: shared hit=3862026 <<< here huge\nshared hits.\n Planning Time: 0.033 ms\n Execution Time: 2249.527 ms\n\n\n\n create statistics mtgxxxxxxext_sssid_mmmuuid(dependencies,ndistinct) on\nsssid, mmmuuid from mtgxxxxxxxext.\n analyze 
mtgxxxxxxxext.\n\n\n\n 2. join filters pushed down to secondary index scan, and reduce a lot of\nshared blks access.\n\n -------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------\n Limit (cost=5.61..5.62 rows=1 width=1245) (actual time=12.371..12.380\nrows=51 loops=1)\n Buffers: shared hit=12865\n -> Sort (cost=5.61..5.61 rows=1 width=1245) (actual\ntime=12.333..12.364 rows=351 loops=1)\n Sort Key: a.email\n Sort Method: top-N heapsort Memory: 174kB\n Buffers: shared hit=12865\n -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1245)\n(actual time=0.042..10.819 rows=2142 loops=1)\n Buffers: shared hit=12865\n -> Index Scan using idx_mmmcfattlist_mmmuuid_f on\nmtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1169) (actual time=0.025..2.492\n rows=2142 loops=1)\n Index Cond: ((sssid = $1) AND ((mmmuuid)::text =\n($2)::text))\n Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR\n((usertype)::text = 'Testlist'::text))\n Buffers: shared hit=2891\n -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on\nmtgxxxxxxxext b (cost=0.56..2.79 rows=1 width=133) (actual time=0.003..0\n.003 rows=1 loops=2142)\n Index Cond: ((sssid = a.sssid) AND (sssid = $1) AND\n((mmmuuid)::text = (a.mmmuuid)::text) AND ((mmmuuid)::text = ($2)::text)\nAND (uui\nd = a.uuid))\n Buffers: shared hit=10710 <<< here much less shared\nhits , and Index Cond automatically added sssid = a.sssid ,mmuuid=a.mmmuuid.\n Planning Time: 0.021 ms\n Execution Time: 12.451 ms\n(17 rows)\n\nThanks,\n\nJames\n\n Postgresql 14.8, Redhat8. looks like have to create extend statistics on indexed and joined columns to make join filters pushed down to secondary index scan in nestloop, and the shared buffer hits show big difference. is it expected ? SELECT .... 
FROM mtgxxxxxxxx a LEFT OUTER JOIN mtgxxxxxxxext b ON a.sssid = b.sssid and a.MMMUUID = b.MMMUUID and a.uuid = b.uuid WHERE a.SSSID=$1 AND a.MMMUID=$2 ORDER BY a.XXXX asc offset 300 rows FETCH FIRST 51 ROWS ONLYexplain (analyze,buffers) slowsql1(...)1. with default, join filters just after nestloop, Limit (cost=5.61..5.62 rows=1 width=1169) (actual time=2249.443..2249.454 rows=51 loops=1) Buffers: shared hit=3864917 -> Sort (cost=5.61..5.61 rows=1 width=1169) (actual time=2249.404..2249.438 rows=351 loops=1) Sort Key: a.email Sort Method: top-N heapsort Memory: 174kB Buffers: shared hit=3864917 -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1169) (actual time=1.335..2246.971 rows=2142 loops=1) Join Filter: ((a.sssid = b.sssid) AND ((a.mmmuuid)::text = (b.mmmuuid)::text) AND (a.uuid = b.uuid)) Rows Removed by Join Filter: 4586022 Buffers: shared hit=3864917 -> Index Scan using idx_mmmcfattlist_mmmuuid_f on mtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1093) (actual time=0.026..5.318 rows=2142 loops=1) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR ((usertype)::text = 'Testlist'::text)) Buffers: shared hit=2891 -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on mtgxxxxxxxext b (cost=0.56..2.78 rows=1 width=133) (actual time=0.016..0.698rows=2142 loops=2142) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Buffers: shared hit=3862026 <<< here huge shared hits. Planning Time: 0.033 ms Execution Time: 2249.527 ms create statistics mtgxxxxxxext_sssid_mmmuuid(dependencies,ndistinct) on sssid, mmmuuid from mtgxxxxxxxext. analyze mtgxxxxxxxext. 2. join filters pushed down to secondary index scan, and reduce a lot of shared blks access. 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=5.61..5.62 rows=1 width=1245) (actual time=12.371..12.380 rows=51 loops=1) Buffers: shared hit=12865 -> Sort (cost=5.61..5.61 rows=1 width=1245) (actual time=12.333..12.364 rows=351 loops=1) Sort Key: a.email Sort Method: top-N heapsort Memory: 174kB Buffers: shared hit=12865 -> Nested Loop Left Join (cost=1.12..5.60 rows=1 width=1245) (actual time=0.042..10.819 rows=2142 loops=1) Buffers: shared hit=12865 -> Index Scan using idx_mmmcfattlist_mmmuuid_f on mtgxxxxxxxx a (cost=0.56..2.79 rows=1 width=1169) (actual time=0.025..2.492 rows=2142 loops=1) Index Cond: ((sssid = $1) AND ((mmmuuid)::text = ($2)::text)) Filter: ((joinstatus = ANY (ARRAY[$3, $4])) OR ((usertype)::text = 'Testlist'::text)) Buffers: shared hit=2891 -> Index Scan using idx_mtgattndlstext_mmmuuid_uid on mtgxxxxxxxext b (cost=0.56..2.79 rows=1 width=133) (actual time=0.003..0.003 rows=1 loops=2142) Index Cond: ((sssid = a.sssid) AND (sssid = $1) AND ((mmmuuid)::text = (a.mmmuuid)::text) AND ((mmmuuid)::text = ($2)::text) AND (uuid = a.uuid)) Buffers: shared hit=10710 <<< here much less shared hits , and Index Cond automatically added sssid = a.sssid ,mmuuid=a.mmmuuid. Planning Time: 0.021 ms Execution Time: 12.451 ms(17 rows)Thanks,James",
"msg_date": "Tue, 27 Feb 2024 21:58:59 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: extend statistics help reduce index scan a lot of shared buffer hits."
},
{
"msg_contents": "On 2/27/24 14:58, James Pang wrote:\n> Postgresql 14.8, Redhat8. looks like have to create extend statistics\n> on indexed and joined columns to make join filters pushed down to secondary\n> index scan in nestloop, and the shared buffer hits show big difference.\n> \n> is it expected ?\n> \n\nIt's hard to say what exactly is happening in the example query (I'd\nhave to do some debugging, but that's impossible without a reproducer),\nbut I think it's mostly expected.\n\nMy guess is that without the stats the optimizer sees this:\n\n-> Index Scan using idx_mtgattndlstext_mmmuuid_uid on mtgxxxxxxxext b\n(cost=0.56..2.78 rows=1 width=133) (actual time=0.016..0.698 rows=2142\nloops=2142)\n\nand so decides there's no point in pushing down more conditions to the\nindex scan (because it already returns just 1 row). But there's some\nsort of correlation / skew, and it returns 2142 rows.\n\nWith the extended stats it realizes pushing down more conditions makes\nsense, because doing that in index scan is cheaper than having to read\nthe heap pages etc. So it does that.\n\nSo yeah, this seems exactly the improvement you'd expect from extended\nstats. Why do you think this would not be expected?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:53:55 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: extend statistics help reduce index scan a lot of shared buffer hits."
}
] |
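The estimation problem that extended statistics address can be sketched numerically. This is an illustrative toy (column names invented to echo the thread): without dependency statistics, a planner multiplies per-column selectivities as if the columns were independent, which underestimates the row count when the columns are correlated:

```python
# Sketch of why correlated columns get misestimated (the situation
# CREATE STATISTICS ... (dependencies) addresses). Here sssid fully
# determines the uuid column, mimicking the thread's correlated pair.
rows = [(sssid, f"uuid-{sssid}") for sssid in [1, 1, 1, 2, 2, 3, 3, 3, 3, 3]]
n = len(rows)

sel_sssid = sum(1 for s, _ in rows if s == 3) / n           # 0.5
sel_uuid = sum(1 for _, u in rows if u == "uuid-3") / n     # 0.5

# Independence assumption: multiply selectivities -> 0.25 * 10 = 2.5 rows.
independent_estimate = sel_sssid * sel_uuid * n

# Actual rows matching both predicates: twice the naive estimate.
actual = sum(1 for s, u in rows if s == 3 and u == "uuid-3")

print(independent_estimate, actual)
```

A too-low row estimate like this is what makes an inner index scan look like it returns "rows=1", so the planner sees no benefit in pushing additional join conditions into it.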
[
{
"msg_contents": "Hi,\n we create statistics (dependencies,distinct) on (cccid,sssid); with\nreal bind variables , it make good plan of Hash join , but when it try to\ngeneric plan, it automatically convert to Nestloop and then very poor sql\nperformance. why generic plan change to to a poor plan \"nestloop\" ? how\nto fix that.\n\n explain execute j2eemtgatdlistsql16(27115336789879,15818676);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=11513.05..25541.17 rows=773 width=1111)\n Hash Cond: ((a.sssid = b.sssid) AND (a.cccid = b.cccid) AND (a.uuid =\nb.uuid))\n -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a\n (cost=0.43..14010.19 rows=773 width=1059)\n Index Cond: ((cccfid = '27115336789879'::bigint) AND (sssid =\n'15818676'::bigint))\n Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[]))\n -> Hash (cost=11330.73..11330.73 rows=10393 width=51)\n -> Index Scan using idx_mtgccclstext_cccsssid_j2 on\nmtgcccclistextj2 b (cost=0.43..11330.73 rows=10393 width=51)\n Index Cond: ((cccid = '27115336789879'::bigint) AND (siteid\n= '15818676'::bigint))\n(8 rows)\n\n explain execute j2eemtgatdlistsql16(27115336789879,15818676);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.87..289.53 rows=14 width=1111)\n -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a\n (cost=0.43..251.94 rows=14 width=1059)\n Index Cond: ((cccid = $1) AND (sssid = $2))\n Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[]))\n -> Index Scan using idx_mtgccclstext_cccsssid_j2 on mtgcccclistextj2 b\n (cost=0.43..2.66 rows=1 width=51)\n Index Cond: ((cccid = a.cccid) AND (cccid = $1) AND (sssid =\na.sssid) AND (sssid = $2))\n Filter: (a.uuid = uuid)\n(7 rows)\n\nThanks,\n\nJames\n\nHi, we create statistics (dependencies,distinct) on 
(cccid,sssid); \n\nwith real bind variables , it make good plan of Hash join , but when it try to generic plan, it automatically convert to Nestloop and then very poor sql performance. why generic plan change to to a poor plan \"nestloop\" ? how to fix that. explain execute j2eemtgatdlistsql16(27115336789879,15818676); QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join (cost=11513.05..25541.17 rows=773 width=1111) Hash Cond: ((a.sssid = b.sssid) AND (a.cccid = b.cccid) AND (a.uuid = b.uuid)) -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a (cost=0.43..14010.19 rows=773 width=1059) Index Cond: ((cccfid = '27115336789879'::bigint) AND (sssid = '15818676'::bigint)) Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[])) -> Hash (cost=11330.73..11330.73 rows=10393 width=51) -> Index Scan using idx_mtgccclstext_cccsssid_j2 on mtgcccclistextj2 b (cost=0.43..11330.73 rows=10393 width=51) Index Cond: ((cccid = '27115336789879'::bigint) AND (siteid = '15818676'::bigint))(8 rows) explain execute j2eemtgatdlistsql16(27115336789879,15818676); QUERY PLAN--------------------------------------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=0.87..289.53 rows=14 width=1111) -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a (cost=0.43..251.94 rows=14 width=1059) Index Cond: ((cccid = $1) AND (sssid = $2)) Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[])) -> Index Scan using idx_mtgccclstext_cccsssid_j2 on mtgcccclistextj2 b (cost=0.43..2.66 rows=1 width=51) Index Cond: ((cccid = a.cccid) AND (cccid = $1) AND (sssid = a.sssid) AND (sssid = $2)) Filter: (a.uuid = uuid)(7 rows)Thanks,James",
"msg_date": "Thu, 29 Feb 2024 22:27:42 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "generic plan generate poor performance"
},
{
"msg_contents": "Hi\n\nčt 29. 2. 2024 v 15:28 odesílatel James Pang <[email protected]>\nnapsal:\n\n> Hi,\n> we create statistics (dependencies,distinct) on (cccid,sssid); with\n> real bind variables , it make good plan of Hash join , but when it try to\n> generic plan, it automatically convert to Nestloop and then very poor sql\n> performance. why generic plan change to to a poor plan \"nestloop\" ? how\n> to fix that.\n>\n\nplease, send result of EXPLAIN ANALYZE, try to run VACUUM ANALYZE before\n\nprobably there will not good estimation\n\n\n\n>\n> explain execute j2eemtgatdlistsql16(27115336789879,15818676);\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=11513.05..25541.17 rows=773 width=1111)\n> Hash Cond: ((a.sssid = b.sssid) AND (a.cccid = b.cccid) AND (a.uuid =\n> b.uuid))\n> -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a\n> (cost=0.43..14010.19 rows=773 width=1059)\n> Index Cond: ((cccfid = '27115336789879'::bigint) AND (sssid =\n> '15818676'::bigint))\n> Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[]))\n> -> Hash (cost=11330.73..11330.73 rows=10393 width=51)\n> -> Index Scan using idx_mtgccclstext_cccsssid_j2 on\n> mtgcccclistextj2 b (cost=0.43..11330.73 rows=10393 width=51)\n> Index Cond: ((cccid = '27115336789879'::bigint) AND (siteid\n> = '15818676'::bigint))\n> (8 rows)\n>\n> explain execute j2eemtgatdlistsql16(27115336789879,15818676);\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.87..289.53 rows=14 width=1111)\n> -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a\n> (cost=0.43..251.94 rows=14 width=1059)\n> Index Cond: ((cccid = $1) AND (sssid = $2))\n> Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[]))\n> -> Index Scan using idx_mtgccclstext_cccsssid_j2 on 
mtgcccclistextj2 b\n> (cost=0.43..2.66 rows=1 width=51)\n> Index Cond: ((cccid = a.cccid) AND (cccid = $1) AND (sssid =\n> a.sssid) AND (sssid = $2))\n> Filter: (a.uuid = uuid)\n> (7 rows)\n>\n> Thanks,\n>\n>\nRegards\n\nPavel\n\n\n> James\n>\n\nHičt 29. 2. 2024 v 15:28 odesílatel James Pang <[email protected]> napsal:Hi, we create statistics (dependencies,distinct) on (cccid,sssid); \n\nwith real bind variables , it make good plan of Hash join , but when it try to generic plan, it automatically convert to Nestloop and then very poor sql performance. why generic plan change to to a poor plan \"nestloop\" ? how to fix that. please, send result of EXPLAIN ANALYZE, try to run VACUUM ANALYZE beforeprobably there will not good estimation explain execute j2eemtgatdlistsql16(27115336789879,15818676); QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join (cost=11513.05..25541.17 rows=773 width=1111) Hash Cond: ((a.sssid = b.sssid) AND (a.cccid = b.cccid) AND (a.uuid = b.uuid)) -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a (cost=0.43..14010.19 rows=773 width=1059) Index Cond: ((cccfid = '27115336789879'::bigint) AND (sssid = '15818676'::bigint)) Filter: (jstatus = ANY ('{3,7,11,2,6,10}'::bigint[])) -> Hash (cost=11330.73..11330.73 rows=10393 width=51) -> Index Scan using idx_mtgccclstext_cccsssid_j2 on mtgcccclistextj2 b (cost=0.43..11330.73 rows=10393 width=51) Index Cond: ((cccid = '27115336789879'::bigint) AND (siteid = '15818676'::bigint))(8 rows) explain execute j2eemtgatdlistsql16(27115336789879,15818676); QUERY PLAN--------------------------------------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=0.87..289.53 rows=14 width=1111) -> Index Scan using idx_mtgccclist_conf_j2 on mtgccclistj2 a (cost=0.43..251.94 rows=14 width=1059) Index Cond: ((cccid = $1) AND (sssid = $2)) Filter: 
(jstatus = ANY ('{3,7,11,2,6,10}'::bigint[])) -> Index Scan using idx_mtgccclstext_cccsssid_j2 on mtgcccclistextj2 b (cost=0.43..2.66 rows=1 width=51) Index Cond: ((cccid = a.cccid) AND (cccid = $1) AND (sssid = a.sssid) AND (sssid = $2)) Filter: (a.uuid = uuid)(7 rows)Thanks,RegardsPavel James",
"msg_date": "Thu, 29 Feb 2024 15:38:28 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generic plan generate poor performance"
}
] |
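The switch to a generic plan follows a cost comparison that PostgreSQL makes after the first few executions of a prepared statement. This is a simplified sketch of that heuristic (the real logic in plancache.c also accounts for planning cost), showing how a generic plan whose estimated cost looks low, as in the 289.53 vs. 25541.17 estimates above, gets chosen even when it actually runs slower:

```python
# Simplified model of Postgres's custom-vs-generic plan choice for
# prepared statements: the first five executions use custom plans, then
# the generic plan is kept if its estimated cost is no higher than the
# average custom-plan cost. Costs below are illustrative only.
def choose_plan(num_custom_plans, avg_custom_cost, generic_cost):
    if num_custom_plans < 5:
        return "custom"
    return "generic" if generic_cost <= avg_custom_cost else "custom"

# Early executions: always custom, so the hash join plan is used.
early = choose_plan(2, avg_custom_cost=25000.0, generic_cost=290.0)

# After five runs the cheap-looking generic nested loop wins on paper,
# even though its row estimates are wrong at runtime.
later = choose_plan(5, avg_custom_cost=25000.0, generic_cost=290.0)

print(early, later)
```

This is why forcing `plan_cache_mode = force_custom_plan` (or fixing the estimates the generic plan is costed with) is the usual remedy for this pattern.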
[
{
"msg_contents": "I was told that partitioned table indexes must always start with the\npartition key columns.\n\nIs this always the case, or does it depend on the use case? When would you want\nto create indexes in this way?\n\nThe documentation just mentions that it is not strictly necessary but can be\nhelpful. My understanding is that partitions behave like normal tables. Each\ngets its own index. So, I'd expect the reasoning behind creating the\nindex on the partition to be the same as if it were just a normal table\n(assuming it has the same subset of data as the individual partition). Is\nthis a correct understanding?\n\nAny other performance considerations when it comes to partitioned table\nindexing? Specifically, partitioning by range where the range is a single\nvalue.\n\nI was told that partitioned table indexes must always start with the partition key columns. Is this always the case, or does it depend on the use case? When would you want to create indexes in this way?The documentation just mentions that it is not strictly necessary but can be helpful. My understanding is that partitions behave like normal tables. Each gets its own index. So, I'd expect the reasoning behind creating the index on the partition to be the same as if it were just a normal table (assuming it has the same subset of data as the individual partition). Is this a correct understanding? Any other performance considerations when it comes to partitioned table indexing? Specifically, partitioning by range where the range is a single value.",
"msg_date": "Thu, 29 Feb 2024 11:42:11 -0500",
"msg_from": "David Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table Partitioning and Indexes Performance Questions"
},
{
"msg_contents": "On Thu, 2024-02-29 at 11:42 -0500, David Kelly wrote:\n> I was told that partitioned table indexes must always start with the partition key columns.\n\nThat's not true.\n\nOnly unique indexes (as used by primary key and unique constraints) must\ncontain the partitioning key (but they don't have to start with it).\n\n\n> Any other performance considerations when it comes to partitioned table indexing?\n> Specifically, partitioning by range where the range is a single value.\n\nNot particularly - selecting from a partitioned table is like selecting\nfrom a UNION ALL of all partitions, except that sometimes PostgreSQL\ncan forgo scanning some of the partitions.\nIf you use very many partitions, the overhead for query planning and\nexecution can become noticeable.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 29 Feb 2024 18:32:48 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning and Indexes Performance Questions"
},
{
"msg_contents": "Would eliminating triggers and stored procedures be step #1 to start seeing gains from partitions?\nWe have many triggers and stored procedures, and I am trying to make sure whether we need to deprecate them before moving to partitioning.\n\nMany thx\nAndy\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Laurenz Albe <[email protected]>\nSent: Thursday, February 29, 2024 9:32:48 AM\nTo: David Kelly <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Table Partitioning and Indexes Performance Questions\n\nOn Thu, 2024-02-29 at 11:42 -0500, David Kelly wrote:\n> I was told that partitioned table indexed must always start with the partition key columns.\n\nThat's not true.\n\nOnly unique indexes (as used by primary key and unique constraints) must\ncontain the partitioning key (but they don't have to start with it).\n\n\n> Any other performance considerations when it comes to partitioned table indexing?\n> Specifically, partitioning by range where the range is a single value.\n\nNot particularly - selecting from a partitioned table is like selecting\nfrom a UNION ALL of all partitions, except that sometimes PostgreSQL\ncan forgo scanning some of the partitions.\nIf you use very many partitions, the overhead for query planning and\nexecution can become noticable.\n\nYours,\nLaurenz Albe\n\n\n\n\n\n\n\n\nWould eliminating triggers and stored procedures be step #1 to start seeing gains from partitions?\nWe have many triggers and stored procedures, and I am trying to make sure whether we need to deprecate them before moving to partitioning.\n\n\nMany thx\nAndy\n\n\n\nGet Outlook for Android\n\nFrom: Laurenz Albe <[email protected]>\nSent: Thursday, February 29, 2024 9:32:48 AM\nTo: David Kelly <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Table Partitioning and Indexes Performance Questions\n \n\n\nOn Thu, 2024-02-29 at 11:42 -0500, David Kelly wrote:\n> I was told that
partitioned table indexed must always start with the partition key columns.\n\nThat's not true.\n\nOnly unique indexes (as used by primary key and unique constraints) must\ncontain the partitioning key (but they don't have to start with it).\n\n\n> Any other performance considerations when it comes to partitioned table indexing?\n> Specifically, partitioning by range where the range is a single value.\n\nNot particularly - selecting from a partitioned table is like selecting\nfrom a UNION ALL of all partitions, except that sometimes PostgreSQL\ncan forgo scanning some of the partitions.\nIf you use very many partitions, the overhead for query planning and\nexecution can become noticable.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 29 Feb 2024 18:11:33 +0000",
"msg_from": "Anupam b <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning and Indexes Performance Questions"
}
] |
[
{
"msg_contents": "Hi,\nWe are designing one application which is currently restricted to one time\nzone users but has the possibility to go global in future. Some of the\ntransaction tables are going to be daily range partitioned on the\ntransaction_create_date column. But the \"date\" data type will have no time\ncomponent in it, so we are thinking to make it as timestamp data\ntype(timestamptz(6)), so that it will help us in us two ways,\n\nfirstly , though current use cases in which the majority of the queries are\ngoing to happen on a day or multiple days of transactions. But if we have\nany use case which needs further lower granularity like in hourly duration\n, then having \"timestamp\" data type with an index created on it will help.\nAnd in future , if we plan to partition it based on further lower\ngranularity like hourly , that can be accommodated easily with a\n\"timestamp\" data type.\n\nHowever the question we have is ,\n1)If there is any downside of having the partition key with \"timestamp with\ntimezone\" type? Will it impact the partition pruning of the queries anyway\nby appending any run time \"time zone\" conversion function during the query\nplanning/execution phase?\n2) As it will take the default server times , so during daylight saving\nthe server time will change, so in that case, can it cause any unforeseen\nissue?\n3)Will this cause the data to be spread unevenly across partitions and make\nthe partitions unevenly sized? If will go for UTC/GMT as db time, the\nuser's one day transaction might span across two daily partitions.\n\n\nThanks and Regards\nSud\n\nHi,We are designing one application which is currently restricted to one time zone users but has the possibility to go global in future. Some of the transaction tables are going to be daily range partitioned on the transaction_create_date column. 
But the \"date\" data type will have no time component in it, so we are thinking to make it as timestamp data type(timestamptz(6)), so that it will help us in us two ways,firstly , though current use cases in which the majority of the queries are going to happen on a day or multiple days of transactions. But if we have any use case which needs further lower granularity like in hourly duration , then having \"timestamp\" data type with an index created on it will help. And in future , if we plan to partition it based on further lower granularity like hourly , that can be accommodated easily with a \"timestamp\" data type. However the question we have is , 1)If there is any downside of having the partition key with \"timestamp with timezone\" type? Will it impact the partition pruning of the queries anyway by appending any run time \"time zone\" conversion function during the query planning/execution phase? 2) As it will take the default server times , so during daylight saving the server time will change, so in that case, can it cause any unforeseen issue?3)Will this cause the data to be spread unevenly across partitions and make the partitions unevenly sized? If will go for UTC/GMT as db time, the user's one day transaction might span across two daily partitions. Thanks and RegardsSud",
"msg_date": "Tue, 5 Mar 2024 01:09:16 +0530",
"msg_from": "sud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is partition pruning impacted by data type"
},
{
"msg_contents": "Hello,\nHas anybody got experience of using a range partitioning table using\ntimestamptz or \"timestamp with no timezone\" Column and saw any of such\nknown issues in pruning?\n\n\n\n> On Tue, 5 Mar, 2024, 1:09 am sud, <[email protected]> wrote:\n>\n>> Hi,\n>> We are designing one application which is currently restricted to one\n>> time zone users but has the possibility to go global in future. Some of the\n>> transaction tables are going to be daily range partitioned on the\n>> transaction_create_date column. But the \"date\" data type will have no time\n>> component in it, so we are thinking to make it as timestamp data\n>> type(timestamptz(6)), so that it will help us in us two ways,\n>>\n>> firstly , though current use cases in which the majority of the queries\n>> are going to happen on a day or multiple days of transactions. But if we\n>> have any use case which needs further lower granularity like in hourly\n>> duration , then having \"timestamp\" data type with an index created on it\n>> will help. And in future , if we plan to partition it based on further\n>> lower granularity like hourly , that can be accommodated easily with a\n>> \"timestamp\" data type.\n>>\n>> However the question we have is ,\n>> *1)If there is any downside of having the partition key with \"timestamp\n>> with timezone\" type? Will it impact the partition pruning of the queries\n>> anyway by appending any run time \"time zone\" conversion function during the\n>> query planning/execution phase? *\n>>\n>\n> *2) As it will take the default server times , so during daylight saving\n>> the server time will change, so in that case, can it cause any unforeseen\n>> issue?*\n>>\n>\n> *3)Will this cause the data to be spread unevenly across partitions and\n>> make the partitions unevenly sized? If will go for UTC/GMT as db time, the\n>> user's one day transaction might span across two daily partitions. 
*\n>>\n>>\n>> Thanks and Regards\n>> Sud\n>>\n>\n\nHello,Has anybody got experience of using a range partitioning table using timestamptz or \"timestamp with no timezone\" Column and saw any of such known issues in pruning? On Tue, 5 Mar, 2024, 1:09 am sud, <[email protected]> wrote:Hi,We are designing one application which is currently restricted to one time zone users but has the possibility to go global in future. Some of the transaction tables are going to be daily range partitioned on the transaction_create_date column. But the \"date\" data type will have no time component in it, so we are thinking to make it as timestamp data type(timestamptz(6)), so that it will help us in us two ways,firstly , though current use cases in which the majority of the queries are going to happen on a day or multiple days of transactions. But if we have any use case which needs further lower granularity like in hourly duration , then having \"timestamp\" data type with an index created on it will help. And in future , if we plan to partition it based on further lower granularity like hourly , that can be accommodated easily with a \"timestamp\" data type. However the question we have is , 1)If there is any downside of having the partition key with \"timestamp with timezone\" type? Will it impact the partition pruning of the queries anyway by appending any run time \"time zone\" conversion function during the query planning/execution phase? 2) As it will take the default server times , so during daylight saving the server time will change, so in that case, can it cause any unforeseen issue?3)Will this cause the data to be spread unevenly across partitions and make the partitions unevenly sized? If will go for UTC/GMT as db time, the user's one day transaction might span across two daily partitions. Thanks and RegardsSud",
"msg_date": "Tue, 5 Mar 2024 10:50:19 +0530",
"msg_from": "sud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is partition pruning impacted by data type"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 1:09 AM sud <[email protected]> wrote:\n\n>\n> However the question we have is ,\n> 1)If there is any downside of having the partition key with \"timestamp\n> with timezone\" type? Will it impact the partition pruning of the queries\n> anyway by appending any run time \"time zone\" conversion function during the\n> query planning/execution phase?\n> 2) As it will take the default server times , so during daylight saving\n> the server time will change, so in that case, can it cause any unforeseen\n> issue?\n> 3)Will this cause the data to be spread unevenly across partitions and\n> make the partitions unevenly sized? If will go for UTC/GMT as db time, the\n> user's one day transaction might span across two daily partitions.\n>\n>\nMy 2 cents.\nWe have cases which use the \"timestamp with timezone\" column as partition\nkey and the partition pruning happens for the read queries without any\nissue, so we don't see any conversion functions applied to the predicate as\nsuch which is partition key. I think if the users go global it's better to\nhave the database time in UTC time zone. and it's obvious that, In case of\nglobal users the data ought to be span across multiple days as the days\nwon't be as per the users time zone rather UTC.\n\nOn Tue, Mar 5, 2024 at 1:09 AM sud <[email protected]> wrote:However the question we have is , 1)If there is any downside of having the partition key with \"timestamp with timezone\" type? Will it impact the partition pruning of the queries anyway by appending any run time \"time zone\" conversion function during the query planning/execution phase? 2) As it will take the default server times , so during daylight saving the server time will change, so in that case, can it cause any unforeseen issue?3)Will this cause the data to be spread unevenly across partitions and make the partitions unevenly sized? 
If will go for UTC/GMT as db time, the user's one day transaction might span across two daily partitions. My 2 cents.We have cases which use the \"timestamp with timezone\" column as partition key and the partition pruning happens for the read queries without any issue, so we don't see any conversion functions applied to the predicate as such which is partition key. I think if the users go global it's better to have the database time in UTC time zone. and it's obvious that, In case of global users the data ought to be span across multiple days as the days won't be as per the users time zone rather UTC.",
"msg_date": "Wed, 6 Mar 2024 00:35:02 +0530",
"msg_from": "Lok P <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is partition pruning impacted by data type"
},
{
"msg_contents": "Thank you.\n\nYes, I tried creating a table manually with column timestamptz(6) type and\npartitioned on that and then executed select query with the filter on that\ncolumn and I do see partition pruning happening. Not able to visualize any\nother issues though, however some teammates say it may have a negative\nimpact on aggregation type queries , not sure how but will try to test it.\nThanks again for the response.\n\nOn Wed, Mar 6, 2024 at 12:35 AM Lok P <[email protected]> wrote:\n\n>\n> On Tue, Mar 5, 2024 at 1:09 AM sud <[email protected]> wrote:\n>\n>>\n>> However the question we have is ,\n>> 1)If there is any downside of having the partition key with \"timestamp\n>> with timezone\" type? Will it impact the partition pruning of the queries\n>> anyway by appending any run time \"time zone\" conversion function during the\n>> query planning/execution phase?\n>> 2) As it will take the default server times , so during daylight saving\n>> the server time will change, so in that case, can it cause any unforeseen\n>> issue?\n>> 3)Will this cause the data to be spread unevenly across partitions and\n>> make the partitions unevenly sized? If will go for UTC/GMT as db time, the\n>> user's one day transaction might span across two daily partitions.\n>>\n>>\n> My 2 cents.\n> We have cases which use the \"timestamp with timezone\" column as partition\n> key and the partition pruning happens for the read queries without any\n> issue, so we don't see any conversion functions applied to the predicate as\n> such which is partition key. I think if the users go global it's better to\n> have the database time in UTC time zone. 
and it's obvious that, In case of\n> global users the data ought to be span across multiple days as the days\n> won't be as per the users time zone rather UTC.\n>\n>\n>\n>\n\nThank you.Yes, I tried creating a table manually with column timestamptz(6) type and partitioned on that and then executed select query with the filter on that column and I do see partition pruning happening. Not able to visualize any other issues though, however some teammates say it may have a negative impact on aggregation type queries , not sure how but will try to test it. Thanks again for the response.On Wed, Mar 6, 2024 at 12:35 AM Lok P <[email protected]> wrote:On Tue, Mar 5, 2024 at 1:09 AM sud <[email protected]> wrote:However the question we have is , 1)If there is any downside of having the partition key with \"timestamp with timezone\" type? Will it impact the partition pruning of the queries anyway by appending any run time \"time zone\" conversion function during the query planning/execution phase? 2) As it will take the default server times , so during daylight saving the server time will change, so in that case, can it cause any unforeseen issue?3)Will this cause the data to be spread unevenly across partitions and make the partitions unevenly sized? If will go for UTC/GMT as db time, the user's one day transaction might span across two daily partitions. My 2 cents.We have cases which use the \"timestamp with timezone\" column as partition key and the partition pruning happens for the read queries without any issue, so we don't see any conversion functions applied to the predicate as such which is partition key. I think if the users go global it's better to have the database time in UTC time zone. and it's obvious that, In case of global users the data ought to be span across multiple days as the days won't be as per the users time zone rather UTC.",
"msg_date": "Wed, 6 Mar 2024 00:59:41 +0530",
"msg_from": "sud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is partition pruning impacted by data type"
},
{
"msg_contents": "Something interesting and not sure if expected behaviour is as below. We\nare confused here.\n\nIn the below example we created two partitioned tables on timestamptz type\ncolumns with different time zones and the child partitions are created\nappropriately with boundaries as one mid night to next mid night and so\non.But when we change the time zone and query the data dictionary views\nagain, it shows the start and end of the partition boundary as not\nmidnights but different values.\n\nSo I was wondering if this can cause us any unforeseen issues in the long\nrun while creating the partitions though partman or while persisting the\ndata into the tables from the end users? or should we always set the local\ntimezone as UTC always before running or calling the pg_partman/pg_cron\nprocess which creates the partitions? Mainly in a database which serves\nglobal users sitting across multiple timezones. And same thing while\ninserting data into the table, we should use UTC timezone conversion\nfunction.\n\nAnd while checking the timezone using the \"show timezone\" function it shows\nthe local timezone, so is there any way to see postgres DB the server\ntimezone?\n\nSET SESSION TIME ZONE 'UTC';\nCREATE TABLE test_timestamp (\nts TIMESTAMP,\ntstz TIMESTAMPTZ) PARTITION BY RANGE (tstz);\n\nSELECT partman.create_parent(\n p_parent_table := 'public.test_timestamp',\n p_control := 'tstz',\n p_type := 'native',\n p_interval := '1 day',\n p_premake := 4,\n p_start_partition => '2024-03-07 00:00:00'\n);\n\nUPDATE partman.part_config SET infinite_time_partitions = 'true' WHERE\nparent_table = 'public.test_timestamp';\n\nwith recursive inh as (\n select i.inhrelid, null::text as parent\n from pg_catalog.pg_inherits i\n join pg_catalog.pg_class cl on i.inhparent = cl.oid\n join pg_catalog.pg_namespace nsp on cl.relnamespace = nsp.oid\n where nsp.nspname = 'public'\n and cl.relname = 'test_timestamp2'\n union all\n select i.inhrelid, (i.inhparent::regclass)::text\n 
from inh\n join pg_catalog.pg_inherits i on (inh.inhrelid = i.inhparent)\n)\nselect c.relname as partition_name,\n pg_get_expr(c.relpartbound, c.oid, true) as partition_expression\nfrom inh\n join pg_catalog.pg_class c on inh.inhrelid = c.oid\n join pg_catalog.pg_namespace n on c.relnamespace = n.oid\n left join pg_partitioned_table p on p.partrelid = c.oid\norder by n.nspname, c.relname;\n\ntest_timestamp_default DEFAULT\ntest_timestamp_p2024_03_07 FOR VALUES FROM ('2024-03-07 00:00:00+00') TO\n('2024-03-08 00:00:00+00')\ntest_timestamp_p2024_03_08 FOR VALUES FROM ('2024-03-08 00:00:00+00') TO\n('2024-03-09 00:00:00+00')\ntest_timestamp_p2024_03_09 FOR VALUES FROM ('2024-03-09 00:00:00+00') TO\n('2024-03-10 00:00:00+00')\ntest_timestamp_p2024_03_10 FOR VALUES FROM ('2024-03-10 00:00:00+00') TO\n('2024-03-11 00:00:00+00')\ntest_timestamp_p2024_03_11 FOR VALUES FROM ('2024-03-11 00:00:00+00') TO\n('2024-03-12 00:00:00+00')\n\nSET SESSION TIME ZONE 'EST';\n\ntest_timestamp_default DEFAULT\ntest_timestamp_p2024_03_07 FOR VALUES FROM ('2024-03-06 *19:00:00-05*') TO\n('2024-03-07 19:00:00-05')\ntest_timestamp_p2024_03_08 FOR VALUES FROM ('2024-03-07 *19:00:00-05*') TO\n('2024-03-08 19:00:00-05')\ntest_timestamp_p2024_03_09 FOR VALUES FROM ('2024-03-08 *19:00:00-05*') TO\n('2024-03-09 19:00:00-05')\ntest_timestamp_p2024_03_10 FOR VALUES FROM ('2024-03-09 *19:00:00-05*') TO\n('2024-03-10 19:00:00-05')\ntest_timestamp_p2024_03_11 FOR VALUES FROM ('2024-03-10 *19:00:00-05*') TO\n('2024-03-11 19:00:00-05')\n\n***********************\n\nSET SESSION TIME ZONE 'EST';\n\nCREATE TABLE test_timestamp2 (\nts TIMESTAMP,\ntstz TIMESTAMPTZ) PARTITION BY RANGE (tstz);\n\nSELECT partman.create_parent(\n p_parent_table := 'public.test_timestamp2',\n p_control := 'tstz',\n p_type := 'native',\n p_interval := '1 day',\n p_premake := 4,\n p_start_partition => '2024-03-07 00:00:00'\n);\n\nUPDATE partman.part_config SET infinite_time_partitions = 'true' WHERE\nparent_table = 
'public.test_timestamp2';\n\nwith recursive inh as (\n select i.inhrelid, null::text as parent\n from pg_catalog.pg_inherits i\n join pg_catalog.pg_class cl on i.inhparent = cl.oid\n join pg_catalog.pg_namespace nsp on cl.relnamespace = nsp.oid\n where nsp.nspname = 'public'\n and cl.relname = 'test_timestamp2'\n union all\n select i.inhrelid, (i.inhparent::regclass)::text\n from inh\n join pg_catalog.pg_inherits i on (inh.inhrelid = i.inhparent)\n)\nselect c.relname as partition_name,\n pg_get_expr(c.relpartbound, c.oid, true) as partition_expression\nfrom inh\n join pg_catalog.pg_class c on inh.inhrelid = c.oid\n join pg_catalog.pg_namespace n on c.relnamespace = n.oid\n left join pg_partitioned_table p on p.partrelid = c.oid\norder by n.nspname, c.relname;\n\ntest_timestamp2_default DEFAULT\ntest_timestamp2_p2024_03_07 FOR VALUES FROM ('2024-03-07 00:00:00-05') TO\n('2024-03-08 00:00:00-05')\ntest_timestamp2_p2024_03_08 FOR VALUES FROM ('2024-03-08 00:00:00-05') TO\n('2024-03-09 00:00:00-05')\ntest_timestamp2_p2024_03_09 FOR VALUES FROM ('2024-03-09 00:00:00-05') TO\n('2024-03-10 00:00:00-05')\ntest_timestamp2_p2024_03_10 FOR VALUES FROM ('2024-03-10 00:00:00-05') TO\n('2024-03-11 00:00:00-05')\ntest_timestamp2_p2024_03_11 FOR VALUES FROM ('2024-03-11 00:00:00-05') TO\n('2024-03-12 00:00:00-05')\n\n\nSET SESSION TIME ZONE 'UTC';\n\ntest_timestamp2_default DEFAULT\ntest_timestamp2_p2024_03_07 FOR VALUES FROM ('2024-03-07 *05:00:00+00*') TO\n('2024-03-08 05:00:00+00')\ntest_timestamp2_p2024_03_08 FOR VALUES FROM ('2024-03-08 *05:00:00+00*') TO\n('2024-03-09 05:00:00+00')\ntest_timestamp2_p2024_03_09 FOR VALUES FROM ('2024-03-09 *05:00:00+00*') TO\n('2024-03-10 05:00:00+00')\ntest_timestamp2_p2024_03_10 FOR VALUES FROM ('2024-03-10 *05:00:00+00*') TO\n('2024-03-11 05:00:00+00')\ntest_timestamp2_p2024_03_11 FOR VALUES FROM ('2024-03-11 *05:00:00+00*') TO\n('2024-03-12 05:00:00+00')\n\nRegards\nSud\n\nSomething interesting and not sure if expected behaviour is as 
below. We are confused here.In the below example we created two partitioned tables on timestamptz type columns with different time zones and the child partitions are created appropriately with boundaries as one mid night to next mid night and so on.But when we change the time zone and query the data dictionary views again, it shows the start and end of the partition boundary as not midnights but different values. So I was wondering if this can cause us any unforeseen issues in the long run while creating the partitions though partman or while persisting the data into the tables from the end users? or should we always set the local timezone as UTC always before running or calling the pg_partman/pg_cron process which creates the partitions? Mainly in a database which serves global users sitting across multiple timezones. And same thing while inserting data into the table, we should use UTC timezone conversion function.And while checking the timezone using the \"show timezone\" function it shows the local timezone, so is there any way to see postgres DB the server timezone?SET SESSION TIME ZONE 'UTC';CREATE TABLE test_timestamp (ts TIMESTAMP,tstz TIMESTAMPTZ) PARTITION BY RANGE (tstz);SELECT partman.create_parent( p_parent_table := 'public.test_timestamp', p_control := 'tstz', p_type := 'native', p_interval := '1 day', p_premake := 4, p_start_partition => '2024-03-07 00:00:00');UPDATE partman.part_config SET infinite_time_partitions = 'true' WHERE parent_table = 'public.test_timestamp';with recursive inh as ( select i.inhrelid, null::text as parent from pg_catalog.pg_inherits i join pg_catalog.pg_class cl on i.inhparent = cl.oid join pg_catalog.pg_namespace nsp on cl.relnamespace = nsp.oid where nsp.nspname = 'public' and cl.relname = 'test_timestamp2' union all select i.inhrelid, (i.inhparent::regclass)::text from inh join pg_catalog.pg_inherits i on (inh.inhrelid = i.inhparent))select c.relname as partition_name, pg_get_expr(c.relpartbound, c.oid, true) as 
partition_expressionfrom inh join pg_catalog.pg_class c on inh.inhrelid = c.oid join pg_catalog.pg_namespace n on c.relnamespace = n.oid left join pg_partitioned_table p on p.partrelid = c.oidorder by n.nspname, c.relname;test_timestamp_default\tDEFAULTtest_timestamp_p2024_03_07\tFOR VALUES FROM ('2024-03-07 00:00:00+00') TO ('2024-03-08 00:00:00+00')test_timestamp_p2024_03_08\tFOR VALUES FROM ('2024-03-08 00:00:00+00') TO ('2024-03-09 00:00:00+00')test_timestamp_p2024_03_09\tFOR VALUES FROM ('2024-03-09 00:00:00+00') TO ('2024-03-10 00:00:00+00')test_timestamp_p2024_03_10\tFOR VALUES FROM ('2024-03-10 00:00:00+00') TO ('2024-03-11 00:00:00+00')test_timestamp_p2024_03_11\tFOR VALUES FROM ('2024-03-11 00:00:00+00') TO ('2024-03-12 00:00:00+00')SET SESSION TIME ZONE 'EST';test_timestamp_default\tDEFAULTtest_timestamp_p2024_03_07\tFOR VALUES FROM ('2024-03-06 19:00:00-05') TO ('2024-03-07 19:00:00-05')test_timestamp_p2024_03_08\tFOR VALUES FROM ('2024-03-07 19:00:00-05') TO ('2024-03-08 19:00:00-05')test_timestamp_p2024_03_09\tFOR VALUES FROM ('2024-03-08 19:00:00-05') TO ('2024-03-09 19:00:00-05')test_timestamp_p2024_03_10\tFOR VALUES FROM ('2024-03-09 19:00:00-05') TO ('2024-03-10 19:00:00-05')test_timestamp_p2024_03_11\tFOR VALUES FROM ('2024-03-10 19:00:00-05') TO ('2024-03-11 19:00:00-05')***********************SET SESSION TIME ZONE 'EST';CREATE TABLE test_timestamp2 (ts TIMESTAMP,tstz TIMESTAMPTZ) PARTITION BY RANGE (tstz);SELECT partman.create_parent( p_parent_table := 'public.test_timestamp2', p_control := 'tstz', p_type := 'native', p_interval := '1 day', p_premake := 4, p_start_partition => '2024-03-07 00:00:00');UPDATE partman.part_config SET infinite_time_partitions = 'true' WHERE parent_table = 'public.test_timestamp2';with recursive inh as ( select i.inhrelid, null::text as parent from pg_catalog.pg_inherits i join pg_catalog.pg_class cl on i.inhparent = cl.oid join pg_catalog.pg_namespace nsp on cl.relnamespace = nsp.oid where nsp.nspname = 'public' and 
cl.relname = 'test_timestamp2' union all select i.inhrelid, (i.inhparent::regclass)::text from inh join pg_catalog.pg_inherits i on (inh.inhrelid = i.inhparent))select c.relname as partition_name, pg_get_expr(c.relpartbound, c.oid, true) as partition_expressionfrom inh join pg_catalog.pg_class c on inh.inhrelid = c.oid join pg_catalog.pg_namespace n on c.relnamespace = n.oid left join pg_partitioned_table p on p.partrelid = c.oidorder by n.nspname, c.relname;test_timestamp2_default\tDEFAULTtest_timestamp2_p2024_03_07\tFOR VALUES FROM ('2024-03-07 00:00:00-05') TO ('2024-03-08 00:00:00-05')test_timestamp2_p2024_03_08\tFOR VALUES FROM ('2024-03-08 00:00:00-05') TO ('2024-03-09 00:00:00-05')test_timestamp2_p2024_03_09\tFOR VALUES FROM ('2024-03-09 00:00:00-05') TO ('2024-03-10 00:00:00-05')test_timestamp2_p2024_03_10\tFOR VALUES FROM ('2024-03-10 00:00:00-05') TO ('2024-03-11 00:00:00-05')test_timestamp2_p2024_03_11\tFOR VALUES FROM ('2024-03-11 00:00:00-05') TO ('2024-03-12 00:00:00-05')SET SESSION TIME ZONE 'UTC';test_timestamp2_default\tDEFAULTtest_timestamp2_p2024_03_07\tFOR VALUES FROM ('2024-03-07 05:00:00+00') TO ('2024-03-08 05:00:00+00')test_timestamp2_p2024_03_08\tFOR VALUES FROM ('2024-03-08 05:00:00+00') TO ('2024-03-09 05:00:00+00')test_timestamp2_p2024_03_09\tFOR VALUES FROM ('2024-03-09 05:00:00+00') TO ('2024-03-10 05:00:00+00')test_timestamp2_p2024_03_10\tFOR VALUES FROM ('2024-03-10 05:00:00+00') TO ('2024-03-11 05:00:00+00')test_timestamp2_p2024_03_11\tFOR VALUES FROM ('2024-03-11 05:00:00+00') TO ('2024-03-12 05:00:00+00')RegardsSud",
"msg_date": "Fri, 8 Mar 2024 02:07:23 +0530",
"msg_from": "sud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is partition pruning impacted by data type"
}
] |
[
{
"msg_contents": "Hi list,\n\nIn France, the total number of cadastral parcels is around 10 000 000\n\nThe data can be heavy, because each parcel stores a geometry (PostGIS\ngeometry data type inside a geom column).\nIndexes must be created to increase performance of day-to-day requests:\n\n* GIST index on geom for spatial filtering and intersection with other\ngeometries (other tables)\n* Primary key and probably another unique code to index\n* one index on the \"department\" field. There are around 100 \"departments\"\n(admin boundaries) in France, and the parcels are homogeneously distributed\n(~ 1M parcel per \"department\")\n\nThe initial import of this data is made one department by one department\n(the data source is distributed by department by French authorities). And\neach year, data must be deleted and reimported (data change each year), and\nthis is also often done one department at a time.\n\n* Sometimes requests are made with a department filter (for example WHERE\ndepartment IN ('2A', '34', '30', '48') )\n* Sometimes other client database clients must be able to get data from the\nwhole dataset ( for example get the parcels for a list of known IDs)\n\nI would like to question the list about the following 2 strategies to\nmaintain such data:\n\n1/ Put the whole dataset into one big table\n2/ Create one table per department, and create a VIEW with 100 UNION ALL to\ngather all the parcels\n\n1/ Seems simpler for the database clients, but it seems to me this can be a\npain to maintain. 
For example, each time we will need to replace last year\ndata for one department with the upcoming new data, we will need to delete\n1M lines, reimport the new 1M lines and VACUUM FULL to regain space.\nIndexes will be huge, and I can suffer questions like :\nhttps://www.postgresql.org/message-id/CAMKXKO7yXmduSs4zzMfdRaPUn2kOKtQ6KMnDe1GxEr56Vr8hxA%40mail.gmail.com\nI often need to use pg_repack to regain spaces on this kind of table.\nVACUUM FULL cannot be used because it locks the table, and it takes times\n(!)\n\n2/ Seems more kiss, but only if queries on the UNION VIEW will be able to\nuse the tables indexes (geom, department) and perform as well as the big\ntable.\n\n\nAny hint appreciated !\nRegards\n\nKimaidou\n\nHi list,In France, the total number of cadastral parcels is around 10 000 000The data can be heavy, because each parcel stores a geometry (PostGIS geometry data type inside a geom column).Indexes must be created to increase performance of day-to-day requests:* GIST index on geom for spatial filtering and intersection with other geometries (other tables)* Primary key and probably another unique code to index* one index on the \"department\" field. There are around 100 \"departments\" (admin boundaries) in France, and the parcels are homogeneously distributed (~ 1M parcel per \"department\")The initial import of this data is made one department by one department (the data source is distributed by department by French authorities). 
And each year, data must be deleted and reimported (data change each year), and this is also often done one department at a time.* Sometimes requests are made with a department filter (for example WHERE department IN ('2A', '34', '30', '48') )* Sometimes other client database clients must be able to get data from the whole dataset ( for example get the parcels for a list of known IDs)I would like to question the list about the following 2 strategies to maintain such data:1/ Put the whole dataset into one big table2/ Create one table per department, and create a VIEW with 100 UNION ALL to gather all the parcels1/ Seems simpler for the database clients, but it seems to me this can be a pain to maintain. For example, each time we will need to replace last year data for one department with the upcoming new data, we will need to delete 1M lines, reimport the new 1M lines and VACUUM FULL to regain space. Indexes will be huge, and I can suffer questions like : https://www.postgresql.org/message-id/CAMKXKO7yXmduSs4zzMfdRaPUn2kOKtQ6KMnDe1GxEr56Vr8hxA%40mail.gmail.comI often need to use pg_repack to regain spaces on this kind of table. VACUUM FULL cannot be used because it locks the table, and it takes times (!)2/ Seems more kiss, but only if queries on the UNION VIEW will be able to use the tables indexes (geom, department) and perform as well as the big table.Any hint appreciated !RegardsKimaidou",
"msg_date": "Tue, 5 Mar 2024 08:44:54 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Separate 100 M spatial data in 100 tables VS one big table"
},
{
"msg_contents": "Salut Kimaidou,\nwhy not a partitioned table with the department a partitioning Key ?\neach year just detach the obsolete data, department by\ndepartment (ie.detach the partition, almost instantaneous) and drop or keep\nthe obsolete data.\nNo delete, quite easy to maintain. For each global index, Postgres will\ncreate one index per each partition. and detach them when you detach a\ndepartment partition.\nso when importing, first create an appropriate table, load the data, and\nattach it to the main partitioned table. Postgres will\nautomatically recreate all necessary indexes.\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Tue, Mar 5, 2024 at 8:45 AM kimaidou <[email protected]> wrote:\n\n> Hi list,\n>\n> In France, the total number of cadastral parcels is around 10 000 000\n>\n> The data can be heavy, because each parcel stores a geometry (PostGIS\n> geometry data type inside a geom column).\n> Indexes must be created to increase performance of day-to-day requests:\n>\n> * GIST index on geom for spatial filtering and intersection with other\n> geometries (other tables)\n> * Primary key and probably another unique code to index\n> * one index on the \"department\" field. There are around 100 \"departments\"\n> (admin boundaries) in France, and the parcels are homogeneously distributed\n> (~ 1M parcel per \"department\")\n>\n> The initial import of this data is made one department by one department\n> (the data source is distributed by department by French authorities). 
And\n> each year, data must be deleted and reimported (data change each year), and\n> this is also often done one department at a time.\n>\n> * Sometimes requests are made with a department filter (for example WHERE\n> department IN ('2A', '34', '30', '48') )\n> * Sometimes other client database clients must be able to get data from\n> the whole dataset ( for example get the parcels for a list of known IDs)\n>\n> I would like to question the list about the following 2 strategies to\n> maintain such data:\n>\n> 1/ Put the whole dataset into one big table\n> 2/ Create one table per department, and create a VIEW with 100 UNION ALL\n> to gather all the parcels\n>\n> 1/ Seems simpler for the database clients, but it seems to me this can be\n> a pain to maintain. For example, each time we will need to replace last\n> year data for one department with the upcoming new data, we will need to\n> delete 1M lines, reimport the new 1M lines and VACUUM FULL to regain space.\n> Indexes will be huge, and I can suffer questions like :\n>\n> https://www.postgresql.org/message-id/CAMKXKO7yXmduSs4zzMfdRaPUn2kOKtQ6KMnDe1GxEr56Vr8hxA%40mail.gmail.com\n> I often need to use pg_repack to regain spaces on this kind of table.\n> VACUUM FULL cannot be used because it locks the table, and it takes times\n> (!)\n>\n> 2/ Seems more kiss, but only if queries on the UNION VIEW will be able to\n> use the tables indexes (geom, department) and perform as well as the big\n> table.\n>\n>\n> Any hint appreciated !\n> Regards\n>\n> Kimaidou\n>\n>\n>",
"msg_date": "Tue, 5 Mar 2024 13:47:03 +0100",
"msg_from": "Marc Millas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Separate 100 M spatial data in 100 tables VS one big table"
},
{
"msg_contents": "On 3/5/24 13:47, Marc Millas wrote:\n> Salut Kimaidou,\n> why not a partitioned table with the department a partitioning Key ?\n> each year just detach the obsolete data, department by\n> department (ie.detach the partition, almost instantaneous) and drop or keep\n> the obsolete data.\n> No delete, quite easy to maintain. For each global index, Postgres will\n> create one index per each partition. and detach them when you detach a\n> department partition.\n> so when importing, first create an appropriate table, load the data, and\n> attach it to the main partitioned table. Postgres will\n> automatically recreate all necessary indexes.\n> \n\nYes, a table partitioned like this is certainly a valid option - and\nit's much better than the view with a UNION of all the per-department\ntables. The optimizer has very little insight into the view, which\nlimits how it can optimize queries. For example if the query has a\ncondition like\n\n WHERE department = 'X'\n\nwith the declarative partitioning the planner can eliminate all other\npartitions (and just ignore them), while with the view it will have to\nscan all of them.\n\nBut is partitioning a good choice? Who knows - it makes some operations\nsimpler (e.g. you can detach/drop a partition instead of deleting the\nrows), but it also makes other operations less efficient. For example a\nquery that can't eliminate partitions has to do more stuff during execution.\n\nSo to answer this we'd need to know how often stuff like bulk deletes /\nreloads happen, what queries will be executed, and so on. Both options\n(non-partitioned and partitioned table) are valid, but you have to try.\n\nAlso, partitioned table may not support / allow some features - for\nexample unique keys that don't contain the partition key. We're\nimproving this in every release, but there will always be a gap.\n\nI personally would start with non-partitioned table, because that's the\nsimplest option. 
And once I get a better idea how often the reloads\nhappen, I'd consider if that's something worth the extra complexity of\npartitioning the data. If it happens only occasionally (a couple times a\nyear), it probably is not. You'll just delete the data and reuse the\nspace for new data.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 5 Mar 2024 21:13:13 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Separate 100 M spatial data in 100 tables VS one big table"
},
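Tomas's point about plan-time pruning versus a UNION view can be demonstrated with a minimal, self-contained sketch (all table and partition names here are illustrative, not taken from the thread):

```sql
-- Minimal sketch of plan-time partition pruning (illustrative names).
CREATE TABLE parcels_demo (
    department text   NOT NULL,
    parcel_id  bigint NOT NULL
) PARTITION BY LIST (department);

CREATE TABLE parcels_demo_2a PARTITION OF parcels_demo FOR VALUES IN ('2A');
CREATE TABLE parcels_demo_34 PARTITION OF parcels_demo FOR VALUES IN ('34');

-- With declarative partitioning the planner only scans the matching partition;
-- a view built from 100 UNION ALL branches gives the planner far less to work with.
EXPLAIN (COSTS OFF) SELECT * FROM parcels_demo WHERE department = '34';
```

On current releases the plan should show a scan of parcels_demo_34 only; the '2A' partition is pruned before execution because the filter value is a constant.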
{
"msg_contents": "Hi !\n\nI would like to thank you all for your detailed answers and explanations.\nI would give \"partitioning\" a try, by creating a dedicated new partition\ntable, and inserting a (big enough) extract of the source data into it.\n\nYou are right, the best would be to try in real life !\n\nBest wishes\nKimaidou\n\nOn Tuesday, 5 March 2024, Tomas Vondra <[email protected]> wrote:\n\n> On 3/5/24 13:47, Marc Millas wrote:\n> > Salut Kimaidou,\n> > why not a partitioned table with the department a partitioning Key ?\n> > each year just detach the obsolete data, department by\n> > department (ie.detach the partition, almost instantaneous) and drop or\n> keep\n> > the obsolete data.\n> > No delete, quite easy to maintain. For each global index, Postgres will\n> > create one index per each partition. and detach them when you detach a\n> > department partition.\n> > so when importing, first create an appropriate table, load the data, and\n> > attach it to the main partitioned table. Postgres will\n> > automatically recreate all necessary indexes.\n> >\n>\n> Yes, a table partitioned like this is certainly a valid option - and\n> it's much better than the view with a UNION of all the per-department\n> tables. The optimizer has very little insight into the view, which\n> limits how it can optimize queries. For example if the query has a\n> condition like\n>\n> WHERE department = 'X'\n>\n> with the declarative partitioning the planner can eliminate all other\n> partitions (and just ignore them), while with the view it will have to\n> scan all of them.\n>\n> But is partitioning a good choice? Who knows - it makes some operations\n> simpler (e.g. you can detach/drop a partition instead of deleting the\n> rows), but it also makes other operations less efficient. 
For example a\n> query that can't eliminate partitions has to do more stuff during\n> execution.\n>\n> So to answer this we'd need to know how often stuff like bulk deletes /\n> reloads happen, what queries will be executed, and so on. Both options\n> (non-partitioned and partitioned table) are valid, but you have to try.\n>\n> Also, partitioned table may not support / allow some features - for\n> example unique keys that don't contain the partition key. We're\n> improving this in every release, but there will always be a gap.\n>\n> I personally would start with non-partitioned table, because that's the\n> simplest option. And once I get a better idea how often the reloads\n> happen, I'd consider if that's something worth the extra complexity of\n> partitioning the data. If it happens only occasionally (a couple times a\n> year), it probably is not. You'll just delete the data and reuse the\n> space for new data.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Wed, 6 Mar 2024 17:13:24 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Separate 100 M spatial data in 100 tables VS one big table"
}
] |
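The per-department reload workflow suggested in the thread above can be sketched roughly as follows. This is a hedged sketch: all names are made up, and the geometry column assumes the postgis extension is installed.

```sql
-- Illustrative sketch of yearly per-department swaps (names are made up).
CREATE TABLE parcels (
    parcel_id  bigint NOT NULL,
    department text   NOT NULL,
    geom       geometry,
    PRIMARY KEY (parcel_id, department)  -- unique keys must include the partition key
) PARTITION BY LIST (department);

CREATE INDEX ON parcels USING gist (geom);  -- materializes as one index per partition

CREATE TABLE parcels_d34 PARTITION OF parcels FOR VALUES IN ('34');

-- Yearly reload of department '34': load the new data aside, then swap.
BEGIN;
ALTER TABLE parcels DETACH PARTITION parcels_d34;      -- near-instantaneous
ALTER TABLE parcels_d34 RENAME TO parcels_d34_old;
CREATE TABLE parcels_d34 (LIKE parcels INCLUDING ALL);
-- COPY / INSERT the new departmental data into parcels_d34 here ...
ALTER TABLE parcels ATTACH PARTITION parcels_d34 FOR VALUES IN ('34');
COMMIT;
DROP TABLE parcels_d34_old;  -- or keep it as an archive of last year's data
```

Note that ATTACH PARTITION scans the incoming table to validate the partition bound; adding a `CHECK (department = '34')` constraint before attaching lets PostgreSQL skip that scan.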
[
{
"msg_contents": "Hi all,\n\nI have a table approx. 20GB.\n\nI have a create unique index statement.\n\nCREATE UNIQUE INDEX testindex_v1 ON testtable1 (test_index);\n\nMy observations:\nmaintenance_work_mem = 2G\nmax_parallel_workers = '16'\n\nThe create index completes in 20 minutes.\n\nWhen I change this:\nmaintenance_work_mem = 16G\nmax_parallel_workers = '16'\n\nIt completes in 9 minutes. So I can see that I can gain performance by changing this number.\n\nSo it is faster but the question I have is it safe to set it to such a high number? I am aware that only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, so it's safe to set this value significantly larger. I have 128GB memory.\n\n\n 1. Any advice or thoughts?\n 2. Is there any other parameter that can accelerate index creation?\n\nThanks,\nAd",
"msg_date": "Tue, 19 Mar 2024 16:05:44 +0000",
"msg_from": "Adithya Kumaranchath <[email protected]>",
"msg_from_op": true,
"msg_subject": "maintenance_work_mem impact?"
},
{
"msg_contents": "On Tue, 2024-03-19 at 16:05 +0000, Adithya Kumaranchath wrote:\n> I have a table approx. 20GB.\n> \n> CREATE UNIQUE INDEX testindex_v1 ON testtable1 (test_index); \n> \n> My observations:\n> maintenance_work_mem = 2G\n> max_parallel_workers = '16'\n> \n> The create index completes in 20 minutes.\n> \n> When I change this:\n> maintenance_work_mem = 16G\n> max_parallel_workers = '16'\n> \n> It completes in 9 minutes. So I can see that I can gain performance by changing this number.\n> \n> So it is faster but the question I have is it safe to set it to such a high number?\n> I am aware that only one of these operations can be executed at a time by a database\n> session, and an installation normally doesn't have many of them running concurrently,\n> so it's safe to set this value significantly larger. I have 128GB memory.\n> 1. Any advice or thoughts?\n> 2. Is there any other parameter that can accelerate index creation? \n\nIt is safe as long as you have enough free memory on the machine.\n\nYou can verify with tools like \"free\" on Linux (look for \"available\" memory).\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 19 Mar 2024 18:54:21 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: maintenance_work_mem impact?"
}
] |
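Since maintenance_work_mem is settable per session, the large value discussed in the thread above does not have to become the instance-wide default. A minimal sketch (the 16GB figure and index statement come from the thread; the worker count is illustrative):

```sql
-- Raise the memory budget only in the session that builds the index.
SET maintenance_work_mem = '16GB';
-- Parallel index builds are governed by max_parallel_maintenance_workers
-- (bounded by max_parallel_workers); the memory budget above is shared
-- between the leader and its workers.
SET max_parallel_maintenance_workers = 8;

CREATE UNIQUE INDEX testindex_v1 ON testtable1 (test_index);

RESET maintenance_work_mem;
RESET max_parallel_maintenance_workers;
```

This answers the second question in the thread as well: for CREATE INDEX it is max_parallel_maintenance_workers, not max_parallel_workers alone, that controls how many parallel workers the build may use.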
[
{
"msg_contents": "Hi,\n\nI have an issue with a query which has been migrated from a different\nRDBMS where it performed in a second or so, and takes minutes to run\nin postgresql 14.9 on AWS/RDS:\n\n PostgreSQL 14.9 on aarch64-unknown-linux-gnu, compiled by\naarch64-unknown-linux-gnu-gcc (GCC) 9.5.0, 64-bit\n\n\nThe query looks mostly like this (table names changed, most fields\ndroppped out of the select):\n\n------\nselect\n AR.TABLE_D_PK as \"AR.TABLE_D_PK\",\n\n from TABLE_A as GI\n\n join TABLE_B as II on (II.TABLE_A_PK = GI.TABLE_A_PK and II.IS_DELETED =\n'N')\n join TABLE_C as GR on (GR.TABLE_A_PK = GI.TABLE_A_PK and GR.IS_DELETED =\n'N')\n join TABLE_D as AR on (GR.TABLE_D_PK = AR.TABLE_D_PK and AR.IS_DELETED =\n'N')\n join TABLE_E as IR on (GR.TABLE_C_PK = IR.TABLE_C_PK and IR.IS_DELETED =\n'N')\n join TABLE_F as MR on (IR.TABLE_F_PK = MR.TABLE_F_PK and MR.IS_DELETED =\n'N')\n join TABLE_N as MI on (MR.TABLE_N_PK = MI.TABLE_N_PK and MI.IS_DELETED =\n'N')\n\n\n left outer join TABLE_G as GM on (GR.TABLE_C_PK = GM.TABLE_C_PK and\nGM.IS_DELETED = 'N')\n left outer join TABLE_H as GY on (GR.TABLE_C_PK = GY.TABLE_C_PK and\nGY.IS_DELETED = 'N')\n left outer join TABLE_I as AC on (AR.TABLE_D_PK = AC.TABLE_D_PK and\nAC.IS_DELETED = 'N')\n left outer join TABLE_M as AA on (AR.TABLE_D_PK = AA.TABLE_D_PK and\nAA.IS_DELETED = 'N')\n left outer join TABLE_K as AP on (AR.TABLE_D_PK = AP.TABLE_D_PK and\nAP.IS_DELETED = 'N')\n left outer join TABLE_L as FP on (MR.TABLE_F_PK = FP.TABLE_F_PK and\nFP.IS_DELETED = 'N')\n\n\n --where AR.TABLE_D_PK = (\nwhere AR.TABLE_D_PK IN (\n--where AR.TABLE_D_PK = any(array (\n\n select AA1.TABLE_D_PK from TABLE_M as AA1\n join TABLE_D as AR1 on (AR1.TABLE_D_PK = AA1.TABLE_D_PK and\nAR1.IS_DELETED = 'N')\n where AA1.IS_DELETED = 'N' and AA1.TABLE_M_TYPE = 'ID_TYPE_X'\n and ((AA1.TABLE_M_VALUE = 'SOME/UNIQUE/VALUE' and AR1.APP_CASE_TYPE\n= '8' ))\n-- limit 1\n\n)\nand GI.IS_DELETED = 'N'\n\n------\n\nThe first interesting thing I noticed 
was that if I simplify the\nsubquery in various ways (such as looking up the TABLE_D_PK beforehand\nand just inlining it there such as\n\n where AR.TABLE_D_PK IN ( 123456 )\n\nthen it runs subsecond. Similarly if I change it from\n\n where AR.TABLE_D_PK IN ( {select query} )\n\nto either of the options listed there:\n\n where AR.TABLE_D_PK = ( {select query} )\n where AR.TABLE_D_PK = ANY(ARRAY( {select query} ) )\n\nthen it's also subsecond. It feels like postgresql is expecting the\nsubquery to return many rows and goes one way, whereas when it knows\nit's only going to be 1 then it goes another (admittedly that doesn't\naccount for how ANY(ARRAY()) fixes it).\n\nOr, it's as if the subquery is correlated, and so has to run for all the\nrows\nin the main query, but to my understanding it is an uncorrelated query.\n\n\nAlternatively, if I strip some of the joins out, then I can leave the\noriginal\n\n where AR.TABLE_D_PK IN ( {select query} )\n\nin place and it's still subsecond.\n\n From here on the debugging bits I talk about are comparing the\noriginal query listed at the top to the\n\n where AR.TABLE_D_PK = ANY(ARRAY( {select query} ) )\n\nversion.\n\nThe query plans produced by the two are totally different:\n\n\n(the mailing list page says to use attachments for big query plans,\nand I've had to fallback to gmail to try to get through DKIM).\n\nslow query -> attachment -> plan-slow-explain.txt\nfast query -> attachment -> plan-fast-explain.txt\n\n\nPlans with ANALYZE, BUFFERS, SETTINGS\nslow query -> attachment -> plan-slow-explain-analyze.txt\nfast query -> attachment -> plan-fast-explain-analyze.txt\n\n\nThe bit of code that generates this query CAN generate a subquery\n(multiple options for SOME/UNIQUE/VALUE type affair) that returns a\nlist of TABLE_D_PKs, although in _this_ instance it's only finding\none. 
I did try adding LIMIT 1 to the subquery, thinking that that\nwould tell the planner there is only one row, and might make it behave\nlike the already-known-value case, or the = ( {subquery} case but it\ndoes not.\n\nI also found various suggestions online of adding OFFSET 0 to the\nsubquery, which also didn't improve anything.\n\n(\nI quickly get out of my depth with explain plans once it's not as\nsimple as \"hey it's a seq scan when it should be an index\", but one\nthing that interests me here in the slow plan is where it does:\n\nWorkers Planned: 6\n -> Hash Semi Join (cost=2666860.17..3131468.69 rows=1 width=24)\n Hash Cond: (gr.TABLE_E_pk = ar1.TABLE_E_pk)\n -> Parallel Hash Left Join (cost=2666851.92..3122476.95\nrows=3422280 width=32)\n\nbecause the \"main\" query doesn't join gr to ar1. It joins gr to ar.\nar1 is only in the subquery. I wonder if this suggests an initial\nmisstep in the plan that leads to minutes-not-subsecond.\n)\n\nI increased the default_statistics number to 10000 and reanalyzed, no\nchange.\n\nTable counts:\n\nselect count(*) from TABLE_A; -- 19293000\nselect count(*) from TABLE_B; -- 19293000\nselect count(*) from TABLE_C; -- 21089144\nselect count(*) from TABLE_D; -- 21089144\nselect count(*) from TABLE_E; -- 21089144\nselect count(*) from TABLE_F; -- 21089144\nselect count(*) from TABLE_G; -- 6002192\nselect count(*) from TABLE_H; -- 21089147\nselect count(*) from TABLE_I; -- 5899661\nselect count(*) from TABLE_M; -- 21041076\nselect count(*) from TABLE_K; -- 375773\nselect count(*) from TABLE_L; -- 6571232\nselect count(*) from TABLE_M; -- 21041083\nselect count(*) from TABLE_N; -- 19293007\n\n\nThe tables are all indexed on the PK with INCLUDE (is_deleted)\n\nThanks for any guidance.\n\nDave\n\n\nplan-slow-explain.txt\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop 
Left Join (cost=2667862.34..3132471.46 rows=1 width=8)\n -> Nested Loop Left Join (cost=2667861.90..3132470.72 rows=1 width=16)\n -> Nested Loop Left Join (cost=2667861.48..3132470.27 rows=1\nwidth=16)\n -> Nested Loop Left Join (cost=2667861.04..3132469.76\nrows=1 width=16)\n -> Nested Loop Left Join\n (cost=2667860.61..3132469.29 rows=1 width=16)\n -> Gather (cost=2667860.17..3132468.79 rows=1\nwidth=24)\n Workers Planned: 6\n -> Hash Semi Join\n (cost=2666860.17..3131468.69 rows=1 width=24)\n Hash Cond: (gr.TABLE_E_pk =\nar1.TABLE_E_pk)\n -> Parallel Hash Left Join\n (cost=2666851.92..3122476.95 rows=3422280 width=32)\n Hash Cond: (gr.TABLE_D_pk =\ngm.TABLE_D_pk)\n -> Parallel Hash Join\n (cost=2519520.86..2960409.74 rows=3422280 width=32)\n Hash Cond:\n(gr.TABLE_A_pk = ii.TABLE_A_pk)\n -> Parallel Hash Join\n (cost=1732613.92..2146401.28 rows=3423649 width=40)\n Hash Cond:\n(mr.TABLE_O_pk = mi.TABLE_O_pk)\n -> Parallel Hash\nJoin (cost=1384407.10..1789205.63 rows=3424314 width=48)\n Hash Cond:\n(gr.TABLE_E_pk = ar.TABLE_E_pk)\n -> Parallel\nHash Join (cost=850104.36..1245854.83 rows=3446882 width=40)\n Hash\nCond: (ir.TABLE_D_pk = gr.TABLE_D_pk)\n ->\n Parallel Hash Join (cost=377639.61..764283.22 rows=3469280 width=24)\n\n Hash Cond: (mr.TABLE_G_pk = ir.TABLE_G_pk)\n\n -> Parallel Seq Scan on TABLE_G mr (cost=0.00..377476.70 rows=3492155\nwidth=16)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n -> Parallel Hash (cost=333991.70..333991.70 rows=3491833 width=16)\n\n -> Parallel Seq Scan on TABLE_F ir (cost=0.00..333991.70\nrows=3491833 width=16)\n\n Filter: (is_deleted = 'N'::bpchar)\n ->\n Parallel Hash (cost=428812.70..428812.70 rows=3492164 width=24)\n\n -> Parallel Seq Scan on TABLE_D gr (cost=0.00..428812.70 rows=3492164\nwidth=24)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Parallel\nHash (cost=490654.70..490654.70 rows=3491843 width=8)\n ->\n Parallel Seq Scan on TABLE_E ar (cost=0.00..490654.70 rows=3491843\nwidth=8)\n\n Filter: (is_deleted = 'N'::bpchar)\n 
-> Parallel Hash\n (cost=307982.32..307982.32 rows=3217960 width=8)\n -> Parallel\nSeq Scan on TABLE_O mi (cost=0.00..307982.32 rows=3217960 width=8)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Parallel Hash\n (cost=746729.28..746729.28 rows=3214213 width=16)\n -> Parallel Hash\nJoin (cost=390397.57..746729.28 rows=3214213 width=16)\n Hash Cond:\n(ii.TABLE_A_pk = gi.TABLE_A_pk)\n -> Parallel\nSeq Scan on TABLE_C ii (cost=0.00..347892.74 rows=3214846 width=8)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Parallel\nHash (cost=350211.74..350211.74 rows=3214866 width=8)\n ->\n Parallel Seq Scan on TABLE_A gi (cost=0.00..350211.74 rows=3214866\nwidth=8)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Parallel Hash\n (cost=132675.47..132675.47 rows=1172447 width=8)\n -> Parallel Seq Scan on\nTABLE_H gm (cost=0.00..132675.47 rows=1172447 width=8)\n Filter:\n(is_deleted = 'N'::bpchar)\n -> Hash (cost=8.24..8.24 rows=1\nwidth=16)\n -> Nested Loop\n (cost=1.00..8.24 rows=1 width=16)\n -> Index Scan using\nix_TABLE_N_type_value_case on TABLE_N aa1 (cost=0.56..4.18 rows=1 width=8)\n Index Cond:\n(((TABLE_N_type)::text = 'ID_TYPE_X'::text) AND ((TABLE_N_value)::text =\n'SOME/UNIQUE/VALUE'::text))\n Filter:\n(is_deleted = 'N'::bpchar)\n -> Index Scan using\nix_TABLE_E_TABLE_E_pk on TABLE_E ar1 (cost=0.44..4.06 rows=1 width=8)\n Index Cond:\n(TABLE_E_pk = aa1.TABLE_E_pk)\n Filter:\n((is_deleted = 'N'::bpchar) AND ((app_case_type)::text = '8'::text))\n -> Index Scan using ix_TABLE_I_TABLE_D_pk on\nTABLE_I gy (cost=0.44..0.49 rows=1 width=8)\n Index Cond: (TABLE_D_pk = gr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_J_TABLE_E_pk on TABLE_J\nac (cost=0.43..0.46 rows=1 width=8)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_N_TABLE_E_pk on TABLE_N aa\n (cost=0.44..0.49 rows=1 width=8)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using 
ix_TABLE_L_TABLE_E_pk on TABLE_L ap\n (cost=0.42..0.44 rows=1 width=8)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Only Scan using ix_TABLE_M_TABLE_G_pk on TABLE_M fp\n (cost=0.43..0.64 rows=10 width=8)\n Index Cond: ((TABLE_G_pk = mr.TABLE_G_pk) AND (is_deleted =\n'N'::bpchar))\n(68 rows)\n\n\n\nplan-fast-explain.txt\n\n QUERY PLAN\n-\n Nested Loop Left Join (cost=14.15..249.76 rows=10 width=8) (actual\ntime=0.227..0.233 rows=1 loops=1)\n InitPlan 1 (returns $1)\n -> Nested Loop (cost=1.00..8.24 rows=1 width=8) (actual\ntime=0.057..0.058 rows=1 loops=1)\n -> Index Scan using ix_TABLE_N_type_value_case on TABLE_N aa1\n (cost=0.56..4.18 rows=1 width=8) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: (((TABLE_N_type)::text = 'ID_TYPE_X'::text)\nAND ((TABLE_N_value)::text = 'SOME/UNIQUE/VALUE'::text))\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_E_TABLE_E_pk on TABLE_E ar1\n (cost=0.44..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=1 loops=1)\n Index Cond: (TABLE_E_pk = aa1.TABLE_E_pk)\n Filter: ((is_deleted = 'N'::bpchar) AND\n((app_case_type)::text = '8'::text))\n -> Nested Loop Left Join (cost=5.48..234.10 rows=10 width=16) (actual\ntime=0.217..0.222 rows=1 loops=1)\n -> Nested Loop Left Join (cost=5.05..193.62 rows=10 width=16)\n(actual time=0.208..0.213 rows=1 loops=1)\n -> Nested Loop Left Join (cost=4.62..152.99 rows=10\nwidth=16) (actual time=0.196..0.201 rows=1 loops=1)\n -> Nested Loop Left Join (cost=4.18..112.42 rows=10\nwidth=16) (actual time=0.186..0.191 rows=1 loops=1)\n -> Nested Loop Left Join (cost=3.75..107.42\nrows=10 width=24) (actual time=0.172..0.176 rows=1 loops=1)\n -> Nested Loop (cost=3.31..102.68\nrows=10 width=24) (actual time=0.159..0.162 rows=1 loops=1)\n -> Nested Loop (cost=2.88..97.81\nrows=10 width=32) (actual time=0.144..0.147 rows=1 loops=1)\n -> Nested Loop\n (cost=2.44..92.88 rows=10 width=24) (actual time=0.131..0.134 rows=1\nloops=1)\n -> Nested 
Loop\n (cost=1.88..86.62 rows=10 width=16) (actual time=0.118..0.119 rows=1\nloops=1)\n Join Filter:\n(gi.TABLE_A_pk = ii.TABLE_A_pk)\n -> Nested Loop\n (cost=1.31..80.32 rows=10 width=32) (actual time=0.103..0.104 rows=1\nloops=1)\n -> Nested\nLoop (cost=0.88..75.43 rows=10 width=24) (actual time=0.090..0.091 rows=1\nloops=1)\n ->\n Index Scan using ix_TABLE_E_TABLE_E_pk on TABLE_E ar (cost=0.44..34.90\nrows=10 width=8) (actual time=0.073..0.074 rows=1 loops=1)\n\n Index Cond: (TABLE_E_pk = ANY ($1))\n\n Filter: (is_deleted = 'N'::bpchar)\n ->\n Index Scan using uk_TABLE_D_TABLE_E_pk on TABLE_D gr (cost=0.44..4.05\nrows=1 width=24) (actual time=0.016..0.016 rows=1 loops=1)\n\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Index\nScan using ix_biog_id_biog_id_pk on TABLE_A gi (cost=0.44..0.49 rows=1\nwidth=8) (actual time=0.012..0.012 rows=1 loops=1)\n Index\nCond: (TABLE_A_pk = gr.TABLE_A_pk)\n\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan\nusing ix_TABLE_C_mida on TABLE_C ii (cost=0.56..0.62 rows=1 width=8)\n(actual time=0.014..0.014 rows=1 loops=1)\n Index Cond:\n(TABLE_A_pk = gr.TABLE_A_pk)\n Filter:\n(is_deleted = 'N'::bpchar)\n -> Index Scan using\nix_TABLE_F_mida on TABLE_F ir (cost=0.56..0.62 rows=1 width=16) (actual\ntime=0.013..0.013 rows=1 loops=1)\n Index Cond:\n(TABLE_D_pk = gr.TABLE_D_pk)\n Filter:\n(is_deleted = 'N'::bpchar)\n -> Index Scan using\npk_TABLE_G on TABLE_G mr (cost=0.44..0.49 rows=1 width=16) (actual\ntime=0.012..0.012 rows=1 loops=1)\n Index Cond: (TABLE_G_pk\n= ir.TABLE_G_pk)\n Filter: (is_deleted =\n'N'::bpchar)\n -> Index Scan using ix_TABLE_O_mida\non TABLE_O mi (cost=0.44..0.49 rows=1 width=8) (actual time=0.015..0.015\nrows=1 loops=1)\n Index Cond: (TABLE_O_pk =\nmr.TABLE_O_pk)\n Filter: (is_deleted =\n'N'::bpchar)\n -> Index Scan using ix_TABLE_H_TABLE_D_pk\non TABLE_H gm (cost=0.43..0.46 rows=1 width=8) (actual time=0.012..0.012\nrows=1 loops=1)\n Index Cond: (TABLE_D_pk 
=\ngr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_I_TABLE_D_pk on\nTABLE_I gy (cost=0.44..0.49 rows=1 width=8) (actual time=0.014..0.014\nrows=1 loops=1)\n Index Cond: (TABLE_D_pk = gr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_J_TABLE_E_pk on TABLE_J\nac (cost=0.43..4.05 rows=1 width=8) (actual time=0.009..0.009 rows=0\nloops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_N_TABLE_E_pk on TABLE_N aa\n (cost=0.44..4.05 rows=1 width=8) (actual time=0.011..0.011 rows=1 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Scan using ix_TABLE_L_TABLE_E_pk on TABLE_L ap\n (cost=0.42..4.04 rows=1 width=8) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n -> Index Only Scan using ix_TABLE_M_TABLE_G_pk on TABLE_M fp\n (cost=0.43..0.64 rows=10 width=8) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: ((TABLE_G_pk = mr.TABLE_G_pk) AND (is_deleted =\n'N'::bpchar))\n Heap Fetches: 0\n Planning Time: 61.539 ms\n Execution Time: 0.405 ms\n(62 rows)\n\n\nplan-slow-explain-analyze.txt\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=2667903.66..3132513.31 rows=1 width=8)\n(actual time=200578.974..201810.300 rows=1 loops=1)\n Buffers: shared hit=1610887 read=847496\n I/O Timings: read=46389.959\n -> Nested Loop Left Join (cost=2667903.22..3132512.57 rows=1 width=16)\n(actual time=200578.953..201810.278 rows=1 loops=1)\n Buffers: shared hit=1610884 read=847496\n I/O Timings: read=46389.959\n -> Nested Loop Left Join (cost=2667902.80..3132512.12 rows=1\nwidth=16) (actual time=200578.934..201810.257 
rows=1 loops=1)\n Buffers: shared hit=1610881 read=847496\n I/O Timings: read=46389.959\n -> Nested Loop Left Join (cost=2667902.36..3132511.61\nrows=1 width=16) (actual time=200578.908..201810.229 rows=1 loops=1)\n Buffers: shared hit=1610877 read=847496\n I/O Timings: read=46389.959\n -> Nested Loop Left Join\n (cost=2667901.93..3132511.14 rows=1 width=16) (actual\ntime=200578.888..201810.207 rows=1 loops=1)\n Buffers: shared hit=1610874 read=847496\n I/O Timings: read=46389.959\n -> Gather (cost=2667901.49..3132510.64 rows=1\nwidth=24) (actual time=200578.847..201810.161 rows=1 loops=1)\n Workers Planned: 6\n Workers Launched: 0\n Buffers: shared hit=1610870 read=847496\n I/O Timings: read=46389.959\n -> Hash Semi Join\n (cost=2666901.49..3131510.54 rows=1 width=24) (actual\ntime=200577.866..200786.512 rows=1 loops=1)\n Hash Cond: (gr.TABLE_E_pk =\nar1.TABLE_E_pk)\n Buffers: shared hit=1610870\nread=847496\n I/O Timings: read=46389.959\n -> Parallel Hash Left Join\n (cost=2666893.24..3122518.80 rows=3422280 width=32) (actual\ntime=115007.084..198452.799 rows=20955994 loops=1)\n Hash Cond: (gr.TABLE_D_pk =\ngm.TABLE_D_pk)\n Buffers: shared hit=1610861\nread=847496\n I/O Timings: read=46389.959\n -> Parallel Hash Join\n (cost=2519520.86..2960409.74 rows=3422280 width=32) (actual\ntime=108933.021..180685.868 rows=20952150 loops=1)\n Hash Cond:\n(gr.TABLE_A_pk = ii.TABLE_A_pk)\n Buffers: shared\nhit=1493176 read=847478\n I/O Timings:\nread=46372.819\n -> Parallel Hash Join\n (cost=1732613.92..2146401.28 rows=3423649 width=40) (actual\ntime=71413.550..129338.033 rows=20952150 loops=1)\n Hash Cond:\n(mr.TABLE_O_pk = mi.TABLE_O_pk)\n Buffers: shared\nhit=875756 read=847181\n I/O Timings:\nread=46316.586\n -> Parallel Hash\nJoin (cost=1384407.10..1789205.63 rows=3424314 width=48) (actual\ntime=53970.151..99631.531 rows=20952588 loops=1)\n Hash Cond:\n(gr.TABLE_E_pk = ar.TABLE_E_pk)\n Buffers:\nshared hit=875734 read=579453\n I/O Timings:\nread=32017.291\n -> 
Parallel\nHash Join (cost=850104.36..1245854.83 rows=3446882 width=40) (actual\ntime=40670.584..73979.070 rows=20952588 loops=1)\n Hash\nCond: (ir.TABLE_D_pk = gr.TABLE_D_pk)\n\n Buffers: shared hit=431786 read=576682\n I/O\nTimings: read=31532.023\n ->\n Parallel Hash Join (cost=377639.61..764283.22 rows=3469280 width=24)\n(actual time=17370.113..37655.366 rows=20952588 loops=1)\n\n Hash Cond: (mr.TABLE_G_pk = ir.TABLE_G_pk)\n\n Buffers: shared hit=431746 read=191851\n\n I/O Timings: read=11334.886\n\n -> Parallel Seq Scan on TABLE_G mr (cost=0.00..377476.70 rows=3492155\nwidth=16) (actual time=8.256..7628.867 rows=20952588 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Rows Removed by Filter: 137463\n\n Buffers: shared hit=333485 read=56\n\n I/O Timings: read=99.651\n\n -> Parallel Hash (cost=333991.70..333991.70 rows=3491833 width=16)\n(actual time=17209.235..17209.236 rows=20952588 loops=1)\n\n Buckets: 33554432 Batches: 1 Memory Usage: 1245280kB\n\n Buffers: shared hit=98261 read=191795\n\n I/O Timings: read=11235.235\n\n -> Parallel Seq Scan on TABLE_F ir (cost=0.00..333991.70\nrows=3491833 width=16) (actual time=10.521..8410.712 rows=20952588 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Rows Removed by Filter: 137463\n\n Buffers: shared hit=98261 read=191795\n\n I/O Timings: read=11235.235\n ->\n Parallel Hash (cost=428812.70..428812.70 rows=3492164 width=24) (actual\ntime=23231.120..23231.121 rows=20952588 loops=1)\n\n Buckets: 33554432 Batches: 1 Memory Usage: 1410240kB\n\n Buffers: shared hit=40 read=384831\n\n I/O Timings: read=20197.137\n\n -> Parallel Seq Scan on TABLE_D gr (cost=0.00..428812.70 rows=3492164\nwidth=24) (actual time=1.548..13339.643 rows=20952588 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Rows Removed by Filter: 137463\n\n Buffers: shared hit=40 read=384831\n\n I/O Timings: read=20197.137\n -> Parallel\nHash (cost=490654.70..490654.70 rows=3491843 width=8) (actual\ntime=13214.241..13214.243 rows=20952588 
loops=1)\n\n Buckets: 33554432 Batches: 1 Memory Usage: 1081824kB\n\n Buffers: shared hit=443948 read=2771\n I/O\nTimings: read=485.268\n ->\n Parallel Seq Scan on TABLE_E ar (cost=0.00..490654.70 rows=3491843\nwidth=8) (actual time=0.016..6391.132 rows=20952588 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Rows Removed by Filter: 137463\n\n Buffers: shared hit=443948 read=2771\n\n I/O Timings: read=485.268\n -> Parallel Hash\n (cost=307982.32..307982.32 rows=3217960 width=8) (actual\ntime=17420.366..17420.366 rows=19289868 loops=1)\n Buckets:\n33554432 Batches: 1 Memory Usage: 1016768kB\n Buffers:\nshared hit=22 read=267728\n I/O Timings:\nread=14299.295\n -> Parallel\nSeq Scan on TABLE_O mi (cost=0.00..307982.32 rows=3217960 width=8) (actual\ntime=7.209..9860.853 rows=19289868 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n Rows\nRemoved by Filter: 3884\n\n Buffers: shared hit=22 read=267728\n I/O\nTimings: read=14299.295\n -> Parallel Hash\n (cost=746729.28..746729.28 rows=3214213 width=16) (actual\ntime=37277.818..37277.821 rows=19289868 loops=1)\n Buckets: 33554432\n Batches: 1 Memory Usage: 1167264kB\n Buffers: shared\nhit=617420 read=297\n I/O Timings:\nread=56.233\n -> Parallel Hash\nJoin (cost=390397.57..746729.28 rows=3214213 width=16) (actual\ntime=14977.066..30086.210 rows=19289868 loops=1)\n Hash Cond:\n(ii.TABLE_A_pk = gi.TABLE_A_pk)\n Buffers:\nshared hit=617420 read=297\n I/O Timings:\nread=56.233\n -> Parallel\nSeq Scan on TABLE_C ii (cost=0.00..347892.74 rows=3214846 width=8) (actual\ntime=0.010..5278.795 rows=19289868 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n Rows\nRemoved by Filter: 3884\n\n Buffers: shared hit=307402 read=297\n I/O\nTimings: read=56.233\n -> Parallel\nHash (cost=350211.74..350211.74 rows=3214866 width=8) (actual\ntime=14791.416..14791.417 rows=19289874 loops=1)\n\n Buckets: 33554432 Batches: 1 Memory Usage: 1016768kB\n\n Buffers: shared hit=310018\n ->\n Parallel Seq Scan on TABLE_A gi (cost=0.00..350211.74 
rows=3214866\nwidth=8) (actual time=0.013..6388.707 rows=19289874 loops=1)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Rows Removed by Filter: 3878\n\n Buffers: shared hit=310018\n -> Parallel Hash\n (cost=132712.68..132712.68 rows=1172776 width=8) (actual\ntime=6015.243..6015.244 rows=5863214 loops=1)\n Buckets: 8388608\n Batches: 1 Memory Usage: 294912kB\n Buffers: shared\nhit=117685 read=18\n I/O Timings: read=17.141\n -> Parallel Seq Scan on\nTABLE_H gm (cost=0.00..132712.68 rows=1172776 width=8) (actual\ntime=0.016..2828.580 rows=5863214 loops=1)\n Filter:\n(is_deleted = 'N'::bpchar)\n Rows Removed by\nFilter: 139838\n Buffers: shared\nhit=117685 read=18\n I/O Timings:\nread=17.141\n -> Hash (cost=8.24..8.24 rows=1\nwidth=16) (actual time=0.062..0.064 rows=1 loops=1)\n Buckets: 1024 Batches: 1\n Memory Usage: 9kB\n Buffers: shared hit=9\n -> Nested Loop\n (cost=1.00..8.24 rows=1 width=16) (actual time=0.048..0.050 rows=1 loops=1)\n Buffers: shared hit=9\n -> Index Scan using\nix_TABLE_N_type_value_case on TABLE_N aa1 (cost=0.56..4.18 rows=1 width=8)\n(actual time=0.031..0.032 rows=1 loops=1)\n Index Cond:\n(((TABLE_N_type)::text = ID_TYPE_X'::text) AND ((TABLE_N_value)::text =\n'SOME/UNIQUE/VALUE'::text))\n Filter:\n(is_deleted = 'N'::bpchar)\n Buffers: shared\nhit=5\n -> Index Scan using\nix_TABLE_E_TABLE_E_pk on TABLE_E ar1 (cost=0.44..4.06 rows=1 width=8)\n(actual time=0.014..0.014 rows=1 loops=1)\n Index Cond:\n(TABLE_E_pk = aa1.TABLE_E_pk)\n Filter:\n((is_deleted = 'N'::bpchar) AND ((app_case_type)::text = '8'::text))\n Buffers: shared\nhit=4\n -> Index Scan using ix_TABLE_I_TABLE_D_pk on\nTABLE_I gy (cost=0.44..0.49 rows=1 width=8) (actual time=0.037..0.038\nrows=1 loops=1)\n Index Cond: (TABLE_D_pk = gr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_J_TABLE_E_pk on TABLE_J\nac (cost=0.43..0.46 rows=1 width=8) (actual time=0.017..0.018 rows=0\nloops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n 
Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=3\n -> Index Scan using ix_TABLE_N_TABLE_E_pk on TABLE_N aa\n (cost=0.44..0.49 rows=1 width=8) (actual time=0.023..0.024 rows=1 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_L_TABLE_E_pk on TABLE_L ap\n (cost=0.42..0.44 rows=1 width=8) (actual time=0.017..0.017 rows=0 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=3\n -> Index Only Scan using ix_TABLE_M_TABLE_G_pk on TABLE_M fp\n (cost=0.43..0.64 rows=10 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n Index Cond: ((TABLE_G_pk = mr.TABLE_G_pk) AND (is_deleted =\n'N'::bpchar))\n Heap Fetches: 0\n Buffers: shared hit=3\n Settings: effective_cache_size = '43806080kB', maintenance_io_concurrency\n= '1', max_parallel_workers_per_gather = '8', random_page_cost = '1.79769',\nsearch_path = dbname, \"$user\", public', work_mem = '16GB'\n Planning:\n Buffers: shared hit=405\n Planning Time: 178.009 ms\n Execution Time: 201810.512 ms\n(156 rows)\n\n\n\nplan-fast-explain-analyze:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=14.15..249.76 rows=10 width=8) (actual\ntime=9.640..9.654 rows=1 loops=1)\n Buffers: shared hit=59 read=1\n I/O Timings: read=9.334\n InitPlan 1 (returns $1)\n -> Nested Loop (cost=1.00..8.24 rows=1 width=8) (actual\ntime=0.048..0.051 rows=1 loops=1)\n Buffers: shared hit=9\n -> Index Scan using ix_TABLE_N_type_value_case on TABLE_N aa1\n (cost=0.56..4.18 rows=1 width=8) (actual time=0.031..0.032 rows=1 loops=1)\n Index Cond: (((TABLE_N_type)::text = 'ID_TYPE_X'::text)\nAND ((TABLE_N_value)::text = 'SOME/UNIQUE/VALUE'::text))\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared 
hit=5\n -> Index Scan using ix_TABLE_E_TABLE_E_pk on TABLE_E ar1\n (cost=0.44..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=1 loops=1)\n Index Cond: (TABLE_E_pk = aa1.TABLE_E_pk)\n Filter: ((is_deleted = 'N'::bpchar) AND\n((app_case_type)::text = '8'::text))\n Buffers: shared hit=4\n -> Nested Loop Left Join (cost=5.48..234.10 rows=10 width=16) (actual\ntime=9.627..9.638 rows=1 loops=1)\n Buffers: shared hit=56 read=1\n I/O Timings: read=9.334\n -> Nested Loop Left Join (cost=5.05..193.62 rows=10 width=16)\n(actual time=9.615..9.626 rows=1 loops=1)\n Buffers: shared hit=53 read=1\n I/O Timings: read=9.334\n -> Nested Loop Left Join (cost=4.62..152.99 rows=10\nwidth=16) (actual time=9.600..9.610 rows=1 loops=1)\n Buffers: shared hit=49 read=1\n I/O Timings: read=9.334\n -> Nested Loop Left Join (cost=4.18..112.42 rows=10\nwidth=16) (actual time=9.586..9.596 rows=1 loops=1)\n Buffers: shared hit=46 read=1\n I/O Timings: read=9.334\n -> Nested Loop Left Join (cost=3.75..107.42\nrows=10 width=24) (actual time=9.569..9.577 rows=1 loops=1)\n Buffers: shared hit=42 read=1\n I/O Timings: read=9.334\n -> Nested Loop (cost=3.31..102.68\nrows=10 width=24) (actual time=9.552..9.559 rows=1 loops=1)\n Buffers: shared hit=38 read=1\n I/O Timings: read=9.334\n -> Nested Loop (cost=2.88..97.81\nrows=10 width=32) (actual time=9.531..9.538 rows=1 loops=1)\n Buffers: shared hit=34 read=1\n I/O Timings: read=9.334\n -> Nested Loop\n (cost=2.44..92.88 rows=10 width=24) (actual time=9.513..9.519 rows=1\nloops=1)\n Buffers: shared hit=30\nread=1\n I/O Timings: read=9.334\n -> Nested Loop\n (cost=1.88..86.62 rows=10 width=16) (actual time=9.476..9.481 rows=1\nloops=1)\n Join Filter:\n(gi.TABLE_A_pk = ii.TABLE_A_pk)\n Buffers: shared\nhit=25 read=1\n I/O Timings:\nread=9.334\n -> Nested Loop\n (cost=1.31..80.32 rows=10 width=32) (actual time=0.097..0.101 rows=1\nloops=1)\n Buffers:\nshared hit=21\n -> Nested\nLoop (cost=0.88..75.43 rows=10 width=24) (actual time=0.079..0.082 
rows=1\nloops=1)\n\n Buffers: shared hit=17\n ->\n Index Scan using ix_TABLE_E_TABLE_E_pk on TABLE_E ar (cost=0.44..34.90\nrows=10 width=8) (actual time=0.064..0.066 rows=1 loops=1)\n\n Index Cond: (TABLE_E_pk = ANY ($1))\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Buffers: shared hit=13\n ->\n Index Scan using uk_TABLE_D_TABLE_E_pk on TABLE_D gr (cost=0.44..4.05\nrows=1 width=24) (actual time=0.014..0.014 rows=1 loops=1)\n\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Buffers: shared hit=4\n -> Index\nScan using pk_TABLE_A on TABLE_A gi (cost=0.44..0.49 rows=1 width=8)\n(actual time=0.017..0.017 rows=1 loops=1)\n Index\nCond: (TABLE_A_pk = gr.TABLE_A_pk)\n\n Filter: (is_deleted = 'N'::bpchar)\n\n Buffers: shared hit=4\n -> Index Scan\nusing ix_TABLE_C_mida on TABLE_C ii (cost=0.56..0.62 rows=1 width=8)\n(actual time=9.375..9.376 rows=1 loops=1)\n Index Cond:\n(TABLE_A_pk = gr.TABLE_A_pk)\n Filter:\n(is_deleted = 'N'::bpchar)\n Buffers:\nshared hit=4 read=1\n I/O Timings:\nread=9.334\n -> Index Scan using\nix_TABLE_F_mida on TABLE_F ir (cost=0.56..0.62 rows=1 width=16) (actual\ntime=0.031..0.031 rows=1 loops=1)\n Index Cond:\n(TABLE_D_pk = gr.TABLE_D_pk)\n Filter:\n(is_deleted = 'N'::bpchar)\n Buffers: shared\nhit=5\n -> Index Scan using\npk_TABLE_G on TABLE_G mr (cost=0.44..0.49 rows=1 width=16) (actual\ntime=0.015..0.015 rows=1 loops=1)\n Index Cond: (TABLE_G_pk\n= ir.TABLE_G_pk)\n Filter: (is_deleted =\n'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_O_mida\non TABLE_O mi (cost=0.44..0.49 rows=1 width=8) (actual time=0.018..0.019\nrows=1 loops=1)\n Index Cond: (TABLE_O_pk =\nmr.TABLE_O_pk)\n Filter: (is_deleted =\n'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_H_TABLE_D_pk\non TABLE_H gm (cost=0.43..0.46 rows=1 width=8) (actual time=0.014..0.015\nrows=1 loops=1)\n Index Cond: (TABLE_D_pk =\ngr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan 
using ix_TABLE_I_TABLE_D_pk on\nTABLE_I gy (cost=0.44..0.49 rows=1 width=8) (actual time=0.015..0.016\nrows=1 loops=1)\n Index Cond: (TABLE_D_pk = gr.TABLE_D_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_J_TABLE_E_pk on TABLE_J\nac (cost=0.43..4.05 rows=1 width=8) (actual time=0.012..0.012 rows=0\nloops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=3\n -> Index Scan using ix_TABLE_N_TABLE_E_pk on TABLE_N aa\n (cost=0.44..4.05 rows=1 width=8) (actual time=0.013..0.014 rows=1 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=4\n -> Index Scan using ix_TABLE_L_TABLE_E_pk on TABLE_L ap\n (cost=0.42..4.04 rows=1 width=8) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (TABLE_E_pk = ar.TABLE_E_pk)\n Filter: (is_deleted = 'N'::bpchar)\n Buffers: shared hit=3\n -> Index Only Scan using ix_TABLE_M_TABLE_G_pk on TABLE_M fp\n (cost=0.43..0.64 rows=10 width=8) (actual time=0.011..0.011 rows=0 loops=1)\n Index Cond: ((TABLE_G_pk = mr.TABLE_G_pk) AND (is_deleted =\n'N'::bpchar))\n Heap Fetches: 0\n Buffers: shared hit=3\n Settings: effective_cache_size = '43806080kB', maintenance_io_concurrency\n= '1', max_parallel_workers_per_gather = '8', random_page_cost = '1.79769',\nsearch_path = dbname, \"$user\", public', work_mem = '16GB'\n Planning:\n Buffers: shared hit=381\n Planning Time: 129.736 ms\n Execution Time: 9.847 ms\n(104 rows)",
"msg_date": "Fri, 5 Apr 2024 15:20:02 +0000",
"msg_from": "Dave Thorn <[email protected]>",
"msg_from_op": true,
"msg_subject": "IN subquery, many joins, very different plans"
}
] |
[
{
"msg_contents": "we found sometimes , with many sessions running same query \"select ...\"\nat the same time, saw many sessions waiting on \"LockManager\". for example,\npg_stat_activity show. It's a production server, so no enable\ntrace_lwlocks flag. could you direct me what's the possible reason and how\nto reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no\nDML.\n\n time wait_event\n count(pid)\n2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n 2024-04-08 09:00:06.043996+00 | | 15\n 2024-04-08 09:00:06.043996+00 | LockManager | 31\n 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n 2024-04-08 09:00:07.114015+00 | LockManager | 43\n 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n 2024-04-08 09:00:07.114015+00 | | 11\n\nThanks,\n\nJames",
"msg_date": "Tue, 9 Apr 2024 11:07:49 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "LWlock:LockManager waits"
},
{
"msg_contents": "On Tue, 2024-04-09 at 11:07 +0800, James Pang wrote:\n> we found sometimes , with many sessions running same query \"select ...\" at the same time, saw many sessions waiting on \"LockManager\". for example, pg_stat_activity show. It's a production server, so no enable trace_lwlocks flag. could you direct me what's the possible reason and how to reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no DML.\n> \n> time wait_event count(pid) \n> 2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n> 2024-04-08 09:00:06.043996+00 | | 15\n> 2024-04-08 09:00:06.043996+00 | LockManager | 31\n> 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n> 2024-04-08 09:00:07.114015+00 | LockManager | 43\n> 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n> 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n> 2024-04-08 09:00:07.114015+00 | | 11\n\nThat's quite obvious: too many connections cause internal contention in the database.\n\nReduce the number of connections by using a reasonably sized connection pool.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 09 Apr 2024 06:31:48 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWlock:LockManager waits"
},
{
"msg_contents": "you mean too many concurrent sessions trying to acquire lock on same\nrelation , then waiting on \"LockManager\" LWlock,right? this contention\noccurred on parsing ,planning, or execute step ?\n\nThanks,\n\nJames\n\nLaurenz Albe <[email protected]> 於 2024年4月9日週二 下午12:31寫道:\n\n> On Tue, 2024-04-09 at 11:07 +0800, James Pang wrote:\n> > we found sometimes , with many sessions running same query \"select\n> ...\" at the same time, saw many sessions waiting on \"LockManager\". for\n> example, pg_stat_activity show. It's a production server, so no enable\n> trace_lwlocks flag. could you direct me what's the possible reason and how\n> to reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no\n> DML.\n> >\n> > time wait_event\n> count(pid)\n> > 2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n> > 2024-04-08 09:00:06.043996+00 | | 15\n> > 2024-04-08 09:00:06.043996+00 | LockManager | 31\n> > 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n> > 2024-04-08 09:00:07.114015+00 | LockManager | 43\n> > 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n> > 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n> > 2024-04-08 09:00:07.114015+00 | | 11\n>\n> That's quite obvious: too many connections cause internal contention in\n> the database.\n>\n> Reduce the number of connections by using a reasonably sized connection\n> pool.\n>\n> Yours,\n> Laurenz Albe",
"msg_date": "Tue, 9 Apr 2024 15:54:45 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LWlock:LockManager waits"
},
{
"msg_contents": "James,\n\nA lock can be obtained in the parse, plan and execute step, depending on cache, state and type of object.\n\nA LWLock is a spinlock, a low level access mechanism that is supposed to be extremely quickly. It is used to serialise access to elementary structures mostly for changes.\nA Lock is an higher level lock that is much more sophisticated, contains multiple states and can order multiple requests. It is used to safeguard transaction intention for objects.\n\nThe wait LWLock:LockManager is documented to have two common reasons: too many locks being acquired, exceeding the fastpath slots number (16) and/or exceeding CPU capacity.\n\nWhat is happening if you are waiting for a LWLock is that the number of processes trying to access the structure (the lock manager) is higher than one. Because the LWLock is meant to be held so briefly that there should no waiting, it means that if you are waiting for it, there must be a reason it’s held so long. An obvious reason for holding a LWLock too long is if there are more tasks on the OS than CPU’s, as Laurenz indicates. If such a situation happens, it’s possible a tasks is put off CPU by the operating system whilst holding the LWLock, which will greatly increase the time waiting for it, because the LWLock can only be released if the task manages to get back on CPU.\n\nRegards,\n\nFrits Hoogland\n\n\n\n\n> On 9 Apr 2024, at 09:54, James Pang <[email protected]> wrote:\n> \n> you mean too many concurrent sessions trying to acquire lock on same relation , then waiting on \"LockManager\" LWlock,right? this contention occurred on parsing ,planning, or execute step ? \n> \n> Thanks,\n> \n> James\n> \n> Laurenz Albe <[email protected] <mailto:[email protected]>> 於 2024年4月9日週二 下午12:31寫道:\n>> On Tue, 2024-04-09 at 11:07 +0800, James Pang wrote:\n>> > we found sometimes , with many sessions running same query \"select ...\" at the same time, saw many sessions waiting on \"LockManager\". for example, pg_stat_activity show. It's a production server, so no enable trace_lwlocks flag. could you direct me what's the possible reason and how to reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no DML.\n>> > \n>> > time wait_event count(pid) \n>> > 2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n>> > 2024-04-08 09:00:06.043996+00 | | 15\n>> > 2024-04-08 09:00:06.043996+00 | LockManager | 31\n>> > 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n>> > 2024-04-08 09:00:07.114015+00 | LockManager | 43\n>> > 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n>> > 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n>> > 2024-04-08 09:00:07.114015+00 | | 11\n>> \n>> That's quite obvious: too many connections cause internal contention in the database.\n>> \n>> Reduce the number of connections by using a reasonably sized connection pool.\n>> \n>> Yours,\n>> Laurenz Albe",
"msg_date": "Tue, 9 Apr 2024 10:36:40 +0200",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWlock:LockManager waits"
},
{
"msg_contents": "I need to rectify myself: LWLock is not a spinlock (anymore). \nThe documentation in lwlock.c makes it clear that it used to be spinlock, but now is a counter modified by atomic instructions.\n\nOh, I forgot to answer:\n\n> you mean too many concurrent sessions trying to acquire lock on same relation , then waiting on \"LockManager\" LWlock,right? \n\nThis is the point: no, it’s not about the same relation. \n\nThe LWLock:LockManager is a wait event that is raised when competing for the LWLock that protects the shared Lock structure, which holds all of the locks of the database.\n\n\nFrits Hoogland\n\n\n\n\n> On 9 Apr 2024, at 09:54, James Pang <[email protected]> wrote:\n> \n> you mean too many concurrent sessions trying to acquire lock on same relation , then waiting on \"LockManager\" LWlock,right? this contention occurred on parsing ,planning, or execute step ? \n> \n> Thanks,\n> \n> James\n> \n> Laurenz Albe <[email protected] <mailto:[email protected]>> 於 2024年4月9日週二 下午12:31寫道:\n>> On Tue, 2024-04-09 at 11:07 +0800, James Pang wrote:\n>> > we found sometimes , with many sessions running same query \"select ...\" at the same time, saw many sessions waiting on \"LockManager\". for example, pg_stat_activity show. It's a production server, so no enable trace_lwlocks flag. could you direct me what's the possible reason and how to reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no DML.\n>> > \n>> > time wait_event count(pid) \n>> > 2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n>> > 2024-04-08 09:00:06.043996+00 | | 15\n>> > 2024-04-08 09:00:06.043996+00 | LockManager | 31\n>> > 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n>> > 2024-04-08 09:00:07.114015+00 | LockManager | 43\n>> > 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n>> > 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n>> > 2024-04-08 09:00:07.114015+00 | | 11\n>> \n>> That's quite obvious: too many connections cause internal contention in the database.\n>> \n>> Reduce the number of connections by using a reasonably sized connection pool.\n>> \n>> Yours,\n>> Laurenz Albe",
"msg_date": "Tue, 9 Apr 2024 11:36:41 +0200",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWlock:LockManager waits"
},
{
"msg_contents": "Hi James,\n\nTake a look here, in the links you will find many info and real examples\nfor LockManager issues.\n\nhttps://ardentperf.com/2024/03/03/postgres-indexes-partitioning-and-lwlocklockmanager-scalability/\n\nLuiz\n\nOn Tue, Apr 9, 2024 at 4:08 AM James Pang <[email protected]> wrote:\n\n> we found sometimes , with many sessions running same query \"select ...\"\n> at the same time, saw many sessions waiting on \"LockManager\". for example,\n> pg_stat_activity show. It's a production server, so no enable\n> trace_lwlocks flag. could you direct me what's the possible reason and how\n> to reduce this \"lockmanager\" lock? all the sql statement are \"select \" ,no\n> DML.\n>\n> time wait_event\n> count(pid)\n> 2024-04-08 09:00:06.043996+00 | DataFileRead | 42\n> 2024-04-08 09:00:06.043996+00 | | 15\n> 2024-04-08 09:00:06.043996+00 | LockManager | 31\n> 2024-04-08 09:00:06.043996+00 | BufferMapping | 46\n> 2024-04-08 09:00:07.114015+00 | LockManager | 43\n> 2024-04-08 09:00:07.114015+00 | DataFileRead | 28\n> 2024-04-08 09:00:07.114015+00 | ClientRead | 11\n> 2024-04-08 09:00:07.114015+00 | | 11\n>\n> Thanks,\n>\n> James\n>",
"msg_date": "Tue, 9 Apr 2024 11:33:36 +0100",
"msg_from": "\"Luiz Fernando G. Verona\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWlock:LockManager waits"
}
] |
[
{
"msg_contents": "Hi,\n\nWe're running PostgreSQL as essentially a data warehouse, and we have a few\nthousand roles, which are used to grant permissions on a table-by-table\nbasis to a few thousand users, so a user would typically have say between 1\nand 2 thousand roles. There is also quite a lot of \"churn\" in terms of\ntables being created/removed, and permissions changed.\n\nThe issue is that we're hitting a strange performance problem on\nconnection. Sometimes it can take ~25 to 40 seconds just to connect,\nalthough it's often way quicker. There seems to be no middle ground - never\nhave I seen a connection take between 0.5 and 25 seconds for example. We\nsuspect it's related to the number of roles the connecting user has\n(including via other roles), because if we remove all roles but one from\nthe connecting user (the one that grants connection permissions),\nconnecting is always virtually instantaneous.\n\nThe closest issue that I can find that's similar is\nhttps://www.postgresql.org/message-id/flat/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz%3DvggErdSG7pv2s6vmmTOLJSg%40mail.gmail.com,\nwhich reports that GRANT role is slow with a high number of roles - but in\nour case, it's connecting that's the problem, before (as far as we can\ntell) even one query is run. The database is busy, say up to 60-80% on a 16\nVCPU machine - even if it's a \"good amount\" below 100%, the issue occurs.\n\nIs there anything we can do to investigate (or hopefully fix!) 
the issue?\n\nThanks,\n\nMichal\n\n------\n\nA description of what you are trying to achieve and what results you\nexpect.:\nWe would like to connect to the database - expect it to connect in less\nthan 1 second, but sometimes 25 - 40s.\n\nPostgreSQL version number you are running:\nPostgreSQL 14.10 on aarch64-unknown-linux-gnu, compiled by\naarch64-unknown-linux-gnu-gcc (GCC) 9.5.0, 64-bit\n\nHow you installed PostgreSQL:\nVia AWS/Amazon Aurora\n\nChanges made to the settings in the postgresql.conf file\nIn attached CSV file\n\nOperating system and version:\nUnknown\n\nWhat program you're using to connect to PostgreSQL:\nPython + SQLAlchemy, psql, or also via Amazon Quicksight (Unsure which\nclient they use under the hood, but it surfaces connection timeout errors,\nwhich we suspect is due to the issue described above)\n\nIs there anything relevant or unusual in the PostgreSQL server logs?:\nNo\n\nFor questions about any kind of error:\nN/A",
"msg_date": "Sat, 20 Apr 2024 12:55:12 +0100",
"msg_from": "Michal Charemza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extremely slow to establish connection when user has a high number of\n roles"
},
{
"msg_contents": "On 4/20/24 13:55, Michal Charemza wrote:\n> Hi,\n> \n> We're running PostgreSQL as essentially a data warehouse, and we have a few\n> thousand roles, which are used to grant permissions on a table-by-table\n> basis to a few thousand users, so a user would typically have say between 1\n> and 2 thousand roles. There is also quite a lot of \"churn\" in terms of\n> tables being created/removed, and permissions changed.\n> \n> The issue is that we're hitting a strange performance problem on\n> connection. Sometimes it can take ~25 to 40 seconds just to connect,\n> although it's often way quicker. There seems to be no middle ground - never\n> have I seen a connection take between 0.5 and 25 seconds for example. We\n> suspect it's related to the number of roles the connecting user has\n> (including via other roles), because if we remove all roles but one from\n> the connecting user (the one that grants connection permissions),\n> connecting is always virtually instantaneous.\n> \n\nI tried a couple simple setups with many roles (user with many roles\ngranted directly and with many roles granted through other roles), but\nI've been unable to reproduce this.\n\n> The closest issue that I can find that's similar is\n> https://www.postgresql.org/message-id/flat/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz%3DvggErdSG7pv2s6vmmTOLJSg%40mail.gmail.com,\n> which reports that GRANT role is slow with a high number of roles - but in\n> our case, it's connecting that's the problem, before (as far as we can\n> tell) even one query is run. The database is busy, say up to 60-80% on a 16\n> VCPU machine - even if it's a \"good amount\" below 100%, the issue occurs.\n> \n> Is there anything we can do to investigate (or hopefully fix!) 
the issue?\n> \n\nA reproducer would be great - a script that creates user/roles, and\ntriggers the long login time would allow us to investigate that.\n\nAnother option would be to get a perf profile from the process busy with\nlogging the user in - assuming it's CPU-intensive, and not (e.g.) some\nsort of locking issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2024 14:57:20 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Michal Charemza <[email protected]> writes:\n> The issue is that we're hitting a strange performance problem on\n> connection. Sometimes it can take ~25 to 40 seconds just to connect,\n> although it's often way quicker. There seems to be no middle ground - never\n> have I seen a connection take between 0.5 and 25 seconds for example. We\n> suspect it's related to the number of roles the connecting user has\n> (including via other roles), because if we remove all roles but one from\n> the connecting user (the one that grants connection permissions),\n> connecting is always virtually instantaneous.\n\nIt's not very clear what you mean by \"sometimes\". Is the slowness\nreproducible for a particular user and role configuration, or does\nit seem to come and go by itself?\n\nAs Tomas said, a self-contained reproduction script would be very\nhelpful for looking into this.\n\n> The closest issue that I can find that's similar is\n> https://www.postgresql.org/message-id/flat/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz%3DvggErdSG7pv2s6vmmTOLJSg%40mail.gmail.com,\n> which reports that GRANT role is slow with a high number of roles - but in\n> our case, it's connecting that's the problem, before (as far as we can\n> tell) even one query is run.\n\nThat specific problem is (we think) new in v16, but the root cause\nis an inefficient lookup mechanism that has been there a long time.\nMaybe you have found a usage pattern that exposes its weakness in\nolder branches. If so, we could consider back-patching 14e991db8\nfurther than v16 ... but I don't plan to take any risk there without\nconcrete evidence that it'd improve things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2024 10:37:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "On Sat, Apr 20, 2024, 5:25 PM Michal Charemza <[email protected]> wrote:\n\n> Hi,\n>\n> We're running PostgreSQL as essentially a data warehouse, and we have a\n> few thousand roles, which are used to grant permissions on a table-by-table\n> basis to a few thousand users, so a user would typically have say between 1\n> and 2 thousand roles. There is also quite a lot of \"churn\" in terms of\n> tables being created/removed, and permissions changed.\n>\n> The issue is that we're hitting a strange performance problem on\n> connection. Sometimes it can take ~25 to 40 seconds just to connect,\n> although it's often way quicker\n>\n\ncan you rule out system catalog bloat ?",
"msg_date": "Sat, 20 Apr 2024 20:22:15 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> It's not very clear what you mean by \"sometimes\". Is the slowness\nreproducible for a particular user and role configuration, or does\nit seem to come and go by itself?\n\nAh it's more come and go by itself - as in one connection takes 30 seconds,\nthen the next say 0.06s. It's happened for every user we've tried. Even\nmore anecdotally, I would say it happens more when the database is busy in\nterms of tables being dropped/created and permissions changing.\n\nAlso: realise we did have one user that was directly a member of\nseveral thousand roles, but indirectly several million. It would sometimes\ntake 10 minutes for that user to connect. We've since changed that to one\nrole, and that user connects fine now.\n\n> As Tomas said, a self-contained reproduction script would be very\nhelpful for looking into this.\n\nHave tried... but alas it seems fine in anything but the production\nenvironment. My closest attempt is attached to at least show in more\ndetail how our system is set up, but it always works fine for me locally.\n\nI am wondering - what happens on connection? What catalogue tables does\nPostgreSQL check and how? What's allowed to happen concurrently and what\nisn't? If I knew, maybe I could come up with a reproduction script that\ndoes reproduce the issue?",
"msg_date": "Sat, 20 Apr 2024 16:12:56 +0100",
"msg_from": "Michal Charemza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Vijaykumar Jain <[email protected]> writes:\n> can you rule out system catalog bloat ?\n\nI don't know! I've now run the query from\nhttps://wiki.postgresql.org/wiki/Show_database_bloat just just on\npg_catalog, results attached\n\nOn Sat, Apr 20, 2024 at 3:52 PM Vijaykumar Jain <\[email protected]> wrote:\n\n>\n>\n> On Sat, Apr 20, 2024, 5:25 PM Michal Charemza <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> We're running PostgreSQL as essentially a data warehouse, and we have a\n>> few thousand roles, which are used to grant permissions on a table-by-table\n>> basis to a few thousand users, so a user would typically have say between 1\n>> and 2 thousand roles. There is also quite a lot of \"churn\" in terms of\n>> tables being created/removed, and permissions changed.\n>>\n>> The issue is that we're hitting a strange performance problem on\n>> connection. Sometimes it can take ~25 to 40 seconds just to connect,\n>> although it's often way quicker\n>>\n>\n> can you rule out system catalog bloat ?\n>\n>",
"msg_date": "Sat, 20 Apr 2024 16:22:56 +0100",
"msg_from": "Michal Charemza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Michael, can you validate if this is consistently happening for the first connection after database cluster startup?\n\nFrits\n\n> Op 20 apr 2024 om 04:55 heeft Michal Charemza <[email protected]> het volgende geschreven:\n> \n> \n> Hi,\n> \n> We're running PostgreSQL as essentially a data warehouse, and we have a few thousand roles, which are used to grant permissions on a table-by-table basis to a few thousand users, so a user would typically have say between 1 and 2 thousand roles. There is also quite a lot of \"churn\" in terms of tables being created/removed, and permissions changed.\n> \n> The issue is that we're hitting a strange performance problem on connection. Sometimes it can take ~25 to 40 seconds just to connect, although it's often way quicker. There seems to be no middle ground - never have I seen a connection take between 0.5 and 25 seconds for example. We suspect it's related to the number of roles the connecting user has (including via other roles), because if we remove all roles but one from the connecting user (the one that grants connection permissions), connecting is always virtually instantaneous.\n> \n> The closest issue that I can find that's similar is https://www.postgresql.org/message-id/flat/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz%3DvggErdSG7pv2s6vmmTOLJSg%40mail.gmail.com, which reports that GRANT role is slow with a high number of roles - but in our case, it's connecting that's the problem, before (as far as we can tell) even one query is run. The database is busy, say up to 60-80% on a 16 VCPU machine - even if it's a \"good amount\" below 100%, the issue occurs.\n> \n> Is there anything we can do to investigate (or hopefully fix!) 
the issue?\n> \n> Thanks,\n> \n> Michal\n> \n> ------\n> \n> A description of what you are trying to achieve and what results you expect.:\n> We would like to connect to the database - expect it to connect in less than 1 second, but sometimes 25 - 40s.\n> \n> PostgreSQL version number you are running:\n> PostgreSQL 14.10 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 9.5.0, 64-bit\n> \n> How you installed PostgreSQL:\n> Via AWS/Amazon Aurora\n> \n> Changes made to the settings in the postgresql.conf file\n> In attached CSV file\n> \n> Operating system and version:\n> Unknown\n> \n> What program you're using to connect to PostgreSQL:\n> Python + SQLAlchemy, psql, or also via Amazon Quicksight (Unsure which client they use under the hood, but it surfaces connection timeout errors, which we suspect is due to the issue described above)\n> \n> Is there anything relevant or unusual in the PostgreSQL server logs?:\n> No\n> \n> For questions about any kind of error:\n> N/A\n> <server_configuration.csv>",
"msg_date": "Sat, 20 Apr 2024 08:24:02 -0700",
"msg_from": "Frits Hoogland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Frits Hoogland <[email protected]> writes:\n> Michael, can you validate if this is consistently happening for the first\nconnection after database cluster startup?\n\nHmmm... it'll be tricky and need some planning. It might even be impossible\nsince this is on AWS Aurora, and I think AWS connects in regularly as part\nof a heartbeat check(?)\n\nBut: do you mean any connection, or just a user that hasn't connected yet\nafter the restart? A user that hasn't connected yet would be much easier.",
"msg_date": "Sat, 20 Apr 2024 16:34:12 +0100",
"msg_from": "Michal Charemza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Michal Charemza <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> It's not very clear what you mean by \"sometimes\". Is the slowness\n> reproducible for a particular user and role configuration, or does\n> it seem to come and go by itself?\n\n> Ah it's more come and go by itself - as in one connection takes 30 seconds,\n> then the next say 0.06s. It's happened for every user we've tried. Even\n> more anecdotally, I would say it happens more when the database is busy in\n> terms of tables being dropped/created and permissions changing.\n\nOK, that pretty much eliminates the idea that it's a new manifestation\nof the catcache-inefficiency problem. Vijaykumar may well have the\nright idea, that it's a form of catalog bloat. Do you do bulk\npermissions updates that might affect thousands of role memberships at\nonce?\n\n> Also: realise we did have one user that had directly was a member of\n> several thousand roles, but indirectly several million. It would sometimes\n> take 10 minutes for that user to connect. We've since changed that to one\n> role, and that user connects fine now.\n\nInteresting --- but even for that user, it was sometimes fast to\nconnect?\n\n> I am wondering - what happens on connection? What catalogue tables does\n> PostgreSQL check and how? What's allowed to happen concurrently and what\n> isn't? If I knew, maybe I could come up with a reproduction script that\n> does reproduce the issue?\n\nWell, it's going to be looking to see that the user has CONNECT\nprivileges on the target database. If that db doesn't have public\nconnect privileges, but only grants CONNECT to certain roles, then\nwe'll have to test whether the connecting user is a member of those\nroles --- which involves looking into pg_auth_members and possibly\neven doing recursive searches there. 
For the sort of setup you're\ndescribing with thousands of role grants (pg_auth_members entries)\nit's not hard to imagine that search being rather expensive. What\nremains to be explained is how come it's only expensive sometimes.\n\nThe catalog-bloat idea comes from thinking about how Postgres handles\nrow updates. There will be multiple physical copies (row versions)\nof any recently-updated row, and this is much more expensive to scan\nthan a static situation with only one live row version. First just\nbecause we have to look at more than one copy, and second because\ntesting whether that copy is the live version is noticeably more\nexpensive if it's recent than once it's older than the xmin horizon,\nand third because if we are the first process to scan it since it\nbecame dead-to-everybody then it's our responsibility to mark it as\ndead-to-everybody, so that we have to incur additional I/O to do that.\nA plausible idea for particular connection attempts being slow is that\nthey came in just as a whole lot of pg_auth_members entries became\ndead-to-everybody, and hence they were unlucky enough to get saddled\nwith a whole lot of that hint-bit-updating work. (This also nicely\nexplains why the next attempt isn't slow: the work's been done.)\n\nBut this is only plausible if you regularly do actions that cause a\nlot of pg_auth_members entries to be updated at the same time.\nSo we still don't have good insight into that, and your test script\nisn't shedding any light.\n\nA couple of other thoughts:\n\n* I don't think your test script would show a connection-slowness\nproblem even if there was one to be shown, because you forgot to\nrevoke the PUBLIC connect privilege on the postgres database.\nI'm fairly sure that if that exists it's always noticed first,\nbypassing the need for any role membership tests. 
So please\nconfirm whether your production database has revoked PUBLIC\nconnect privilege.\n\n* It could be that the problem is not associated with the\ndatabase's connect privilege, but with role membership lookups\ntriggered by pg_hba.conf entries. Do you have any entries there\nthat require testing membership (i.e. the role column is not\n\"all\")?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2024 11:52:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> Do you do bulk\n> permissions updates that might affect thousands of role memberships at\n> once?\n\nSo we do regularly update role memberships - essentially a sync from a\nseparate database, and some could well have happened just before\nconnections, but it's more in the tens at a time at most, not thousands...\nOr at least, that's what I thought. It sounds like it would be good to see\nif it's doing more. It'll take some time for me to figure this out though...\n\n> I'm fairly sure that if that exists it's always noticed first,\n> bypassing the need for any role membership tests. So please\n> confirm whether your production database has revoked PUBLIC\n> connect privilege.\n\nI realised that in fact we hadn't revoked this. So it sounds like whatever\nthe issue, it's not about checking if the user has the CONNECT privilege?\n\n> It could be that the problem is not associated with the\n> database's connect privilege, but with role membership lookups\n> triggered by pg_hba.conf entries. Do you have any entries there\n> that require testing membership (i.e. the role column is not\n> \"all\")?\n\nRunning `select * from pg_hba_file_rules` it looks like the user column is\nalways {all} or {rdsadmin}\n\nThanks,\n\nMichal",
"msg_date": "Sun, 21 Apr 2024 12:08:11 +0100",
"msg_from": "Michal Charemza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
},
{
"msg_contents": "Michal Charemza <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> I'm fairly sure that if that exists it's always noticed first,\n>> bypassing the need for any role membership tests. So please\n>> confirm whether your production database has revoked PUBLIC\n>> connect privilege.\n\n> I realised that in fact we hadn't revoked this. So it sounds like whatever\n> the issue, it's not about checking if the user has the CONNECT privilege?\n\nYeah. I double-checked the code (see aclmask()), and it will detect\nholding a privilege via PUBLIC before it performs any role membership\nsearches. So whatever is happening, it's not that lookup.\n\n>> It could be that the problem is not associated with the\n>> database's connect privilege, but with role membership lookups\n>> triggered by pg_hba.conf entries. Do you have any entries there\n>> that require testing membership (i.e. the role column is not\n>> \"all\")?\n\n> Running `select * from pg_hba_file_rules` it looks like the user column is\n> always {all} or {rdsadmin}\n\nYou'll need to look closer and figure out which of the HBA rules is\nbeing used for the slow connection attempts. If it's {rdsadmin}\nthen that would definitely involve a role membership search.\nIf it's {all} then we're back to square one.\n\nA different line of thought could be that the slow connections\nare slow because they are waiting on a lock that some other\nprocess has got and is in no hurry to release. It would be\nworth trying to capture the contents of the pg_locks view\n(and I guess also pg_stat_activity) while one of these sessions\nis stuck, if you can reproduce it often enough to make that\nfeasible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2024 11:08:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow to establish connection when user has a high\n number of roles"
}
] |
[
{
"msg_contents": "I have been having an ongoing problem for years with PostgreSQL selecting\nvery poor plans when running queries. It does things like doing a table\nscan of gigabyte size tables to generate a hash table rather than use a\nsuitable index.\n\nWhen I disable enough features that it generates a sensible plan I notice\nthat the low range of the total cost is many orders of magnitude lower\nwhile the high range is higher then the slow plan it originally chose. This\nsuggests PostgreSQL is choosing a plan based on the high end cost rather\nthan the median cost.\n\nIs there a PostgreSQL setting that can control how it judges plans?\n\nHere is a recent example of a query that finds the last time at a stop\nfiltered for a certain route it has to look up another table to find.\nPostgreSQL initially chose the plan that cost \"37357.45..37357.45\" rather\nthan the one that cost \"1.15..61088.32\".\n\ntranssee=# explain analyze select t.time+coalesce(t.timespent,0)*interval\n'1 second' from trackstopscurr t join tracktrip r on r.a=0 and\nr.route='501' and r.id=t.trackid and t.stopid='4514' order by t.time desc\nlimit 1;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=37357.45..37357.45 rows=1 width=16) (actual\ntime=2667.674..2694.047 rows=1 loops=1)\n -> Sort (cost=37357.45..37357.45 rows=1 width=16) (actual\ntime=2667.673..2694.045 rows=1 loops=1)\n Sort Key: t.\"time\" DESC\n Sort Method: top-N heapsort Memory: 25kB\n -> Gather (cost=1182.60..37357.44 rows=1 width=16) (actual\ntime=387.266..2692.733 rows=4027 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop (cost=182.60..36357.34 rows=1 width=16)\n(actual time=381.913..2659.412 rows=1342 loops=3)\n -> Parallel Bitmap Heap Scan on trackstopscurr t\n (cost=182.03..19048.63 rows=2014 width=14) (actual time=380.467..1231.788\nrows=8097 loops=3)\n 
Recheck Cond: ((stopid)::text = '4514'::text)\n Heap Blocks: exact=8103\n -> Bitmap Index Scan on trackstopscurr_2\n (cost=0.00..180.82 rows=4833 width=0) (actual time=382.653..382.653\nrows=24379 loops=1)\n Index Cond: ((stopid)::text = '4514'::text)\n -> Index Scan using tracktrip_0 on tracktrip r\n (cost=0.57..8.59 rows=1 width=4) (actual time=0.175..0.175 rows=0\nloops=24290)\n Index Cond: (id = t.trackid)\n Filter: ((a = 0) AND ((route)::text =\n'501'::text))\n Rows Removed by Filter: 1\n Planning Time: 0.228 ms\n Execution Time: 2694.077 ms\n(19 rows)\n\ntranssee=# set enable_sort TO false;\n SET\ntranssee=# explain analyze select t.time+coalesce(t.timespent,0)*interval\n'1 second' from trackstopscurr t join tracktrip r on r.a=0 and\nr.route='501' and r.id=t.trackid and t.stopid='4514' order by t.time desc\nlimit 1;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1.15..61088.32 rows=1 width=16) (actual time=0.076..0.076\nrows=1 loops=1)\n -> Nested Loop (cost=1.15..61088.32 rows=1 width=16) (actual\ntime=0.076..0.076 rows=1 loops=1)\n -> Index Scan Backward using trackstopscurr_2 on trackstopscurr t\n (cost=0.57..19552.59 rows=4833 width=14) (actual time=0.021..0.032 rows=5\nloops=1)\n Index Cond: ((stopid)::text = '4514'::text)\n -> Index Scan using tracktrip_0 on tracktrip r (cost=0.57..8.59\nrows=1 width=4) (actual time=0.008..0.008 rows=0 loops=5)\n Index Cond: (id = t.trackid)\n Filter: ((a = 0) AND ((route)::text = '501'::text))\n Rows Removed by Filter: 1\n Planning Time: 0.229 ms\n Execution Time: 0.091 ms\n(10 rows)\n\nIndexes:\n \"trackstopscurr_0f\" UNIQUE, btree (trackid, \"time\"), tablespace \"new\"\n \"trackstopscurr_1\" btree (trackid, stopid), tablespace \"new\"\n \"trackstopscurr_2\" btree (stopid, \"time\")",
"msg_date": "Wed, 29 May 2024 21:03:28 -0400",
"msg_from": "\"Darwin O'Connor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plan selection based on worst case scenario"
},
{
"msg_contents": "On Thu, 30 May 2024 at 13:03, Darwin O'Connor <[email protected]> wrote:\n> Is there a PostgreSQL setting that can control how it judges plans?\n\nThere's nothing like that, unfortunately.\n\n> Here is a recent example of a query that finds the last time at a stop filtered for a certain route it has to look up another table to find. PostgreSQL initially chose the plan that cost \"37357.45..37357.45\" rather than the one that cost \"1.15..61088.32\".\n>\n> transsee=# explain analyze select t.time+coalesce(t.timespent,0)*interval '1 second' from trackstopscurr t join tracktrip r on r.a=0 and r.route='501' and r.id=t.trackid and t.stopid='4514' order by t.time desc limit 1;\n\n> Limit (cost=37357.45..37357.45 rows=1 width=16) (actual time=2667.674..2694.047 rows=1 loops=1)\n\n> -> Nested Loop (cost=182.60..36357.34 rows=1 width=16) (actual time=381.913..2659.412 rows=1342 loops=3)\n> -> Parallel Bitmap Heap Scan on trackstopscurr t (cost=182.03..19048.63 rows=2014 width=14) (actual time=380.467..1231.788 rows=8097 loops=3)\n> Recheck Cond: ((stopid)::text = '4514'::text)\n> Heap Blocks: exact=8103\n> -> Bitmap Index Scan on trackstopscurr_2 (cost=0.00..180.82 rows=4833 width=0) (actual time=382.653..382.653 rows=24379 loops=1)\n> Index Cond: ((stopid)::text = '4514'::text)\n> -> Index Scan using tracktrip_0 on tracktrip r (cost=0.57..8.59 rows=1 width=4) (actual time=0.175..0.175 rows=0 loops=24290)\n> Index Cond: (id = t.trackid)\n> Filter: ((a = 0) AND ((route)::text = '501'::text))\n> Rows Removed by Filter: 1\n> Planning Time: 0.228 ms\n> Execution Time: 2694.077 ms\n\nThe problem here is primarily down to the poor estimates for the scan\non tracktrip. You can see that the Nested Loop estimates 1 row, so\ntherefore the LIMIT costing code thinks LIMIT 1 will require reading\nall rows, all 1 of them. 
In which case that's expected to cost\n36357.34, which is cheaper than the other plan which costs 61088.32 to\nget one row.\n\nIf you can fix the row estimate to even estimate 2 rows rather than 1,\nthen it'll choose the other plan. An estimate of 2 rows would mean\nthe total cost of the best path after the limit would be 61088.32 / 2\n= 30544.16, which is cheaper than the 36357.34 of the bad plan.\n\nYou could try ANALYZE on tracktrip, or perhaps increasing the\nstatistics targets on the columns being queried here.\n\nIf there's a correlation between the \"a\" and \"route\" columns then you\nmight want to try CREATE STATISTICS:\n\nCREATE STATISTICS ON a,route FROM tracktrip;\nANALYZE tracktrip;\n\nDavid\n\n\n",
"msg_date": "Thu, 30 May 2024 13:44:45 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan selection based on worst case scenario"
},
{
"msg_contents": "Hey Darwin,\n\nyou don't mention your version or config, but it's always good to go\nthrough https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nI used to notice huge improvements in plans when increasing statistics in\nrelevant columns, as already suggested by David, and also by\nlowering random_page_cost, especially in older Pg versions.",
"msg_date": "Wed, 29 May 2024 22:03:16 -0600",
"msg_from": "Chema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan selection based on worst case scenario"
}
] |
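David's LIMIT arithmetic in the thread above can be checked numerically. The sketch below is a simplified model under the assumption he describes — the planner charges roughly total_cost / estimated_rows for fetching the first row, with the startup cost ignored since it is ~1.15 here — and is not PostgreSQL's actual costing code:

```python
def limit1_cost(total_cost, est_rows):
    # Simplified model of LIMIT 1 costing from David's reply: fetching one
    # row is expected to cost total_cost / est_rows (startup cost ignored).
    # Not the planner's exact formula.
    return total_cost / max(est_rows, 1)

nestloop_gather = 36357.34  # plan whose join output is estimated at 1 row
index_backward = 61088.32   # backward index scan plan from the good EXPLAIN

# With the bad 1-row estimate, the slow plan looks cheaper:
assert limit1_cost(index_backward, 1) > nestloop_gather

# An estimate of just 2 rows flips the decision: 61088.32 / 2 = 30544.16,
# which beats 36357.34, so the backward index scan plan would be chosen.
assert abs(limit1_cost(index_backward, 2) - 30544.16) < 1e-6
assert limit1_cost(index_backward, 2) < nestloop_gather
```

This is why anything that nudges the row estimate above 1 — ANALYZE, higher statistics targets, or CREATE STATISTICS, as suggested in the thread — is enough to change the chosen plan.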
[
{
"msg_contents": "I've written the below SQL query that joins pg_stat_statements with\npg_stat_activity using \"queryid\" as the join condition. Yet, the results\nshow that pg_stat_statements and pg_stat_activity are reporting two\ndistinct queries for the identical queryid. Can this occur?\n\nselect\n\npgss.queryid as \"PGSS_QUERYID\",\n\npgss.query as \"PGSS_QUERY\",\n\npgsa.query_id as \"PGSA_QUERYID\",\n\nsubstring(pgsa.query,\n\n1,\n\n45) as \"PGSA_QUERY\"\n\nfrom\n\npg_stat_statements pgss\n\njoin pg_stat_activity pgsa on\n\npgss.queryid = pgsa.query_id\n\nand pgss.queryid = '2397681704071010949';\n\n\n PGSS_QUERYID | PGSS_QUERY | PGSA_QUERYID |\n PGSA_QUERY\n---------------------+------------+---------------------+-----------------------------------------------\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nprojectper0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nprojectper0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | SELECT\nitem_guid, count(item_guid) count FR\n 2397681704071010949 | BEGIN | 2397681704071010949 | SELECT\nitem_guid, count(item_guid) count FR\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | SELECT distinct\nep.role FROM v_project_permis\n 2397681704071010949 | BEGIN | 2397681704071010949 | SELECT distinct\nep.role FROM v_project_permis\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nthis_.CONTACT_SID as CONTACT1_362_1_,\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nthis_.CONTACT_SID as CONTACT1_362_1_,\n 2397681704071010949 
| BEGIN | 2397681704071010949 | SELECT\nitem_guid,category,root_guid, count(i\n 2397681704071010949 | BEGIN | 2397681704071010949 | SELECT\nitem_guid,category,root_guid, count(i\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\n 2397681704071010949 | BEGIN | 2397681704071010949 | select\nfolderperm0_.ENTITY_PERMISSION_SID as\nRegards,\n\nSatalabha",
"msg_date": "Tue, 4 Jun 2024 12:16:09 +0530",
"msg_from": "Satalabaha Postgres <[email protected]>",
"msg_from_op": true,
"msg_subject": "query column in pg_stat_statements and pg_stat_activity"
}
] |
[
{
"msg_contents": "Hello,\n\nIn our application, after upgrading PostgreSQL from v15 to v16,\nperformance for some queries dropped from less than 1-2 seconds to 2+\nminutes.\nAfter checking out some blog posts regarding DISTINCT improvements in v16 I\nwas able to \"fix\" it by adding order by clauses in subqueries; still, it\nlooks more like a workaround to me and I would like to ask if it is\nworking as expected now.\n\nI was able to reproduce the issue by the following steps\n1. start postgresql v16 in docker\n docker run --name some-postgres16 -e\nPOSTGRES_PASSWORD=mysecretpassword -d postgres:16\n2. create tables with data.\n - data is originated by northwind db, but tables were created with\ndifferent schema by custom software\n - each table has 8k rows\n - _BDN_Terretories__202406100822.sql\nand _BDN_EmployeeTerritories__202406100822.sql files are attached\n3. Execute the following query\n\nselect distinct tbl1.\"BDN_EmployeeTerritories_ID\", tbl2.\"BDN_Terretories_ID\",\ntbl1.\"Reference_Date\" from\n(\nselect \"BDN_EmployeeTerritories_ID\", \"Reference_Date\", token\nfrom public.\"BDN_EmployeeTerritories\",\nunnest(string_to_array(\"EMP_TerretoryID\", ';')) s(token)\n--order by token\n) tbl1\njoin (\nselect \"BDN_Terretories_ID\", token from public.\"BDN_Terretories\", unnest(\nstring_to_array(\"EMP_TerretoryID\", ';')) s(token)\n) tbl2 on tbl1.token = tbl2.token\n\nObservations:\n1. query runs for 4-5 seconds on v16 and less than a second on v15\n2. in v16 it also goes down to less than a second if\n 2.1 distinct is removed\n 2.2 unnest is removed. it is not really needed for this particular data\nbut this query is autogenerated and unnest makes sense for other data\n 2.3 \"order by token\" is uncommented; this is my current way of fixing\nthe problem\n\nI would really appreciate some feedback on whether that is expected\nbehaviour and if there are better solutions\n\n-- \nBest Regards,\nVitalii Lytovskyi",
"msg_date": "Mon, 10 Jun 2024 08:59:04 +0200",
"msg_from": "Vitaliy Litovskiy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Distinct performance dropped by multiple times in v16"
},
{
"msg_contents": "On Mon, Jun 10, 2024 at 3:32 AM Vitaliy Litovskiy <\[email protected]> wrote:\n\n> 1. query runs for 4-5 seconds on v16 and less than a second on v15\n>\n\nYeah, that's a big regression. Seeing the actual EXPLAIN ANALYZE output\nfor both systems would be very helpful to us. Also nice to see the plan if\nyou do a\nSET enable_incremental_sort=0;\nbefore running the query\n\nI wonder if materializing things might help, something like\n\nwith\n x as (select id, unnest(string_to_array(emp,';')) as token, refdate from\nterr),\n y as (select id, unnest(string_to_array(emp,';')) as token from employee)\nselect distinct x.id, y.id, refdate from x join y using (token);\n\n(renamed to avoid all those mixed-case quoting)\n\nCheers,\nGreg",
"msg_date": "Mon, 10 Jun 2024 12:24:29 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct performance dropped by multiple times in v16"
},
{
"msg_contents": "On 6/10/24 13:59, Vitaliy Litovskiy wrote:\n> ) tbl2 on tbl1.token = tbl2.token Observations:\n> 1. query runs for 4-5 seconds on v16 and less than a second on v15 2. in \n> v16 it also goes downs to less than a second if 2.1 distinct is removed\n> \n> 2.2 unnest is removed. it is not really needed for this particular data \n> but this query is autogenerated and unnest makes sense for other data\n> \n> 2.3 \"order by token\" is uncommented, this is my current way of fixing \n> the problem I would really appreciate some feedback if that is expected \n> behaviour and if there are better solutions\nThe reason for this behaviour is simple: commit 3c6fc58 allowed using \nincremental_sort with DISTINCT clauses.\nSo, in PostgreSQL 16 we have two concurrent strategies:\n1. HashJoin + hashAgg at the end\n2. NestLoop, which derives sort order from the index scan, so the planner \ncan utilise IncrementalSort+Unique instead of a full sort.\n\nDisabling Incremental Sort (see explain 2) we get a good plan with \nHashJoin at the top. Now we can see that HashJoin is definitely cheaper \naccording to total cost, but has a big startup cost. The optimiser \ncompares the cost of the incremental sort and, compared to the hash agg, \nit is much cheaper.\n\nHere we already have a couple of questions:\n1. Why does the optimiser misestimate in such a simple situation?\n2. Why, in the case of big numbers of tuples, is incremental sort judged \nbetter than hashAgg?\n\nBefore the next step, just see how the optimiser decides in the case of \ncorrect prediction. I usually use the AQO extension for that. Executing \nthe query twice with AQO in 'learn' mode, we have correct planned row \nnumbers in each node of the plan and, as you can see in EXPLAIN 3, the \noptimiser chooses the good strategy. 
So, having correct predictions, the \noptimiser ends up with the optimal plan.\n\nThe origins of the misestimation lie in the internals of unnest; they are not \nobvious so far and may be discovered later.\nThe reason why the optimiser likes NestLoop on big data looks enigmatic. \nAttempting to increase the cost of the unnest routine from 1 to 1E5, we don't \nsee any change in the decision. In my opinion, the key reason here may be \ntriggered by the unusual width of the JOIN result:\nHash Join (cost=0.24..0.47 rows=10 width=0)\nIn my opinion, the cost model can assign too low a cost to the join, and \nthat is the reason why the upper NestLoop looks better than HashJoin.\n\nI don't have any general recommendations to resolve this issue, but this \ncase should be investigated by the core developers.\n\n[1] https://github.com/postgrespro/aqo\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Tue, 11 Jun 2024 14:10:45 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct performance dropped by multiple times in v16"
},
{
"msg_contents": "On 6/10/24 13:59, Vitaliy Litovskiy wrote:\n> 2.2 unnest is removed. it is not really needed for this particular data \n> but this query is autogenerated and unnest makes sense for other data\n> \n> 2.3 \"order by token\" is uncommented, this is my current way of fixing \n> the problem I would really appreciate some feedback if that is expected \n> behaviour and if there are better solutions\nAfter a second thought, I found that the key issue here is that cycles \nover the unnest() routine are costed too cheaply. unnest's procost is 1, \nlike any other routine's, but it is a bit more costly than that.\nAlso, you can tune cpu_operator_cost a bit. Right now it is set to \n0.0025 by default. Increasing it to 0.005:\nSET cpu_operator_cost = 0.005;\nresolves your issue.\nI guess the value of cpu_operator_cost is usually not a problem, because \ntable pages cost much more. But here function calls are the main source \nof load and because of that need to be estimated more precisely.\nI hope this will be helpful for you.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Wed, 12 Jun 2024 19:10:47 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct performance dropped by multiple times in v16"
}
] |
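Andrei's cpu_operator_cost observation can be illustrated with a toy cost model. The numbers below are assumptions for illustration (they are not taken from the actual plans), and the two functions are hypothetical stand-ins for the competing plan shapes:

```python
# Sketch of the mechanism Andrei describes (assumed numbers): a plan
# dominated by function calls scales linearly with cpu_operator_cost,
# while a page-bound plan barely moves, so doubling cpu_operator_cost
# from its 0.0025 default can flip the planner's choice.
def plan_cost(page_cost, n_fn_calls, cpu_operator_cost, procost=1.0):
    # pages + (number of function/operator evaluations * per-call cost)
    return page_cost + n_fn_calls * procost * cpu_operator_cost

def call_heavy(c):   # hypothetical NestLoop: few pages, many unnest() calls
    return plan_cost(page_cost=1_000, n_fn_calls=4_000_000, cpu_operator_cost=c)

def page_bound(c):   # hypothetical HashJoin: many pages, few calls
    return plan_cost(page_cost=15_000, n_fn_calls=100_000, cpu_operator_cost=c)

assert call_heavy(0.0025) < page_bound(0.0025)  # default: call-heavy plan wins
assert call_heavy(0.005) > page_bound(0.005)    # doubled: page-bound plan wins
```

The same flip can be achieved per-function rather than globally, which is why Andrei notes that unnest's procost of 1 understates its real cost.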
[
{
"msg_contents": "Hi Team,\n\nI'm installing PostgreSQL 14 by using RPM. However, I'm getting an\nerror while executing the database initialization. Please check the error\nmessage below.\n\nError:\n\nThanks,\nNikhil,\nPostgreSQL DBA.",
"msg_date": "Wed, 12 Jun 2024 01:28:12 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql initialize error"
},
{
"msg_contents": "Error: ./initdb: error while loading shared libraries: libssl.so.1.1:\ncannot open shared object file: No such file or directory\n\n\nOn Wed, 12 Jun 2024 at 1:28 AM, nikhil kumar <[email protected]> wrote:\n\n> Hi Team,\n>\n> I'm installing PostgreSQL 14 by using RPM. However, I'm getting an\n> error while executing the database initialization. Please check the error\n> message below.\n>\n> Thanks,\n> Nikhil,\n> PostgreSQL DBA.\n>\n>",
"msg_date": "Wed, 12 Jun 2024 01:29:15 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql initialize error"
},
{
"msg_contents": "Greetings,\n\nCheck OpenSSL installed on your system\nopenssl version\n\nIf not installed\nsudo apt-get install openssl\n\nFind library\nsudo find / -name libssl.so.1.1\n\nAdd in Library path\nexport LD_LIBRARY_PATH=/path/to/libssl:$LD_LIBRARY_PATH\n\n*Salahuddin (살라후딘**)*\n\n\n\nOn Wed, 12 Jun 2024 at 00:59, nikhil kumar <[email protected]> wrote:\n\n> Error ; ./ initdbs error while loading shared libraries: libssl.so.1.1:\n> cannot open shared object file: No such file or directory\n>\n>\n> On Wed, 12 Jun 2024 at 1:28 AM, nikhil kumar <[email protected]>\n> wrote:\n>\n>> Hi Team,\n>>\n>> I'm installing postgresql 14 version by using Rpm. However i'm getting\n>> error while execute the database initialzation. Please check below error\n>> message\n>>\n>> Thanks,\n>> Nikhil,\n>> PostgreSQL DBA.\n>>\n>>",
"msg_date": "Wed, 12 Jun 2024 01:04:42 +0500",
"msg_from": "Muhammad Salahuddin Manzoor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql initialize error"
},
{
"msg_contents": "Thanks for your support. I’ll check it\n\nOn Wed, 12 Jun 2024 at 1:34 AM, Muhammad Salahuddin Manzoor <\[email protected]> wrote:\n\n> Greetings,\n>\n> Check OpenSSL installed on your system\n> openssl version\n>\n> If not installed\n> sudo apt-get install openssl\n>\n> Find library\n> sudo find / -name libssl.so.1.1\n>\n> Add in Library path\n> export LD_LIBRARY_PATH=/path/to/libssl:$LD_LIBRARY_PATH\n>\n> *Salahuddin (살라후딘**)*\n>\n>\n>\n> On Wed, 12 Jun 2024 at 00:59, nikhil kumar <[email protected]> wrote:\n>\n>> Error ; ./ initdbs error while loading shared libraries: libssl.so.1.1:\n>> cannot open shared object file: No such file or directory\n>>\n>>\n>> On Wed, 12 Jun 2024 at 1:28 AM, nikhil kumar <[email protected]>\n>> wrote:\n>>\n>>> Hi Team,\n>>>\n>>> I'm installing postgresql 14 version by using Rpm. However i'm getting\n>>> error while execute the database initialzation. Please check below error\n>>> message\n>>>\n>>> Thanks,\n>>> Nikhil,\n>>> PostgreSQL DBA.\n>>>\n>>>",
"msg_date": "Wed, 12 Jun 2024 01:35:58 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql initialize error"
},
{
"msg_contents": "As I checked, we have the OpenSSL package on that server, but that file is\nnot visible.\n\n\nOn Wed, 12 Jun 2024 at 1:35 AM, nikhil kumar <[email protected]> wrote:\n\n> Thanks for your support. I’ll check it\n>\n> On Wed, 12 Jun 2024 at 1:34 AM, Muhammad Salahuddin Manzoor <\n> [email protected]> wrote:\n>\n>> Greetings,\n>>\n>> Check OpenSSL installed on your system\n>> openssl version\n>>\n>> If not installed\n>> sudo apt-get install openssl\n>>\n>> Find library\n>> sudo find / -name libssl.so.1.1\n>>\n>> Add in Library path\n>> export LD_LIBRARY_PATH=/path/to/libssl:$LD_LIBRARY_PATH\n>>\n>> *Salahuddin (살라후딘**)*\n>>\n>>\n>>\n>> On Wed, 12 Jun 2024 at 00:59, nikhil kumar <[email protected]>\n>> wrote:\n>>\n>>> Error ; ./ initdbs error while loading shared libraries: libssl.so.1.1:\n>>> cannot open shared object file: No such file or directory\n>>>\n>>>\n>>> On Wed, 12 Jun 2024 at 1:28 AM, nikhil kumar <[email protected]>\n>>> wrote:\n>>>\n>>>> Hi Team,\n>>>>\n>>>> I'm installing postgresql 14 version by using Rpm. However i'm getting\n>>>> error while execute the database initialzation. Please check below error\n>>>> message\n>>>>\n>>>> Thanks,\n>>>> Nikhil,\n>>>> PostgreSQL DBA.\n>>>>\n>>>>",
"msg_date": "Wed, 12 Jun 2024 01:53:09 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql initialize error"
},
{
"msg_contents": "Just to be safe rather than sorry, install the openssl and openssl-dev packages.\r\n\r\nYou may want to run updatedb, if you have the locate package installed, just to double verify.\r\n\r\nTravis Smith\r\nVP Business Technology\r\nCircana\r\n\r\nFrom: nikhil kumar <[email protected]>\r\nSent: Tuesday, June 11, 2024 3:23 PM\r\nTo: Muhammad Salahuddin Manzoor <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: Postgresql initialize error\r\n\r\nAs I check we have OpenSSL package in that server but that file is not visible\r\n\r\n\r\nOn Wed, 12 Jun 2024 at 1:35 AM, nikhil kumar <[email protected]<mailto:[email protected]>> wrote:\r\nThanks for your support. I’ll check it\r\n\r\nOn Wed, 12 Jun 2024 at 1:34 AM, Muhammad Salahuddin Manzoor <[email protected]<mailto:[email protected]>> wrote:\r\nGreetings,\r\n\r\nCheck OpenSSL installed on your system\r\nopenssl version\r\n\r\nIf not installed\r\nsudo apt-get install openssl\r\n\r\nFind library\r\nsudo find / -name libssl.so.1.1\r\n\r\nAdd in Library path\r\nexport LD_LIBRARY_PATH=/path/to/libssl:$LD_LIBRARY_PATH\r\n\r\nSalahuddin (살라후딘)\r\n\r\n\r\nOn Wed, 12 Jun 2024 at 00:59, nikhil kumar <[email protected]<mailto:[email protected]>> wrote:\r\nError ; ./ initdbs error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory\r\n\r\n\r\nOn Wed, 12 Jun 2024 at 1:28 AM, nikhil kumar <[email protected]<mailto:[email protected]>> wrote:\r\nHi Team,\r\n\r\nI'm installing postgresql 14 version by using Rpm. However i'm getting error while execute the database initialzation. Please check below error message\r\n\r\nThanks,\r\nNikhil,\r\nPostgreSQL DBA.",
"msg_date": "Tue, 11 Jun 2024 20:25:00 +0000",
"msg_from": "\"Smith, Travis\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Postgresql initialize error"
}
] |
[
{
"msg_contents": "Hi Team,\n\nCan anyone please help with SMTP configuration for sending mail via Gmail?\nIf there is any document, please let me know.\n\nThanks & Regards,\nNikhil,\nPostgreSQL DBA,\n8074430856.",
"msg_date": "Wed, 12 Jun 2024 17:28:00 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help on configuration SMTP"
},
{
"msg_contents": "Hi Nikhil,\n\nSee if these links work for you\n\nhttps://dev.to/davepar/sending-email-from-postgres-47i0\nhttps://www.cdata.com/kb/tech/email-jdbc-postgresql-fdw-mysql.rst\n\nRegards,\nMuhammad Ikram\nBitnine Global\n\nOn Wed, Jun 12, 2024 at 4:58 PM nikhil kumar <[email protected]> wrote:\n\n> Hi Team,\n>\n> Can anyone please help on SMTP configuration for send gmail. If any\n> document please let me know.\n>\n> Thanks & Regards,\n> Nikhil,\n> Postgresql DBA,\n> 8074430856.\n>\n\n\n-- \nMuhammad Ikram",
"msg_date": "Wed, 12 Jun 2024 17:01:29 +0500",
"msg_from": "Muhammad Ikram <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help on configuration SMTP"
},
{
"msg_contents": "On Wednesday, June 12, 2024, nikhil kumar <[email protected]> wrote:\n\n>\n> Can anyone please help on SMTP configuration for send gmail. If any\n> document please let me know.\n>\n\nThis seems like an exceedingly unusual place to be asking for such help…\n\nDavid J.\n\nOn Wednesday, June 12, 2024, nikhil kumar <[email protected]> wrote:Can anyone please help on SMTP configuration for send gmail. If any document please let me know. This seems like an exceedingly unusual place to be asking for such help…David J.",
"msg_date": "Wed, 12 Jun 2024 05:25:43 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help on configuration SMTP"
},
{
"msg_contents": "Thank you I’ll check it out.\n\nOn Wed, 12 Jun 2024 at 5:55 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Wednesday, June 12, 2024, nikhil kumar <[email protected]> wrote:\n>\n>>\n>> Can anyone please help on SMTP configuration for send gmail. If any\n>> document please let me know.\n>>\n>\n> This seems like an exceedingly unusual place to be asking for such help…\n>\n> David J.\n>\n\nThank you I’ll check it out.On Wed, 12 Jun 2024 at 5:55 PM, David G. Johnston <[email protected]> wrote:On Wednesday, June 12, 2024, nikhil kumar <[email protected]> wrote:Can anyone please help on SMTP configuration for send gmail. If any document please let me know. This seems like an exceedingly unusual place to be asking for such help…David J.",
"msg_date": "Wed, 12 Jun 2024 22:46:19 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help on configuration SMTP"
}
] |
[
{
"msg_contents": "Hello Beautiful People,\n\nI have a question regarding a good performing BDR system for Postgres. I am currently evaluating/prototyping a primary-primary/master-master replication system. I have a question around the experience held by the community at large. Has anyone worked in a production environment with a solid BDR platform that performs well? I am currently testing SymmetricDS but I am worried about a few things although the feature set is salivating.\n\nAny hints to other software would be great, the organization has budget cap to bring a talented consultant or consultant firm in to fulfill an advisory role.\n\n\n\nSo far I have reviewed:\nGreenplum -- always lagging in version\nPG 16 logical replication - very basic - manual conflict resolution.\nSymmetricDS - trigger based replication\nEDB BDR (PGD) - not an option at this time\n\n\n\n\nThank you in advance,\n\n[cid:[email protected]]\nTravis Smith\nVP Business Technology\nCircana\n\[email protected]<mailto:[email protected]>\nC: 773.349.1415\n203 North LaSalle Street, Suite 1500\n\ncircana.com<https://www.circana.com/> LinkedIn<https://www.linkedin.com/company/wearecircana> Twitter<https://twitter.com/wearecircana> Facebook<https://www.facebook.com/wearecircana> YouTube<https://www.youtube.com/@WeAreCircana> Instagram<https://www.instagram.com/wearecircana/>\n\nWe Are Now Circana! IRI and NPD have come together to form Circana, the world's leading advisor on the complexity of consumer behavior. Learn more.<https://www.circana.com/iri-and-npd-rebrand-as-circana-the-leading-advisor-on-the-complexity-of-consumer-behavior.html>",
"msg_date": "Fri, 14 Jun 2024 12:08:51 +0000",
"msg_from": "\"Smith, Travis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BDR that performs"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 12:08:51PM +0000, Smith, Travis wrote:\n> Hello Beautiful People,\n> \n> \n> \n> I have a question regarding a good performing BDR system for Postgres. I am\n> currently evaluating/prototyping a primary-primary/master-master replication\n> system. I have a question around the experience held by the community at\n> large. Has anyone worked in a production environment with a solid BDR\n> platform that performs well? I am currently testing SymmetricDS but I am\n> worried about a few things although the feature set is salivating. \n> \n> \n> \n> Any hints to other software would be great, the organization has budget cap to\n> bring a talented consultant or consultant firm in to fulfill an advisory role.\n> \n> So far I have reviewed:\n> \n> Greenplum -- always lagging in version\n> \n> PG 16 logical replication – very basic – manual conflict resolution.\n> \n> SymmetricDS – trigger based replication\n> \n> EDB BDR (PGD) – not an option at this time\n\nWell, I think you have the right data above. There isn't much demand\nfor multi-master in the community because the usefulness of it is\nlimited; see:\n\n\thttps://momjian.us/main/blogs/pgblog/2018.html#December_24_2018\n\nWe are working on expanding logical replication to handle DDL changes\nand conflicts, but that work is a few years away from completion.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 14 Jun 2024 08:35:49 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BDR that performs"
},
{
"msg_contents": "Hi Travis,\n\nOn Fri, Jun 14, 2024 at 5:39 PM Smith, Travis <[email protected]>\nwrote:\n\n> Hello Beautiful People,\n>\n>\n>\n> I have a question regarding a good performing BDR system for Postgres. I\n> am currently evaluating/prototyping a primary-primary/master-master\n> replication system. I have a question around the experience held by the\n> community at large. Has anyone worked in a production environment with a\n> solid BDR platform that performs well? I am currently testing SymmetricDS\n> but I am worried about a few things although the feature set is\n> salivating.\n>\n>\n>\n> Any hints to other software would be great, the organization has budget\n> cap to bring a talented consultant or consultant firm in to fulfill an\n> advisory role.\n>\n>\n>\n>\n>\n>\n>\n> So far I have reviewed:\n>\n> Greenplum -- always lagging in version\n>\n> PG 16 logical replication – very basic – manual conflict resolution.\n>\n> SymmetricDS – trigger based replication\n>\n> EDB BDR (PGD) – not an option at this time\n>\n\nOne alternative you might consider is pgEdge Platform. Specifically\ntailored for robust PostgreSQL MMR replication and high availability\nscenarios. pgEdge offers comprehensive support and extensive documentation,\nwhich could be beneficial for your current evaluation and prototyping phase.\n\nSee https://docs.pgedge.com/ for more info.\n\n>\n>\n>\n>\n>\n>\n>\n>\n> Thank you in advance,\n>\n>\n>\n> *Travis Smith*\n>\n> VP Business Technology\n>\n> Circana\n>\n>\n>\n> [email protected]\n>\n> C: 773.349.1415\n>\n> 203 North LaSalle Street, Suite 1500\n>\n>\n>\n> circana.com <https://www.circana.com/> LinkedIn\n> <https://www.linkedin.com/company/wearecircana> Twitter\n> <https://twitter.com/wearecircana> Facebook\n> <https://www.facebook.com/wearecircana> YouTube\n> <https://www.youtube.com/@WeAreCircana> Instagram\n> <https://www.instagram.com/wearecircana/>\n>\n>\n>\n> *We Are Now Circana! *IRI and NPD have come together to form Circana, the\n> world’s leading advisor on the complexity of consumer behavior. Learn\n> more.\n> <https://www.circana.com/iri-and-npd-rebrand-as-circana-the-leading-advisor-on-the-complexity-of-consumer-behavior.html>\n>\n>",
"msg_date": "Fri, 14 Jun 2024 18:10:31 +0530",
"msg_from": "Hari Kiran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BDR that performs"
},
{
"msg_contents": "Hello Travis,\n\nWe have had this requirement (true BDR that also handles DDLs & Conflict\nResolution) and have been researching the options for a long time and I'll\nsend you more details soon, but meanwhile curious to understand why EDB BDR\nis not an option?\n\nThanks\n\nOn Fri, 14 Jun 2024 at 17:39, Smith, Travis <[email protected]>\nwrote:\n\n> Hello Beautiful People,\n>\n>\n>\n> I have a question regarding a good performing BDR system for Postgres. I\n> am currently evaluating/prototyping a primary-primary/master-master\n> replication system. I have a question around the experience held by the\n> community at large. Has anyone worked in a production environment with a\n> solid BDR platform that performs well? I am currently testing SymmetricDS\n> but I am worried about a few things although the feature set is\n> salivating.\n>\n>\n>\n> Any hints to other software would be great, the organization has budget\n> cap to bring a talented consultant or consultant firm in to fulfill an\n> advisory role.\n>\n>\n>\n>\n>\n>\n>\n> So far I have reviewed:\n>\n> Greenplum -- always lagging in version\n>\n> PG 16 logical replication – very basic – manual conflict resolution.\n>\n> SymmetricDS – trigger based replication\n>\n> EDB BDR (PGD) – not an option at this time\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> Thank you in advance,\n>\n>\n>\n> *Travis Smith*\n>\n> VP Business Technology\n>\n> Circana\n>\n>\n>\n> [email protected]\n>\n> C: 773.349.1415\n>\n> 203 North LaSalle Street, Suite 1500\n>\n>\n>\n> circana.com <https://www.circana.com/> LinkedIn\n> <https://www.linkedin.com/company/wearecircana> Twitter\n> <https://twitter.com/wearecircana> Facebook\n> <https://www.facebook.com/wearecircana> YouTube\n> <https://www.youtube.com/@WeAreCircana> Instagram\n> <https://www.instagram.com/wearecircana/>\n>\n>\n>\n> *We Are Now Circana! *IRI and NPD have come together to form Circana, the\n> world’s leading advisor on the complexity of consumer behavior. Learn\n> more.\n> <https://www.circana.com/iri-and-npd-rebrand-as-circana-the-leading-advisor-on-the-complexity-of-consumer-behavior.html>\n>\n>",
"msg_date": "Fri, 14 Jun 2024 18:11:53 +0530",
"msg_from": "P C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BDR that performs"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 06:11:53PM +0530, P C wrote:\n> Hello Travis,\n> \n> We have had this requirement (true BDR that also handles DDLs & Conflict\n> Resolution) and have been researching the options for a long time and I'll send\n> you more details soon, but meanwhile curious to understand why EDB BDR is not\n> an option?\n\nI agree pgEdge is worth considering:\n\n\thttps://www.pgedge.com/products/pgedge-platform\n\nand perhaps EDB BDR/PGD was not being considered due to cost.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 14 Jun 2024 10:57:19 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BDR that performs"
}
] |
[
{
"msg_contents": "Dear Postgresql performance guru,\n\nFor some reason on our client server a function written in SQL language \nexecutes *100 times slower* than the one written in plpgsql...\n\nAfter updating to \"PostgreSQL 12.18, compiled by Visual C++ build 1914, \n64-bit\" (from pg9.5) our client reported a performance issue. Everything \nboils down to a query that uses our function *public.fnk_saskaitos_skola \n*to calculate a visitor's debt. The function is written in 'sql' language.\n\nThe function is simple enough, marked STABLE\n\n```\n\nCREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola(prm_saskaita integer)\n RETURNS numeric\n LANGUAGE sql\n STABLE SECURITY DEFINER\nAS $function$\n SELECT\n COALESCE(sum(mok_nepadengta), 0)\n FROM\n public.b_pardavimai\n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n$function$\n;\n\n```\n\nThe problem is when I use it, it takes like 50ms to execute (on our \nclient server).\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT * FROM fnk_saskaitos_skola(7141968)\n\n\n\"Function Scan on public.fnk_saskaitos_skola (cost=0.25..0.26 rows=1 \nwidth=32) (actual time=59.824..59.825 rows=1 loops=1)\"\n\" Output: fnk_saskaitos_skola\"\n\" Function Call: fnk_saskaitos_skola(7141968)\"\n\" Buffers: shared hit=20\"\n\"Planning Time: 0.044 ms\"\n\"Execution Time: 59.848 ms\"\n\n\n*However, if I rewrite the same function using plpgsql the result is \nquite different:*\n\n```\n\nCREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola_jt(IN prm_saskaita \ninteger)\nRETURNS numeric\nLANGUAGE 'plpgsql'\nSTABLE SECURITY DEFINER\nPARALLEL UNSAFE\nCOST 100\nAS $BODY$\nbegin\n return (\n SELECT\n COALESCE(sum(mok_nepadengta), 0)\n FROM\n public.b_pardavimai\n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n );\nend\n$BODY$;\n\n```\n\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT fnk_saskaitos_skola_jt(7141968)\n\n\n```\n\n\"Result (cost=0.00..0.26 rows=1 width=32) (actual time=0.562..0.562 \nrows=1 loops=1)\"\n\" Output: fnk_saskaitos_skola_jt(7141968)\"\n\" Buffers: shared hit=20\"\n\"Planning Time: 0.022 ms\"\n\"Execution Time: 0.574 ms\"\n\n```\n\n\nIf I *analyze the sql that is inside the function* I get results similar \nto the ones of using plpgsql function:\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT\n COALESCE(sum(mok_nepadengta), 0)\n FROM\n public.b_pardavimai\n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = 7141968)\n\n```\n\n\"Aggregate (cost=2773.78..2773.79 rows=1 width=32) (actual \ntime=0.015..0.016 rows=1 loops=1)\"\n\" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric), \n'0'::numeric)\"\n\" Buffers: shared hit=4\"\n\" -> Nested Loop (cost=1.00..2771.96 rows=730 width=3) (actual \ntime=0.013..0.013 rows=0 loops=1)\"\n\" Output: b_mokejimai.mok_nepadengta\"\n\" Inner Unique: true\"\n\" Buffers: shared hit=4\"\n\" -> Index Scan using idx_saskaita on public.b_mokejimai \n(cost=0.56..793.10 rows=746 width=7) (actual time=0.012..0.012 rows=0 \nloops=1)\"\n\" Output: b_mokejimai.mok_id, b_mokejimai.mok_moketojas, \nb_mokejimai.mok_pardavimas, b_mokejimai.mok_laikas, \nb_mokejimai.mok_suma, b_mokejimai.mok_budas, b_mokejimai.mok_terminas, \nb_mokejimai.mok_cekis, b_mokejimai.mok_saskaita, \nb_mokejimai.mok_suma_bazine, b_mokejimai.mok_nepadengta, \nb_mokejimai.mok_padengta, b_mokejimai.mok_laiko_diena\"\n\" Index Cond: (b_mokejimai.mok_saskaita = 7141968)\"\n\" Buffers: shared hit=4\"\n\" -> Index Scan using pk_b_pardavimai_id on public.b_pardavimai \n(cost=0.44..2.65 rows=1 width=4) (never executed)\"\n\" Output: b_pardavimai.pard_id, b_pardavimai.pard_preke, \nb_pardavimai.pard_kaina, b_pardavimai.pard_nuolaida, \nb_pardavimai.pard_kiekis, b_pardavimai.pard_kasos_nr, \nb_pardavimai.pard_laikas, b_pardavimai.pard_prekes_id, \nb_pardavimai.pard_pirkejo_id, b_pardavimai.pard_pardavejas, \nb_pardavimai.pard_spausdinta, b_pardavimai.pard_reikia_grazinti, \nb_pardavimai.pard_kam_naudoti, b_pardavimai.pard_susieta, \nb_pardavimai.pard_galima_anuliuoti, b_pardavimai.pard_tipas, \nb_pardavimai.pard_pvm, b_pardavimai.pard_apsilankymas, \nb_pardavimai.pard_fk, b_pardavimai.pard_kelintas, \nb_pardavimai.pard_precekis, b_pardavimai.pard_imone, \nb_pardavimai.pard_grazintas, b_pardavimai.pard_debeto_sutartis, \nb_pardavimai.pard_kaina_be_nld, b_pardavimai.pard_uzsakymas_pos, \nb_pardavimai.pard_pvm_suma, b_pardavimai.pard_uzsakymo_nr, \nb_pardavimai.pard_nuolaidos_id, b_pardavimai.pard_nuolaida_taikyti, \nb_pardavimai.pard_pirkeja_keisti_galima, \nb_pardavimai.pard_suma_keisti_galima\"\n\" Index Cond: (b_pardavimai.pard_id = \nb_mokejimai.mok_pardavimas)\"\n\" Filter: (b_pardavimai.pard_tipas = ANY \n('{1,2,6,7}'::smallint[]))\"\n\"Planning Time: 0.550 ms\"\n\"Execution Time: 0.049 ms\"\n\n```\n\n\nAs I understand, the planning in case of sql functions is done every time \nthe function is executed. I don't mind if planning would take 0.550 ms \nas when using plain SQL. But why execution takes ~59ms??... What is it \nspent for?\n\nIsn't PostgreSQL supposed to inline simple SQL functions that are stable \nor immutable?\n\nAny advice on where to look for the cause of this \"anomaly\" is highly \nappreciated?\n\n\nI've tried executing the same query on different server and different \ndatabase - I could not reproduce the behavior. Using SQL function \nproduces results faster.\n\nI'd be grateful to receive some insights of how to investigate the \nbehavior. I'm not keen on changing the language or the function not \nknowing why it is required or how it helps...\n\n\n\nRegards,\n\nJulius Tuskenis\n\n\n\n\n\n\n\n\nDear Postgresql performance guru, \n\nFor some reason on our client server a function written in SQL\n language executes 100 times slower than the one written in\n plpgsql...\n\nAfter updating to \"PostgreSQL 12.18, compiled by Visual C++ build\n 1914, 64-bit\" (from pg9.5) our client reported a performance\n issue. Everything boils down to a query that uses our function public.fnk_saskaitos_skola\n to calculate a visitor's debt. The function is written in 'sql'\n language. \nThe function is simple enough, marked STABLE \n```\nCREATE OR REPLACE FUNCTION\n public.fnk_saskaitos_skola(prm_saskaita integer)\n RETURNS numeric\n LANGUAGE sql\n STABLE SECURITY DEFINER\n AS $function$\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n $function$\n ;\n\n``` \n\nThe problem is when I use it, it takes like 50ms to execute (on\n our client server).\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT * FROM fnk_saskaitos_skola(7141968)\n\n\n\"Function Scan on public.fnk_saskaitos_skola (cost=0.25..0.26\n rows=1 width=32) (actual time=59.824..59.825 rows=1 loops=1)\"\n \" Output: fnk_saskaitos_skola\"\n \" Function Call: fnk_saskaitos_skola(7141968)\"\n \" Buffers: shared hit=20\"\n \"Planning Time: 0.044 ms\"\n \"Execution Time: 59.848 ms\"\n\n\nHowever, if I rewrite the same function using plpgsql the\n result is quite different:\n```\nCREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola_jt(IN\n prm_saskaita integer)\n RETURNS numeric\n LANGUAGE 'plpgsql'\n STABLE SECURITY DEFINER\n PARALLEL UNSAFE\n COST 100\n AS $BODY$\n begin\n return (\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n );\n end\n $BODY$;\n```\n\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT fnk_saskaitos_skola_jt(7141968) \n\n\n\n```\n\n\"Result (cost=0.00..0.26 rows=1 width=32) (actual\n time=0.562..0.562 rows=1 loops=1)\"\n \" Output: fnk_saskaitos_skola_jt(7141968)\"\n \" Buffers: shared hit=20\"\n \"Planning Time: 0.022 ms\"\n \"Execution Time: 0.574 ms\"\n```\n\n\nIf I analyze the sql that is inside the function I get\n results similar to the ones of using plpgsql function:\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = 7141968)\n```\n\"Aggregate (cost=2773.78..2773.79 rows=1 width=32) (actual\n time=0.015..0.016 rows=1 loops=1)\"\n \" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric),\n '0'::numeric)\"\n \" Buffers: shared hit=4\"\n \" -> Nested Loop (cost=1.00..2771.96 rows=730 width=3)\n (actual time=0.013..0.013 rows=0 loops=1)\"\n \" Output: b_mokejimai.mok_nepadengta\"\n \" Inner Unique: true\"\n \" Buffers: shared hit=4\"\n \" -> Index Scan using idx_saskaita on\n public.b_mokejimai (cost=0.56..793.10 rows=746 width=7) (actual\n time=0.012..0.012 rows=0 loops=1)\"\n \" Output: b_mokejimai.mok_id,\n b_mokejimai.mok_moketojas, b_mokejimai.mok_pardavimas,\n b_mokejimai.mok_laikas, b_mokejimai.mok_suma,\n b_mokejimai.mok_budas, b_mokejimai.mok_terminas,\n b_mokejimai.mok_cekis, b_mokejimai.mok_saskaita,\n b_mokejimai.mok_suma_bazine, b_mokejimai.mok_nepadengta,\n b_mokejimai.mok_padengta, b_mokejimai.mok_laiko_diena\"\n \" Index Cond: (b_mokejimai.mok_saskaita = 7141968)\"\n \" Buffers: shared hit=4\"\n \" -> Index Scan using pk_b_pardavimai_id on\n public.b_pardavimai (cost=0.44..2.65 rows=1 width=4) (never\n executed)\"\n \" Output: b_pardavimai.pard_id,\n b_pardavimai.pard_preke, b_pardavimai.pard_kaina,\n b_pardavimai.pard_nuolaida, b_pardavimai.pard_kiekis,\n b_pardavimai.pard_kasos_nr, b_pardavimai.pard_laikas,\n b_pardavimai.pard_prekes_id, b_pardavimai.pard_pirkejo_id,\n b_pardavimai.pard_pardavejas, b_pardavimai.pard_spausdinta,\n b_pardavimai.pard_reikia_grazinti, b_pardavimai.pard_kam_naudoti,\n b_pardavimai.pard_susieta, b_pardavimai.pard_galima_anuliuoti,\n b_pardavimai.pard_tipas, b_pardavimai.pard_pvm,\n b_pardavimai.pard_apsilankymas, b_pardavimai.pard_fk,\n b_pardavimai.pard_kelintas, b_pardavimai.pard_precekis,\n b_pardavimai.pard_imone, b_pardavimai.pard_grazintas,\n b_pardavimai.pard_debeto_sutartis, b_pardavimai.pard_kaina_be_nld,\n b_pardavimai.pard_uzsakymas_pos, b_pardavimai.pard_pvm_suma,\n b_pardavimai.pard_uzsakymo_nr, b_pardavimai.pard_nuolaidos_id,\n b_pardavimai.pard_nuolaida_taikyti,\n b_pardavimai.pard_pirkeja_keisti_galima,\n b_pardavimai.pard_suma_keisti_galima\"\n \" Index Cond: (b_pardavimai.pard_id =\n b_mokejimai.mok_pardavimas)\"\n \" Filter: (b_pardavimai.pard_tipas = ANY\n ('{1,2,6,7}'::smallint[]))\"\n \"Planning Time: 0.550 ms\"\n \"Execution Time: 0.049 ms\"\n```\n\n\n\nAs I understand, the planning in case of sql functions is done\n every time the function is executed. I don't mind if planning\n would take 0.550 ms as when using plain SQL. But why execution\n takes ~59ms??... What is it spent for?\n\nIsn't PostgreSQL supposed to inline simple SQL functions that are\n stable or immutable? \n\nAny advice on where to look for the cause of this \"anomaly\" is\n highly appreciated?\n\n\nI've tried executing the same query on different server and\n different database - I could not reproduce the behavior. Using SQL\n function produces results faster.\n\nI'd be grateful to receive some insights of how to investigate\n the behavior. I'm not keen on changing the language or the\n function not knowing why it is required or how it helps...\n\n\n\n\nRegards,\nJulius Tuskenis",
"msg_date": "Mon, 17 Jun 2024 12:35:02 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance of sql and plpgsql functions"
},
{
"msg_contents": "po 17. 6. 2024 v 11:35 odesílatel Julius Tuskenis <[email protected]>\nnapsal:\n\n> Dear Postgresql performance guru,\n>\n> For some reason on our client server a function written in SQL language\n> executes *100 times slower* than the one written in plpgsql...\n>\n> After updating to \"PostgreSQL 12.18, compiled by Visual C++ build 1914,\n> 64-bit\" (from pg9.5) our client reported a performance issue. Everything\n> boils down to a query that uses our function *public.fnk_saskaitos_skola *to\n> calculate a visitors debt. The function is written in 'sql' language.\n>\n> The function is simple enough, marked STABLE\n>\n> ```\n>\n> CREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola(prm_saskaita integer)\n> RETURNS numeric\n> LANGUAGE sql\n> STABLE SECURITY DEFINER\n> AS $function$\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = $1)\n> $function$\n> ;\n>\n> ```\n>\n> The problem is when I use it, it takes like 50ms to execute (on our client\n> server).\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT * FROM fnk_saskaitos_skola(7141968)\n>\n>\n> \"Function Scan on public.fnk_saskaitos_skola (cost=0.25..0.26 rows=1\n> width=32) (actual time=59.824..59.825 rows=1 loops=1)\"\n> \" Output: fnk_saskaitos_skola\"\n> \" Function Call: fnk_saskaitos_skola(7141968)\"\n> \" Buffers: shared hit=20\"\n> \"Planning Time: 0.044 ms\"\n> \"Execution Time: 59.848 ms\"\n>\n>\n> *How ever, if I rewrite the same function using plpgsql the result is\n> quite different:*\n>\n> ```\n>\n> CREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola_jt(IN prm_saskaita\n> integer)\n> RETURNS numeric\n> LANGUAGE 'plpgsql'\n> STABLE SECURITY DEFINER\n> PARALLEL UNSAFE\n> COST 100\n> AS $BODY$\n> begin\n> return (\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON 
(mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = $1)\n> );\n> end\n> $BODY$;\n>\n```\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT fnk_saskaitos_skola_jt(7141968)\n>\n>\n> ```\n>\n> \"Result (cost=0.00..0.26 rows=1 width=32) (actual time=0.562..0.562\n> rows=1 loops=1)\"\n> \" Output: fnk_saskaitos_skola_jt(7141968)\"\n> \" Buffers: shared hit=20\"\n> \"Planning Time: 0.022 ms\"\n> \"Execution Time: 0.574 ms\"\n>\n> ```\n>\n>\n> If I *analyze the sql that is inside the function* I get results similar\n> to the ones of using plpgsql function:\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = 7141968)\n>\n> ```\n>\n> \"Aggregate (cost=2773.78..2773.79 rows=1 width=32) (actual\n> time=0.015..0.016 rows=1 loops=1)\"\n> \" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric),\n> '0'::numeric)\"\n> \" Buffers: shared hit=4\"\n> \" -> Nested Loop (cost=1.00..2771.96 rows=730 width=3) (actual\n> time=0.013..0.013 rows=0 loops=1)\"\n> \" Output: b_mokejimai.mok_nepadengta\"\n> \" Inner Unique: true\"\n> \" Buffers: shared hit=4\"\n> \" -> Index Scan using idx_saskaita on public.b_mokejimai\n> (cost=0.56..793.10 rows=746 width=7) (actual time=0.012..0.012 rows=0\n> loops=1)\"\n> \" Output: b_mokejimai.mok_id, b_mokejimai.mok_moketojas,\n> b_mokejimai.mok_pardavimas, b_mokejimai.mok_laikas, b_mokejimai.mok_suma,\n> b_mokejimai.mok_budas, b_mokejimai.mok_terminas, b_mokejimai.mok_cekis,\n> b_mokejimai.mok_saskaita, b_mokejimai.mok_suma_bazine,\n> b_mokejimai.mok_nepadengta, b_mokejimai.mok_padengta,\n> b_mokejimai.mok_laiko_diena\"\n> \" Index Cond: (b_mokejimai.mok_saskaita = 7141968)\"\n> \" Buffers: shared hit=4\"\n> \" -> Index Scan using pk_b_pardavimai_id on public.b_pardavimai\n> (cost=0.44..2.65 rows=1 width=4) 
(never executed)\"\n> \" Output: b_pardavimai.pard_id, b_pardavimai.pard_preke,\n> b_pardavimai.pard_kaina, b_pardavimai.pard_nuolaida,\n> b_pardavimai.pard_kiekis, b_pardavimai.pard_kasos_nr,\n> b_pardavimai.pard_laikas, b_pardavimai.pard_prekes_id,\n> b_pardavimai.pard_pirkejo_id, b_pardavimai.pard_pardavejas,\n> b_pardavimai.pard_spausdinta, b_pardavimai.pard_reikia_grazinti,\n> b_pardavimai.pard_kam_naudoti, b_pardavimai.pard_susieta,\n> b_pardavimai.pard_galima_anuliuoti, b_pardavimai.pard_tipas,\n> b_pardavimai.pard_pvm, b_pardavimai.pard_apsilankymas,\n> b_pardavimai.pard_fk, b_pardavimai.pard_kelintas,\n> b_pardavimai.pard_precekis, b_pardavimai.pard_imone,\n> b_pardavimai.pard_grazintas, b_pardavimai.pard_debeto_sutartis,\n> b_pardavimai.pard_kaina_be_nld, b_pardavimai.pard_uzsakymas_pos,\n> b_pardavimai.pard_pvm_suma, b_pardavimai.pard_uzsakymo_nr,\n> b_pardavimai.pard_nuolaidos_id, b_pardavimai.pard_nuolaida_taikyti,\n> b_pardavimai.pard_pirkeja_keisti_galima,\n> b_pardavimai.pard_suma_keisti_galima\"\n> \" Index Cond: (b_pardavimai.pard_id =\n> b_mokejimai.mok_pardavimas)\"\n> \" Filter: (b_pardavimai.pard_tipas = ANY\n> ('{1,2,6,7}'::smallint[]))\"\n> \"Planning Time: 0.550 ms\"\n> \"Execution Time: 0.049 ms\"\n>\n> ```\n>\n>\n> As I understand, the planning in case of sql functions is done everytime\n> the functions is executed. I don't mind if planning would take 0.550 ms as\n> when using plain SQL. But why execution takes ~59ms??... What is it spent\n> for?\n>\n> Isn't PosgreSQL supposed to inline simple SQL functions that are stable or\n> immutable?\n>\nno, PLpgSQL functions are not inlined\n\nRegards\n\nPavel\n\n\n\n> Any advice on where to look for the cause of this \"anomaly\" is highly\n> appreciated?\n>\n>\n> I've tried executing the same query on different server and different\n> database - I could not reproduce the behavior. 
Using SQL function produces\n> results faster.\n>\n> I'd be gratefull to receive some insights of how to investigate the\n> behavior. I'm not keen on changing the language or the function not knowing\n> why it is required or how it helps...\n>\n>\n>\n> Regards,\n>\n> Julius Tuskenis\n>\n>\n>\n>\n\npo 17. 6. 2024 v 11:35 odesílatel Julius Tuskenis <[email protected]> napsal:\n\nDear Postgresql performance guru, \n\nFor some reason on our client server a function written in SQL\n language executes 100 times slower than the one written in\n plpgsql...\n\nAfter updating to \"PostgreSQL 12.18, compiled by Visual C++ build\n 1914, 64-bit\" (from pg9.5) our client reported a performance\n issue. Everything boils down to a query that uses our function public.fnk_saskaitos_skola\n to calculate a visitors debt. The function is written in 'sql'\n language. \nThe function is simple enough, marked STABLE \n```\nCREATE OR REPLACE FUNCTION\n public.fnk_saskaitos_skola(prm_saskaita integer)\n RETURNS numeric\n LANGUAGE sql\n STABLE SECURITY DEFINER\n AS $function$\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n $function$\n ;\n\n``` \n\nThe problem is when I use it, it takes like 50ms to execute (on\n our client server).\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT * FROM fnk_saskaitos_skola(7141968)\n\n\n\"Function Scan on public.fnk_saskaitos_skola (cost=0.25..0.26\n rows=1 width=32) (actual time=59.824..59.825 rows=1 loops=1)\"\n \" Output: fnk_saskaitos_skola\"\n \" Function Call: fnk_saskaitos_skola(7141968)\"\n \" Buffers: shared hit=20\"\n \"Planning Time: 0.044 ms\"\n \"Execution Time: 59.848 ms\"\n\n\nHow ever, if I rewrite the same function using plpgsql the\n result is quite different:\n```\nCREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola_jt(IN\n prm_saskaita integer)\n RETURNS numeric\n LANGUAGE 'plpgsql'\n 
STABLE SECURITY DEFINER\n PARALLEL UNSAFE\n COST 100\n AS $BODY$\n begin\n return (\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = $1)\n );\n end\n $BODY$;\n```\n\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT fnk_saskaitos_skola_jt(7141968) \n\n\n\n```\n\n\"Result (cost=0.00..0.26 rows=1 width=32) (actual\n time=0.562..0.562 rows=1 loops=1)\"\n \" Output: fnk_saskaitos_skola_jt(7141968)\"\n \" Buffers: shared hit=20\"\n \"Planning Time: 0.022 ms\"\n \"Execution Time: 0.574 ms\"\n```\n\n\nIf I analyze the sql that is inside the function I get\n results similar to the ones of using plpgsql function:\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n SELECT\n COALESCE(sum(mok_nepadengta), 0) \n FROM\n public.b_pardavimai \n JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n WHERE\n (pard_tipas = ANY('{1, 2, 6, 7}'))\n AND (mok_saskaita = 7141968)\n```\n\"Aggregate (cost=2773.78..2773.79 rows=1 width=32) (actual\n time=0.015..0.016 rows=1 loops=1)\"\n \" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric),\n '0'::numeric)\"\n \" Buffers: shared hit=4\"\n \" -> Nested Loop (cost=1.00..2771.96 rows=730 width=3)\n (actual time=0.013..0.013 rows=0 loops=1)\"\n \" Output: b_mokejimai.mok_nepadengta\"\n \" Inner Unique: true\"\n \" Buffers: shared hit=4\"\n \" -> Index Scan using idx_saskaita on\n public.b_mokejimai (cost=0.56..793.10 rows=746 width=7) (actual\n time=0.012..0.012 rows=0 loops=1)\"\n \" Output: b_mokejimai.mok_id,\n b_mokejimai.mok_moketojas, b_mokejimai.mok_pardavimas,\n b_mokejimai.mok_laikas, b_mokejimai.mok_suma,\n b_mokejimai.mok_budas, b_mokejimai.mok_terminas,\n b_mokejimai.mok_cekis, b_mokejimai.mok_saskaita,\n b_mokejimai.mok_suma_bazine, b_mokejimai.mok_nepadengta,\n b_mokejimai.mok_padengta, b_mokejimai.mok_laiko_diena\"\n \" Index Cond: (b_mokejimai.mok_saskaita = 7141968)\"\n \" Buffers: shared 
hit=4\"\n \" -> Index Scan using pk_b_pardavimai_id on\n public.b_pardavimai (cost=0.44..2.65 rows=1 width=4) (never\n executed)\"\n \" Output: b_pardavimai.pard_id,\n b_pardavimai.pard_preke, b_pardavimai.pard_kaina,\n b_pardavimai.pard_nuolaida, b_pardavimai.pard_kiekis,\n b_pardavimai.pard_kasos_nr, b_pardavimai.pard_laikas,\n b_pardavimai.pard_prekes_id, b_pardavimai.pard_pirkejo_id,\n b_pardavimai.pard_pardavejas, b_pardavimai.pard_spausdinta,\n b_pardavimai.pard_reikia_grazinti, b_pardavimai.pard_kam_naudoti,\n b_pardavimai.pard_susieta, b_pardavimai.pard_galima_anuliuoti,\n b_pardavimai.pard_tipas, b_pardavimai.pard_pvm,\n b_pardavimai.pard_apsilankymas, b_pardavimai.pard_fk,\n b_pardavimai.pard_kelintas, b_pardavimai.pard_precekis,\n b_pardavimai.pard_imone, b_pardavimai.pard_grazintas,\n b_pardavimai.pard_debeto_sutartis, b_pardavimai.pard_kaina_be_nld,\n b_pardavimai.pard_uzsakymas_pos, b_pardavimai.pard_pvm_suma,\n b_pardavimai.pard_uzsakymo_nr, b_pardavimai.pard_nuolaidos_id,\n b_pardavimai.pard_nuolaida_taikyti,\n b_pardavimai.pard_pirkeja_keisti_galima,\n b_pardavimai.pard_suma_keisti_galima\"\n \" Index Cond: (b_pardavimai.pard_id =\n b_mokejimai.mok_pardavimas)\"\n \" Filter: (b_pardavimai.pard_tipas = ANY\n ('{1,2,6,7}'::smallint[]))\"\n \"Planning Time: 0.550 ms\"\n \"Execution Time: 0.049 ms\"\n```\n\n\n\nAs I understand, the planning in case of sql functions is done\n everytime the functions is executed. I don't mind if planning\n would take 0.550 ms as when using plain SQL. But why execution\n takes ~59ms??... What is it spent for?\n\nIsn't PosgreSQL supposed to inline simple SQL functions that are\n stable or immutable? no, PLpgSQL functions are not inlinedRegardsPavel \n\nAny advice on where to look for the cause of this \"anomaly\" is\n highly appreciated?\n\n\nI've tried executing the same query on different server and\n different database - I could not reproduce the behavior. 
Using SQL\n function produces results faster.\n\nI'd be grateful to receive some insights of how to investigate\n the behavior. I'm not keen on changing the language or the\n function not knowing why it is required or how it helps...\n\n\n\n\nRegards,\nJulius Tuskenis",
"msg_date": "Mon, 17 Jun 2024 11:44:32 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
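A quick way to see whether a given SQL function is being inlined is to look for the function call (or a "Function Scan" node) in EXPLAIN output, as in the plan quoted in this thread. A minimal sketch — the table and function names here are invented, not from the thread:

```sql
-- A simple single-expression SQL function: PostgreSQL can inline this.
CREATE TABLE t (id int, val numeric);

CREATE FUNCTION f_double(x numeric) RETURNS numeric
LANGUAGE sql STABLE
AS $$ SELECT x * 2 $$;

-- If inlining happened, EXPLAIN VERBOSE shows the expanded expression
-- "(t.val * 2)" in the Output line instead of a call to f_double().
EXPLAIN (VERBOSE) SELECT f_double(val) FROM t;
```

When the function is not inlined — as with the thread's `fnk_saskaitos_skola` — the plan instead shows the function call itself (or a "Function Scan" node when the function is used in FROM), and the body is planned and executed as a separate query.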
{
"msg_contents": "Thank you Pavel for your input.\n\nYou wrote:\n\n> no, PLpgSQL functions are not inlined\nYes, I understand that. I was referring to SQL functions (not plpgsql).\n\nRegards,\n\nJulius Tuskenis\n\n\nOn 2024-06-17 12:44, Pavel Stehule wrote:\n>\n>\n> po 17. 6. 2024 v 11:35 odesílatel Julius Tuskenis \n> <[email protected]> napsal:\n>\n> Dear Postgresql performance guru,\n>\n> For some reason on our client server a function written in SQL\n> language executes *100 times slower* than the one written in\n> plpgsql...\n>\n> After updating to \"PostgreSQL 12.18, compiled by Visual C++ build\n> 1914, 64-bit\" (from pg9.5) our client reported a performance\n> issue. Everything boils down to a query that uses our function\n> *public.fnk_saskaitos_skola *to calculate a visitors debt. The\n> function is written in 'sql' language.\n>\n> The function is simple enough, marked STABLE\n>\n> ```\n>\n> CREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola(prm_saskaita\n> integer)\n> RETURNS numeric\n> LANGUAGE sql\n> STABLE SECURITY DEFINER\n> AS $function$\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = $1)\n> $function$\n> ;\n>\n> ```\n>\n> The problem is when I use it, it takes like 50ms to execute (on\n> our client server).\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT * FROM fnk_saskaitos_skola(7141968)\n>\n>\n> \"Function Scan on public.fnk_saskaitos_skola (cost=0.25..0.26\n> rows=1 width=32) (actual time=59.824..59.825 rows=1 loops=1)\"\n> \" Output: fnk_saskaitos_skola\"\n> \" Function Call: fnk_saskaitos_skola(7141968)\"\n> \" Buffers: shared hit=20\"\n> \"Planning Time: 0.044 ms\"\n> \"Execution Time: 59.848 ms\"\n>\n>\n> *How ever, if I rewrite the same function using plpgsql the result\n> is quite different:*\n>\n> ```\n>\n> CREATE OR REPLACE FUNCTION public.fnk_saskaitos_skola_jt(IN\n> 
prm_saskaita integer)\n> RETURNS numeric\n> LANGUAGE 'plpgsql'\n> STABLE SECURITY DEFINER\n> PARALLEL UNSAFE\n> COST 100\n> AS $BODY$\n> begin\n> return (\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = $1)\n> );\n> end\n> $BODY$;\n>\n> ```\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT fnk_saskaitos_skola_jt(7141968)\n>\n>\n> ```\n>\n> \"Result (cost=0.00..0.26 rows=1 width=32) (actual\n> time=0.562..0.562 rows=1 loops=1)\"\n> \" Output: fnk_saskaitos_skola_jt(7141968)\"\n> \" Buffers: shared hit=20\"\n> \"Planning Time: 0.022 ms\"\n> \"Execution Time: 0.574 ms\"\n>\n> ```\n>\n>\n> If I *analyze the sql that is inside the function* I get results\n> similar to the ones of using plpgsql function:\n>\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = 7141968)\n>\n> ```\n>\n> \"Aggregate (cost=2773.78..2773.79 rows=1 width=32) (actual\n> time=0.015..0.016 rows=1 loops=1)\"\n> \" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric),\n> '0'::numeric)\"\n> \" Buffers: shared hit=4\"\n> \" -> Nested Loop (cost=1.00..2771.96 rows=730 width=3) (actual\n> time=0.013..0.013 rows=0 loops=1)\"\n> \" Output: b_mokejimai.mok_nepadengta\"\n> \" Inner Unique: true\"\n> \" Buffers: shared hit=4\"\n> \" -> Index Scan using idx_saskaita on public.b_mokejimai \n> (cost=0.56..793.10 rows=746 width=7) (actual time=0.012..0.012\n> rows=0 loops=1)\"\n> \" Output: b_mokejimai.mok_id,\n> b_mokejimai.mok_moketojas, b_mokejimai.mok_pardavimas,\n> b_mokejimai.mok_laikas, b_mokejimai.mok_suma,\n> b_mokejimai.mok_budas, b_mokejimai.mok_terminas,\n> b_mokejimai.mok_cekis, b_mokejimai.mok_saskaita,\n> b_mokejimai.mok_suma_bazine, 
b_mokejimai.mok_nepadengta,\n> b_mokejimai.mok_padengta, b_mokejimai.mok_laiko_diena\"\n> \" Index Cond: (b_mokejimai.mok_saskaita = 7141968)\"\n> \" Buffers: shared hit=4\"\n> \" -> Index Scan using pk_b_pardavimai_id on\n> public.b_pardavimai (cost=0.44..2.65 rows=1 width=4) (never\n> executed)\"\n> \" Output: b_pardavimai.pard_id,\n> b_pardavimai.pard_preke, b_pardavimai.pard_kaina,\n> b_pardavimai.pard_nuolaida, b_pardavimai.pard_kiekis,\n> b_pardavimai.pard_kasos_nr, b_pardavimai.pard_laikas,\n> b_pardavimai.pard_prekes_id, b_pardavimai.pard_pirkejo_id,\n> b_pardavimai.pard_pardavejas, b_pardavimai.pard_spausdinta,\n> b_pardavimai.pard_reikia_grazinti, b_pardavimai.pard_kam_naudoti,\n> b_pardavimai.pard_susieta, b_pardavimai.pard_galima_anuliuoti,\n> b_pardavimai.pard_tipas, b_pardavimai.pard_pvm,\n> b_pardavimai.pard_apsilankymas, b_pardavimai.pard_fk,\n> b_pardavimai.pard_kelintas, b_pardavimai.pard_precekis,\n> b_pardavimai.pard_imone, b_pardavimai.pard_grazintas,\n> b_pardavimai.pard_debeto_sutartis, b_pardavimai.pard_kaina_be_nld,\n> b_pardavimai.pard_uzsakymas_pos, b_pardavimai.pard_pvm_suma,\n> b_pardavimai.pard_uzsakymo_nr, b_pardavimai.pard_nuolaidos_id,\n> b_pardavimai.pard_nuolaida_taikyti,\n> b_pardavimai.pard_pirkeja_keisti_galima,\n> b_pardavimai.pard_suma_keisti_galima\"\n> \" Index Cond: (b_pardavimai.pard_id =\n> b_mokejimai.mok_pardavimas)\"\n> \" Filter: (b_pardavimai.pard_tipas = ANY\n> ('{1,2,6,7}'::smallint[]))\"\n> \"Planning Time: 0.550 ms\"\n> \"Execution Time: 0.049 ms\"\n>\n> ```\n>\n>\n> As I understand, the planning in case of sql functions is done\n> everytime the functions is executed. I don't mind if planning\n> would take 0.550 ms as when using plain SQL. But why execution\n> takes ~59ms??... 
What is it spent for?\n>\n> Isn't PosgreSQL supposed to inline simple SQL functions that are\n> stable or immutable?\n>\n> no, PLpgSQL functions are not inlined\n>\n> Regards\n>\n> Pavel\n>\n> Any advice on where to look for the cause of this \"anomaly\" is\n> highly appreciated?\n>\n>\n> I've tried executing the same query on different server and\n> different database - I could not reproduce the behavior. Using SQL\n> function produces results faster.\n>\n> I'd be gratefull to receive some insights of how to investigate\n> the behavior. I'm not keen on changing the language or the\n> function not knowing why it is required or how it helps...\n>\n>\n>\n> Regards,\n>\n> Julius Tuskenis",
"msg_date": "Mon, 17 Jun 2024 14:01:00 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
{
"msg_contents": "\n\n> On Jun 17, 2024, at 5:35 AM, Julius Tuskenis <[email protected]> wrote:\n> \n> \n> Isn't PosgreSQL supposed to inline simple SQL functions that are stable or immutable?\n\nPostgres inlines SQL functions under certain conditions:\nhttps://wiki.postgresql.org/wiki/Inlining_of_SQL_functions\n\nOne of those conditions is \"the function is not SECURITY DEFINER”. It looks like yours is defined that way, so that might be why it’s not being inlined. \n\nHope this helps\nPhilip\n\n",
"msg_date": "Mon, 17 Jun 2024 08:59:07 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
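Philip's point can be checked directly: the same trivial body stops being inlined once SECURITY DEFINER is added. A hedged sketch with invented names (no objects from the thread are used):

```sql
CREATE TABLE t2 (id int, val numeric);

-- Identical bodies; only the SECURITY DEFINER attribute differs.
CREATE FUNCTION f_plain(x numeric) RETURNS numeric
LANGUAGE sql STABLE
AS $$ SELECT x * 2 $$;

CREATE FUNCTION f_secdef(x numeric) RETURNS numeric
LANGUAGE sql STABLE SECURITY DEFINER
AS $$ SELECT x * 2 $$;

EXPLAIN (VERBOSE) SELECT f_plain(val)  FROM t2;  -- Output shows the inlined expression
EXPLAIN (VERBOSE) SELECT f_secdef(val) FROM t2;  -- Output shows the f_secdef() call
```

SECURITY DEFINER blocks inlining because the function body must run under the function owner's privileges, which an inlined expression could not guarantee.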
{
"msg_contents": "On 2024-06-17 15:59, Philip Semanchuk wrote:\n>\n>> On Jun 17, 2024, at 5:35 AM, Julius Tuskenis<[email protected]> wrote:\n>>\n>>\n>> Isn't PosgreSQL supposed to inline simple SQL functions that are stable or immutable?\n> Postgres inlines SQL functions under certain conditions:\n> https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions\n>\n> One of those conditions is \"the function is not SECURITY DEFINER”. It looks like yours is defined that way, so that might be why it’s not being inlined.\n>\n> Hope this helps\n> Philip\n\nThank You, Philip.\n\nThe link you've provided helps a lot explaining why the body of my SQL \nfunction is not inlined.\n\nAny thoughts on why the execution times differ so much? I see planning \nof a plain SQL is 0.550ms. So I expect the SQL function to spend that \ntime planning (inside), but I get 50ms (100 times longer).\n\n\nRegards,\n\nJulius Tuskenis",
"msg_date": "Mon, 17 Jun 2024 16:54:48 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
{
"msg_contents": "po 17. 6. 2024 v 15:55 odesílatel Julius Tuskenis <[email protected]>\nnapsal:\n\n> On 2024-06-17 15:59, Philip Semanchuk wrote:\n>\n> On Jun 17, 2024, at 5:35 AM, Julius Tuskenis <[email protected]> <[email protected]> wrote:\n>\n>\n> Isn't PosgreSQL supposed to inline simple SQL functions that are stable or immutable?\n>\n> Postgres inlines SQL functions under certain conditions:https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions\n>\n> One of those conditions is \"the function is not SECURITY DEFINER”. It looks like yours is defined that way, so that might be why it’s not being inlined.\n>\n> Hope this helps\n> Philip\n>\n> Thank You, Philip.\n>\n> The link you've provided helps a lot explaining why the body of my SQL\n> function is not inlined.\n>\n> Any thoughts on why the execution times differ so much? I see planning of\n> a plain SQL is 0.550ms. So I expect the SQL function to spend that time\n> planning (inside), but I get 50ms (100 times longer).\n>\nAttention planning time is time of optimizations, it is not planned\n(expected) execution time.\n\nSecond - The embedded SQL inside PL/pgSQL uses plan cache. Against it, SQL\nfunctions are inlined (and then are pretty fast), or not, and then are\nslower, because there is no plan cache.\n\nI don't know exactly where the problem is, but I've got this issue many\ntimes, execution of an not inlined SQL function is slow. If you can, try to\nuse a profiler.\n\n\n\n> Regards,\n>\n> Julius Tuskenis\n>",
"msg_date": "Mon, 17 Jun 2024 16:07:40 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
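One concrete way to follow Pavel's "use a profiler" advice without external tools is the auto_explain module, which can log the plans of statements executed inside functions. A session-level sketch, using the function name from this thread:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement
SET auto_explain.log_analyze = on;            -- include actual times/row counts
SET auto_explain.log_nested_statements = on;  -- include queries run inside functions

SELECT fnk_saskaitos_skola(7141968);
```

The server log then shows the plan the SQL function actually used internally, which is where the missing ~59 ms should appear.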
{
"msg_contents": "Julius Tuskenis <[email protected]> writes:\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n> SELECT\n> COALESCE(sum(mok_nepadengta), 0)\n> FROM\n> public.b_pardavimai\n> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n> WHERE\n> (pard_tipas = ANY('{1, 2, 6, 7}'))\n> AND (mok_saskaita = 7141968)\n\nI believe that the SQL-language function executor always uses generic\nplans for parameterized queries (which is bad, but nobody's gotten\nround to improving it). So the above is a poor way of investigating\nwhat will happen, because it corresponds to a custom plan for the\nvalue 7141968. You should try something like\n\nPREPARE p(integer) AS\n SELECT COALESCE ...\n ... AND (mok_saskaita = $1);\n\nSET plan_cache_mode TO force_generic_plan;\n\nEXPLAIN ANALYZE EXECUTE p(7141968);\n\nWhat I suspect is that the statistics for mok_saskaita are\nhighly skewed and so with a generic plan the planner will\nnot risk using a plan that depends on the parameter value\nbeing infrequent, as the one you're showing does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jun 2024 10:24:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of sql and plpgsql functions"
},
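For reference, Tom's recipe filled in with the exact query from the thread, plus a custom-plan run for comparison:

```sql
PREPARE p(integer) AS
  SELECT COALESCE(sum(mok_nepadengta), 0)
  FROM public.b_pardavimai
  JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)
  WHERE (pard_tipas = ANY('{1, 2, 6, 7}'))
    AND (mok_saskaita = $1);

-- Generic plan: what the SQL-function executor effectively runs.
SET plan_cache_mode TO force_generic_plan;
EXPLAIN ANALYZE EXECUTE p(7141968);

-- Custom plan: what plain EXPLAIN ANALYZE of the literal query showed.
SET plan_cache_mode TO force_custom_plan;
EXPLAIN ANALYZE EXECUTE p(7141968);

DEALLOCATE p;
RESET plan_cache_mode;
```

plan_cache_mode is available from PostgreSQL 12 on, which matches the server version reported in this thread.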
{
"msg_contents": "On 2024-06-17 17:24, Tom Lane wrote:\n> Julius Tuskenis<[email protected]> writes:\n>> EXPLAIN (ANALYZE, BUFFERS, VERBOSE)\n>> SELECT\n>> COALESCE(sum(mok_nepadengta), 0)\n>> FROM\n>> public.b_pardavimai\n>> JOIN public.b_mokejimai ON (mok_pardavimas = pard_id)\n>> WHERE\n>> (pard_tipas = ANY('{1, 2, 6, 7}'))\n>> AND (mok_saskaita = 7141968)\n> I believe that the SQL-language function executor always uses generic\n> plans for parameterized queries (which is bad, but nobody's gotten\n> round to improving it). So the above is a poor way of investigating\n> what will happen, because it corresponds to a custom plan for the\n> value 7141968. You should try something like\n>\n> PREPARE p(integer) AS\n> SELECT COALESCE ...\n> ... AND (mok_saskaita = $1);\n>\n> SET plan_cache_mode TO force_generic_plan;\n>\n> EXPLAIN ANALYZE EXECUTE p(7141968);\n>\n> What I suspect is that the statistics for mok_saskaita are\n> highly skewed and so with a generic plan the planner will\n> not risk using a plan that depends on the parameter value\n> being infrequent, as the one you're showing does.\n>\n> \t\t\tregards, tom lane\n\n\nThank you Tom Lane, for pointing the problem.\n\nIn deed, after setting plan_cache_mode to force_generic_plan I see very \ndifferent plan:\n\n```\n\n\"Finalize Aggregate (cost=6901.01..6901.02 rows=1 width=32) (actual \ntime=50.258..56.004 rows=1 loops=1)\"\n\" Output: COALESCE(sum((b_mokejimai.mok_nepadengta)::numeric), \n'0'::numeric)\"\n\" Buffers: shared hit=4\"\n\" -> Gather (cost=6900.89..6901.00 rows=1 width=32) (actual \ntime=0.809..55.993 rows=2 loops=1)\"\n\" Output: (PARTIAL sum((b_mokejimai.mok_nepadengta)::numeric))\"\n\" Workers Planned: 1\"\n\" Workers Launched: 1\"\n\" Buffers: shared hit=4\"\n\" -> Partial Aggregate (cost=5900.89..5900.90 rows=1 width=32) \n(actual time=0.077..0.079 rows=1 loops=2)\"\n\" Output: PARTIAL sum((b_mokejimai.mok_nepadengta)::numeric)\"\n\" Buffers: shared hit=4\"\n\" Worker 0: actual time=0.052..0.053 
rows=1 loops=1\"\n\" -> Nested Loop (cost=25.92..5897.69 rows=1280 width=3) \n(actual time=0.070..0.072 rows=0 loops=2)\"\n\" Output: b_mokejimai.mok_nepadengta\"\n\" Inner Unique: true\"\n\" Buffers: shared hit=4\"\n\" Worker 0: actual time=0.043..0.043 rows=0 loops=1\"\n\" -> Parallel Bitmap Heap Scan on \npublic.b_mokejimai (cost=25.48..2455.36 rows=1307 width=7) (actual \ntime=0.069..0.070 rows=0 loops=2)\"\n\" Output: b_mokejimai.mok_id, \nb_mokejimai.mok_moketojas, b_mokejimai.mok_pardavimas, \nb_mokejimai.mok_laikas, b_mokejimai.mok_suma, b_mokejimai.mok_budas, \nb_mokejimai.mok_terminas, b_mokejimai.mok_cekis, \nb_mokejimai.mok_saskaita, b_mokejimai.mok_suma_bazine, \nb_mokejimai.mok_nepadengta, b_mokejimai.mok_padengta, \nb_mokejimai.mok_laiko_diena\"\n\" Recheck Cond: (b_mokejimai.mok_saskaita = $1)\"\n\" Buffers: shared hit=4\"\n\" Worker 0: actual time=0.042..0.042 rows=0 \nloops=1\"\n\" -> Bitmap Index Scan on idx_saskaita \n(cost=0.00..24.93 rows=2222 width=0) (actual time=0.023..0.023 rows=0 \nloops=1)\"\n\" Index Cond: (b_mokejimai.mok_saskaita = \n$1)\"\n\" Buffers: shared hit=4\"\n\" -> Index Scan using pk_b_pardavimai_id on \npublic.b_pardavimai (cost=0.44..2.63 rows=1 width=4) (never executed)\"\n\" Output: b_pardavimai.pard_id, \nb_pardavimai.pard_preke, b_pardavimai.pard_kaina, \nb_pardavimai.pard_nuolaida, b_pardavimai.pard_kiekis, \nb_pardavimai.pard_kasos_nr, b_pardavimai.pard_laikas, \nb_pardavimai.pard_prekes_id, b_pardavimai.pard_pirkejo_id, \nb_pardavimai.pard_pardavejas, b_pardavimai.pard_spausdinta, \nb_pardavimai.pard_reikia_grazinti, b_pardavimai.pard_kam_naudoti, \nb_pardavimai.pard_susieta, b_pardavimai.pard_galima_anuliuoti, \nb_pardavimai.pard_tipas, b_pardavimai.pard_pvm, \nb_pardavimai.pard_apsilankymas, b_pardavimai.pard_fk, \nb_pardavimai.pard_kelintas, b_pardavimai.pard_precekis, \nb_pardavimai.pard_imone, b_pardavimai.pard_grazintas, \nb_pardavimai.pard_debeto_sutartis, b_pardavimai.pard_kaina_be_nld, 
\nb_pardavimai.pard_uzsakymas_pos, b_pardavimai.pard_pvm_suma, \nb_pardavimai.pard_uzsakymo_nr, b_pardavimai.pard_nuolaidos_id, \nb_pardavimai.pard_nuolaida_taikyti, \nb_pardavimai.pard_pirkeja_keisti_galima, \nb_pardavimai.pard_suma_keisti_galima\"\n\" Index Cond: (b_pardavimai.pard_id = \nb_mokejimai.mok_pardavimas)\"\n\" Filter: (b_pardavimai.pard_tipas = ANY \n('{1,2,6,7}'::integer[]))\"\n\"Planning Time: 0.016 ms\"\n\"Execution Time: 56.097 ms\"\n\n```\n\nIf I understand the plan correctly, the problem is the planner expects \nto find 2222 records for a provide value of `mok_saskaita`. I've tried \nrunning analyze on `b_mokejimai`, but the plan remains the same - must \nbe because some values of `mok_saskaita` do really return tens of \nthousands of records.\n\nI don't know how the planner comes up with value 2222, because on \naverage there are 15 b_mokejimai records for a single mok_saskaita (if \nNULL in mok_saskata is ignored), and 628 records if not.\n\n\nAnyway...\n\nDo you think rewriting a function in plpgsql is a way to go in such \ncase? In pg documentation \n(https://www.postgresql.org/docs/12/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING) \nI read that the plan for the plpgsql function is calculated the first \ntime the function is executed (for a connection). I'm concerned, that \nthe function execution is not replanned: I will be stuck with a plan \nthat corresponds to the `mok_saskaita` parameter value passed on the \nfirst execution. Or am I wrong?\n\nIs there a way to make PostgreSQL recalculate the plan on each execution \nof the function? 
The observed planning times are acceptable for my \napplication.\n\n\nRegards,\n\nJulius Tuskenis",
"msg_date": "Tue, 18 Jun 2024 16:03:17 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance of sql and plpgsql functions"
}
] |
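A side note on the plan-caching question at the end of the thread above: in PL/pgSQL, static SQL runs through cached prepared statements, and after a handful of executions the planner may settle on a generic plan that ignores the concrete parameter value, while `EXECUTE ... USING` plans its statement afresh on every call. The sketch below is illustrative only; the function name `sum_unpaid` is invented, and just the table and column names are taken from the thread.

```sql
-- Hypothetical sketch: force a custom plan per call by using dynamic SQL.
-- EXECUTE plans the statement on every invocation, so the row estimate for
-- mok_saskaita reflects the actual value passed in.
CREATE OR REPLACE FUNCTION sum_unpaid(p_saskaita integer)
RETURNS numeric
LANGUAGE plpgsql STABLE AS $$
DECLARE
    v_total numeric;
BEGIN
    EXECUTE 'SELECT sum(mok_nepadengta::numeric)
               FROM b_mokejimai
              WHERE mok_saskaita = $1'
       INTO v_total
      USING p_saskaita;   -- replanned with the concrete value on each call
    RETURN v_total;
END;
$$;
```

On PostgreSQL 12 and later, `SET plan_cache_mode = force_custom_plan;` has a similar effect for static SQL inside functions, without rewriting them to dynamic SQL.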
[
{
"msg_contents": "Hi,\n\nI'm trying to implement a system which requires row level security on \nsome key data tables (most do not require RLS). The data tables will \ngrow substantially (rows likely > +100M/year - the system is > 80% data \ninsert plus < 20% updates and by design, no deletes).\n\nSome queries are likely to brush past many rows before being eliminated \nby the RLS policy, so I'm trying to find the most efficient way that \ndoes not compromise query times. I also want to have a unified approach \nacross all the RLS data to make the policy implementation as \nstraightforward as possible too because I know there will be future \nexpansion of the RLS rules.\n\nMy thought currently is that table inheritance could possibly be one way \nforward. Specifically the base table holding just the RLS attributes, \nsuch as site group, site ID, customer group, customer ID as some initial \nexamples (I expect company division, department may be future needs too).\n\nWith the RLS attributes on the base table, I can add future needs to \nthat table and they automatically propagate to the child tables holding \nthe RLS data. Policies on the child tables can enforce row visibility \nbased on session tokens assigned at login (a future problem avoided just \nnow for simplicity).\n\nI have a small prototype working, with the policy function comparing the \ncolumns (from the base table) to the user tokens to permit/deny row \naccess. This allows this to be as in-memory and hopefully as fast as \npossible as it avoids needing to do any lookups to other tables or \nanything more expensive than some 'permissionColumn IN \nlistOfTokensHeldByTheSession' checks.\n\nMy concern is the base table will grow substantially faster than the \nchild data tables as that receives a new row for every row inserted in \nany of the child tables, so could easily be +300M rows/year and this \ncould become some performance fence. 
Some of the child tables have a \nclear partition key available so inherited & partitioned is also \nappealing but could possibly amplify any performance issue further.\n\nDoes this approach sound viable or are there pitfalls or a different \nmore recommended approach?\n\nThanks\n\nJim",
"msg_date": "Mon, 24 Jun 2024 18:28:13 -0400",
"msg_from": "Thomas Simpson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row level security"
},
{
"msg_contents": "Hello Jim,\n\nYour approach of using table inheritance in PostgreSQL for implementing \nrow-level security (RLS) has some interesting aspects, but there are \npotential pitfalls and alternatives that you should consider. Below, \nI'll outline some key points.\n\nTable Inheritance and Performance Concerns\n\n* RLS and Inheritance - In PostgreSQL, RLS policies are applied per \ntable. If you use inheritance, RLS policies defined on the parent table \nwon’t automatically apply to the child tables. You’ll have to set up RLS \npolicies on each child table separately.\n\n* Growing Base Table - The base table, getting a new row for every row \ninserted in the child tables, will grow really fast. Managing a table \nwith hundreds of millions of rows per year could become a serious \nperformance problem.\n\n* Partitioning - Partitioning can help manage big tables by breaking \nthem into smaller parts. But if your base table becomes a bottleneck, \npartitioning the child tables alone might not solve the problem.\n\nAlternative Approach: Use Partitioned Tables Directly with RLS\n\nGiven your needs, here's a different approach that leverages \nPostgreSQL's partitioning and indexing features along with RLS:\n\n* Directly Partitioned Tables - Instead of inheritance, create \npartitioned tables directly for each type of data. Partition these \ntables based on a logical key (like time, site ID, or customer ID) so \neach partition stays manageable (note the primary key must include the \npartition key):\nExample:\n CREATE TABLE data (\n id SERIAL,\n site_id INT,\n customer_id INT,\n division_id INT,\n department_id INT,\n data_payload JSONB,\n created_at TIMESTAMPTZ,\n PRIMARY KEY (id, created_at)\n ) PARTITION BY RANGE (created_at);\n\n\n* RLS Policies on Partitions - Set up RLS policies on each partition. 
\nSince partitions are smaller, RLS policy checks should be more efficient.\nExample:\n CREATE POLICY rls_policy ON data\n USING (site_id = current_setting('app.current_site_id')::INT);\n ALTER TABLE data ENABLE ROW LEVEL SECURITY;\n\n* Session Variables - Using PostgreSQL session variables to store \nuser-specific info (like /app.current_site_id/) is convenient, but has \npotential security risks. If a client can set these variables, they \ncould manipulate them to gain unauthorized access. To mitigate this, \nensure that only trusted parts of your application can set these \nvariables. Consider using server-side functions or application logic to \nsecurely set these variables based on the authenticated user's information.\nExample:\n SET app.current_site_id = '123';\n\n* Indexing - Make sure you index columns used in RLS policies and \nqueries, like /site_id/ and /customer_id/.\n CREATE INDEX idx_site_id ON data (site_id);\n CREATE INDEX idx_customer_id ON data (customer_id);\n CREATE INDEX idx_site_customer ON data (site_id, customer_id);\n CREATE INDEX idx_created_at ON data (created_at);\n\n* Using Stored Procedures - using stored procedures can centralize \nsecurity logic, but it can also add complexity. 
Here's a brief look:\n\nAdvantages:\n - Centralized security logic.\n - Additional layer of security as logic is hidden from end-users.\n - Can include data validation and business logic.\n\nDisadvantages:\n - Increased complexity in development and maintenance.\n - Potential performance overhead for complex procedures.\n - Less flexibility for ad-hoc queries.\n\nExample:\n CREATE OR REPLACE FUNCTION insert_data(\n p_site_id INT,\n p_customer_id INT,\n p_division_id INT,\n p_department_id INT,\n p_data_payload JSONB,\n p_created_at TIMESTAMPTZ\n ) RETURNS VOID AS $$\n BEGIN\n IF current_setting('app.current_site_id')::INT = p_site_id THEN\n INSERT INTO data (site_id, customer_id, division_id, \ndepartment_id, data_payload, created_at)\n VALUES (p_site_id, p_customer_id, p_division_id, \np_department_id, p_data_payload, p_created_at);\n ELSE\n RAISE EXCEPTION 'Access Denied';\n END IF;\n END;\n $$ LANGUAGE plpgsql;\n\n\n* Final Thoughts\n\nUsing direct partitioning and applying RLS policies to each partition \nshould help with performance issues linked to a growing base table. This \napproach also keeps things flexible for future expansions and avoids the \nhassle of managing inheritance hierarchies. Proper indexing with RLS \npolicies in mind can greatly improve query performance in large tables. \nJust make sure to handle session variables securely to avoid potential \nsecurity issues.\n\nIf you have more questions or need further advice on implementation, \njust let me know!\n\nCheers,\nAndy\n\nOn 25-Jun-24 00:28, Thomas Simpson wrote:\n>\n> Hi,\n>\n> I'm trying to implement a system which requires row level security on \n> some key data tables (most do not require RLS). 
The data tables will \n> grow substantially (rows likely > +100M/year - the system is > 80% \n> data insert plus < 20% updates and by design, no deletes).\n>\n> Some queries are likely to brush past many rows before being \n> eliminated by the RLS policy, so I'm trying to find the most efficient \n> way that does not compromise query times. I also want to have a \n> unified approach across all the RLS data to make the policy \n> implementation as straightforward as possible too because I know there \n> will be future expansion of the RLS rules.\n>\n> My thought currently is that table inheritance could possibly be one \n> way forward. Specifically the base table holding just the RLS \n> attributes, such as site group, site ID, customer group, customer ID \n> as some initial examples (I expect company division, department may be \n> future needs too).\n>\n> With the RLS attributes on the base table, I can add future needs to \n> that table and they automatically propagate to the child tables \n> holding the RLS data. Policies on the child tables can enforce row \n> visibility based on session tokens assigned at login (a future problem \n> avoided just now for simplicity).\n>\n> I have a small prototype working, with the policy function comparing \n> the columns (from the base table) to the user tokens to permit/deny \n> row access. This allows this to be as in-memory and hopefully as fast \n> as possible as it avoids needing to do any lookups to other tables or \n> anything more expensive than some 'permissionColumn IN \n> listOfTokensHeldByTheSession' checks.\n>\n> My concern is the base table will grow substantially faster than the \n> child data tables as that receives a new row for every row inserted in \n> any of the child tables, so could easily be +300M rows/year and this \n> could become some performance fence. 
Some of the child tables have a \n> clear partition key available so inherited & partitioned is also \n> appealing but could possibly amplify any performance issue further.\n>\n> Does this approach sound viable or are there pitfalls or a different \n> more recommended approach?\n>\n> Thanks\n>\n> Jim\n>\n>\n>\n>\n>",
"msg_date": "Thu, 27 Jun 2024 12:45:04 +0200",
"msg_from": "Andrew Okhmat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row level security"
}
] |
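A minimal sketch of the partitioned-table-plus-RLS approach discussed in the thread above. All table, partition, and column names here are invented for illustration; two details worth pinning down are that row-level security is switched on with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY`, and that a primary key on a partitioned table must include the partition key.

```sql
-- Hypothetical sketch: a range-partitioned table with an RLS policy on the
-- parent, driven by a session setting the application fills in after login.
CREATE TABLE data (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    site_id    integer NOT NULL,
    payload    jsonb,
    created_at timestamptz NOT NULL,
    PRIMARY KEY (id, created_at)          -- must include the partition key
) PARTITION BY RANGE (created_at);

CREATE TABLE data_2024 PARTITION OF data
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

ALTER TABLE data ENABLE ROW LEVEL SECURITY;

-- Queries that go through the parent table are filtered by this policy.
CREATE POLICY site_filter ON data
    USING (site_id = current_setting('app.current_site_id')::int);
```

The application would set the token once per session, e.g. `SELECT set_config('app.current_site_id', '123', false);`, ideally from a SECURITY DEFINER function so clients cannot choose the value themselves.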
[
{
"msg_contents": "Heya, I hope the title is somewhat descriptive. I'm working on a\ndecentralized social media platform and have encountered the following\nperformance issue/quirk, and would like to ask for input, since I'm not\nsure I missed anything.\n\nI'm running PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n13.2.1 20230801, 64-bit, running on an Arch Linux box with 128GB of RAM &\nan 8c16t Ryzen 3700X CPU. Disk is a NVME RAID0.\n\nPostgres configuration: https://paste.depesz.com/s/iTv\n\nI'm using autovacuum defaults & am running a manual VACUUM ANALYZE on the\nentire database nightly.\n\nThe relevant database parts consist of a table with posts (note), a table\nwith users (user), and a table with follow relationships (following). The\nquery in question takes the most recent n (e.g. 50) posts, filtered by the\nusers follow relations.\n\nThe note table on my main production instance grows by about 200k entries\nper week.\n\nSchema & tuple counts: https://paste.depesz.com/s/cfI\n\nHere's the shortest query I can reproduce the issue with:\nhttps://paste.depesz.com/s/RoC\nSpecifically, it works well for users that follow a relatively large amount\nof users (https://explain.depesz.com/s/tJnB), and is very slow for users\nthat follow a low amount of users / users that post infrequently (\nhttps://explain.depesz.com/s/Mtyr).\n\n From what I can tell, this is because this query causes postgres to scan\nthe note table from the bottom (most recent posts first), discarding\nanything by users that are not followed.\n\nCuriously, rewriting the query like this (https://paste.depesz.com/s/8rN)\ncauses the opposite problem, this query is fast for users with a low\nfollowing count (https://explain.depesz.com/s/yHAz#query), and slow for\nusers with a high following count (https://explain.depesz.com/s/1v6L,\nhttps://explain.depesz.com/s/yg3N).\n\nThese numbers are even further apart (to the point of 10-30s query\ntimeouts) in the most extreme outlier cases I've 
observed, and on lower-end\nhardware.\n\nI've sidestepped the issue by running either of these queries based on a\nheuristic that checks whether there are more than 250 matching posts in the\npast 7 days, recomputed once per day for every user, but it feels more like\na hack than a proper solution.\n\nI'm able to make the planner make a sensible decision in both cases by\nsetting enable_sort = off, but that tanks performance for the rest of my\napplication, is even more of a hack, and doesn't seem to work in all cases.\n\nI've been able to reproduce this issue with mock data (\nhttps://paste.depesz.com/s/CnY), though it's not generating quite the same\nquery plans and is behaving a bit differently.\n\nI'd appreciate any and all input on the situation. If I've left out any\ninformation that would be useful in figuring this out, please tell me.\n\nThanks in advance,\nLaura Hausmann",
"msg_date": "Thu, 27 Jun 2024 02:50:31 +0200",
"msg_from": "Laura Hausmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inconsistent query performance based on relation hit frequency"
},
{
"msg_contents": "On 6/27/24 07:50, Laura Hausmann wrote:\n> I'd appreciate any and all input on the situation. If I've left out any \n> information that would be useful in figuring this out, please tell me.\nThanks for this curious case, I like it!\nAt first, you can try to avoid \"OR\" expressions - PostgreSQL has quite \nlimited set of optimisation/prediction tricks on such expressions.\nSecond - I see, postgres predicts wrong number of tuples. But using my \ntypical tool [1] and getting more precise estimations i don't see \nsignificant profit:\n\n Limit (cost=10832.85..10838.69 rows=50 width=21)\n -> Gather Merge (cost=10832.85..10838.92 rows=52 width=21)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=9832.83..9832.90 rows=26 width=21)\n Sort Key: objects.id DESC\n Sort Method: top-N heapsort Memory: 32kB\n Worker 0: Sort Method: quicksort Memory: 32kB\n Worker 1: Sort Method: quicksort Memory: 32kB\n -> Parallel Seq Scan on objects\n Filter: ((hashed SubPlan 1) OR (\"userId\" = 1))\n Rows Removed by Filter: 183372\n SubPlan 1\n -> Nested Loop\n -> Index Only Scan using users_pkey on\n Index Cond: (id = 1)\n Heap Fetches: 0\n -> Index Only Scan using \n\"relationships_followerId_followeeId_idx\" on relationships\n Index Cond: (\"followerId\" = 1)\n Heap Fetches: 0\n Planning Time: 0.762 ms\n Execution Time: 43.816 ms\n\n Limit (cost=10818.83..10819.07 rows=2 width=21)\n -> Gather Merge (cost=10818.83..10819.07 rows=2 width=21)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=9818.81..9818.81 rows=1 width=21)\n Sort Key: objects.id DESC\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n -> Parallel Seq Scan on objects\n Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n Rows Removed by Filter: 183477\n SubPlan 1\n -> Nested Loop (cost=0.56..8.61 rows=1 width=4)\n -> Index Only Scan using \n\"relationships_followerId_followeeId_idx\" on relationships\n Index 
Cond: (\"followerId\" = 4)\n Heap Fetches: 0\n -> Index Only Scan using users_pkey\n Index Cond: (id = 4)\n Heap Fetches: 0\n Planning Time: 0.646 ms\n Execution Time: 30.824 ms\n\nBut this was achieved just because of parallel workers utilisation. \nDisabling them we get:\n\n Limit (cost=14635.07..14635.08 rows=2 width=21) (actual \ntime=75.941..75.943 rows=0 loops=1)\n -> Sort (cost=14635.07..14635.08 rows=2 width=21) (actual \ntime=75.939..75.940 rows=0 loops=1)\n Sort Key: objects.id DESC\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on objects (cost=8.61..14635.06 rows=2 width=21) \n(actual time=75.931..75.932 rows=0 loops=1)\n Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n Rows Removed by Filter: 550430\n SubPlan 1\n -> Nested Loop (cost=0.56..8.61 rows=1 width=4) \n(actual time=0.039..0.040 rows=0 loops=1)\n -> Index Only Scan using \n\"relationships_followerId_followeeId_idx\" on relationships \n(cost=0.28..4.29 rows=1 width=8) (actual time=0.038..0.038 rows=0 loops=1)\n Index Cond: (\"followerId\" = 4)\n Heap Fetches: 0\n -> Index Only Scan using users_pkey on users \n(cost=0.29..4.31 rows=1 width=4) (never executed)\n Index Cond: (id = 4)\n Heap Fetches: 0\n Planning Time: 0.945 ms\n Execution Time: 76.123 ms\n\nSo, from the optimiser's point of view, it has done the best it could.\nTheoretically, if you have a big table with indexes and must select a \nsmall number of tuples, the ideal query plan will include parameterised \nNestLoop JOINs. Unfortunately, parameterisation in PostgreSQL can't pass \ninside a subquery. It could be a reason for new development because \nMSSQL can do such a trick, but it is a long way.\nYou can try to rewrite your schema and query to avoid subqueries in \nexpressions at all.\nI hope this message gave you some insights.\n\n[1] https://github.com/postgrespro/aqo\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 27 Jun 2024 17:31:36 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent query performance based on relation hit frequency"
},
{
"msg_contents": "On 6/27/24 03:50, Laura Hausmann wrote:\n> Heya, I hope the title is somewhat descriptive. I'm working on a \n> decentralized social media platform and have encountered the following \n> performance issue/quirk, and would like to ask for input, since I'm \n> not sure I missed anything.\n>\n> I'm running PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc \n> (GCC) 13.2.1 20230801, 64-bit, running on an Arch Linux box with 128GB \n> of RAM & an 8c16t Ryzen 3700X CPU. Disk is a NVME RAID0.\n>\n> Postgres configuration: https://paste.depesz.com/s/iTv\n>\n> I'm using autovacuum defaults & am running a manual VACUUM ANALYZE on \n> the entire database nightly.\n>\n> The relevant database parts consist of a table with posts (note), a \n> table with users (user), and a table with follow relationships \n> (following). The query in question takes the most recent n (e.g. 50) \n> posts, filtered by the users follow relations.\n>\n> The note table on my main production instance grows by about 200k \n> entries per week.\n>\n> Schema & tuple counts: https://paste.depesz.com/s/cfI\n>\n> Here's the shortest query I can reproduce the issue with: \n> https://paste.depesz.com/s/RoC\n> Specifically, it works well for users that follow a relatively large \n> amount of users (https://explain.depesz.com/s/tJnB), and is very slow \n> for users that follow a low amount of users / users that post \n> infrequently (https://explain.depesz.com/s/Mtyr).\n>\n> From what I can tell, this is because this query causes postgres to \n> scan the note table from the bottom (most recent posts first), \n> discarding anything by users that are not followed.\n>\n> Curiously, rewriting the query like this \n> (https://paste.depesz.com/s/8rN) causes the opposite problem, this \n> query is fast for users with a low following count \n> (https://explain.depesz.com/s/yHAz#query), and slow for users with a \n> high following count (https://explain.depesz.com/s/1v6L, \n> 
https://explain.depesz.com/s/yg3N).\n>\n> These numbers are even further apart (to the point of 10-30s query \n> timeouts) in the most extreme outlier cases I've observed, and on \n> lower-end hardware.\n>\n> I've sidestepped the issue by running either of these queries based on \n> a heuristic that checks whether there are more than 250 matching posts \n> in the past 7 days, recomputed once per day for every user, but it \n> feels more like a hack than a proper solution.\n>\n> I'm able to make the planner make a sensible decision in both cases by \n> setting enable_sort = off, but that tanks performance for the rest of \n> my application, is even more of a hack, and doesn't seem to work in \n> all cases.\n>\n> I've been able to reproduce this issue with mock data \n> (https://paste.depesz.com/s/CnY), though it's not generating quite the \n> same query plans and is behaving a bit differently.\n\nBefore deep dive into everybody's favorite topic you may simplify your \nquery :\n\nselect o.* from objects o where o.\"userId\" = :userid UNION select o.* \nfrom objects o where o.\"userId\" IN\n\n(SELECT r.\"followeeId\" FROM relationships r WHERE r.\"followerId\"= :userid)\n\npostgres@[local]/laura=# explain (analyze, buffers) select o.* from \nobjects o where o.\"userId\" = 1 UNION select o.* from objects o where \no.\"userId\" IN (SELECT r.\"followeeId\" FROM relati\nonships r WHERE r.\"followerId\"=1) ORDER BY id DESC ;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---\nSort (cost=8622.04..8767.98 rows=58376 width=40) (actual \ntime=1.041..1.053 rows=314 loops=1)\n Sort Key: o.id DESC\n Sort Method: quicksort Memory: 39kB\n Buffers: shared hit=1265\n -> HashAggregate (cost=3416.92..4000.68 rows=58376 width=40) \n(actual time=0.900..1.006 rows=314 loops=1)\n Group Key: o.id, o.\"userId\", o.data\n Batches: 1 Memory 
Usage: 1585kB\n Buffers: shared hit=1265\n -> Append (cost=0.42..2979.10 rows=58376 width=40) (actual \ntime=0.024..0.816 rows=314 loops=1)\n Buffers: shared hit=1265\n -> Index Scan using \"objects_userId_idx\" on objects o \n (cost=0.42..3.10 rows=17 width=21) (actual time=0.003..0.003 rows=0 \nloops=1)\n Index Cond: (\"userId\" = 1)\n Buffers: shared hit=3\n -> Nested Loop (cost=0.70..2684.12 rows=58359 width=21) \n(actual time=0.020..0.794 rows=314 loops=1)\n Buffers: shared hit=1262\n -> Index Only Scan using \n\"relationships_followerId_followeeId_idx\" on relationships r \n (cost=0.28..7.99 rows=315 width=4) (actual time=0.011..0.030 rows=315 \nloops=\n1)\n Index Cond: (\"followerId\" = 1)\n Heap Fetches: 0\n Buffers: shared hit=3\n -> Index Scan using \"objects_userId_idx\" on \nobjects o_1 (cost=0.42..6.65 rows=185 width=21) (actual \ntime=0.002..0.002 rows=1 loops=315)\n Index Cond: (\"userId\" = r.\"followeeId\")\n Buffers: shared hit=1259\nPlanning:\n Buffers: shared hit=8\nPlanning Time: 0.190 ms\nExecution Time: 1.184 ms\n(26 rows)\n\nTime: 1.612 ms\npostgres@[local]/laura=# explain (analyze, buffers) select o.* from \nobjects o where o.\"userId\" = 4 UNION select o.* from objects o where \no.\"userId\" IN (SELECT r.\"followeeId\" FROM relati\nonships r WHERE r.\"followerId\"=4) ORDER BY id DESC ;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nSort (cost=27.53..28.03 rows=202 width=40) (actual time=0.015..0.016 \nrows=0 loops=1)\n Sort Key: o.id DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=5\n -> HashAggregate (cost=17.77..19.79 rows=202 width=40) (actual \ntime=0.013..0.013 rows=0 loops=1)\n Group Key: o.id, o.\"userId\", o.data\n Batches: 1 Memory Usage: 40kB\n Buffers: shared hit=5\n -> Append (cost=0.42..16.26 rows=202 width=40) (actual \ntime=0.011..0.012 rows=0 loops=1)\n 
Buffers: shared hit=5\n -> Index Scan using \"objects_userId_idx\" on objects o \n (cost=0.42..3.10 rows=17 width=21) (actual time=0.005..0.005 rows=0 \nloops=1)\n Index Cond: (\"userId\" = 4)\n Buffers: shared hit=3\n -> Nested Loop (cost=0.70..12.14 rows=185 width=21) \n(actual time=0.005..0.005 rows=0 loops=1)\n Buffers: shared hit=2\n -> Index Only Scan using \n\"relationships_followerId_followeeId_idx\" on relationships r \n (cost=0.28..1.39 rows=1 width=4) (actual time=0.005..0.005 rows=0 \nloops=1)\n Index Cond: (\"followerId\" = 4)\n Heap Fetches: 0\n Buffers: shared hit=2\n -> Index Scan using \"objects_userId_idx\" on \nobjects o_1 (cost=0.42..8.90 rows=185 width=21) (never executed)\n Index Cond: (\"userId\" = r.\"followeeId\")\nPlanning:\n Buffers: shared hit=8\nPlanning Time: 0.201 ms\nExecution Time: 0.048 ms\n(25 rows)\n\nTime: 0.490 ms\n\n\n>\n> I'd appreciate any and all input on the situation. If I've left out \n> any information that would be useful in figuring this out, please tell me.\n>\n> Thanks in advance,\n> Laura Hausmann",
"msg_date": "Thu, 27 Jun 2024 16:27:54 +0300",
"msg_from": "Achilleas Mantzios - cloud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent query performance based on relation hit frequency"
},
{
"msg_contents": "Heya & thank you for the response!\n\nThat makes a lot of sense. I'm glad to hear it's on the radar of the team,\nbut I understand that this is a complex task and won't happen anytime soon.\n\nFor the meantime, I've tried a couple ways of rewriting the query, sadly\nnone of which seem to translate to the production database:\n\nSimply dropping the or/union clause (and adding a relationship to the user\nthemselves) fixes the problem in the test database (both user 1 (\nhttps://explain.depesz.com/s/ZY8l) and user 4 (\nhttps://explain.depesz.com/s/Q2Wk) run in 1~15ms, which isn't perfect but\ngood enough), but not the production one (still fast for high frequency (\nhttps://explain.depesz.com/s/DixF) and slow for low frequency (\nhttps://explain.depesz.com/s/fIKm) users).\n\nI also tried rewriting it as a join (https://explain.depesz.com/s/36Ve),\nbut that also didn't seem to have an effect.\n\nIt's very possible I missed one or multiple ways the query could be\nrewritten in.\n\nI'm sadly not sure how I could generate a test dataset that more closely\nresembles the production workload. In case that would be helpful in\ndebugging this further, any tips on that would be greatly appreciated.\n\nThanks in advance,\nLaura Hausmann\n\n\nOn Thu, Jun 27, 2024 at 12:31 PM Andrei Lepikhov <[email protected]> wrote:\n\n> On 6/27/24 07:50, Laura Hausmann wrote:\n> > I'd appreciate any and all input on the situation. If I've left out any\n> > information that would be useful in figuring this out, please tell me.\n> Thanks for this curious case, I like it!\n> At first, you can try to avoid \"OR\" expressions - PostgreSQL has quite\n> limited set of optimisation/prediction tricks on such expressions.\n> Second - I see, postgres predicts wrong number of tuples. 
But using my\n> typical tool [1] and getting more precise estimations i don't see\n> significant profit:\n>\n> Limit (cost=10832.85..10838.69 rows=50 width=21)\n> -> Gather Merge (cost=10832.85..10838.92 rows=52 width=21)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (cost=9832.83..9832.90 rows=26 width=21)\n> Sort Key: objects.id DESC\n> Sort Method: top-N heapsort Memory: 32kB\n> Worker 0: Sort Method: quicksort Memory: 32kB\n> Worker 1: Sort Method: quicksort Memory: 32kB\n> -> Parallel Seq Scan on objects\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 1))\n> Rows Removed by Filter: 183372\n> SubPlan 1\n> -> Nested Loop\n> -> Index Only Scan using users_pkey on\n> Index Cond: (id = 1)\n> Heap Fetches: 0\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> Index Cond: (\"followerId\" = 1)\n> Heap Fetches: 0\n> Planning Time: 0.762 ms\n> Execution Time: 43.816 ms\n>\n> Limit (cost=10818.83..10819.07 rows=2 width=21)\n> -> Gather Merge (cost=10818.83..10819.07 rows=2 width=21)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (cost=9818.81..9818.81 rows=1 width=21)\n> Sort Key: objects.id DESC\n> Sort Method: quicksort Memory: 25kB\n> Worker 0: Sort Method: quicksort Memory: 25kB\n> Worker 1: Sort Method: quicksort Memory: 25kB\n> -> Parallel Seq Scan on objects\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n> Rows Removed by Filter: 183477\n> SubPlan 1\n> -> Nested Loop (cost=0.56..8.61 rows=1 width=4)\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> Index Cond: (\"followerId\" = 4)\n> Heap Fetches: 0\n> -> Index Only Scan using users_pkey\n> Index Cond: (id = 4)\n> Heap Fetches: 0\n> Planning Time: 0.646 ms\n> Execution Time: 30.824 ms\n>\n> But this was achieved just because of parallel workers utilisation.\n> Disabling them we get:\n>\n> Limit (cost=14635.07..14635.08 rows=2 width=21) (actual\n> time=75.941..75.943 rows=0 loops=1)\n> -> Sort 
(cost=14635.07..14635.08 rows=2 width=21) (actual\n> time=75.939..75.940 rows=0 loops=1)\n> Sort Key: objects.id DESC\n> Sort Method: quicksort Memory: 25kB\n> -> Seq Scan on objects (cost=8.61..14635.06 rows=2 width=21)\n> (actual time=75.931..75.932 rows=0 loops=1)\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n> Rows Removed by Filter: 550430\n> SubPlan 1\n> -> Nested Loop (cost=0.56..8.61 rows=1 width=4)\n> (actual time=0.039..0.040 rows=0 loops=1)\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> (cost=0.28..4.29 rows=1 width=8) (actual time=0.038..0.038 rows=0 loops=1)\n> Index Cond: (\"followerId\" = 4)\n> Heap Fetches: 0\n> -> Index Only Scan using users_pkey on users\n> (cost=0.29..4.31 rows=1 width=4) (never executed)\n> Index Cond: (id = 4)\n> Heap Fetches: 0\n> Planning Time: 0.945 ms\n> Execution Time: 76.123 ms\n>\n> So, from the optimiser's point of view, it has done the best it could.\n> Theoretically, if you have a big table with indexes and must select a\n> small number of tuples, the ideal query plan will include parameterised\n> NestLoop JOINs. Unfortunately, parameterisation in PostgreSQL can't pass\n> inside a subquery. It could be a reason for new development because\n> MSSQL can do such a trick, but it is a long way.\n> You can try to rewrite your schema and query to avoid subqueries in\n> expressions at all.\n> I hope this message gave you some insights.\n>\n> [1] https://github.com/postgrespro/aqo\n>\n> --\n> regards, Andrei Lepikhov\n>\n>",
"msg_date": "Thu, 27 Jun 2024 16:58:15 +0200",
"msg_from": "Laura Hausmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent query performance based on relation hit frequency"
},
{
"msg_contents": "On 27/6/24 17:58, Laura Hausmann wrote:\n> Heya & thank you for the response!\n>\n> That makes a lot of sense. I'm glad to hear it's on the radar of the \n> team, but I understand that this is a complex task and won't happen \n> anytime soon.\n>\n> For the meantime, I've tried a couple ways of rewriting the query, \n> sadly none of which seem to translate to the production database:\n>\n> Simply dropping the or/union clause (and adding a relationship to the \n> user themselves) fixes the problem in the test database (both user 1 \n> (https://explain.depesz.com/s/ZY8l) and user 4 \n> (https://explain.depesz.com/s/Q2Wk) run in 1~15ms, which isn't perfect \n> but good enough), but not the production one (still fast for high \n> frequency (https://explain.depesz.com/s/DixF) and slow for low \n> frequency (https://explain.depesz.com/s/fIKm) users).\n>\n> I also tried rewriting it as a join \n> (https://explain.depesz.com/s/36Ve), but that also didn't seem to have \n> an effect.\n>\n> It's very possible I missed one or multiple ways the query could be \n> rewritten in.\n>\n> I'm sadly not sure how I could generate a test dataset that more \n> closely resembles the production workload. In case that would be \n> helpful in debugging this further, any tips on that would be greatly \n> appreciated.\n\nI am not sure my message made it through to you, I don't know if you are \nsubscribed to the list, here is an idea :\n\nselect o.* from objects o where o.\"userId\" = :userid UNION select o.* \nfrom objects o where o.\"userId\" IN\n\n(SELECT r.\"followeeId\" FROM relationships r WHERE \nr.\"followerId\"=:userid) ORDER BY id DESC ;\n\nWith your test data I get <= 1ms answers with all inputs.\n\n>\n> Thanks in advance,\n> Laura Hausmann\n>\n>\n> On Thu, Jun 27, 2024 at 12:31 PM Andrei Lepikhov <[email protected]> \n> wrote:\n>\n> On 6/27/24 07:50, Laura Hausmann wrote:\n> > I'd appreciate any and all input on the situation. 
If I've left\n> out any\n> > information that would be useful in figuring this out, please\n> tell me.\n> Thanks for this curious case, I like it!\n> At first, you can try to avoid \"OR\" expressions - PostgreSQL has\n> quite\n> limited set of optimisation/prediction tricks on such expressions.\n> Second - I see, postgres predicts wrong number of tuples. But\n> using my\n> typical tool [1] and getting more precise estimations i don't see\n> significant profit:\n>\n> Limit (cost=10832.85..10838.69 rows=50 width=21)\n> -> Gather Merge (cost=10832.85..10838.92 rows=52 width=21)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (cost=9832.83..9832.90 rows=26 width=21)\n> Sort Key: objects.id <http://objects.id> DESC\n> Sort Method: top-N heapsort Memory: 32kB\n> Worker 0: Sort Method: quicksort Memory: 32kB\n> Worker 1: Sort Method: quicksort Memory: 32kB\n> -> Parallel Seq Scan on objects\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 1))\n> Rows Removed by Filter: 183372\n> SubPlan 1\n> -> Nested Loop\n> -> Index Only Scan using users_pkey on\n> Index Cond: (id = 1)\n> Heap Fetches: 0\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> Index Cond: (\"followerId\" = 1)\n> Heap Fetches: 0\n> Planning Time: 0.762 ms\n> Execution Time: 43.816 ms\n>\n> Limit (cost=10818.83..10819.07 rows=2 width=21)\n> -> Gather Merge (cost=10818.83..10819.07 rows=2 width=21)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (cost=9818.81..9818.81 rows=1 width=21)\n> Sort Key: objects.id <http://objects.id> DESC\n> Sort Method: quicksort Memory: 25kB\n> Worker 0: Sort Method: quicksort Memory: 25kB\n> Worker 1: Sort Method: quicksort Memory: 25kB\n> -> Parallel Seq Scan on objects\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n> Rows Removed by Filter: 183477\n> SubPlan 1\n> -> Nested Loop (cost=0.56..8.61 rows=1\n> width=4)\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> Index 
Cond: (\"followerId\" = 4)\n> Heap Fetches: 0\n> -> Index Only Scan using users_pkey\n> Index Cond: (id = 4)\n> Heap Fetches: 0\n> Planning Time: 0.646 ms\n> Execution Time: 30.824 ms\n>\n> But this was achieved just because of parallel workers utilisation.\n> Disabling them we get:\n>\n> Limit (cost=14635.07..14635.08 rows=2 width=21) (actual\n> time=75.941..75.943 rows=0 loops=1)\n> -> Sort (cost=14635.07..14635.08 rows=2 width=21) (actual\n> time=75.939..75.940 rows=0 loops=1)\n> Sort Key: objects.id <http://objects.id> DESC\n> Sort Method: quicksort Memory: 25kB\n> -> Seq Scan on objects (cost=8.61..14635.06 rows=2\n> width=21)\n> (actual time=75.931..75.932 rows=0 loops=1)\n> Filter: ((hashed SubPlan 1) OR (\"userId\" = 4))\n> Rows Removed by Filter: 550430\n> SubPlan 1\n> -> Nested Loop (cost=0.56..8.61 rows=1 width=4)\n> (actual time=0.039..0.040 rows=0 loops=1)\n> -> Index Only Scan using\n> \"relationships_followerId_followeeId_idx\" on relationships\n> (cost=0.28..4.29 rows=1 width=8) (actual time=0.038..0.038 rows=0\n> loops=1)\n> Index Cond: (\"followerId\" = 4)\n> Heap Fetches: 0\n> -> Index Only Scan using users_pkey on users\n> (cost=0.29..4.31 rows=1 width=4) (never executed)\n> Index Cond: (id = 4)\n> Heap Fetches: 0\n> Planning Time: 0.945 ms\n> Execution Time: 76.123 ms\n>\n> So, from the optimiser's point of view, it has done the best it could.\n> Theoretically, if you have a big table with indexes and must select a\n> small number of tuples, the ideal query plan will include\n> parameterised\n> NestLoop JOINs. Unfortunately, parameterisation in PostgreSQL\n> can't pass\n> inside a subquery. 
It could be a reason for new development because\n> MSSQL can do such a trick, but it is a long way.\n> You can try to rewrite your schema and query to avoid subqueries in\n> expressions at all.\n> I hope this message gave you some insights.\n>\n> [1] https://github.com/postgrespro/aqo\n>\n> -- \n> regards, Andrei Lepikhov\n>\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt (as agents only)",
"msg_date": "Thu, 27 Jun 2024 22:22:34 +0300",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent query performance based on relation hit frequency"
}
] |
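Andrei's closing advice in the thread above — rewrite the schema and query to avoid subqueries in expressions — can be sketched for the followers-feed shape visible in the plans. This is an illustration only: the table and column names (`objects`, `relationships`, `"followerId"`, `"followeeId"`, `"userId"`) are guessed from the plan fragments and the actual posted query is not in the thread.

```sql
-- Original shape: an OR over a hashed subplan forces a full scan of "objects".
--   SELECT * FROM objects
--   WHERE "userId" IN (SELECT "followeeId"
--                        FROM relationships
--                       WHERE "followerId" = 1)
--      OR "userId" = 1
--   ORDER BY id DESC LIMIT 50;

-- Rewritten as a UNION so each branch can drive a parameterised
-- index scan on objects("userId", id) instead of filtering every row:
(SELECT o.*
   FROM objects o
   JOIN relationships r ON o."userId" = r."followeeId"
  WHERE r."followerId" = 1)
UNION
(SELECT o.*
   FROM objects o
  WHERE o."userId" = 1)
ORDER BY id DESC
LIMIT 50;
```

`UNION` (not `UNION ALL`) keeps the semantics of the original `OR` by removing duplicates; with an index on `objects("userId", id)` each branch becomes a nested loop over a handful of followee ids rather than a sequential scan.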
[
{
        "msg_contents": "Hi,\n   a simple SQL \"select ... from tablex where id1=34215670 and\nid2=59403938282;\nid1 and i2 are bigint and primary key.\n   Index Cond: ((tablex.id2 = ' 5940393828299'::bigint) AND (tablex.id1\n= ' 34215670 '::bigint))\n   Buffers: shared hit=2\n Query Identifier: -1350604566224020319\n Planning:\n   Buffers: shared hit=110246          <<< here planning need access a lot\nof buffers\n Planning Time: 81.850 ms\n Execution Time: 0.034 ms\n\n   could you help why planning need a lot of shared buffers access ? this\ntable has 4 indexes. and I tested similar SQL with another table has 4\ncompound indexes and that table only show very small shared buffers hit\nwhen planning.\n   this table has a lot of \"update\" and \"delete\" .\n\nThanks,\n\nJames",
"msg_date": "Mon, 1 Jul 2024 17:45:29 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "a lot of shared buffers hit when planning for a simple query with\n primary access path"
},
{
"msg_contents": "On Mon, 1 Jul 2024 at 21:45, James Pang <[email protected]> wrote:\n> Buffers: shared hit=110246 <<< here planning need access a lot of buffers\n> Planning Time: 81.850 ms\n> Execution Time: 0.034 ms\n>\n> could you help why planning need a lot of shared buffers access ?\n\nPerhaps you have lots of bloat in your system catalogue tables. That\ncould happen if you make heavy use of temporary tables. There are many\nother reasons too. It's maybe worth doing some vacuum work on the\ncatalogue tables.\n\nDavid\n\n\n",
"msg_date": "Mon, 1 Jul 2024 22:10:08 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a lot of shared buffers hit when planning for a simple query with\n primary access path"
},
{
"msg_contents": "Hi\n\npo 1. 7. 2024 v 12:10 odesílatel David Rowley <[email protected]> napsal:\n\n> On Mon, 1 Jul 2024 at 21:45, James Pang <[email protected]> wrote:\n> > Buffers: shared hit=110246 <<< here planning need access a\n> lot of buffers\n> > Planning Time: 81.850 ms\n> > Execution Time: 0.034 ms\n> >\n> > could you help why planning need a lot of shared buffers access ?\n>\n> Perhaps you have lots of bloat in your system catalogue tables. That\n> could happen if you make heavy use of temporary tables. There are many\n> other reasons too. It's maybe worth doing some vacuum work on the\n> catalogue tables.\n>\n\nThe planners get min/max range from indexes. So some user's indexes can be\nbloated too with similar effect\n\nRegards\n\nPavel\n\n\n> David\n>\n>\n>\n\nHipo 1. 7. 2024 v 12:10 odesílatel David Rowley <[email protected]> napsal:On Mon, 1 Jul 2024 at 21:45, James Pang <[email protected]> wrote:\n> Buffers: shared hit=110246 <<< here planning need access a lot of buffers\n> Planning Time: 81.850 ms\n> Execution Time: 0.034 ms\n>\n> could you help why planning need a lot of shared buffers access ?\n\nPerhaps you have lots of bloat in your system catalogue tables. That\ncould happen if you make heavy use of temporary tables. There are many\nother reasons too. It's maybe worth doing some vacuum work on the\ncatalogue tables.The planners get min/max range from indexes. So some user's indexes can be bloated too with similar effectRegardsPavel\n\nDavid",
"msg_date": "Mon, 1 Jul 2024 12:20:15 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a lot of shared buffers hit when planning for a simple query with\n primary access path"
},
{
"msg_contents": "On Mon, 1 Jul 2024 at 22:20, Pavel Stehule <[email protected]> wrote:\n> The planners get min/max range from indexes. So some user's indexes can be bloated too with similar effect\n\nI considered that, but it doesn't apply to this query as there are no\nrange quals.\n\nDavid\n\n\n",
"msg_date": "Mon, 1 Jul 2024 22:52:03 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a lot of shared buffers hit when planning for a simple query with\n primary access path"
},
{
        "msg_contents": "we have a daily job to do vacuumdb including catalog tables, and in\nsame database , I did similar query with where=pk on another table and\nshared buffer access is very small, if catalog table bloat, should see\nsimilar shared buffer access when planning for other tables ,right?  How to\nget more details about this planning ?\n\n         relname         |          last_vacuum          |\nlast_analyze\n-------------------------+-------------------------------+-------------------------------\n pg_statistic            | 2024-06-30 01:13:08.703291+00 |\n pg_attribute            | 2024-06-30 01:14:48.061235+00 | 2024-07-01\n01:11:49.377759+00\n pg_class                | 2024-06-30 01:15:09.984027+00 | 2024-07-01\n01:12:05.160881+00\n pg_type                 | 2024-06-30 01:15:11.139648+00 | 2024-07-01\n01:12:05.32726+00\n ...\n(62 rows)\n\nDavid Rowley <[email protected]> 於 2024年7月1日週一 下午6:52寫道:\n\n> On Mon, 1 Jul 2024 at 22:20, Pavel Stehule <[email protected]>\n> wrote:\n> > The planners get min/max range from indexes. So some user's indexes can\n> be bloated too with similar effect\n>\n> I considered that, but it doesn't apply to this query as there are no\n> range quals.\n>\n> David\n>",
"msg_date": "Mon, 1 Jul 2024 18:58:37 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a lot of shared buffers hit when planning for a simple query with\n primary access path"
},
{
"msg_contents": "On 1/7/2024 17:58, James Pang wrote:\n> we have a daily job to do vacuumdb including catalog tables, and \n> in same database , I did similar query with where=pk on another table \n> and shared buffer access is very small, if catalog table bloat, should \n> see similar shared buffer access when planning for other tables ,right? \n> How to get more details about this planning ?\n> \n> relname | last_vacuum | \n> last_analyze\n> -------------------------+-------------------------------+-------------------------------\n> pg_statistic | 2024-06-30 01:13:08.703291+00 |\n> pg_attribute | 2024-06-30 01:14:48.061235+00 | 2024-07-01 \n> 01:11:49.377759+00\n> pg_class | 2024-06-30 01:15:09.984027+00 | 2024-07-01 \n> 01:12:05.160881+00\n> pg_type | 2024-06-30 01:15:11.139648+00 | 2024-07-01 \n> 01:12:05.32726+00\n> ...\n> (62 rows)\n> \n> David Rowley <[email protected] <mailto:[email protected]>> 於 \n> 2024年7月1日週一 下午6:52寫道:\n> \n> On Mon, 1 Jul 2024 at 22:20, Pavel Stehule <[email protected]\n> <mailto:[email protected]>> wrote:\n> > The planners get min/max range from indexes. So some user's\n> indexes can be bloated too with similar effect\n> \n> I considered that, but it doesn't apply to this query as there are no\n> range quals.\n> \n> David\n> \nDon't forget about extended statistics as well - it also could be used.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 1 Jul 2024 18:31:24 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a lot of shared buffers hit when planning for a simple query with\n primary access path"
}
] |
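The catalog-bloat hypothesis discussed in this thread can be sanity-checked directly. The query below is a sketch using standard catalog views (`pg_class`, `pg_namespace`, `pg_stat_sys_tables`); a system catalog whose on-disk size is far out of proportion to its live row count is a candidate for vacuum work. The `LIMIT` is arbitrary.

```sql
-- Largest system catalogs with their live/dead tuple counts.
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       s.n_live_tup,
       s.n_dead_tup,
       s.last_vacuum,
       s.last_autovacuum
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_stat_sys_tables s ON s.relid = c.oid
WHERE n.nspname = 'pg_catalog'
  AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```

Heavy temporary-table churn typically shows up here as bloat in `pg_class`, `pg_attribute`, and `pg_type` — the catalogs the planner reads on every planning cycle.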
[
{
        "msg_contents": "  Both tables are hash partition tables , and we have a left out join ,\noptimizer convert to Hash Right Join, but it always try to seq scan on\ntablexxx 32 paritions.  there are almost 250k rows per parition for\ntablexxxx , so it's slow. As a workaround, I disable hashjoin the it run\nmuch fast with index scan on tablexxxx ,nestloop join.\nWith Hash Right Join, optimizer always use seq scan for outer table ?\nPGv13.11\n\n   ->  Hash Right Join  (cost=22.50..6760.46 rows=5961 width=78)\n         Hash Cond: ((aa.partitionkeyid)::text = (b_9.paritionkeyid)::text)\n         ->  Append  (cost=0.00..6119.48 rows=149032 width=79)\n               ->  Seq Scan on tablexxxx_p0 aa_2  (cost=0.00..89.71\nrows=2471 width=78)\n               ->  Seq Scan on tablexxxx_p1 aa_3  (cost=0.00..88.23\nrows=2423 width=78)\n               ->  Seq Scan on tablexxxx_p2 aa_4  (cost=0.00..205.26\nrows=5726 width=79)\n               ->  Seq Scan on tablexxxx_p3 aa_5  (cost=0.00..102.92\nrows=2892 width=78)\n               ->  Seq Scan on tablexxxx_p4 aa_6  (cost=0.00..170.27\nrows=4727 width=78)\n    ...\n               ->  Seq Scan on tablexxxx_p31 aa_33  (cost=0.00..220.59\nrows=6159 width=79)\n         ->  Append  (cost=0.69..187.64 rows=4034 width=78) (actual\ntime=0.030..0.035 rows=3 loops=3)\n              index scan ....  tableyyyy_p0 b_2\nindex scan .....   tableyyyy_p1 b_3\n....\nindex scan ...  tableyyyy_p31 b_33\n\nThanks,\n\nJames",
"msg_date": "Wed, 3 Jul 2024 12:57:08 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash Right join and seq scan"
},
{
        "msg_contents": "the query is\n   select ....\n   from tableyyyy b join table xxxx aa\n   on b.partitionkeyid=aa.partitionkeyid\n   where b.id1= $1 and b.id2=$2 and b.rtime between $3 and $4;\n\n     looks like optimizer try to \"calculate cost for nestloop for\nscanning all partitions of tablexxx (32 hash partitions) \" but actually ,\nit only scan only a few partitions. that make the nestloop cost more than\nhashjoin with table seq scan cost.  optimizer does not the partitioney\npassed in by tableyyy that got selected based on indexes on other columns.\npossible to make optimizer to calculate cost with partition pruning? since\nthe join key is hash partition key .\n\n\nThanks,\n\nJames\n\n\nJames Pang <[email protected]> 於 2024年7月3日週三 下午12:57寫道:\n\n>   Both tables are hash partition tables , and we have a left out join ,\n> optimizer convert to Hash Right Join, but it always try to seq scan on\n> tablexxx 32 paritions.  there are almost 250k rows per parition for\n> tablexxxx , so it's slow. As a workaround, I disable hashjoin the it run\n> much fast with index scan on tablexxxx ,nestloop join.\n> With Hash Right Join, optimizer always use seq scan for outer table ?\n> PGv13.11\n>\n>    ->  Hash Right Join  (cost=22.50..6760.46 rows=5961 width=78)\n>          Hash Cond: ((aa.partitionkeyid)::text = (b_9.paritionkeyid)::text)\n>          ->  Append  (cost=0.00..6119.48 rows=149032 width=79)\n>                ->  Seq Scan on tablexxxx_p0 aa_2  (cost=0.00..89.71\n> rows=2471 width=78)\n>                ->  Seq Scan on tablexxxx_p1 aa_3  (cost=0.00..88.23\n> rows=2423 width=78)\n>                ->  Seq Scan on tablexxxx_p2 aa_4  (cost=0.00..205.26\n> rows=5726 width=79)\n>                ->  Seq Scan on tablexxxx_p3 aa_5  (cost=0.00..102.92\n> rows=2892 width=78)\n>                ->  Seq Scan on tablexxxx_p4 aa_6  (cost=0.00..170.27\n> rows=4727 width=78)\n>     ...\n>                ->  Seq Scan on tablexxxx_p31 aa_33  (cost=0.00..220.59\n> rows=6159 width=79)\n>          ->  Append  (cost=0.69..187.64 rows=4034 width=78) (actual\n> time=0.030..0.035 rows=3 loops=3)\n>               index scan ....  tableyyyy_p0 b_2\n> index scan .....   tableyyyy_p1 b_3\n> ....\n> index scan ...  tableyyyy_p31 b_33\n>\n> Thanks,\n>\n> James\n>",
"msg_date": "Wed, 3 Jul 2024 14:51:54 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
        "msg_contents": "the join is \"left out join\"\n\nJames Pang <[email protected]> 於 2024年7月3日週三 下午2:51寫道:\n\n>\n> the query is\n>    select ....\n>    from tableyyyy b join table xxxx aa\n>    on b.partitionkeyid=aa.partitionkeyid\n>    where b.id1= $1 and b.id2=$2 and b.rtime between $3 and $4;\n>\n>      looks like optimizer try to \"calculate cost for nestloop for\n> scanning all partitions of tablexxx (32 hash partitions) \" but actually ,\n> it only scan only a few partitions. that make the nestloop cost more than\n> hashjoin with table seq scan cost.  optimizer does not the partitioney\n> passed in by tableyyy that got selected based on indexes on other columns.\n> possible to make optimizer to calculate cost with partition pruning? since\n> the join key is hash partition key .\n>\n>\n> Thanks,\n>\n> James\n>\n>\n> James Pang <[email protected]> 於 2024年7月3日週三 下午12:57寫道:\n>\n>>   Both tables are hash partition tables , and we have a left out join ,\n>> optimizer convert to Hash Right Join, but it always try to seq scan on\n>> tablexxx 32 paritions.  there are almost 250k rows per parition for\n>> tablexxxx , so it's slow. As a workaround, I disable hashjoin the it run\n>> much fast with index scan on tablexxxx ,nestloop join.\n>> With Hash Right Join, optimizer always use seq scan for outer table ?\n>> PGv13.11\n>>\n>>    ->  Hash Right Join  (cost=22.50..6760.46 rows=5961 width=78)\n>>          Hash Cond: ((aa.partitionkeyid)::text = (b_9.paritionkeyid)::text)\n>>          ->  Append  (cost=0.00..6119.48 rows=149032 width=79)\n>>                ->  Seq Scan on tablexxxx_p0 aa_2  (cost=0.00..89.71\n>> rows=2471 width=78)\n>>                ->  Seq Scan on tablexxxx_p1 aa_3  (cost=0.00..88.23\n>> rows=2423 width=78)\n>>                ->  Seq Scan on tablexxxx_p2 aa_4  (cost=0.00..205.26\n>> rows=5726 width=79)\n>>                ->  Seq Scan on tablexxxx_p3 aa_5  (cost=0.00..102.92\n>> rows=2892 width=78)\n>>                ->  Seq Scan on tablexxxx_p4 aa_6  (cost=0.00..170.27\n>> rows=4727 width=78)\n>>     ...\n>>                ->  Seq Scan on tablexxxx_p31 aa_33  (cost=0.00..220.59\n>> rows=6159 width=79)\n>>          ->  Append  (cost=0.69..187.64 rows=4034 width=78) (actual\n>> time=0.030..0.035 rows=3 loops=3)\n>>               index scan ....  tableyyyy_p0 b_2\n>> index scan .....   tableyyyy_p1 b_3\n>> ....\n>> index scan ...  tableyyyy_p31 b_33\n>>\n>> Thanks,\n>>\n>> James\n>>\n>",
"msg_date": "Wed, 3 Jul 2024 14:53:35 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
"msg_contents": "Hi James,\n\nI think it'd be much easier to help you with investigating this issue if\nyou shared the actual queries, and the full EXPLAIN ANALYZE output both\nwith and without disabled hashjoin. Or even better, share a script that\nreproduces the issue (creates tables, loads data, runs the queries).\n\nBTW you suggested each partition has ~250k rows, but the explain plan\nsnippet you shared does not seem to be consistent with that - it only\nshows 2500-5000 rows per partition. If you run ANALYZE on the table,\ndoes that change the plan?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Jul 2024 19:40:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
"msg_contents": "we have a daily vacuumdb and analyze job, generally speaking it's done\nin seconds, sometimes it suddenly running more than tens of minutes with\nsame bind variable values and huge temp space got used and at that time,\nexplain show \"Hash Anti join, Hash Right join\" with seq scan two tables.\n\nTomas Vondra <[email protected]> 於 2024年7月4日週四 上午1:40寫道:\n\n> Hi James,\n>\n> I think it'd be much easier to help you with investigating this issue if\n> you shared the actual queries, and the full EXPLAIN ANALYZE output both\n> with and without disabled hashjoin. Or even better, share a script that\n> reproduces the issue (creates tables, loads data, runs the queries).\n>\n> BTW you suggested each partition has ~250k rows, but the explain plan\n> snippet you shared does not seem to be consistent with that - it only\n> shows 2500-5000 rows per partition. If you run ANALYZE on the table,\n> does that change the plan?\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Fri, 5 Jul 2024 08:50:18 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
"msg_contents": "On Fri, 5 Jul 2024 at 12:50, James Pang <[email protected]> wrote:\n> we have a daily vacuumdb and analyze job, generally speaking it's done in seconds, sometimes it suddenly running more than tens of minutes with same bind variable values and huge temp space got used and at that time, explain show \"Hash Anti join, Hash Right join\" with seq scan two tables.\n\nThere was talk about adding costing for run-time partition pruning\nfactors but nothing was ever agreed, so nothing was done. It's just\nnot that obvious to me how we'd do that. If the Append had 10\npartitions as subnodes, with an equality join condition, you could\nassume we'll only match to 1 of those 10, but we've no idea at plan\ntime which one that'll be and the partitions might drastically vary in\nsize. The best I think we could do is take the total cost of those 10\nand divide by 10 to get the average cost. It's much harder for range\nconditions as those could match anything from 0 to all partitions. The\nbest suggestion I saw for that was to multiply the costs by\nDEFAULT_INEQ_SEL.\n\nI think for now, you might want to lower the random_page_cost or\nincrease effective_cache_size to encourage the nested loop -> index\nscan plan. Good ranges for effective_cache_size is anywhere between 50\n- 75% of your servers's RAM. However, that might not be ideal if your\nserver is under memory pressure from other running processes. It also\ndepends on how large shared_buffers are as a percentage of total RAM.\n\nDavid\n\n\n",
"msg_date": "Fri, 5 Jul 2024 14:15:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Right join and seq scan"
},
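David's suggestions above can be trialled per-session before touching postgresql.conf. The following is a sketch; the values are illustrative only and should be adapted to the machine, and the placeholder query must be replaced with the real one:

```sql
-- Try planner-cost changes in the current session only, then re-check the plan.
SET random_page_cost = 1.1;          -- default 4.0; lower values favour index scans
SET effective_cache_size = '48GB';   -- roughly 50-75% of RAM, per the advice above
EXPLAIN (ANALYZE, BUFFERS) SELECT 1; -- substitute the problem query here

-- Last-resort workaround, scoped to a single transaction:
BEGIN;
SET LOCAL enable_hashjoin = off;     -- steers the planner to the nested loop plan
-- run the query here
COMMIT;
```

`SET` affects only the current session and `SET LOCAL` only the current transaction, so the experiment cannot disturb other connections; if the nested-loop plan wins, the cost settings can then be promoted to the configuration file.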
{
        "msg_contents": "David Rowley <[email protected]> 於 2024年7月5日週五 上午10:15寫道:\n\n> On Fri, 5 Jul 2024 at 12:50, James Pang <[email protected]> wrote:\n> >      we have a daily vacuumdb and analyze job, generally speaking it's\n> done in seconds, sometimes it suddenly running more than tens of minutes\n> with same bind variable values and huge temp space got used and at that\n> time, explain show \"Hash Anti join, Hash Right join\" with seq scan two\n> tables.\n>\n> There was talk about adding costing for run-time partition pruning\n> factors but nothing was ever agreed, so nothing was done. It's just\n> not that obvious to me how we'd do that. If the Append had 10\n> partitions as subnodes, with an equality join condition, you could\n> assume we'll only match to 1 of those 10, but we've no idea at plan\n> time which one that'll be and the partitions might drastically vary in\n> size. The best I think we could do is take the total cost of those 10\n> and divide by 10 to get the average cost. It's much harder for range\n> conditions as those could match anything from 0 to all partitions. The\n> best suggestion I saw for that was to multiply the costs by\n> DEFAULT_INEQ_SEL.\n>\n> I think for now, you might want to lower the random_page_cost or\n> increase effective_cache_size to encourage the nested loop -> index\n> scan plan. Good ranges for effective_cache_size is anywhere between 50\n> - 75% of your servers's RAM. However, that might not be ideal if your\n> server is under memory pressure from other running processes. It also\n> depends on how large shared_buffers are as a percentage of total RAM.\n>\n> David\n>\n\n   We already random_page_cost=1.1 and effective_cache_size=75% physical\nmemory in this database server.  For this SQL,\n     ->  Nested Loop Anti Join  (cost=40.32..132168227.57 rows=224338 width=78)\n           Join Filter: (lower((p.ctinfo)::text) = lower((w.ctinfo)::text))\n           ->  Nested Loop Left Join  (cost=39.63..398917.29 rows=299118 width=78)\n                 ->  Append  (cost=0.56..22.36 rows=8 width=54)\n                       ->  Index Scan using wmdata_p0_llid_hhid_stime_idx on wmdata_p0 m_1  (cost=0.56..2.79 rows=1 width=54)\n                 ....\n                 ->  Append  (cost=39.07..49312.09 rows=54978 width=78)\n                       ->  Bitmap Heap Scan on wmvtee_p0 w.1  (cost=39.07..1491.06 rows=1669 width=78)\n                             Recheck Cond: ((m.partitionkeyid)::text = (partitionkeyid)::text)\n                             ->  Bitmap Index Scan on wmvtee_p0_partitionkeyid_intid_idx  (cost=0.00..38.65 rows=1669 width=0)\n                                   Index Cond: ((partitionkeyid)::text = (m.partitionkeyid)::text)\n                 ...\n           ->  Append  (cost=0.69..516.96 rows=4010 width=78)\n                 ->  Index Only Scan using wmpct_p0_partitionkeyid_ctinfo_idx on wmpct_p0 p_1  (cost=0.69..15.78 rows=124 width=78)\n                 ...\n\n      for nest loop path, since the first one estimated only \"8\" rows\n, and they use partitionkeyid as joinkey and all are hash partitions , is\nit better to estimate cost to  8 (loop times) * 1600 = 12800 (each one\nloop map to only 1 hash partition bitmap scan ,avg one partition cost),\nthat's much less than 398917.29 of all partitions ?  for secondary Nest\nLoop Anti join could be rows 299118 rows * 15.78(avg index scan cost of\none partition) = 4,720,082 that still much less than 132168227.57 ?\n     for Hash Right join, is it possible to estimate by 8 seq\npartition scan instead of all 32 hash partitions since the first query\nestimated 8 rows only ?\n     extend statistics may help estimate count(partitionkeyid) based on\nother columns bind variables, but looks like that did not help table join\ncase.\n\nThanks,\n\nJames",
"msg_date": "Fri, 5 Jul 2024 22:43:06 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
        "msg_contents": "On Sat, 6 Jul 2024 at 02:43, James Pang <[email protected]> wrote:\n>       for nest loop path, since the first one estimated only \"8\" rows , and they use partitionkeyid as joinkey and all are hash partitions , is it better to estimate cost to  8 (loop times) * 1600 = 12800 (each one loop map to only 1 hash partition bitmap scan ,avg one partition cost), that's much less than 398917.29 of all partitions ?\n\nI'm not really sure where you're getting the numbers from here. The\nouter side of the deepest nested loop has an 8 row estimate, not the\nnested loop itself. I'm unsure where the 1600 is from. I only see\n1669.\n\nAs of now, we don't do a great job of costing for partition pruning\nthat will happen during execution. We won't be inventing anything to\nfix that in existing releases of PostgreSQL, so you'll need to either\nadjust the code yourself, or find a workaround.\n\nYou've not shown us your schema, but perhaps enable_partitionwise_join\n= on might help you. Other things that might help are further lowering\nrandom_page_cost or raising effective_cache_size artificially high.\nIt's hard to tell from here how much random I/O is being costed into\nthe index scans. You could determine this by checking if the nested\nloop plan costs change as a result of doing further increases to\neffective_cache_size. You could maybe nudge it up enough for it to win\nover the hash join plan. It is possible that this won't work, however.\n\n>       for secondary Nest Loop Anti join could be rows 299118 rows * 15.78(avg index scan cost of one partition) = 4,720,082 that still much less than 132168227.57 ?\n>      for Hash Right join, is it possible to estimate by 8 seq partition scan instead of all 32 hash partitions since the first query estimated 8 rows only ?\n>      extend statistics may help estimate count(partitionkeyid) based on other columns bind variables, but looks like that did not help table join case.\n\nI can't quite follow this. You'll need to better explain where you're\ngetting these numbers for me to be able to understand.\n\nDavid\n\n\n",
"msg_date": "Sat, 6 Jul 2024 12:32:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
"msg_contents": "Sorry for confusion, it's from attached explain output of the SQL.\nplease check attached. my questions is : for nestloop of two partition\ntables , they use same partition key and equal join on partition key, the\ncost could be \"outer tables estimated rows\" * (average index scan of only\none partition of inner table) , instead of \"outer tables estimated rows\"\n* (index scans of all partitions), is it possible ? or it's still need\nrunning time partition pruning enhancement?\n random_page_cost = 1.1, seq_page_cost=1.0,\neffective_cache_size=0.75*physical memory size. set random_page_cost=0.9\nmake optimizer to choose index scan instead of seq scan.\n\nThanks,\n\nJames\n\nDavid Rowley <[email protected]> 於 2024年7月6日週六 上午8:33寫道:\n\n> On Sat, 6 Jul 2024 at 02:43, James Pang <[email protected]> wrote:\n> > for nest loop path, since the first one estimated only \"8\"\n> rows , and they use partitionkeyid as joinkey and all are hash partitions ,\n> is it better to estimate cost to 8 (loop times) * 1600 = 12800 (each one\n> loop map to only 1 hash partition bitmap scan ,avg one partition cost),\n> that's much less than 398917.29 of all partitions ?\n>\n> I'm not really sure where you're getting the numbers from here. The\n> outer side of the deepest nested loop has an 8 row estimate, not the\n> nested loop itself. I'm unsure where the 1600 is from. I only see\n> 1669.\n>\n> As of now, we don't do a great job of costing for partition pruning\n> that will happen during execution. We won't be inventing anything to\n> fix that in existing releases of PostgreSQL, so you'll need to either\n> adjust the code yourself, or find a workaround.\n>\n> You've not shown us your schema, but perhaps enable_partitionwise_join\n> = on might help you. Other things that might help are further lowering\n> random_page_cost or raising effective_cache_size artificially high.\n> It's hard to tell from here how much random I/O is being costed into\n> the index scans. 
You could determine this by checking if the nested\n> loop plan costs change as a result of doing further increases to\n> effective_cache_size. You could maybe nudge it up enough for it to win\n> over the hash join plan. It is possible that this won't work, however.\n>\n> > for secondary Nest Loop Anti join could be rows 299118 rows *\n> 15.78(avg index scan cost of one partition) = 4,720,082 that still much\n> less than 132168227.57 ?\n> > for Hash Right join, is it possible to estimate by 8 seq\n> partition scan instead of all 32 hash partitions since the first query\n> estimated 8 rows only ?\n> > extend statistics may help estimate count(partitionkeyid) based\n> on other columns bind variables, but looks like that did not help table\n> join case.\n>\n> I can't quite follow this. You'll need to better explain where you're\n> getting these numbers for me to be able to understand.\n>\n> David\n>",
"msg_date": "Mon, 8 Jul 2024 10:21:58 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Right join and seq scan"
},
{
"msg_contents": "Is the query fast with some bind parameters but slow with others?\n\nIf so, it'd be better to show an explain with 'fast' and 'slow' bind\nparams, rather than the same bind params with enable_*=off.\n\nOr is the change because autoanalyze runs on some table and changes the\nstatistics enough to change the plan ? Investigate by setting\nlog_autovacuum_min_duration=0 or by checking\npg_stat_all_tables.last_{auto,}{vacuum,analyze}.\n\nMaybe your llid/hhid are correlated, and you should CREATE STATISTICS.\n\nOr maybe the answer will be to increase the stats target.\n\n\n",
"msg_date": "Sun, 7 Jul 2024 21:26:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Right join and seq scan"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nI have a database of sufficient size that it does not fit entirely in RAM, including indexes that also exceed RAM capacity. Most of my big fact tables are following the same scheme :\n\n * 3-5 id columns.\n * Partitioned along one ID (ID_JOUR).\n * 10 - 100 Go partitions.\n * 1-5 Go primary key indexes, for each partition\n * Contains 10 - 120 columns, some of them with a lot of NaNs. All of them are int or float8. Some columns are containing a lot of NaNs, because they were not all created and added to daily processing at the same date\n * Requesting a small subset (100k-1M) lines from this database, always filtering on the primary key (I have a WHERE ... filtering on each ID column. Some are done through dimension tables + join, but I tried doing those directly, it did not solve my problem).\n * See in the annex a DDL code for one of those tables.\n * See the annex for the size of my different tables.\n * Stable data (I am focusing on past data, rare to no insertions, VACUUM + CLUSTER done after those rare modifications).\n * Played REINDEX, VACUUM, CLUSTER, ANALYZE on those tables.\n\nWhen performing queries, I observe significant differences in processing time depending on whether the index needs to be read from disk or is already loaded in RAM. I think I have confirmed using EXPLAIN ANALYZE that the issue stems from index scans. See for example :\n\n\n * https://explain.dalibo.com/plan/2c85077gagh98a17: very slow because some part of the index is \"read\" (from disk) and not \"hit\".\n * https://explain.dalibo.com/plan/gfd20f8cadaa5261#plan/node/8 : 1M lines \"instantaneously\" (2 sec) retrieved, when the index is in RAM.\n * https://explain.dalibo.com/plan/1b394hc5a26cf747 where I added set track_io_timing=TRUE.\n\nI measured the speed of loading my index into RAM during a query, which is approximately 2 to 3 MB/s. 
However, my infrastructure theoretically supports an I/O speed of around 900 MB/s.\n\nOn some older partitions, I was sometimes able to get better throughput (see e.g. https://explain.dalibo.com/plan/4db409d1d6d95d4b).\n\nI do not understand why reading my index from disk is so slow. I suspect that the index is not read sequentially, but I do not know how the postgresql internals really behave, so this is just a supposition.\n\nMy question is: what can I change to get a better index reading speed?\n\nWhat I already tried:\n\n * Setting random_page_cost to a prohibitive value (10000000) to force a bitmap heap scan, because those can be done in parallel. This has not worked; the optimizer still does an index scan on my fact table.\n * Changing effective_io_concurrency, max_parallel_workers_per_gather, work_mem to much higher values.\n\nThank you in advance for your help, any idea/advice greatly appreciated!\n\nSimon F.\n\nANNEX:\n\nSome more details about my environment:\n\n * I am working on Azure. 
My hardware are a E16s_v3 (see https://learn.microsoft.com/en-us/azure/virtual-machines/ev3-esv3-series) and P80 disks (https://azure.microsoft.com/en-us/pricing/details/managed-disks/)\n * I was unable to run the hardware speed test, because I do not have sudo rights, but through VACUUM execution, the\n * Postgresql version : PostgreSQL 12.15 (Ubuntu 12.15-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n * OS version :\n\n==> /etc/lsb-release <==\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=20.04\nDISTRIB_CODENAME=focal\nDISTRIB_DESCRIPTION=\"Ubuntu 20.04 LTS\"\n\n==> /etc/os-release <==\nID=ubuntu\nID_LIKE=debian\nPRETTY_NAME=\"Ubuntu 20.04 LTS\"\nVERSION_ID=\"20.04\"\nHOME_URL=https://www.ubuntu.com/\nSUPPORT_URL=https://help.ubuntu.com/\nBUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/\nPRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\nVERSION_CODENAME=focal\nUBUNTU_CODENAME=focal\n\n==> /etc/SNCF_release <==\nHOSTNAME : uzcordbr05\nOS_NAME : UBUNTU 20.04\nOS_DESCRIPTION : Ubuntu 20.04 LTS\nOS_RELEASE : 20.04\nOS_CODENAME : focal\nCMDB_ENV : Recette\nAENV : hprod\nREPO_IT : https://repos.it.sncf.fr/repos/os/ubuntu/bugfix/dists/focal-bugfix/Release\n\n\n * Default requests settings :\n * effective_cache_size = '96GB',\n * effective_io_concurrency = '200',\n * max_parallel_workers_per_gather = '4',\n * random_page_cost = '1.1',\n * search_path = 'public',\n * work_mem = '64MB' --> I tried to change work_mem to 4GB, did not change anything.\n\n\n * Postgres custom configuration settings\n\nname |current_setting |source |\n--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+\napplication_name |DBeaver 21.2.5 - SQLEditor <Script-19.sql> |session |\narchive_command |(disabled) |configuration file |\narchive_mode |off |configuration file 
|\narchive_timeout |2h |configuration file |\nautovacuum_analyze_scale_factor |0.05 |configuration file |\nautovacuum_analyze_threshold |50 |configuration file |\nautovacuum_max_workers |6 |configuration file |\nautovacuum_naptime |15s |configuration file |\nautovacuum_vacuum_cost_delay |10ms |configuration file |\nautovacuum_vacuum_cost_limit |-1 |configuration file |\nautovacuum_vacuum_scale_factor |0.01 |configuration file |\nautovacuum_vacuum_threshold |50 |configuration file |\nautovacuum_work_mem |512MB |configuration file |\ncheckpoint_completion_target |0.9 |configuration file |\nclient_encoding |UTF8 |client |\ncluster_name |irdbr010 |configuration file |\nDateStyle |ISO, DMY |client |\ndefault_text_search_config |pg_catalog.french |configuration file |\neffective_cache_size |96GB |configuration file |\neffective_io_concurrency |200 |database |\nextra_float_digits |3 |session |\nlc_messages |C |configuration file |\nlc_monetary |fr_FR.UTF8 |configuration file |\nlc_numeric |fr_FR.UTF8 |configuration file |\nlc_time |fr_FR.UTF8 |configuration file |\nlisten_addresses |* |configuration file |\nlog_autovacuum_min_duration |0 |configuration file |\nlog_checkpoints |on |configuration file |\nlog_connections |on |configuration file |\nlog_disconnections |off |configuration file |\nlog_file_mode |0640 |configuration file |\nlog_line_prefix |%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h |configuration file |\nlog_lock_waits |on |configuration file |\nlog_min_duration_statement |5s |configuration file |\nlog_min_error_statement |warning |configuration file |\nlog_statement |ddl |configuration file |\nlog_temp_files |0 |configuration file |\nlog_timezone |Europe/Paris |configuration file |\nlogging_collector |on |configuration file |\nmaintenance_work_mem |4GB |configuration file |\nmax_connections |500 |configuration file |\nmax_locks_per_transaction |1024 |configuration file |\nmax_parallel_workers_per_gather |4 |configuration file |\nmax_stack_depth |2MB 
|environment variable|\nmax_wal_size |4GB |configuration file |\nmin_wal_size |128MB |configuration file |\npassword_encryption |scram-sha-256 |configuration file |\npg_stat_statements.max |15000 |configuration file |\npg_stat_statements.save |off |configuration file |\npg_stat_statements.track |all |configuration file |\npg_stat_statements.track_utility|off |configuration file |\nport |5433 |configuration file |\nrandom_page_cost |1.1 |database |\nrestore_command |/home/postgres/admin/bin/pgbackrest --config=/etc/pgbackrest.conf --pg1-path=/home/postgres/data/irdbr010/systeme --stanza=rdb_backup archive-get %f \"%p\"|configuration file |\nsearch_path |public, public, temporaire, dtm_2019 |session |\nshared_buffers |32GB |configuration file |\nssl |on |configuration file |\nstatement_timeout |0 |user |\ntcp_keepalives_count |10 |configuration file |\ntcp_keepalives_idle |900 |configuration file |\ntcp_keepalives_interval |75 |configuration file |\nTimeZone |Europe/Paris |client |\nunix_socket_group |postgres |configuration file |\nunix_socket_permissions |0700 |configuration file |\nwal_buffers |16MB |configuration file |\nwork_mem |64MB |configuration file |\n\n\nDDL for my one of my table :\n\n\n-- public.\"F_TDOJ_HIST_1\" definition\n\n-- Drop table\n\n-- DROP TABLE public.\"F_TDOJ_HIST_1\";\n\nCREATE TABLE public.\"F_TDOJ_HIST_1\" (\n \"ID_TRAIN\" int4 NOT NULL,\n \"ID_JOUR\" int4 NOT NULL,\n \"ID_OD\" int4 NOT NULL,\n \"JX\" int4 NOT NULL,\n \"RES\" int4 NULL,\n \"REV\" float8 NULL,\n \"OFFRE\" int4 NULL,\n \"CC_OUV\" int4 NULL,\n \"GENV_NUM\" int8 NULL,\n \"GENV_DEN\" int8 NULL,\n \"GENR_NUM\" int8 NULL,\n \"GENR_DEN\" int8 NULL,\n \"GENH_NUM\" int8 NULL,\n \"GENH_DEN\" int8 NULL,\n \"RES_CC0\" int4 NULL,\n \"RES_CC1\" int4 NULL,\n \"RES_CC2\" int4 NULL,\n \"RES_CC3\" int4 NULL,\n \"RES_CC4\" int4 NULL,\n \"RES_CC5\" int4 NULL,\n \"RES_CC6\" int4 NULL,\n \"RES_CC7\" int4 NULL,\n \"RES_CC8\" int4 NULL,\n \"RES_CC9\" int4 NULL,\n \"RES_CC10\" int4 NULL,\n 
\"RES_CC11\" int4 NULL,\n \"RES_CC12\" int4 NULL,\n \"RES_CC13\" int4 NULL,\n \"RES_CC14\" int4 NULL,\n \"RES_CC15\" int4 NULL,\n \"RES_CC16\" int4 NULL,\n \"RES_CC17\" int4 NULL,\n \"RES_CC18\" int4 NULL,\n \"RES_CC19\" int4 NULL,\n \"RES_CC20\" int4 NULL,\n \"AUT_CC0\" int4 NULL,\n \"AUT_CC1\" int4 NULL,\n \"AUT_CC2\" int4 NULL,\n \"AUT_CC3\" int4 NULL,\n \"AUT_CC4\" int4 NULL,\n \"AUT_CC5\" int4 NULL,\n \"AUT_CC6\" int4 NULL,\n \"AUT_CC7\" int4 NULL,\n \"AUT_CC8\" int4 NULL,\n \"AUT_CC9\" int4 NULL,\n \"AUT_CC10\" int4 NULL,\n \"AUT_CC11\" int4 NULL,\n \"AUT_CC12\" int4 NULL,\n \"AUT_CC13\" int4 NULL,\n \"AUT_CC14\" int4 NULL,\n \"AUT_CC15\" int4 NULL,\n \"AUT_CC16\" int4 NULL,\n \"AUT_CC17\" int4 NULL,\n \"AUT_CC18\" int4 NULL,\n \"AUT_CC19\" int4 NULL,\n \"AUT_CC20\" int4 NULL,\n \"DSP_CC0\" int4 NULL,\n \"DSP_CC1\" int4 NULL,\n \"DSP_CC2\" int4 NULL,\n \"DSP_CC3\" int4 NULL,\n \"DSP_CC4\" int4 NULL,\n \"DSP_CC5\" int4 NULL,\n \"DSP_CC6\" int4 NULL,\n \"DSP_CC7\" int4 NULL,\n \"DSP_CC8\" int4 NULL,\n \"DSP_CC9\" int4 NULL,\n \"DSP_CC10\" int4 NULL,\n \"DSP_CC11\" int4 NULL,\n \"DSP_CC12\" int4 NULL,\n \"DSP_CC13\" int4 NULL,\n \"DSP_CC14\" int4 NULL,\n \"DSP_CC15\" int4 NULL,\n \"DSP_CC16\" int4 NULL,\n \"DSP_CC17\" int4 NULL,\n \"DSP_CC18\" int4 NULL,\n \"DSP_CC19\" int4 NULL,\n \"DSP_CC20\" int4 NULL,\n \"REV_CC0\" float8 NULL,\n \"REV_CC1\" float8 NULL,\n \"REV_CC2\" float8 NULL,\n \"REV_CC3\" float8 NULL,\n \"REV_CC4\" float8 NULL,\n \"REV_CC5\" float8 NULL,\n \"REV_CC6\" float8 NULL,\n \"REV_CC7\" float8 NULL,\n \"REV_CC8\" float8 NULL,\n \"REV_CC9\" float8 NULL,\n \"REV_CC10\" float8 NULL,\n \"REV_CC11\" float8 NULL,\n \"REV_CC12\" float8 NULL,\n \"REV_CC13\" float8 NULL,\n \"REV_CC14\" float8 NULL,\n \"REV_CC15\" float8 NULL,\n \"REV_CC16\" float8 NULL,\n \"REV_CC17\" float8 NULL,\n \"REV_CC18\" float8 NULL,\n \"REV_CC19\" float8 NULL,\n \"REV_CC20\" float8 NULL,\n \"RES_CHD\" int4 NULL,\n \"REV_CHD\" float8 NULL,\n \"RES_PRO\" int4 NULL,\n \"REV_PRO\" 
float8 NULL,\n \"RES_SOC\" int4 NULL,\n \"REV_SOC\" float8 NULL,\n \"RES_TMX\" int4 NULL,\n \"REV_TMX\" float8 NULL,\n \"RES_MAX\" int4 NULL,\n \"RES_GRP\" int4 NULL,\n \"REV_GRP\" float8 NULL,\n \"PREV_RES\" int4 NULL,\n \"PREV_REV\" float8 NULL,\n \"OPTIM_RES\" int4 NULL,\n \"OPTIM_REV\" float8 NULL,\n \"RES_FFX\" int4 NULL,\n \"REV_FFX\" float8 NULL,\n \"RES_SFE\" int4 NULL,\n \"REV_SFE\" float8 NULL,\n \"RES_SFN\" int4 NULL,\n \"REV_SFN\" float8 NULL,\n \"RES_NFE\" int4 NULL,\n \"REV_NFE\" float8 NULL,\n \"RES_NFN\" int4 NULL,\n \"REV_NFN\" float8 NULL,\n \"RES_ABO\" int4 NULL,\n \"REV_ABO\" float8 NULL,\n \"RES_AGN\" int4 NULL,\n \"REV_AGN\" float8 NULL,\n \"RES_BPR\" int4 NULL,\n \"REV_BPR\" float8 NULL,\n \"RES_LIB\" int4 NULL,\n \"REV_LIB\" float8 NULL,\n \"RES_FFN\" int4 NULL,\n \"REV_FFN\" float8 NULL,\n \"RES_PRI\" int4 NULL,\n \"REV_PRI\" float8 NULL,\n CONSTRAINT \"F_TDOJ_HIST_1_OLDP_pkey\" PRIMARY KEY (\"ID_TRAIN\", \"ID_JOUR\", \"ID_OD\", \"JX\")\n)\nPARTITION BY RANGE (\"ID_JOUR\");\nCREATE INDEX \"F_TDOJ_HIST_1_OLDP_ID_JOUR_JX_idx\" ON ONLY public.\"F_TDOJ_HIST_1\" USING btree (\"ID_JOUR\", \"JX\");\nCREATE INDEX \"F_TDOJ_HIST_1_OLDP_ID_JOUR_idx\" ON ONLY public.\"F_TDOJ_HIST_1\" USING btree (\"ID_JOUR\");\nCREATE INDEX \"F_TDOJ_HIST_1_OLDP_ID_OD_idx\" ON ONLY public.\"F_TDOJ_HIST_1\" USING btree (\"ID_OD\");\nCREATE INDEX \"F_TDOJ_HIST_1_OLDP_ID_TRAIN_idx\" ON ONLY public.\"F_TDOJ_HIST_1\" USING btree (\"ID_TRAIN\");\nCREATE INDEX \"F_TDOJ_HIST_1_OLDP_JX_idx\" ON ONLY public.\"F_TDOJ_HIST_1\" USING btree (\"JX\");\n\n\n-- public.\"F_TDOJ_HIST_1\" foreign keys\n\nALTER TABLE public.\"F_TDOJ_HIST_1\" ADD CONSTRAINT \"F_TDOJ_HIST_1_OLDP_ID_JOUR_fkey\" FOREIGN KEY (\"ID_JOUR\") REFERENCES public.\"D_JOUR\"(\"ID_JOUR\");\nALTER TABLE public.\"F_TDOJ_HIST_1\" ADD CONSTRAINT \"F_TDOJ_HIST_1_OLDP_ID_OD_fkey\" FOREIGN KEY (\"ID_OD\") REFERENCES public.\"D_OD\"(\"ID_OD\");\nALTER TABLE public.\"F_TDOJ_HIST_1\" ADD CONSTRAINT 
\"F_TDOJ_HIST_1_OLDP_ID_TRAIN_fkey\" FOREIGN KEY (\"ID_TRAIN\") REFERENCES public.\"D_TRAIN\"(\"ID_TRAIN\");\nALTER TABLE public.\"F_TDOJ_HIST_1\" ADD CONSTRAINT \"F_TDOJ_HIST_1_OLDP_JX_pkey\" FOREIGN KEY (\"JX\") REFERENCES public.\"D_JX\"(\"JX\");\n\n\n\nResult of :\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname in (\n 'F_TDLJ_HIST_1', 'F_TDLJ_HIST_2', 'F_TDLJ_HIST', 'F_TDOJ_HIST_1', 'F_TDOJ_HIST_2', 'F_TDOJ_HIST'\n);\n\n\nrelpages|reltuples|relname |relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n--------+---------+-------------+-------------+-------+--------+--------------+----------+-------------+\n 0| 0.0|F_TDLJ_HIST | 0|p | 47|true |NULL | 0|\n 95806| 442969.0|F_TDLJ_HIST | 95806|r | 47|false |NULL | 785080320|\n 0| 0.0|F_TDLJ_HIST_1| 0|p | 129|true |NULL | 0|\n 197458| 730226.0|F_TDLJ_HIST_1| 157954|r | 129|false |NULL | 1618059264|\n 0| 0.0|F_TDLJ_HIST_2| 0|p | 159|true |NULL | 0|\n 278359| 441524.0|F_TDLJ_HIST_2| 278359|r | 159|false |NULL | 2280972288|\n 0| 0.0|F_TDOJ_HIST | 0|p | 104|true |NULL | 0|\n 311913|1424975.0|F_TDOJ_HIST | 311913|r | 56|false |NULL | 2555928576|\n 0| 0.0|F_TDOJ_HIST_1| 0|p | 135|true |NULL | 0|\n 682522|1241940.0|F_TDOJ_HIST_1| 682522|r | 135|false |NULL | 5592793088|\n 0| 0.0|F_TDOJ_HIST_2| 0|p | 163|true |NULL | 0|\n 661324|1397598.0|F_TDOJ_HIST_2| 661324|r | 163|false |NULL | 5419098112|\n\n\nInterne\n\n-------\nCe message et toutes les pièces jointes sont établis à l'intention exclusive de ses destinataires et sont confidentiels. L'intégrité de ce message n'étant pas assurée sur Internet, la SNCF ne peut être tenue responsable des altérations qui pourraient se produire sur son contenu. Toute publication, utilisation, reproduction, ou diffusion, même partielle, non autorisée préalablement par la SNCF, est strictement interdite. 
Si vous n'êtes pas le destinataire de ce message, merci d'en avertir immédiatement l'expéditeur et de le détruire.\n-------\nThis message and any attachments are intended solely for the addressees and are confidential. SNCF may not be held responsible for their contents whose accuracy and completeness cannot be guaranteed over the Internet. Unauthorized use, disclosure, distribution, copying, or any part thereof is strictly prohibited. If you are not the intended recipient of this message, please notify the sender immediately and delete it.",
"msg_date": "Thu, 4 Jul 2024 13:25:44 +0000",
"msg_from": "\"FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP\n YIELD MANAGEMENT)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "On 4/7/2024 20:25, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE \nTGV / DM RMP YIELD MANAGEMENT) wrote:\n> *My question is : what can I change to get a better index reading speed ?*\n> \n> What I already tried :\n> \n> * Setting random_page_cost to prohibitive value (10000000) to force a\n> bitmap heap scan, because those can be made in parallel. This has\n> not worked, the optimizer is still doing an index scan on my fact table.\n> * Change effective_io_concurrency, max_parallel_workers_per_gather,\n> work_mem to much higher values.\nI'm not sure the case is only about the speed of the index scan. Just look at the \nslow Index clause:\nfth.\\\"ID_TRAIN\\\" = ANY ('{17855,13945,536795,18838,18837,13574 ...\nand many more values.\nAn IndexScan needs to make a scan for each of these values, and for each value \ngo through the pages to check the other conditions.\nWe have already discussed some optimisations related to this case in a couple of \npgsql-hackers threads. But I'm not sure we have a quick solution right now.\nIf you want to use a BitmapScan (which might be reasonable to try here), \nyou need to split the huge ANY (...) clause into a sequence of ORs.\nAlso, maybe parallel append could help here, if you can change the \ncorresponding startup and tuple costs to force such a plan.\n\n-- \nregards, Andrei Lepikhov\n",
"msg_date": "Thu, 4 Jul 2024 21:36:59 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "Hello,\n\nThank you, splitting the query with “OR” definitely enables bitmap heap scans, and thus parallelized reads from disk 😃! I did not, though, understand your second point: what is parallel append, and how to enable it?\n\nSimon F.\n\nFrom: Andrei Lepikhov <[email protected]>\nSent: Thursday 4 July 2024 16:37\nTo: FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]>; [email protected]; Peter Geoghegan <[email protected]>\nSubject: Re: How to solve my slow disk i/o throughput during index scan\n\nOn 4/7/2024 20:25, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE\nTGV / DM RMP YIELD MANAGEMENT) wrote:\n> *My question is : what can I change to get a better index reading speed ?*\n> \n> What I already tried :\n> \n> * Setting random_page_cost to prohibitive value (10000000) to force a\n> bitmap heap scan, because those can be made in parallel. This has\n> not worked, the optimizer is still doing an index scan on my fact table.\n> * Change effective_io_concurrency, max_parallel_workers_per_gather,\n> work_mem to much higher values.\nI'm not sure the case is only about speed of index scan. Just see into\nslow Index clause:\nfth.\\\"ID_TRAIN\\\" = ANY ('{17855,13945,536795,18838,18837,13574 ...\nand many more values.\nIndexScan need to make scan for each of these values and for each value\ngo through the pages to check other conditions.\nWe already discuss some optimisations related to this case in couple of\npgsql-hackers threads. But I'm not sure we have quick solution right now.\nIf you want to use BitmapScan (that might be reasonable to try here) -\nyou need to split huge ANY (...) clause into sequence of ORs.\nAlso, may be parallel append could help here? if can change\ncorresponding startup and tuple costs to force such a plan.\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 4 Jul 2024 15:23:23 +0000",
"msg_from": "\"FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP\n YIELD MANAGEMENT)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "On 7/4/24 22:23, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE \nTGV / DM RMP YIELD MANAGEMENT) wrote:\n> Hello,\n> \n> Thank you, splitting in “OR” query definitely enables bitmap heap scans, \n> and thus parallelized read to disk 😃! I though did not understand your \n> second point, what is parallel append, and how to enable it ?\nJust for example:\n\nDROP TABLE IF EXISTS t CASCADE;\nCREATE TABLE t (id int not null, payload text) PARTITION BY RANGE (id);\nCREATE TABLE p1 PARTITION OF t FOR VALUES FROM (0) TO (1000);\nCREATE TABLE p2 PARTITION OF t FOR VALUES FROM (1000) TO (2000);\nCREATE TABLE p3 PARTITION OF t FOR VALUES FROM (2000) TO (3000);\nCREATE TABLE p4 PARTITION OF t FOR VALUES FROM (3000) TO (4000);\nINSERT INTO t SELECT x % 4000, repeat('a',128) || x FROM \ngenerate_series(1,1E5) AS x;\nANALYZE t;\n\nSET enable_parallel_append = on;\nSET parallel_setup_cost = 0.00001;\nSET parallel_tuple_cost = 0.00001;\nSET max_parallel_workers_per_gather = 8;\nSET min_parallel_table_scan_size = 0;\nSET min_parallel_index_scan_size = 0;\n\nEXPLAIN (COSTS OFF)\nSELECT t.id, t.payload FROM t WHERE t.id % 2 = 0\nGROUP BY t.id, t.payload;\n\n Group\n Group Key: t.id, t.payload\n -> Gather Merge\n Workers Planned: 6\n -> Sort\n Sort Key: t.id, t.payload\n -> Parallel Append\n -> Parallel Seq Scan on p1 t_1\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p2 t_2\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p3 t_3\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p4 t_4\n Filter: ((id % 2) = 0)\n\nHere the table is scanned in parallel. It also works with IndexScan.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Fri, 5 Jul 2024 09:04:31 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "Hello, and thank you again for your example!\n\nSorry for my late answer, I was working on a patch for our requests. I am though not completely understanding what is happening. Here is the plan of a query where I split the calls with OR as you suggested, which seems to have enabled parallel scans.\n\nhttps://explain.dalibo.com/plan/gfa1cf9fffd01bcg#plan/node/1\n\nBut I still wonder: why was my request that slow? My current understanding of what happened is:\n\n * When postgresql does an Index Scan, it goes through a loop (which is not parallel) of asking for a chunk of data, and then processing it. It waits until it has processed the data before asking for the next chunk, instead of loading the whole index in RAM (which, I suppose, would be much faster, but also not feasible if the index is too big and the RAM too small, so postgresql does not do it). Thus, the 2MB/s.\n * When it does a Bitmap Index Scan, it can parallelize disk interactions, and does not use the processor to discard lines, thus a much faster index load and processing.\n\nI might be completely wrong, and would really like to understand the details, in order to explain them to my team, and to others who might encounter the same problem.\n\nAgain, thank you very much for your help, we were really struggling with those slow queries!\n\nSimon FREYBURGER\n\nFrom: Andrei Lepikhov <[email protected]>\nSent: Friday 5 July 2024 04:05\nTo: FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]>; [email protected]; Peter Geoghegan <[email protected]>\nObjet: Re: How to solve my slow disk i/o throughput during index scan\n\nOn 7/4/24 22:23, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE\nTGV / DM RMP YIELD MANAGEMENT) wrote:\n> Hello,\n> \n> Thank you, splitting in “OR” query definitely enables bitmap heap scans,\n> and thus parallelized read to disk 😃! I though did not understand your\n> second point, what is parallel append, and how to enable it ?\nJust for example:\n\nDROP TABLE IF EXISTS t CASCADE;\nCREATE TABLE t (id int not null, payload text) PARTITION BY RANGE (id);\nCREATE TABLE p1 PARTITION OF t FOR VALUES FROM (0) TO (1000);\nCREATE TABLE p2 PARTITION OF t FOR VALUES FROM (1000) TO (2000);\nCREATE TABLE p3 PARTITION OF t FOR VALUES FROM (2000) TO (3000);\nCREATE TABLE p4 PARTITION OF t FOR VALUES FROM (3000) TO (4000);\nINSERT INTO t SELECT x % 4000, repeat('a',128) || x FROM\ngenerate_series(1,1E5) AS x;\nANALYZE t;\n\nSET enable_parallel_append = on;\nSET parallel_setup_cost = 0.00001;\nSET parallel_tuple_cost = 0.00001;\nSET max_parallel_workers_per_gather = 8;\nSET min_parallel_table_scan_size = 0;\nSET min_parallel_index_scan_size = 0;\n\nEXPLAIN (COSTS OFF)\nSELECT t.id, t.payload FROM t WHERE t.id % 2 = 0\nGROUP BY t.id, t.payload;\n\n Group\n Group Key: t.id, t.payload\n -> Gather Merge\n Workers Planned: 6\n -> Sort\n Sort Key: t.id, t.payload\n -> Parallel Append\n -> Parallel Seq Scan on p1 t_1\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p2 t_2\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p3 t_3\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p4 t_4\n Filter: ((id % 2) = 0)\n\nHere the table is scanned in parallel. It also works with IndexScan.\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 11 Jul 2024 14:59:26 +0000",
"msg_from": "\"FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP\n YIELD MANAGEMENT)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "Also, It might not be related, but I have suspiciously similar slow reads when I am inserting in database, could it be related ?\r\n\r\nSee e.g. : https://explain.dalibo.com/plan/43d37de5870e1651\r\n\r\nThe table I am inserting into looks like :\r\n\r\n-- public.\"F_TDOJ_SC_HIST_2\" definition\r\n\r\n-- Drop table\r\n\r\n-- DROP TABLE public.\"F_TDOJ_SC_HIST_2\";\r\n\r\nCREATE TABLE public.\"F_TDOJ_SC_HIST_2\" (\r\n \"ID_TRAIN\" int4 NOT NULL,\r\n \"ID_JOUR\" int4 NOT NULL,\r\n \"ID_OD\" int4 NOT NULL,\r\n \"JX\" int4 NOT NULL,\r\n \"RES\" int4 NULL,\r\n \"REV\" float4 NULL,\r\n \"RES_SC1\" int4 NULL,\r\n \"RES_SC2\" int4 NULL,\r\n \"RES_SC3\" int4 NULL,\r\n \"RES_SC4\" int4 NULL,\r\n \"RES_SC5\" int4 NULL,\r\n \"RES_SC6\" int4 NULL,\r\n \"RES_SC7\" int4 NULL,\r\n \"RES_SC8\" int4 NULL,\r\n \"RES_SC9\" int4 NULL,\r\n \"RES_SC10\" int4 NULL,\r\n \"RES_SC11\" int4 NULL,\r\n \"RES_SC12\" int4 NULL,\r\n \"RES_SC13\" int4 NULL,\r\n \"RES_SC14\" int4 NULL,\r\n \"RES_SC15\" int4 NULL,\r\n \"RES_SC16\" int4 NULL,\r\n \"RES_SC17\" int4 NULL,\r\n \"RES_SC18\" int4 NULL,\r\n \"AUT_SC1\" int4 NULL,\r\n \"AUT_SC2\" int4 NULL,\r\n \"AUT_SC3\" int4 NULL,\r\n \"AUT_SC4\" int4 NULL,\r\n \"AUT_SC5\" int4 NULL,\r\n \"AUT_SC6\" int4 NULL,\r\n \"AUT_SC7\" int4 NULL,\r\n \"AUT_SC8\" int4 NULL,\r\n \"AUT_SC9\" int4 NULL,\r\n \"AUT_SC10\" int4 NULL,\r\n \"AUT_SC11\" int4 NULL,\r\n \"AUT_SC12\" int4 NULL,\r\n \"AUT_SC13\" int4 NULL,\r\n \"AUT_SC14\" int4 NULL,\r\n \"AUT_SC15\" int4 NULL,\r\n \"AUT_SC16\" int4 NULL,\r\n \"AUT_SC17\" int4 NULL,\r\n \"AUT_SC18\" int4 NULL,\r\n \"DSP_SC1\" int4 NULL,\r\n \"DSP_SC2\" int4 NULL,\r\n \"DSP_SC3\" int4 NULL,\r\n \"DSP_SC4\" int4 NULL,\r\n \"DSP_SC5\" int4 NULL,\r\n \"DSP_SC6\" int4 NULL,\r\n \"DSP_SC7\" int4 NULL,\r\n \"DSP_SC8\" int4 NULL,\r\n \"DSP_SC9\" int4 NULL,\r\n \"DSP_SC10\" int4 NULL,\r\n \"DSP_SC11\" int4 NULL,\r\n \"DSP_SC12\" int4 NULL,\r\n \"DSP_SC13\" int4 NULL,\r\n \"DSP_SC14\" int4 NULL,\r\n \"DSP_SC15\" int4 
NULL,\r\n \"DSP_SC16\" int4 NULL,\r\n \"DSP_SC17\" int4 NULL,\r\n \"DSP_SC18\" int4 NULL,\r\n \"REV_SC1\" float4 NULL,\r\n \"REV_SC2\" float4 NULL,\r\n \"REV_SC3\" float4 NULL,\r\n \"REV_SC4\" float4 NULL,\r\n \"REV_SC5\" float4 NULL,\r\n \"REV_SC6\" float4 NULL,\r\n \"REV_SC7\" float4 NULL,\r\n \"REV_SC8\" float4 NULL,\r\n \"REV_SC9\" float4 NULL,\r\n \"REV_SC10\" float4 NULL,\r\n \"REV_SC11\" float4 NULL,\r\n \"REV_SC12\" float4 NULL,\r\n \"REV_SC13\" float4 NULL,\r\n \"REV_SC14\" float4 NULL,\r\n \"REV_SC15\" float4 NULL,\r\n \"REV_SC16\" float4 NULL,\r\n \"REV_SC17\" float4 NULL,\r\n \"REV_SC18\" float4 NULL,\r\n CONSTRAINT \"F_TDOJ_SC_HIST_2_pkey\" PRIMARY KEY (\"ID_TRAIN\",\"ID_JOUR\",\"ID_OD\",\"JX\")\r\n)\r\nPARTITION BY RANGE (\"ID_JOUR\");\r\n\r\n\r\n-- public.\"F_TDOJ_SC_HIST_2\" foreign keys\r\n\r\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_JOUR_fkey\" FOREIGN KEY (\"ID_JOUR\") REFERENCES public.\"D_JOUR\"(\"ID_JOUR\");\r\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_OD_fkey\" FOREIGN KEY (\"ID_OD\") REFERENCES public.\"D_OD\"(\"ID_OD\");\r\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_TRAIN_fkey\" FOREIGN KEY (\"ID_TRAIN\") REFERENCES public.\"D_TRAIN\"(\"ID_TRAIN\");\r\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_JX_fkey\" FOREIGN KEY (\"JX\") REFERENCES public.\"D_JX\"(\"JX\");\r\n\r\nI’m using a 3 steps process to insert my lines in the table :\r\n\r\n * COPY into a temporary table\r\n * DELETE FROM on the perimeter I will be inserting into\r\n * INSERT … INTO mytable SELECT … FROM temporarytable ON CONFLICT DO NOTHING\r\n\r\nIs it possible to parallelize the scans during the modify step ?\r\n\r\nRegards\r\n\r\nSimon FREYBURGER\r\n\r\n\r\n\r\nInterne\r\n\r\nDe : FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]>\r\nEnvoyé : jeudi 11 juillet 2024 16:59\r\nÀ : Andrei 
Lepikhov <[email protected]>; [email protected]; Peter Geoghegan <[email protected]>\r\nObjet : RE: How to solve my slow disk i/o throughput during index scan\r\n\r\n\r\nHello, and thank you again for your example !\r\n\r\nSorry for my late answer, I was working on a patch for our requests. I am though not completely understanding what is happening. Here is a plan of a query where I splitted the calls with OR as you suggested, what seemed to have enabled parallel scans.\r\n\r\nhttps://explain.dalibo.com/plan/gfa1cf9fffd01bcg#plan/node/1<https://urldefense.com/v3/__https:/explain.dalibo.com/plan/gfa1cf9fffd01bcg*plan/node/1__;Iw!!Nto2ANp9CeU!DO4mHI1UB39FE3w097CdWOwFBxRK5YZ9E7p70kmlURqazSpt9Ap12PEHG_fci2fBq3KdsjqSwYit6C9RgWm5EmWE4BuTvM66Uxw$>\r\n\r\nBut, I still wonder, why was my request that slow ? My current understanding of what happened is :\r\n\r\n\r\n * When postgresql does an Index Scan, it goes through a loop (which is not parallel) of asking for a chunk of data, and then processing it. It wait for having processed the data to ask the next chunk, instead of loading the whole index in RAM (which, I suppose, would be much faster, but also not feasible if the index is too big and the RAM too small, so postgresql does not do it). 
Thus, the 2MB/s.\r\n * When it does a Bitmap Index Scan, it can parallelize disk interactions, and does not use the processor to discard lines, thus a much faster index load and processing.\r\n\r\nI might be completely wrong, and would really like to understand the details, in order to explain them to my team, and to other who might encounter the same problem.\r\n\r\nAgain, thank you very much for your help, we were really struggling with those slow queries !\r\n\r\nSimon FREYBURGER\r\n\r\nFrom: Andrei Lepikhov <[email protected]<mailto:[email protected]>>\r\nSent: Friday 5 July 2024 04:05\r\nTo: FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>; Peter Geoghegan <[email protected]<mailto:[email protected]>>\r\nSubject: Re: How to solve my slow disk i/o throughput during index scan\r\n\r\nOn 7/4/24 22:23, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE\r\n\r\nTGV / DM RMP YIELD MANAGEMENT) wrote:\r\n\r\n> Hello,\r\n\r\n>\r\n\r\n> Thank you, splitting in “OR” query definitely enables bitmap heap scans,\r\n\r\n> and thus parallelized read to disk 😃! 
I though did not understand your\r\n\r\n> second point, what is parallel append, and how to enable it ?\r\n\r\nJust for example:\r\n\r\n\r\n\r\nDROP TABLE IF EXISTS t CASCADE;\r\n\r\nCREATE TABLE t (id int not null, payload text) PARTITION BY RANGE (id);\r\n\r\nCREATE TABLE p1 PARTITION OF t FOR VALUES FROM (0) TO (1000);\r\n\r\nCREATE TABLE p2 PARTITION OF t FOR VALUES FROM (1000) TO (2000);\r\n\r\nCREATE TABLE p3 PARTITION OF t FOR VALUES FROM (2000) TO (3000);\r\n\r\nCREATE TABLE p4 PARTITION OF t FOR VALUES FROM (3000) TO (4000);\r\n\r\nINSERT INTO t SELECT x % 4000, repeat('a',128) || x FROM\r\n\r\ngenerate_series(1,1E5) AS x;\r\n\r\nANALYZE t;\r\n\r\n\r\n\r\nSET enable_parallel_append = on;\r\n\r\nSET parallel_setup_cost = 0.00001;\r\n\r\nSET parallel_tuple_cost = 0.00001;\r\n\r\nSET max_parallel_workers_per_gather = 8;\r\n\r\nSET min_parallel_table_scan_size = 0;\r\n\r\nSET min_parallel_index_scan_size = 0;\r\n\r\n\r\n\r\nEXPLAIN (COSTS OFF)\r\n\r\nSELECT t.id, t.payload FROM t WHERE t.id % 2 = 0\r\n\r\nGROUP BY t.id, t.payload;\r\n\r\n\r\n\r\n Group\r\n\r\n Group Key: t.id, t.payload\r\n\r\n -> Gather Merge\r\n\r\n Workers Planned: 6\r\n\r\n -> Sort\r\n\r\n Sort Key: t.id, t.payload\r\n\r\n -> Parallel Append\r\n\r\n -> Parallel Seq Scan on p1 t_1\r\n\r\n Filter: ((id % 2) = 0)\r\n\r\n -> Parallel Seq Scan on p2 t_2\r\n\r\n Filter: ((id % 2) = 0)\r\n\r\n -> Parallel Seq Scan on p3 t_3\r\n\r\n Filter: ((id % 2) = 0)\r\n\r\n -> Parallel Seq Scan on p4 t_4\r\n\r\n Filter: ((id % 2) = 0)\r\n\r\n\r\n\r\nHere the table is scanned in parallel. It also works with IndexScan.\r\n\r\n\r\n\r\n--\r\n\r\nregards, Andrei Lepikhov\r\n\r\n\r\n-------\r\nCe message et toutes les pièces jointes sont établis à l'intention exclusive de ses destinataires et sont confidentiels. L'intégrité de ce message n'étant pas assurée sur Internet, la SNCF ne peut être tenue responsable des altérations qui pourraient se produire sur son contenu. 
Toute publication, utilisation, reproduction, ou diffusion, même partielle, non autorisée préalablement par la SNCF, est strictement interdite. Si vous n'êtes pas le destinataire de ce message, merci d'en avertir immédiatement l'expéditeur et de le détruire.\r\n-------\r\nThis message and any attachments are intended solely for the addressees and are confidential. SNCF may not be held responsible for their contents whose accuracy and completeness cannot be guaranteed over the Internet. Unauthorized use, disclosure, distribution, copying, or any part thereof is strictly prohibited. If you are not the intended recipient of this message, please notify the sender immediately and delete it.\r\n\n\nAlso, It might not be related, but I have suspiciously similar slow reads when I am inserting in database, could it be related ?\n\nSee e.g. : https://explain.dalibo.com/plan/43d37de5870e1651\n\nThe table I am inserting into looks like :\n\n-- public.\"F_TDOJ_SC_HIST_2\" definition\n\n-- Drop table\n-- DROP TABLE public.\"F_TDOJ_SC_HIST_2\";\n\nCREATE TABLE public.\"F_TDOJ_SC_HIST_2\" (\n  \"ID_TRAIN\" int4 NOT NULL,\n  \"ID_JOUR\" int4 NOT NULL,\n  \"ID_OD\" int4 NOT NULL,\n  \"JX\" int4 NOT NULL,\n  \"RES\" int4 NULL,\n  \"REV\" float4 NULL,
  \"RES_SC1\" int4 NULL,\n  \"RES_SC2\" int4 NULL,\n  \"RES_SC3\" int4 NULL,\n  \"RES_SC4\" int4 NULL,\n  \"RES_SC5\" int4 NULL,\n  \"RES_SC6\" int4 NULL,\n  \"RES_SC7\" int4 NULL,\n  \"RES_SC8\" int4 NULL,\n  \"RES_SC9\" int4 NULL,\n  \"RES_SC10\" int4 NULL,\n  \"RES_SC11\" int4 NULL,\n  \"RES_SC12\" int4 NULL,\n  \"RES_SC13\" int4 NULL,\n  \"RES_SC14\" int4 NULL,\n  \"RES_SC15\" int4 NULL,\n  \"RES_SC16\" int4 NULL,\n  \"RES_SC17\" int4 NULL,\n  \"RES_SC18\" int4 NULL,
  \"AUT_SC1\" int4 NULL,\n  \"AUT_SC2\" int4 NULL,\n  \"AUT_SC3\" int4 NULL,\n  \"AUT_SC4\" int4 NULL,\n  \"AUT_SC5\" int4 NULL,\n  \"AUT_SC6\" int4 NULL,\n  \"AUT_SC7\" int4 NULL,\n  \"AUT_SC8\" int4 NULL,\n  \"AUT_SC9\" int4 NULL,\n  \"AUT_SC10\" int4 NULL,\n  \"AUT_SC11\" int4 NULL,\n  \"AUT_SC12\" int4 NULL,\n  \"AUT_SC13\" int4 NULL,\n  \"AUT_SC14\" int4 NULL,\n  \"AUT_SC15\" int4 NULL,\n  \"AUT_SC16\" int4 NULL,\n  \"AUT_SC17\" int4 NULL,\n  \"AUT_SC18\" int4 NULL,
  \"DSP_SC1\" int4 NULL,\n  \"DSP_SC2\" int4 NULL,\n  \"DSP_SC3\" int4 NULL,\n  \"DSP_SC4\" int4 NULL,\n  \"DSP_SC5\" int4 NULL,\n  \"DSP_SC6\" int4 NULL,\n  \"DSP_SC7\" int4 NULL,\n  \"DSP_SC8\" int4 NULL,\n  \"DSP_SC9\" int4 NULL,\n  \"DSP_SC10\" int4 NULL,\n  \"DSP_SC11\" int4 NULL,\n  \"DSP_SC12\" int4 NULL,\n  \"DSP_SC13\" int4 NULL,\n  \"DSP_SC14\" int4 NULL,\n  \"DSP_SC15\" int4 NULL,\n  \"DSP_SC16\" int4 NULL,\n  \"DSP_SC17\" int4 NULL,\n  \"DSP_SC18\" int4 NULL,
  \"REV_SC1\" float4 NULL,\n  \"REV_SC2\" float4 NULL,\n  \"REV_SC3\" float4 NULL,\n  \"REV_SC4\" float4 NULL,\n  \"REV_SC5\" float4 NULL,\n  \"REV_SC6\" float4 NULL,\n  \"REV_SC7\" float4 NULL,\n  \"REV_SC8\" float4 NULL,\n  \"REV_SC9\" float4 NULL,\n  \"REV_SC10\" float4 NULL,\n  \"REV_SC11\" float4 NULL,\n  \"REV_SC12\" float4 NULL,\n  \"REV_SC13\" float4 NULL,\n  \"REV_SC14\" float4 NULL,\n  \"REV_SC15\" float4 NULL,\n  \"REV_SC16\" float4 NULL,\n  \"REV_SC17\" float4 NULL,\n  \"REV_SC18\" float4 NULL,\n  CONSTRAINT \"F_TDOJ_SC_HIST_2_pkey\" PRIMARY KEY (\"ID_TRAIN\",\"ID_JOUR\",\"ID_OD\",\"JX\")\n) PARTITION BY RANGE (\"ID_JOUR\");\n\n-- public.\"F_TDOJ_SC_HIST_2\" foreign keys\n\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_JOUR_fkey\" FOREIGN KEY (\"ID_JOUR\") REFERENCES public.\"D_JOUR\"(\"ID_JOUR\");\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_OD_fkey\" FOREIGN KEY (\"ID_OD\") REFERENCES public.\"D_OD\"(\"ID_OD\");\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_ID_TRAIN_fkey\" FOREIGN KEY (\"ID_TRAIN\") REFERENCES public.\"D_TRAIN\"(\"ID_TRAIN\");\nALTER TABLE public.\"F_TDOJ_SC_HIST_2\" ADD CONSTRAINT \"F_TDOJ_SC_HIST_2_JX_fkey\" FOREIGN KEY (\"JX\") REFERENCES public.\"D_JX\"(\"JX\");\n\nI’m using a 3 steps process to insert my lines in the table :\n\n- COPY into a temporary table\n- DELETE FROM on the perimeter I will be inserting into\n- INSERT … INTO mytable SELECT … FROM temporarytable ON CONFLICT DO NOTHING\n\nIs it possible to parallelize the scans during the modify step ?\n\nRegards\n\nSimon FREYBURGER\n\nInterne\n\nDe : FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]>\nEnvoyé : jeudi 11 juillet 2024 16:59\nÀ : Andrei Lepikhov <[email protected]>; [email protected]; Peter Geoghegan <[email protected]>\nObjet : RE: How to solve my slow disk i/o throughput during index scan\n\nHello, and thank you again for your example !\n\nSorry for my late answer, I was working on a patch for our requests. I am though not completely understanding what is happening. Here is a plan of a query where I splitted the calls with OR as you suggested, what seemed to have enabled parallel scans.\n\nhttps://explain.dalibo.com/plan/gfa1cf9fffd01bcg#plan/node/1\n\nBut, I still wonder, why was my request that slow ? My current understanding of what happened is :\n\nWhen postgresql does an Index Scan, it goes through a loop (which is not parallel) of asking for a chunk of data, and then processing it. 
It wait for having processed the data to ask the next chunk, instead of loading the whole index in RAM (which, I suppose, would be much faster, but also not feasible if the index is too big and the RAM too small, so postgresql does not do it). Thus, the 2MB/s. When it does a Bitmap Index Scan, it can parallelize disk interactions, and does not use the processor to discard lines, thus a much faster index load and processing.\n\nI might be completely wrong, and would really like to understand the details, in order to explain them to my team, and to other who might encounter the same problem.\n\nAgain, thank you very much for your help, we were really struggling with those slow queries !\n\nSimon FREYBURGER\n\nInterne\nDe : Andrei Lepikhov <[email protected]>\nEnvoyé : vendredi 5 juillet 2024 04:05\nÀ : FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]>; [email protected]; Peter Geoghegan <[email protected]>\nObjet : Re: How to solve my slow disk i/o throughput during index scan\n\nOn 7/4/24 22:23, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE \nTGV / DM RMP YIELD MANAGEMENT) wrote:\n> Hello,\n> \n> Thank you, splitting in “OR” query definitely enables bitmap heap scans, \n> and thus parallelized read to disk 😃! 
I though did not understand your \n> second point, what is parallel append, and how to enable it ?\nJust for example:\n \nDROP TABLE IF EXISTS t CASCADE;\nCREATE TABLE t (id int not null, payload text) PARTITION BY RANGE (id);\nCREATE TABLE p1 PARTITION OF t FOR VALUES FROM (0) TO (1000);\nCREATE TABLE p2 PARTITION OF t FOR VALUES FROM (1000) TO (2000);\nCREATE TABLE p3 PARTITION OF t FOR VALUES FROM (2000) TO (3000);\nCREATE TABLE p4 PARTITION OF t FOR VALUES FROM (3000) TO (4000);\nINSERT INTO t SELECT x % 4000, repeat('a',128) || x FROM \ngenerate_series(1,1E5) AS x;\nANALYZE t;\n \nSET enable_parallel_append = on;\nSET parallel_setup_cost = 0.00001;\nSET parallel_tuple_cost = 0.00001;\nSET max_parallel_workers_per_gather = 8;\nSET min_parallel_table_scan_size = 0;\nSET min_parallel_index_scan_size = 0;\n \nEXPLAIN (COSTS OFF)\nSELECT t.id, t.payload FROM t WHERE t.id % 2 = 0\nGROUP BY t.id, t.payload;\n \n Group\n Group Key: t.id, t.payload\n -> Gather Merge\n Workers Planned: 6\n -> Sort\n Sort Key: t.id, t.payload\n -> Parallel Append\n -> Parallel Seq Scan on p1 t_1\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p2 t_2\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p3 t_3\n Filter: ((id % 2) = 0)\n -> Parallel Seq Scan on p4 t_4\n Filter: ((id % 2) = 0)\n \nHere the table is scanned in parallel. It also works with IndexScan.\n \n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 11 Jul 2024 15:09:01 +0000",
"msg_from": "\"FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE TGV / DM RMP\n YIELD MANAGEMENT)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "On Thursday, July 11, 2024, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION\nGENERALE TGV / DM RMP YIELD MANAGEMENT) <[email protected]> wrote:\n\n> Also, It might not be related, but I have suspiciously similar slow reads\n> when I am inserting in database, could it be related ?\n> I’m using a 3 steps process to insert my lines in the table :\n>\n> - COPY into a temporary table\n> - DELETE FROM on the perimeter I will be inserting into\n> - INSERT … INTO mytable SELECT … FROM temporarytable ON CONFLICT DO\n> NOTHING\n>\n>\n>\n> Is it possible to parallelize the scans during the modify step ?\n>\n>\n>\nThis tells you when parallelism is used:\n\n\nhttps://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n\nDavid J.",
"msg_date": "Thu, 11 Jul 2024 08:19:51 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "On 11/7/2024 22:09, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION \nGENERALE TGV / DM RMP YIELD MANAGEMENT) wrote:\n> Is it possible to parallelize the scans during the modify step ?\nTemporary tables can't be used inside a query with parallel workers \ninvolved, because such table is local for single process.\n\nWhat about your question - I'm not sure without whole bunch of data. But \nmaximum speedup you can get by disabling as much constraints as possible \n- ideally, fill each partition individually with no constraints and \nindexes at all before uniting them into one partitioned table.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 11 Jul 2024 22:22:09 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve my slow disk i/o throughput during index scan"
},
{
"msg_contents": "On 11/7/2024 21:59, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION \nGENERALE TGV / DM RMP YIELD MANAGEMENT) wrote:\n> \n> Hello, and thank you again for your example !\n> \n> Sorry for my late answer, I was working on a patch for our requests. I \n> am though not completely understanding what is happening. Here is a plan \n> of a query where I splitted the calls with OR as you suggested, what \n> seemed to have enabled parallel scans.\nThanks for the feedback!\nGenerally, I don't understand why you needed to transform ANY -> OR at \nall to get BitmapScan. Can you just disable IndexScan and possibly \nSeqScan to see is it a hard transformation limit or mistake in cost \nestimation?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 11 Jul 2024 22:34:54 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to solve my slow disk i/o throughput during index scan"
}
] |
[
{
"msg_contents": "hi All,\n\nI am new on the list.\nI hope someone can give me an adequate answer or good advice about my\nproblem.\n\nI have a client (normally a web service, for testing the psql client) in\nGCP. There is a PSQL server in another DC. The ping response time is 20ms.\nI measured the bandwidth via scp and it is more than 1Gb/s which is more\nthan enough IMO.\n\n\nThe psql connection between the DCs for me was unexpectedly slow.\nI would expect a bit slower query without data ('select now()') due to the\nincreased latency and somewhat similar speed of data transfer.\nWhat I see is that\n\nselect now() increased from 0.7ms to 20ms which is OK.\nAnd 'select *' on a table with 3082 rows (so it's a small table) increased\nfrom 10ms to 800ms.\n\n\nIs this normal? Can I improve it somehow?\n\n\nThank you,",
"msg_date": "Sun, 7 Jul 2024 20:17:28 +0200",
"msg_from": "=?UTF-8?Q?Tam=C3=A1s_PAPP?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Low performance between datacenters"
},
{
"msg_contents": "El dom, 7 jul 2024 a la(s) 3:17 p.m., Tamás PAPP ([email protected])\nescribió:\n\n> hi All,\n>\n> I am new on the list.\n> I hope someone can give me an adequate answer or good advice about my\n> problem.\n>\n> I have a client (normally a web service, for testing the psql client) in\n> GCP. There is a PSQL server in another DC. The ping response time is 20ms.\n> I measured the bandwidth via scp and it is more than 1Gb/s which is more\n> than enough IMO.\n>\n>\nHi Tamás,\n\nBandwidth and Latency are two different concepts. Latency is the\nround-trip-time of a data-packet travelling over the network while\nbandwidth is the amount of data you can move concurrently. Latency will add\nup time over each conversation between the client and the server. For\nexample, establishing a connection has a bit of back and forth dialogue\ngoing on. Each of these communications bits will be affected by latency as\neither client or server needs to wait for the other party to receive the\nmessage and then respond to it.\nLatency will have an impact even when transmitting data like in a plain\nSELECT * FROM table. Bear in mind that psql uses TCP which sends an ACK\npacket at regular intervals to assert the integrity of the transmitted data.\n\n\n> The psql connection between the DCs for me was unexpectedly slow.\n> I would expect a bit slower query without data ('select now()') due to the\n> increased latency and somewhat similar speed of data transfer.\n> What I see is that\n>\n> select now() increased from 0.7ms to 20ms which is OK.\n> And 'select *' on a table with 3082 rows (so it's a small table) increased\n> from 10ms to 800ms.\n>\n\nSince you aren't providing much evidence, I can speculate a lot might be\ngoing on to explain this kind of increase in the query delay: busy\ndatabase, busy network, locks, latency, network unreliability, etc.\n\n\n> Is this normal? 
Can I improve it somehow?\n>\n\nMove your web service physically as close as possible to the database\nserver.",
"msg_date": "Sun, 7 Jul 2024 17:23:41 -0300",
"msg_from": "Fernando Hevia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low performance between datacenters"
},
{
"msg_contents": "Often it happens because of the low batch size for fetching data. This\nmakes client wait unnecessarily while reading rows. I am not sure which\nclient are you using, but in java this can be controlled on per-statement\nlevel, see\nhttps://jdbc.postgresql.org/documentation/query/\n\nI believe there is also a connection parameter to set the default value,\nbut I don’t remember out of top of my head. You can definitely set it on\nconnection, see\nhttps://jdbc.postgresql.org/documentation/publicapi/org/postgresql/jdbc/PgConnection.html#setDefaultFetchSize-int-\n\nнд, 7 лип. 2024 р. о 10:17 Tamás PAPP <[email protected]> пише:\n\n> hi All,\n>\n> I am new on the list.\n> I hope someone can give me an adequate answer or good advice about my\n> problem.\n>\n> I have a client (normally a web service, for testing the psql client) in\n> GCP. There is a PSQL server in another DC. The ping response time is 20ms.\n> I measured the bandwidth via scp and it is more than 1Gb/s which is more\n> than enough IMO.\n>\n>\n> The psql connection between the DCs for me was unexpectedly slow.\n> I would expect a bit slower query without data ('select now()') due to the\n> increased latency and somewhat similar speed of data transfer.\n> What I see is that\n>\n> select now() increased from 0.7ms to 20ms which is OK.\n> And 'select *' on a table with 3082 rows (so it's a small table) increased\n> from 10ms to 800ms.\n>\n>\n> Is this normal? Can I improve it somehow?\n>\n>\n> Thank you,\n>\n",
"msg_date": "Sun, 7 Jul 2024 16:42:27 -0800",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low performance between datacenters"
},
{
"msg_contents": "On 8/7/2024 01:17, Tamás PAPP wrote:\n> hi All,\n> \n> I am new on the list.\n> I hope someone can give me an adequate answer or good advice about my \n> problem.\n> \n> I have a client (normally a web service, for testing the psql client) in \n> GCP. There is a PSQL server in another DC. The ping response time is 20ms.\n> I measured the bandwidth via scp and it is more than 1Gb/s which is more \n> than enough IMO.\n> \n> \n> The psql connection between the DCs for me was unexpectedly slow.\n> I would expect a bit slower query without data ('select now()') due to \n> the increased latency and somewhat similar speed of data transfer.\n> What I see is that\n> \n> select now() increased from 0.7ms to 20ms which is OK.\n> And 'select *' on a table with 3082 rows (so it's a small table) \n> increased from 10ms to 800ms.\n> \n> \n> Is this normal? Can I improve it somehow?\nThis subject is totally new for me, but maybe you have different \nconnection settings for internal and external connections? I mean - from \ndifferent networks?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 8 Jul 2024 14:24:53 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low performance between datacenters"
}
] |
[
{
"msg_contents": "Hello all,\n\nWhile executing the join query on the postgres database we have observed sometimes randomly below query is being fired which is affecting our response time.\n\nQuery randomly fired in the background:-\nSELECT p.proname,p.oid FROM pg_catalog.pg_proc p, pg_catalog.pg_namespace n WHERE p.pronamespace=n.oid AND n.nspname='pg_catalog' AND ( proname = 'lo_open' or proname = 'lo_close' or proname = 'lo_creat' or proname = 'lo_unlink' or proname = 'lo_lseek' or proname = 'lo_lseek64' or proname = 'lo_tell' or proname = 'lo_tell64' or proname = 'loread' or proname = 'lowrite' or proname = 'lo_truncate' or proname = 'lo_truncate64')\n\nQuery intended to be executed:-\nSELECT a.* FROM tablename1 a INNER JOIN users u ON u.id = a.user_id INNER JOIN tablename2 c ON u.client_id = c.id WHERE u.external_id = ? AND c.name = ? AND (c.namespace = ? OR (c.namespace IS NULL AND ? IS NULL))\n\nPostgres version 11\n\nBelow are my questions:-\n\n 1. Is the query referring pg_catalog fired by postgres library implicitly?\n 2. Is there any way we can suppress this query?\n\n\nThanks and regards,\nDheeraj Sonawane\n\nMastercard\n| mobile +917588196818\n\n\nCONFIDENTIALITY NOTICE This e-mail message and any attachments are only for the use of the intended recipient and may contain information that is privileged, confidential or exempt from disclosure under applicable law. If you are not the intended recipient, any disclosure, distribution or other use of this e-mail message or attachments is prohibited. If you have received this e-mail message in error, please delete and notify the sender immediately. Thank you.",
"msg_date": "Wed, 10 Jul 2024 07:11:39 +0000",
"msg_from": "Dheeraj Sonawane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance issue"
},
{
"msg_contents": "Dheeraj Sonawane <[email protected]> writes:\n> While executing the join query on the postgres database we have observed sometimes randomly below query is being fired which is affecting our response time.\n\n> Query randomly fired in the background:-\n> SELECT p.proname,p.oid FROM pg_catalog.pg_proc p, pg_catalog.pg_namespace n WHERE p.pronamespace=n.oid AND n.nspname='pg_catalog' AND ( proname = 'lo_open' or proname = 'lo_close' or proname = 'lo_creat' or proname = 'lo_unlink' or proname = 'lo_lseek' or proname = 'lo_lseek64' or proname = 'lo_tell' or proname = 'lo_tell64' or proname = 'loread' or proname = 'lowrite' or proname = 'lo_truncate' or proname = 'lo_truncate64')\n\nThat looks very similar to libpq's preparatory lookup before executing\nlarge object accesses (cf lo_initialize in fe-lobj.c). The details\naren't identical so it's not from libpq, but I'd guess this is some\nother client library's version of the same thing.\n\n> Query intended to be executed:-\n> SELECT a.* FROM tablename1 a INNER JOIN users u ON u.id = a.user_id INNER JOIN tablename2 c ON u.client_id = c.id WHERE u.external_id = ? AND c.name = ? AND (c.namespace = ? OR (c.namespace IS NULL AND ? IS NULL))\n\nIt is *really* hard to believe that that lookup query would make any\nnoticeable difference on response time for some other session, unless\nyou are running the server on seriously underpowered hardware.\n\nIt could be that you've misinterpreted your data, and what is actually\nhappening is that that other session has completed its lookup query\nand is now doing fast-path large object reads and writes using the\nresults. Fast-path requests might not show up as queries in your\nmonitoring, but if the large object I/O is sufficiently fast and\nvoluminous maybe that'd account for visible performance impact.\n\n> 2. Is there any way we can suppress this query?\n\nStop using large objects? But the alternatives won't be better\nin terms of performance impact. 
Really, if this is a problem\nfor you, you need a beefier server. Or split the work across\nmore than one server.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2024 09:40:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
}
] |
[
{
"msg_contents": "Dear Team,\n\nWe received a request from client. They required all functions, stored\nprocedures and triggers backup. can anyone please let me know. How to take\nbackup only above objects.\n\n\nThanks & Regards,\nNikhil,\nPostgreSQL DBA.",
"msg_date": "Wed, 10 Jul 2024 23:35:22 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specific objects backup in PostgreSQL"
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 11:05 AM nikhil kumar <[email protected]>\nwrote:\n\n>\n> We received a request from client. They required all functions, stored\n> procedures and triggers backup. can anyone please let me know. How to take\n> backup only above objects.\n>\n\nThis hardly qualifies as a performance question.\n\nYou might try the -general list if you want to brainstorm workarounds\nbecause pg_dump itself doesn't provide any command line options to give you\nthis specific subset of your database.\n\nDavid J.",
"msg_date": "Wed, 10 Jul 2024 11:41:01 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific objects backup in PostgreSQL"
},
{
"msg_contents": "Hi Nikhil,\n\nOn Wed, Jul 10, 2024 at 8:05 PM nikhil kumar <[email protected]> wrote:\n\n> Dear Team,\n>\n> We received a request from client. They required all functions, stored\n> procedures and triggers backup. can anyone please let me know. How to take\n> backup only above objects.\n>\n>\n> Thanks & Regards,\n> Nikhil,\n> PostgreSQL DBA.\n>\n\n\"pg_dump -s\" will export the model, including functions, triggers... as\nwell as tables and views without data.\n\nI should have somewhere an old program I wrote on a lazy day to slice this\nbackup into individual objects. I can dig it for you if you need it.\n\nCheers\n--\nOlivier Gautherot",
"msg_date": "Wed, 10 Jul 2024 20:42:49 +0200",
"msg_from": "Olivier Gautherot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific objects backup in PostgreSQL"
},
{
"msg_contents": "Hi Nikhil, maybe you can apply some tricks playing with pg_dump -s and \npg_restore --list\n\n\ncheck this link: https://stackoverflow.com/a/13758324/8308381\n\n\nRegards\n\nOn 10-07-24 14:05, nikhil kumar wrote:\n> Dear Team,\n>\n> We received a request from client. They required all functions, stored \n> procedures and triggers backup. can anyone please let me know. How to \n> take backup only above objects.\n>\n>\n> Thanks & Regards,\n> Nikhil,\n> PostgreSQL DBA.\n\n\n",
"msg_date": "Wed, 10 Jul 2024 16:04:48 -0400",
"msg_from": "Anthony Sotolongo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific objects backup in PostgreSQL"
},
{
"msg_contents": "Thank you everyone for your help\n\nOn Thu, 11 Jul, 2024, 1:34 am Anthony Sotolongo, <[email protected]>\nwrote:\n\n> Hi Nikhil, maybe you can apply some tricks playing with pg_dump -s and\n> pg_restore --list\n>\n>\n> check this link: https://stackoverflow.com/a/13758324/8308381\n>\n>\n> Regards\n>\n> On 10-07-24 14:05, nikhil kumar wrote:\n> > Dear Team,\n> >\n> > We received a request from client. They required all functions, stored\n> > procedures and triggers backup. can anyone please let me know. How to\n> > take backup only above objects.\n> >\n> >\n> > Thanks & Regards,\n> > Nikhil,\n> > PostgreSQL DBA.\n>\n",
"msg_date": "Thu, 11 Jul 2024 08:59:50 +0530",
"msg_from": "nikhil kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific objects backup in PostgreSQL"
}
] |
[
{
"msg_contents": "Hey!\n\n[version: PostgreSQL 16.3]\n\nIn the example below, I noticed that the JOIN predicate \"t1.a<1\" is not\npushed down to the scan over \"t2\", though it superficially seems like it\nshould be.\n\ncreate table t as (select 1 a);\nanalyze t;\nexplain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a<1;\n QUERY PLAN\n-------------------------------\n Hash Join\n Hash Cond: (t2.a = t1.a)\n -> Seq Scan on t t2\n -> Hash\n -> Seq Scan on t t1\n Filter: (a < 1)\n(6 rows)\n\nThe same is true for the predicate \"t1.a in (0, 1)\". For comparison, the\npredicate \"t1.a=1\" does get pushed down to both scans.\n\nexplain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a=1;\n QUERY PLAN\n-------------------------\n Nested Loop\n -> Seq Scan on t t1\n Filter: (a = 1)\n -> Seq Scan on t t2\n Filter: (a = 1)\n(5 rows)\n\n\n-Paul-",
"msg_date": "Thu, 11 Jul 2024 16:31:24 -0700",
"msg_from": "Paul George <[email protected]>",
"msg_from_op": true,
"msg_subject": "inequality predicate not pushed down in JOIN?"
},
{
"msg_contents": "On 12/7/2024 06:31, Paul George wrote:\n> In the example below, I noticed that the JOIN predicate \"t1.a<1\" is not \n> pushed down to the scan over \"t2\", though it superficially seems like it \n> should be.\nIt has already discussed at least couple of years ago, see [1].\nSummarising, it is more complicated when equivalences and wastes CPU \ncycles more probably than helps.\n\n> \n> create table t as (select 1 a);\n> analyze t;\n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a<1;\n> QUERY PLAN\n> -------------------------------\n> Hash Join\n> Hash Cond: (t2.a = t1.a)\n> -> Seq Scan on t t2\n> -> Hash\n> -> Seq Scan on t t1\n> Filter: (a < 1)\n> (6 rows)\n> \n> The same is true for the predicate \"t1.a in (0, 1)\". For comparison, the \n> predicate \"t1.a=1\" does get pushed down to both scans.\n> \n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a=1;\n> QUERY PLAN\n> -------------------------\n> Nested Loop\n> -> Seq Scan on t t1\n> Filter: (a = 1)\n> -> Seq Scan on t t2\n> Filter: (a = 1)\n> (5 rows)\n\n[1] Condition pushdown: why (=) is pushed down into join, but BETWEEN or \n >= is not?\nhttps://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Fri, 12 Jul 2024 06:49:34 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inequality predicate not pushed down in JOIN?"
},
{
"msg_contents": "Cool! Thanks for the speedy reply, link, and summary! I'm not sure how I\nmissed this, but apologies for the noise.\n\n-Paul-\n\nOn Thu, Jul 11, 2024 at 4:49 PM Andrei Lepikhov <[email protected]> wrote:\n\n> On 12/7/2024 06:31, Paul George wrote:\n> > In the example below, I noticed that the JOIN predicate \"t1.a<1\" is not\n> > pushed down to the scan over \"t2\", though it superficially seems like it\n> > should be.\n> It has already discussed at least couple of years ago, see [1].\n> Summarising, it is more complicated when equivalences and wastes CPU\n> cycles more probably than helps.\n>\n> >\n> > create table t as (select 1 a);\n> > analyze t;\n> > explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a<1;\n> > QUERY PLAN\n> > -------------------------------\n> > Hash Join\n> > Hash Cond: (t2.a = t1.a)\n> > -> Seq Scan on t t2\n> > -> Hash\n> > -> Seq Scan on t t1\n> > Filter: (a < 1)\n> > (6 rows)\n> >\n> > The same is true for the predicate \"t1.a in (0, 1)\". For comparison, the\n> > predicate \"t1.a=1\" does get pushed down to both scans.\n> >\n> > explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a=1;\n> > QUERY PLAN\n> > -------------------------\n> > Nested Loop\n> > -> Seq Scan on t t1\n> > Filter: (a = 1)\n> > -> Seq Scan on t t2\n> > Filter: (a = 1)\n> > (5 rows)\n>\n> [1] Condition pushdown: why (=) is pushed down into join, but BETWEEN or\n> >= is not?\n>\n> https://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com\n>\n> --\n> regards, Andrei Lepikhov\n>\n>\n\nCool! Thanks for the speedy reply, link, and summary! 
I'm not sure how I missed this, but apologies for the noise.-Paul-On Thu, Jul 11, 2024 at 4:49 PM Andrei Lepikhov <[email protected]> wrote:On 12/7/2024 06:31, Paul George wrote:\n> In the example below, I noticed that the JOIN predicate \"t1.a<1\" is not \n> pushed down to the scan over \"t2\", though it superficially seems like it \n> should be.\nIt has already discussed at least couple of years ago, see [1].\nSummarising, it is more complicated when equivalences and wastes CPU \ncycles more probably than helps.\n\n> \n> create table t as (select 1 a);\n> analyze t;\n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a<1;\n> QUERY PLAN\n> -------------------------------\n> Hash Join\n> Hash Cond: (t2.a = t1.a)\n> -> Seq Scan on t t2\n> -> Hash\n> -> Seq Scan on t t1\n> Filter: (a < 1)\n> (6 rows)\n> \n> The same is true for the predicate \"t1.a in (0, 1)\". For comparison, the \n> predicate \"t1.a=1\" does get pushed down to both scans.\n> \n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a=1;\n> QUERY PLAN\n> -------------------------\n> Nested Loop\n> -> Seq Scan on t t1\n> Filter: (a = 1)\n> -> Seq Scan on t t2\n> Filter: (a = 1)\n> (5 rows)\n\n[1] Condition pushdown: why (=) is pushed down into join, but BETWEEN or \n >= is not?\nhttps://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 11 Jul 2024 17:07:05 -0700",
"msg_from": "Paul George <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: inequality predicate not pushed down in JOIN?"
},
{
"msg_contents": "While applying transitivity to non-equality conditions is less frequently\nbeneficial than applying it to equality conditions, it can be very helpful,\nespecially with third party apps and dynamically changing data. One\npossible implementation to avoid the mentioned overhead would be to mark\nthe internally generated predicate(s) as potentially redundant and discard\nit on the inner table of the join after planning (and enhance the optimizer\nto recognize redundant predicates and adjust accordingly when costing).\n\nJerry\n\nOn Thu, Jul 11, 2024 at 5:16 PM Paul George <[email protected]> wrote:\n\n> Cool! Thanks for the speedy reply, link, and summary! I'm not sure how I\n> missed this, but apologies for the noise.\n>\n> -Paul-\n>\n> On Thu, Jul 11, 2024 at 4:49 PM Andrei Lepikhov <[email protected]> wrote:\n>\n>> On 12/7/2024 06:31, Paul George wrote:\n>> > In the example below, I noticed that the JOIN predicate \"t1.a<1\" is not\n>> > pushed down to the scan over \"t2\", though it superficially seems like\n>> it\n>> > should be.\n>> It has already discussed at least couple of years ago, see [1].\n>> Summarising, it is more complicated when equivalences and wastes CPU\n>> cycles more probably than helps.\n>>\n>> >\n>> > create table t as (select 1 a);\n>> > analyze t;\n>> > explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and\n>> t1.a<1;\n>> > QUERY PLAN\n>> > -------------------------------\n>> > Hash Join\n>> > Hash Cond: (t2.a = t1.a)\n>> > -> Seq Scan on t t2\n>> > -> Hash\n>> > -> Seq Scan on t t1\n>> > Filter: (a < 1)\n>> > (6 rows)\n>> >\n>> > The same is true for the predicate \"t1.a in (0, 1)\". 
For comparison,\n>> the\n>> > predicate \"t1.a=1\" does get pushed down to both scans.\n>> >\n>> > explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and\n>> t1.a=1;\n>> > QUERY PLAN\n>> > -------------------------\n>> > Nested Loop\n>> > -> Seq Scan on t t1\n>> > Filter: (a = 1)\n>> > -> Seq Scan on t t2\n>> > Filter: (a = 1)\n>> > (5 rows)\n>>\n>> [1] Condition pushdown: why (=) is pushed down into join, but BETWEEN or\n>> >= is not?\n>>\n>> https://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com\n>> <https://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com>\n>>\n>> --\n>> regards, Andrei Lepikhov\n>>\n>>\n\nWhile applying transitivity to non-equality conditions is less frequently beneficial than applying it to equality conditions, it can be very helpful, especially with third party apps and dynamically changing data. One possible implementation to avoid the mentioned overhead would be to mark the internally generated predicate(s) as potentially redundant and discard it on the inner table of the join after planning (and enhance the optimizer to recognize redundant predicates and adjust accordingly when costing).JerryOn Thu, Jul 11, 2024 at 5:16 PM Paul George <[email protected]> wrote:Cool! Thanks for the speedy reply, link, and summary! 
I'm not sure how I missed this, but apologies for the noise.-Paul-On Thu, Jul 11, 2024 at 4:49 PM Andrei Lepikhov <[email protected]> wrote:On 12/7/2024 06:31, Paul George wrote:\n> In the example below, I noticed that the JOIN predicate \"t1.a<1\" is not \n> pushed down to the scan over \"t2\", though it superficially seems like it \n> should be.\nIt has already discussed at least couple of years ago, see [1].\nSummarising, it is more complicated when equivalences and wastes CPU \ncycles more probably than helps.\n\n> \n> create table t as (select 1 a);\n> analyze t;\n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a<1;\n> QUERY PLAN\n> -------------------------------\n> Hash Join\n> Hash Cond: (t2.a = t1.a)\n> -> Seq Scan on t t2\n> -> Hash\n> -> Seq Scan on t t1\n> Filter: (a < 1)\n> (6 rows)\n> \n> The same is true for the predicate \"t1.a in (0, 1)\". For comparison, the \n> predicate \"t1.a=1\" does get pushed down to both scans.\n> \n> explain (costs off) select * from t t1 join t t2 on t1.a=t2.a and t1.a=1;\n> QUERY PLAN\n> -------------------------\n> Nested Loop\n> -> Seq Scan on t t1\n> Filter: (a = 1)\n> -> Seq Scan on t t2\n> Filter: (a = 1)\n> (5 rows)\n\n[1] Condition pushdown: why (=) is pushed down into join, but BETWEEN or \n >= is not?\nhttps://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 11 Jul 2024 17:58:21 -0700",
"msg_from": "Jerry Brenner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inequality predicate not pushed down in JOIN?"
}
] |
[
{
"msg_contents": "Hello all postgres developers,\n\nRecently pg started to make a query plan, which I can not understand.\nThe pan is here: https://explain.depesz.com/s/Lvw0#source\n\nThe interesting part is at the top:\n```\nAggregate (cost=1132443.98..1132443.99 rows=1 width=24) (actual rows=1\nloops=1)\n -> Merge Join (cost=1127516.99..1131699.33 rows=372323 width=24)\n(actual rows=642956 loops=1)\n Merge Cond: (parent.volume_id = volume.id)\n -> Merge Join (cost=1127516.66..7430940.30 rows=372323 width=40)\n(actual rows=642956 loops=1)\n\n ...\n\n -> Index Only Scan using volume_pkey on volume (cost=0.06..18.72\nrows=1060 width=8) (actual rows=1011 loops=1)\n Heap Fetches: 23\n```\n\nWhat bothers me is that the inner plan cost (7430940) is higher than the\nouter plan cost (1131699).\nAnd I wonder how that is possible. There is no limit in the query that\nwould prevent PG from reading all rows coming out from inner Merge Join.\ncursor_tuple_fraction is 1.\n\nThe query is similar to: (there were more joins, but they were rejected by\nthe planner)\n```\nSELECT CAST(count(*) AS BIGINT) AS COUNT\n FROM\n (SELECT file.id\n FROM sf.file_current AS FILE\n JOIN sf.dir_current AS parent ON parent.id = file.parent_id\n AND parent.volume_id = file.volume_id\n JOIN sf_volumes.volume AS volume ON file.volume_id = volume.id\n WHERE (parent.volume_id = 1011\n AND parent.ancestor_ids && ARRAY[151188430]::BIGINT[]\n OR file.volume_id = 453)\n AND file.type = 32768\n AND file.volume_id IN (1011, 453)\n AND parent.volume_id IN (1011, 453)) AS fsentry_query\n```\n\nI am using:\n PostgreSQL 13.12 (Ubuntu 13.12-1.pgdg20.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n\nAll non-standard configuration params are in the attachment.\n\nI am looking for some hints for understanding this situation.\n\nThanks,\nStanisław Skonieczny",
"msg_date": "Fri, 26 Jul 2024 15:25:44 +0200",
"msg_from": "=?UTF-8?Q?Stanis=C5=82aw_Skonieczny?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Outer cost higher than the inner cost"
},
{
"msg_contents": "=?UTF-8?Q?Stanis=C5=82aw_Skonieczny?= <[email protected]> writes:\n> What bothers me is that the inner plan cost (7430940) is higher than the\n> outer plan cost (1131699).\n\nI think it is estimating (based on knowledge of the ranges of join keys\nin the two relations) that that input subplan won't need to be run to\ncompletion. See initial_cost_mergejoin in costsize.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:30:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer cost higher than the inner cost"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to come up with an efficient query, ideally without having\nto coerce the planner into doing what I want, but I'm running up\nagainst a problem with row estimates, and I'm curious if there's a\nbetter way to address the problem than what I've come up with.\n\nHere's a straightforward version of the query:\n\nSELECT\n v.patient_class,\n count( DISTINCT v.id ) AS num_visits\nFROM\n patient_visits AS v\n JOIN patients AS p ON p.id = v.patient_id\n JOIN members AS m ON m.patient_id = p.id\n JOIN user_populations AS up ON up.population_id = m.population_id\n JOIN users AS u ON u.id = up.user_id\n JOIN sponsor_populations AS sp ON sp.population_id = m.population_id AND\n sp.sponsor_id = u.sponsor_id\nWHERE\n u.id = 3962 AND\n v.start_on < ( m.last_activity_on + sp.authz_expiration_interval ) AND\n v.end_on IS NULL\nGROUP BY\n v.patient_class;\n\nThe plan looks like this:\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n GroupAggregate (cost=5666.32..5666.42 rows=5 width=12) (actual\ntime=22239.478..22244.339 rows=5 loops=1)\n Group Key: v.patient_class\n Buffers: shared hit=20466857\n -> Sort (cost=5666.32..5666.34 rows=7 width=12) (actual\ntime=22236.049..22239.030 rows=50988 loops=1)\n Sort Key: v.patient_class, v.id\n Sort Method: quicksort Memory: 3528kB\n Buffers: shared hit=20466857\n -> Nested Loop (cost=2.94..5666.22 rows=7 width=12) (actual\ntime=1.759..22203.441 rows=50988 loops=1)\n Join Filter: (v.patient_id = p.id)\n Buffers: shared hit=20466857\n -> Nested Loop (cost=2.37..5659.62 rows=11 width=28)\n(actual time=1.743..21892.354 rows=50988 loops=1)\n Join Filter: (v.start_on < (m.last_activity_on +\nsp.authz_expiration_interval))\n Rows Removed by Join Filter: 19368\n Buffers: shared hit=20204287\n -> Nested Loop (cost=1.80..3616.14 
rows=2285\nwidth=28) (actual time=0.065..3322.440 rows=2968729 loops=1)\n Join Filter: (m.population_id = up.population_id)\n Buffers: shared hit=2053968\n -> Nested Loop (cost=1.11..11.73 rows=1\nwidth=73) (actual time=0.041..0.286 rows=9 loops=1)\n Join Filter: (u.sponsor_id = sp.sponsor_id)\n Buffers: shared hit=67\n -> Nested Loop (cost=0.70..9.09\nrows=1 width=95) (actual time=0.028..0.164 rows=9 loops=1)\n Buffers: shared hit=31\n -> Index Only Scan using\nindex_user_populations_on_user_id_and_population_id on\nuser_populations up (cost=0.42..1.57 rows=3 width=36) (actual\ntime=0.015..0.025 rows=9 loops=1)\n Index Cond: (user_id = 3962)\n Heap Fetches: 0\n Buffers: shared hit=4\n -> Index Scan using\nindex_sponsor_populations_on_population_id_and_sponsor_id on\nsponsor_populations sp (cost=0.28..2.50 rows=1 width=59) (actual\ntime=0.011..0.012 rows=1 loops=9)\n Index Cond:\n(population_id = up.population_id)\n Buffers: shared hit=27\n -> Index Scan using users_pkey on\nusers u (cost=0.41..2.63 rows=1 width=24) (actual time=0.011..0.011\nrows=1 loops=9)\n Index Cond: (id = 3962)\n Buffers: shared hit=36\n -> Index Only Scan using\nindex_members_on_population_id on members m (cost=0.70..3173.12\nrows=34503 width=40) (actual time=0.023..324.991 rows=329859 loops=9)\n Index Cond: (population_id = sp.population_id)\n Heap Fetches: 2729516\n Buffers: shared hit=2053901\n -> Index Scan using\nidx_new_patient_visits_unique on patient_visits v (cost=0.57..0.88\nrows=1 width=24) (actual time=0.006..0.006 rows=0 loops=2968729)\n Index Cond: (patient_id = m.patient_id)\n Filter: (end_on IS NULL)\n Rows Removed by Filter: 4\n Buffers: shared hit=18150319\n -> Index Only Scan using patients_pkey on patients p\n(cost=0.57..0.59 rows=1 width=8) (actual time=0.006..0.006 rows=1\nloops=50988)\n Index Cond: (id = m.patient_id)\n Heap Fetches: 12947\n Buffers: shared hit=262570\n Settings: effective_cache_size = '372GB', max_parallel_workers =\n'12', 
max_parallel_workers_per_gather = '4', work_mem = '164MB',\nrandom_page_cost = '1.1', effective_io_concurrency = '200'\n Planning:\n Buffers: shared hit=76\n Planning Time: 68.881 ms\n Execution Time: 22244.426 ms\n(50 rows)\n\n=====\n\nNow, this plan isn't bad, despite the order-of-magnitude\nunderestimation of how many rows it will need to look at in\nindex_members_on_population_id. But I'm still hoping to get a faster\nquery.\n\nThere are, broadly speaking, two constraints on which patient_visits\nrows will be counted:\n- The end_on must be NULL, indicating that the visit is still in progress.\n- The visit must be accessible by the user, meaning:\n - The visit's patient must have a member from at least one of the\nuser's populations, such that the visit starts before the user's\naccess to that member expires.\n\nUnfortunately, neither of these conditions is independently super\nselective. There are a lot of visits (many that the user cannot\naccess) where end_on is NULL (1,212,480) and even more visits (many\nalready ended) that the user can access (4,167,864). However, there\nare only 25,334 rows that meet both conditions. I don't see any way of\ntaking advantage of this fact without denormalizing some of the\nauthorization-related data into the patient_visits table.\n\nSo the approach I'm considering is to add a column,\npatient_visits.cached_population_ids, which is an array of the visit's\npatient's member's population_ids. It's a necessary (though\ninsufficient) condition of a user being able to access a\npatient_visits row that this array have a non-empty intersection with\nan array of the user's own population_ids (drawn from the\nuser_populations table). 
This array column has an index:\n \"index_new_patient_visits_on_open_cached_population_ids\" gin\n(cached_population_ids) WHERE end_on IS NULL\n\nSo the new query looks like this:\n\nwith user_member_expirations as (\n select\n u.id user_id,\n m.id member_id,\n m.patient_id,\n (m.last_activity_on + sp.authz_expiration_interval) expires_on\n from members m\n join sponsor_populations sp on sp.population_id = m.population_id\n join user_populations up on up.population_id = m.population_id\n join users u on u.id = up.user_id\n where sp.sponsor_id = u.sponsor_id\n)\nselect v.patient_class, count(distinct v.id)\nfrom new_patient_visits v\n join user_member_expirations u on u.patient_id = v.patient_id\nwhere u.user_id = 3962\n and v.end_on is null\n and v.cached_population_ids &&\nARRAY['foo-population','bar-population','baz-population','quux-population']\n and v.start_on < u.expires_on\ngroup by v.patient_class;\n\n[Note: I've replaced the actual population ids with fake ones, since\nthey are textual and name actual facilities.]\n\nThe plan, however, is disappointing:\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5220.41..5220.43 rows=1 width=12) (actual\ntime=21341.629..21346.488 rows=5 loops=1)\n Group Key: v.patient_class\n Buffers: shared hit=17585790\n -> Sort (cost=5220.41..5220.41 rows=1 width=12) (actual\ntime=21338.148..21341.128 rows=50988 loops=1)\n Sort Key: v.patient_class, v.id\n Sort Method: quicksort Memory: 3528kB\n Buffers: shared hit=17585790\n -> Nested Loop (cost=2.37..5220.40 rows=1 width=12) (actual\ntime=1.385..21299.416 rows=50988 loops=1)\n Join Filter: (v.start_on < (m.last_activity_on +\nsp.authz_expiration_interval))\n Rows Removed by Join Filter: 
19368\n Buffers: shared hit=17585790\n -> Nested Loop (cost=1.80..3529.88 rows=395 width=28)\n(actual time=0.048..2876.926 rows=2968729 loops=1)\n Buffers: shared hit=2053968\n -> Nested Loop (cost=1.11..11.73 rows=1\nwidth=73) (actual time=0.028..0.426 rows=9 loops=1)\n Join Filter: (u.sponsor_id = sp.sponsor_id)\n Buffers: shared hit=67\n -> Nested Loop (cost=0.70..9.09 rows=1\nwidth=95) (actual time=0.019..0.262 rows=9 loops=1)\n Buffers: shared hit=31\n -> Index Only Scan using\nindex_user_populations_on_user_id_and_population_id on\nuser_populations up (cost=0.42..1.57 rows=3 width=36) (actual\ntime=0.010..0.047 rows=9 loops=1)\n Index Cond: (user_id = 3962)\n Heap Fetches: 0\n Buffers: shared hit=4\n -> Index Scan using\nindex_sponsor_populations_on_population_id_and_sponsor_id on\nsponsor_populations sp (cost=0.28..2.50 rows=1 width=59) (actual\ntime=0.011..0.016 rows=1 loops=9)\n Index Cond: (population_id =\nup.population_id)\n Buffers: shared hit=27\n -> Index Scan using users_pkey on users u\n(cost=0.41..2.63 rows=1 width=24) (actual time=0.013..0.013 rows=1\nloops=9)\n Index Cond: (id = 3962)\n Buffers: shared hit=36\n -> Index Only Scan using\nindex_members_on_population_id on members m (cost=0.70..3173.12\nrows=34503 width=40) (actual time=0.021..288.933 rows=329859 loops=9)\n Index Cond: (population_id = sp.population_id)\n Heap Fetches: 2729516\n Buffers: shared hit=2053901\n -> Index Scan using\nindex_new_patient_visits_on_patient_id on new_patient_visits v\n(cost=0.57..4.26 rows=1 width=24) (actual time=0.006..0.006 rows=0\nloops=2968729)\n Index Cond: (patient_id = m.patient_id)\n Filter: ((end_on IS NULL) AND\n(cached_population_ids &&\n'{foo-population,bar-population,baz-population,quux-population}'::text[]))\n Rows Removed by Filter: 4\n Buffers: shared hit=15531822\n Settings: effective_cache_size = '372GB', max_parallel_workers =\n'12', max_parallel_workers_per_gather = '4', work_mem = '164MB',\nrandom_page_cost = '1.1', 
effective_io_concurrency = '200'\n Planning:\n Buffers: shared hit=44\n Planning Time: 68.692 ms\n Execution Time: 21346.567 ms\n(42 rows)\n\n===\nDespite the fact that there are only 29,751 patient_visits rows where\nend_on is null and the array intersection is non-empty, whereas there\nare (as mentioned previously) over 4M rows that meet the full authz\ncondition, the query planner is starting with the larger set, and I\nthink this is because of the misestimation of how many members rows\nare involved. (BTW, the tables are all freshly analyzed.) If I\nexplicitly materialize the relevant visits, I get a much better plan:\n\ndb=# explain (analyze, buffers, settings) with user_member_expirations as (\n select\n u.id user_id,\n m.id member_id,\n m.patient_id,\n (m.last_activity_on + sp.authz_expiration_interval) expires_on\n from members m\n join sponsor_populations sp on sp.population_id = m.population_id\n join user_populations up on up.population_id = m.population_id\n join users u on u.id = up.user_id\n where sp.sponsor_id = u.sponsor_id\n), visits as materialized (\n select * from new_patient_visits v\n where end_on is null\n and v.cached_population_ids &&\nARRAY['foo-population','bar-population','baz-population','quux-population']\n)\nselect v.patient_class, count(distinct v.id)\nfrom visits v\n join user_member_expirations u on u.patient_id = v.patient_id\nwhere u.user_id = 3962\n and v.start_on < u.expires_on\ngroup by v.patient_class;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=58665.16..58665.18 rows=1 width=12) (actual\ntime=4310.702..4315.069 rows=5 loops=1)\n Group Key: v.patient_class\n Buffers: shared hit=2599817\n CTE visits\n -> Bitmap Heap 
Scan on new_patient_visits v_1\n(cost=55.66..54698.37 rows=49407 width=722) (actual\ntime=14.364..72.664 rows=29751 loops=1)\n Recheck Cond: ((cached_population_ids &&\n'{foo-population,bar-population,baz-population,quux-population}':::text[])\nAND (end_on IS NULL))\n Heap Blocks: exact=28442\n Buffers: shared hit=28504\n -> Bitmap Index Scan on\nindex_new_patient_visits_on_open_cached_population_ids\n(cost=0.00..43.31 rows=49407 width=0) (actual time=9.569..9.570\nrows=29751 loops=1)\n Index Cond: (cached_population_ids &&\n'{foo-population,bar-population,baz-population,quux-population}':::text[])\n Buffers: shared hit=62\n -> Sort (cost=3966.79..3966.80 rows=1 width=12) (actual\ntime=4307.634..4309.925 rows=50988 loops=1)\n Sort Key: v.patient_class, v.id\n Sort Method: quicksort Memory: 3528kB\n Buffers: shared hit=2599817\n -> Nested Loop (cost=1.11..3966.78 rows=1 width=12) (actual\ntime=14.550..4280.320 rows=50988 loops=1)\n Join Filter: ((m.population_id = sp.population_id) AND\n(v.start_on < (m.last_activity_on + sp.authz_expiration_interval)))\n Rows Removed by Join Filter: 2124510\n Buffers: shared hit=2599817\n -> Nested Loop (cost=1.11..1493.94 rows=558 width=97)\n(actual time=14.405..250.756 rows=267759 loops=1)\n Buffers: shared hit=28571\n -> Nested Loop (cost=1.11..11.73 rows=1\nwidth=73) (actual time=0.035..0.327 rows=9 loops=1)\n Join Filter: (u.sponsor_id = sp.sponsor_id)\n Buffers: shared hit=67\n -> Nested Loop (cost=0.70..9.09 rows=1\nwidth=95) (actual time=0.023..0.211 rows=9 loops=1)\n Buffers: shared hit=31\n -> Index Only Scan using\nindex_user_populations_on_user_id_and_population_id on\nuser_populations up (cost=0.42..1.57 rows=3 width=36) (actual\ntime=0.013..0.036 rows=9 loops=1)\n Index Cond: (user_id = 3962)\n Heap Fetches: 0\n Buffers: shared hit=4\n -> Index Scan using\nindex_sponsor_populations_on_population_id_and_sponsor_id on\nsponsor_populations sp (cost=0.28..2.50 rows=1 width=59) (actual\ntime=0.014..0.016 rows=1 loops=9)\n 
Index Cond: (population_id =\nup.population_id)\n Buffers: shared hit=27\n -> Index Scan using users_pkey on users u\n(cost=0.41..2.63 rows=1 width=24) (actual time=0.011..0.011 rows=1\nloops=9)\n Index Cond: (id = 3962)\n Buffers: shared hit=36\n -> CTE Scan on visits v (cost=0.00..988.14\nrows=49407 width=24) (actual time=1.597..21.285 rows=29751 loops=9)\n Buffers: shared hit=28504\n -> Index Scan using index_members_on_patient_id on\nmembers m (cost=0.00..4.38 rows=3 width=40) (actual time=0.003..0.014\nrows=8 loops=267759)\n Index Cond: (patient_id = v.patient_id)\n Rows Removed by Index Recheck: 0\n Buffers: shared hit=2571246\n Settings: effective_cache_size = '372GB', max_parallel_workers =\n'12', max_parallel_workers_per_gather = '4', work_mem = '164MB',\nrandom_page_cost = '1.1', effective_io_concurrency = '200'\n Planning:\n Buffers: shared hit=32\n Planning Time: 67.924 ms\n Execution Time: 4320.252 ms\n(47 rows)\n\n===\n\nOf course, I'd prefer not to have to materialize this relation\nexplicitly. This particular query, for this particular user, benefits\nfrom it, but similar queries or queries for different users may not.\n\nI think the root of the problem is that population size (i.e., the\nnumber of members in a given population) has a high variance, and then\nplanner is basing its estimates on the average population size (and\nmaybe the average number of populations to which a user has access?),\nwhich is not especially useful. Is there anything I can do about this?\nWould any extended statistics be useful here?\n\nThanks,\nJon\n\n\n",
"msg_date": "Mon, 29 Jul 2024 16:51:40 -0400",
"msg_from": "Jon Zeppieri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with row estimate problem"
},
{
"msg_contents": "On 29/7/2024 22:51, Jon Zeppieri wrote:\n> Of course, I'd prefer not to have to materialize this relation\n> explicitly. This particular query, for this particular user, benefits\n> from it, but similar queries or queries for different users may not.\n> \n> I think the root of the problem is that population size (i.e., the\n> number of members in a given population) has a high variance, and then\n> planner is basing its estimates on the average population size (and\n> maybe the average number of populations to which a user has access?),\n> which is not especially useful. Is there anything I can do about this?\n> Would any extended statistics be useful here?\nThanks for report. I see such cases frequently enough and the key \nproblem here is data skew, as you already mentioned. Extended statistics \ndoesn't help here. Also, because we can't estimate specific values \ncoming from the outer NestLoop - we can't involve MCV to estimate \nselectivity of the population. That's the reason why the optimiser uses \nndistinct value.\nWhat you can do? I see only one option - split the table to some \npartitions where data will be distributed more or less uniformly. And \ninvent a criteria for pruning unnecessary partitions.\nOf course, you can also try pg_hint_plan and force planner to use \nMergeJoin or HashJoin in that suspicious case.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 17:34:20 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with row estimate problem"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 11:34 AM Andrei Lepikhov <[email protected]> wrote:\n>\n> Thanks for report. I see such cases frequently enough and the key\n> problem here is data skew, as you already mentioned. Extended statistics\n> doesn't help here. Also, because we can't estimate specific values\n> coming from the outer NestLoop - we can't involve MCV to estimate\n> selectivity of the population. That's the reason why the optimiser uses\n> ndistinct value.\n> What you can do? I see only one option - split the table to some\n> partitions where data will be distributed more or less uniformly. And\n> invent a criteria for pruning unnecessary partitions.\n> Of course, you can also try pg_hint_plan and force planner to use\n> MergeJoin or HashJoin in that suspicious case.\n\nThanks for the reply, Andrei, and for the advice about partitioning.\n\n- Jon\n\n\n",
"msg_date": "Tue, 30 Jul 2024 13:22:11 -0400",
"msg_from": "Jon Zeppieri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with row estimate problem"
}
] |
[
{
"msg_contents": "2024-07-31 00:01:02.795\nUTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\n out of memory\n2024-07-31 00:01:02.795\nUTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\n Cannot enlarge string buffer containing 378355896 bytes by 756711422 more\nbytes.\n2024-07-31 00:01:02.795\nUTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\n slot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\nassociated LSN 3D/318438E0\n2024-07-31 00:01:02.795\nUTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\n START_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n'1', publication_names 'pgreps_13801')\n\nWe use built-in pgoutput and a client application did an HOT update to a\ncolumn , that data type is \"text\" and real length is 756711422. But this\ntable is NOT on publication list, possible to make logical decoding ignore\n\"WAL records belong to tables that's not in publication list\" ?\n\nThanks,\n\nJames",
"msg_date": "Wed, 31 Jul 2024 10:17:41 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "logical replication out of memory"
},
{
"msg_contents": "We use built-in pgoutput and a client application did an HOT update to\na column , that data type is \"text\" and real length is 756711422. But this\ntable is NOT on publication list, possible to make logical decoding ignore\n\"WAL records belong to tables that's not in publication list\" ? or we\nhave drop replication slots or make it start from a new pglsn position ?\n\n\n2024-07-31 00:01:02.795\nUTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\nout of memory\n\n2024-07-31 00:01:02.795\nUTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\nCannot enlarge string buffer containing 378355896 bytes by 756711422 more\nbytes.\n\n2024-07-31 00:01:02.795\nUTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\nslot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\nassociated LSN 3D/318438E0\n\n2024-07-31 00:01:02.795\nUTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\nSTART_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n'1', publication_names 'pgreps_13801')\n\n\nJames Pang (chaolpan) <[email protected]> 於 2024年7月31日週三 上午10:28寫道:\n\n>\n>\n>\n>\n> *From:* James Pang <[email protected]>\n> *Sent:* Wednesday, July 31, 2024 10:18 AM\n> *To:* [email protected]\n> *Subject:* logical replication out of memory\n>\n>\n>\n>\n>\n>\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\n> out of memory\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\n> Cannot enlarge string buffer containing 378355896 bytes by 756711422 more\n> bytes.\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\n> slot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\n> associated LSN 3D/318438E0\n>\n> 2024-07-31 00:01:02.795 
UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\n> START_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n> '1', publication_names 'pgreps_13801')\n>\n>\n>\n> We use built-in pgoutput and a client application did an HOT update to a\n> column , that data type is \"text\" and real length is 756711422. But this\n> table is NOT on publication list, possible to make logical decoding ignore\n> \"WAL records belong to tables that's not in publication list\" ?\n>",
"msg_date": "Wed, 31 Jul 2024 10:30:19 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logical replication out of memory"
},
{
"msg_contents": "Hi Pang\n\nThe text column is exceptionally large, Your server must be out of memory,\nSuch a process ran out of memory while handling a large text column update.\n\nI suggest using an S3 bucket for such files, Consider increasing the\nmemory-related configuration parameters, like work_mem, maintenance_work_mem or\neven the server's overall memory allocation if possible.\n\nOr increase the shared buffer size.\n\nIf everything doesn't work, use physical replication to cope with it. 😄\n\nin last let me know the datatype your are using for this column.\n\n\n\nOn Wed, Jul 31, 2024 at 7:18 AM James Pang <[email protected]> wrote:\n\n> 2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\n> out of memory\n> 2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\n> Cannot enlarge string buffer containing 378355896 bytes by 756711422 more\n> bytes.\n> 2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\n> slot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\n> associated LSN 3D/318438E0\n> 2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\n> START_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n> '1', publication_names 'pgreps_13801')\n>\n> We use built-in pgoutput and a client application did an HOT update to a\n> column , that data type is \"text\" and real length is 756711422. 
But this\n> table is NOT on publication list, possible to make logical decoding ignore\n> \"WAL records belong to tables that's not in publication list\" ?\n>\n> Thanks,\n>\n> James\n>",
"msg_date": "Wed, 31 Jul 2024 10:41:41 +0500",
"msg_from": "khan Affan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical replication out of memory"
},
{
"msg_contents": "We use built-in pgoutput and a client application did an HOT update to a\ncolumn , that data type is \"text\" and real data length is 756711422 bytes.\n it's pg logical decoding throw out of memory error when decode \"WAL\nrecords belong to table\" , and string buffer total size exceed 1GB. But\nthis table is NOT on publication list and not in replication either,\n possible to make logical decoding to ignore the wal records that belong to\n\"NOT in replication list\" tables ? that can help reduce this kind of\nerror.\n\nThanks,\n\nJames\n\n\n>\n>\n>\n> *From:* khan Affan <[email protected]>\n> *Sent:* Wednesday, July 31, 2024 1:42 PM\n> *To:* James Pang <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: logical replication out of memory\n>\n>\n>\n> Hi Pang\n>\n> The text column is exceptionally large, Your server must be out of memory,\n> Such a process ran out of memory while handling a large text column update.\n>\n> I suggest using an S3 bucket for such files, Consider increasing the\n> memory-related configuration parameters, like work_mem,\n> maintenance_work_mem or even the server's overall memory allocation if\n> possible.\n>\n> Or increase the shared buffer size.\n>\n> If everything doesn't work, use physical replication to cope with it. 😄\n>\n> in last let me know the datatype your are using for this column.\n>\n>\n>\n> On Wed, Jul 31, 2024 at 7:18 AM James Pang <[email protected]> wrote:\n>\n> We use built-in pgoutput and a client application did an HOT update to a\n> column , that data type is \"text\" and real length is 756711422. 
But this\n> table is NOT on publication list, possible to make logical decoding ignore\n> \"WAL records belong to tables that's not in publication list\" ?\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\n> out of memory\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\n> Cannot enlarge string buffer containing 378355896 bytes by 756711422 more\n> bytes.\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\n> slot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\n> associated LSN 3D/318438E0\n>\n> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\n> START_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n> '1', publication_names 'pgreps_13801')\n>\n>\n>\n> Thanks,\n>\n>\n>\n> James\n>\n>",
"msg_date": "Wed, 31 Jul 2024 14:48:43 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logical replication out of memory"
},
{
"msg_contents": "PostgreSQL's built-in pgoutput plugin doesn't provide a direct mechanism to\nfilter WAL records based on table-level filtering.\nAdjust the replication slot retention period and you can also increase the\nnumber of parallel workers for logical decoding (if supported by your\nPostgreSQL version) to distribute the workload.\n\nThanks\n\nAffan\n\nOn Wed, Jul 31, 2024 at 11:48 AM James Pang <[email protected]> wrote:\n\n> We use built-in pgoutput and a client application did an HOT update to\n> a column , that data type is \"text\" and real data length is 756711422\n> bytes. it's pg logical decoding throw out of memory error when decode\n> \"WAL records belong to table\" , and string buffer total size exceed 1GB.\n> But this table is NOT on publication list and not in replication either,\n> possible to make logical decoding to ignore the wal records that belong to\n> \"NOT in replication list\" tables ? that can help reduce this kind of\n> error.\n>\n> Thanks,\n>\n> James\n>\n>\n>>\n>>\n>>\n>> *From:* khan Affan <[email protected]>\n>> *Sent:* Wednesday, July 31, 2024 1:42 PM\n>> *To:* James Pang <[email protected]>\n>> *Cc:* [email protected]\n>> *Subject:* Re: logical replication out of memory\n>>\n>>\n>>\n>> Hi Pang\n>>\n>> The text column is exceptionally large, Your server must be out of\n>> memory, Such a process ran out of memory while handling a large text column\n>> update.\n>>\n>> I suggest using an S3 bucket for such files, Consider increasing the\n>> memory-related configuration parameters, like work_mem,\n>> maintenance_work_mem or even the server's overall memory allocation if\n>> possible.\n>>\n>> Or increase the shared buffer size.\n>>\n>> If everything doesn't work, use physical replication to cope with it. 
😄\n>>\n>> in last let me know the datatype your are using for this column.\n>>\n>>\n>>\n>> On Wed, Jul 31, 2024 at 7:18 AM James Pang <[email protected]>\n>> wrote:\n>>\n>> We use built-in pgoutput and a client application did an HOT update to a\n>> column , that data type is \"text\" and real length is 756711422. But this\n>> table is NOT on publication list, possible to make logical decoding ignore\n>> \"WAL records belong to tables that's not in publication list\" ?\n>>\n>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:\n>> out of memory\n>>\n>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:\n>> Cannot enlarge string buffer containing 378355896 bytes by 756711422 more\n>> bytes.\n>>\n>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:\n>> slot \"pgreps_13801\", output plugin \"pgoutput\", in the change callback,\n>> associated LSN 3D/318438E0\n>>\n>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:\n>> START_REPLICATION SLOT \"pgreps_13801\" LOGICAL 3C/F24C74D0 (proto_version\n>> '1', publication_names 'pgreps_13801')\n>>\n>>\n>>\n>> Thanks,\n>>\n>>\n>>\n>> James\n>>\n>>",
"msg_date": "Wed, 31 Jul 2024 12:06:22 +0500",
"msg_from": "khan Affan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical replication out of memory"
}
] |
[
{
"msg_contents": "Hello all. I am trying to make postgres 16 prune partition for queries with\n`WHERE tenant_id=ANY(current_setting('my.tenant_id')::integer[])`, but I\nhaven't been able to make it work, and naturally it impacts performance so\nI thought this list would be appropriate.\n\nHere's the SQL I tried (but feel free to skip to the end as I'm sure all\nthis stuff is obvious to you!):\n\n\n\n\n\n\n\n\n*CREATE TABLE tbl (id SERIAL NOT NULL, tenant_id INT NOT NULL, some_col\nINT, PRIMARY KEY (tenant_id, id)) PARTITION BY HASH (tenant_id);CREATE\nTABLE tbl1 PARTITION OF tbl FOR VALUES WITH (MODULUS 2, REMAINDER 0);CREATE\nTABLE tbl2 PARTITION OF tbl FOR VALUES WITH (MODULUS 2, REMAINDER 1);INSERT\nINTO tbl (tenant_id, some_col) SELECT 1, * FROM\ngenerate_series(1,10000);INSERT INTO tbl (tenant_id, some_col) SELECT 3, *\nFROM generate_series(1,10000);*\n\nPartition pruning works as expected for this query (still not an\narray-contains check):\n*EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=1;*\n\nWhen reading from a setting it also prunes partitions correctly:\n\n*SET my.tenant_id=1;EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\ntenant_id=current_setting('my.tenant_id')::integer;*\n\nIt still does partition pruning if we use a scalar subquery. I can see the\n(never executed) scans in the plan.\n*EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=(SELECT\ncurrent_setting('my.tenant_id')::integer);*\n\nBut how about an array-contains check? Still prunes, which is nice.\n*EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\ntenant_id=ANY('{1}'::integer[]);*\n\nHowever, it doesn't prune if the array is in a setting:\n\n*SET my.tenant_id='{1}';EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\ntenant_id=ANY(current_setting('my.tenant_id')::integer[]);*\n\nI actually expected that when in a setting, none of the previous queries\nwould've done partition pruning because I thought `current_setting` is not\na stable function. 
But some of them did, which surprised me.\n\nSo I thought maybe if I put it in a scalar query it will give me an\nInitPlan node, but it looks like method resolution for =ANY won't let me\ntry this:\n*EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=ANY((SELECT\ncurrent_setting('my.tenant_id')::integer[]));*\n*ERROR: operator does not exist: integer = integer[]*\n\nI tried using UNNEST, but that adds a Hash Semi Join to the plan which also\ndoesn't do partition pruning.\n*EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=ANY((SELECT\nUNNEST(current_setting('my.tenant_id')::integer[])));*\n\nMy question is if there's a way to do partition pruning based on\narray-contains operator if the array is in a setting. The use-case is to\nmake Row Level Security policies do partition pruning \"automatically\" in a\nsetting where users can be in more than one tenant.\nIt feels like this would work if there were a non-overloaded operator that\ntakes in an array and a single element and tests for array-contains,\nbecause then I could use that operator with a scalar subquery and get an\nInitPlan node. But I'm new to all of this, so apologies if I'm getting it\nall wrong!\n\nThanks in advance,\nMarcelo.",
"msg_date": "Wed, 7 Aug 2024 18:10:04 -0300",
"msg_from": "Marcelo Zabani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition pruning with array-contains check and current_setting function",
},
{
"msg_contents": "I managed to get a plan I was hoping for, but it still doesn't prune\npartitions. I created a new operator #|<(integer[], integer) that is\ndefined in SQL and is basically equivalent to value=ANY(array), and a\nnon-stable tenants() function defined that returns an array from the\nsetting, and with that I could use a scalar subquery without running into\ntype-checking errors. This gives me an InitPlan node:\n\n=> SET my.tenant_id='{1}';EXPLAIN (COSTS OFF) SELECT COUNT(*) FROM tbl\nWHERE tenant_id #|< (select tenants());\nSET\n QUERY PLAN\n----------------------------------------------------------\n Finalize Aggregate\n InitPlan 1 (returns $0)\n -> Result\n -> Gather\n Workers Planned: 2\n Params Evaluated: $0\n -> Partial Aggregate\n -> Parallel Append\n -> Parallel Seq Scan on tbl2 tbl_2\n Filter: (tenant_id = ANY ($0))\n -> Parallel Seq Scan on tbl1 tbl_1\n Filter: (tenant_id = ANY ($0))\n\n\n\n\nIt still doesn't prune even if I EXPLAIN ANALYZE it. I thought maybe I did\nsomething wrong with the operator definition, so I tried making tenants()\nimmutable and removing the scalar subquery, and then it does prune:\n=> SET my.tenant_id='{1}';EXPLAIN (COSTS OFF) SELECT COUNT(*) FROM tbl\nWHERE tenant_id #|< tenants();\nSET\n QUERY PLAN\n------------------------------------------------------\n Aggregate\n -> Seq Scan on tbl1 tbl\n Filter: (tenant_id = ANY ('{1}'::integer[]))\n\n\n\nSadly I can't make tenants() immutable because it's a runtime setting, and\nmaking tenants() STABLE does not lead to partition pruning with or without\nthe scalar subquery around it.\n\nI'm a bit lost. It seems like postgres is fully capable of pruning\npartitions for =ANY checks, and some strange detail is confusing it in this\ncase. I'm not sure what else to try.\n\nOn Wed, Aug 7, 2024 at 6:10 PM Marcelo Zabani <[email protected]> wrote:\n\n> Hello all. 
I am trying to make postgres 16 prune partition for queries\n> with `WHERE tenant_id=ANY(current_setting('my.tenant_id')::integer[])`, but\n> I haven't been able to make it work, and naturally it impacts performance\n> so I thought this list would be appropriate.\n>\n> Here's the SQL I tried (but feel free to skip to the end as I'm sure all\n> this stuff is obvious to you!):\n>\n>\n>\n>\n>\n>\n>\n>\n> *CREATE TABLE tbl (id SERIAL NOT NULL, tenant_id INT NOT NULL, some_col\n> INT, PRIMARY KEY (tenant_id, id)) PARTITION BY HASH (tenant_id);CREATE\n> TABLE tbl1 PARTITION OF tbl FOR VALUES WITH (MODULUS 2, REMAINDER 0);CREATE\n> TABLE tbl2 PARTITION OF tbl FOR VALUES WITH (MODULUS 2, REMAINDER 1);INSERT\n> INTO tbl (tenant_id, some_col) SELECT 1, * FROM\n> generate_series(1,10000);INSERT INTO tbl (tenant_id, some_col) SELECT 3, *\n> FROM generate_series(1,10000);*\n>\n> Partition pruning works as expected for this query (still not an\n> array-contains check):\n> *EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=1;*\n>\n> When reading from a setting it also prunes partitions correctly:\n>\n> *SET my.tenant_id=1;EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\n> tenant_id=current_setting('my.tenant_id')::integer;*\n>\n> It still does partition pruning if we use a scalar subquery. I can see the\n> (never executed) scans in the plan.\n> *EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=(SELECT\n> current_setting('my.tenant_id')::integer);*\n>\n> But how about an array-contains check? Still prunes, which is nice.\n> *EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\n> tenant_id=ANY('{1}'::integer[]);*\n>\n> However, it doesn't prune if the array is in a setting:\n>\n> *SET my.tenant_id='{1}';EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE\n> tenant_id=ANY(current_setting('my.tenant_id')::integer[]);*\n>\n> I actually expected that when in a setting, none of the previous queries\n> would've done partition pruning because I thought `current_setting` is not\n> a stable function. But some of them did, which surprised me.\n>\n> So I thought maybe if I put it in a scalar query it will give me an\n> InitPlan node, but it looks like method resolution for =ANY won't let me\n> try this:\n> *EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=ANY((SELECT\n> current_setting('my.tenant_id')::integer[]));*\n> *ERROR: operator does not exist: integer = integer[]*\n>\n> I tried using UNNEST, but that adds a Hash Semi Join to the plan which\n> also doesn't do partition pruning.\n> *EXPLAIN ANALYZE SELECT COUNT(*) FROM tbl WHERE tenant_id=ANY((SELECT\n> UNNEST(current_setting('my.tenant_id')::integer[])));*\n>\n> My question is if there's a way to do partition pruning based on\n> array-contains operator if the array is in a setting. The use-case is to\n> make Row Level Security policies do partition pruning \"automatically\" in a\n> setting where users can be in more than one tenant.\n> It feels like this would work if there were a non-overloaded operator that\n> takes in an array and a single element and tests for array-contains,\n> because then I could use that operator with a scalar subquery and get an\n> InitPlan node. But I'm new to all of this, so apologies if I'm getting it\n> all wrong!\n>\n> Thanks in advance,\n> Marcelo.\n>\n\nI managed to get a plan I was hoping for, but it still doesn't prune partitions. I created a new operator #|<(integer[], integer) that is defined in SQL and is basically equivalent to value=ANY(array), and a non-stable tenants() function defined that returns an array from the setting, and with that I could use a scalar subquery without running into type-checking errors. This gives me an InitPlan node:=> SET my.tenant_id='{1}';EXPLAIN (COSTS OFF) SELECT COUNT(*) FROM tbl WHERE tenant_id #|< (select tenants());SET QUERY PLAN---------------------------------------------------------- Finalize Aggregate InitPlan 1 (returns $0) -> Result -> Gather Workers Planned: 2 Params Evaluated: $0 -> Partial Aggregate -> Parallel Append -> Parallel Seq Scan on tbl2 tbl_2 Filter: (tenant_id = ANY ($0)) -> Parallel Seq Scan on tbl1 tbl_1 Filter: (tenant_id = ANY ($0))It still doesn't prune even if I EXPLAIN ANALYZE it. I thought maybe I did something wrong with the operator definition, so I tried making tenants() immutable and removing the scalar subquery, and then it does prune:=> SET my.tenant_id='{1}';EXPLAIN (COSTS OFF) SELECT COUNT(*) FROM tbl WHERE tenant_id #|< tenants();SET QUERY PLAN------------------------------------------------------ Aggregate -> Seq Scan on tbl1 tbl Filter: (tenant_id = ANY ('{1}'::integer[]))Sadly I can't make tenants() immutable because it's a runtime setting, and making tenants() STABLE does not lead to partition pruning with or without the scalar subquery around it.I'm a bit lost. It seems like postgres is fully capable of pruning partitions for =ANY checks, and some strange detail is confusing it in this case. I'm not sure what else to try.On Wed, Aug 7, 2024 at 6:10 PM Marcelo Zabani <[email protected]> wrote:",
"msg_date": "Wed, 11 Sep 2024 16:11:49 -0300",
"msg_from": "Marcelo Zabani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition pruning with array-contains check and current_setting\n function"
}
] |
[
{
"msg_contents": "Hi folks.\n\nI have a table with 4.5m rows per partition (16 partitions) (I know, very\nsmall, probably didn't need to be partitioned).\n\nThe table has two columns, a bigint and b text.\nThere is a unique index on (a,b)\nThe query is:\n\nSELECT b\nFROM table\nWHERE a = <id>\n AND b IN (<ids>)\n\n\nThe visibility map is almost exclusively true.\nThis table gets few updates.\n\nThe planner says index only scan, but is filtering on b.\n\nIndex Only Scan using pkey on table (cost=0.46..29.09 rows=1\nwidth=19) (actual time=0.033..0.053 rows=10 loops=1)\n Index Cond: (a = 662028765)\n\" Filter: (b = ANY\n('{634579987:662028765,561730945:662028765,505555183:662028765,472806302:662028765,401361055:662028765,363587258:662028765,346093772:662028765,314369897:662028765,289498328:662028765,217993946:662028765}'::text[]))\"\n Rows Removed by Filter: 1\n Heap Fetches: 11\nPlanning Time: 0.095 ms\nExecution Time: 0.070 ms\n\nMy question is, why isn't it using the index for column b? Is this\nexpected? And why is it doing heap lookups for every row?\n\nPerformance is still good, but I am curious.\n\nThanks in advance!",
"msg_date": "Sun, 18 Aug 2024 20:55:50 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "On Sun, Aug 18, 2024 at 9:56 PM Stephen Samuel (Sam) <[email protected]> wrote:\n> My question is, why isn't it using the index for column b? Is this expected? And why is it doing heap lookups for every row,.\n\nThis has been fixed for Postgres 17:\n\nhttps://pganalyze.com/blog/5mins-postgres-17-faster-btree-index-scans\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 18 Aug 2024 21:59:10 -0400",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Oh as simple as upgrading!\nOk great, appreciate the quick reply. Will have to wait for AWS to support\n17 :)\n\n\nOn Sun, 18 Aug 2024 at 20:59, Peter Geoghegan <[email protected]> wrote:\n\n> On Sun, Aug 18, 2024 at 9:56 PM Stephen Samuel (Sam) <[email protected]>\n> wrote:\n> > My question is, why isn't it using the index for column b? Is this\n> expected? And why is it doing heap lookups for every row,.\n>\n> This has been fixed for Postgres 17:\n>\n> https://pganalyze.com/blog/5mins-postgres-17-faster-btree-index-scans\n>\n> --\n> Peter Geoghegan\n>",
"msg_date": "Sun, 18 Aug 2024 21:01:16 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "On Sun, Aug 18, 2024 at 10:01 PM Stephen Samuel (Sam) <[email protected]> wrote:\n> Oh as simple as upgrading!\n> Ok great, appreciate the quick reply. Will have to wait for AWS to support 17 :)\n\nIt is possible to use index quals for both a and b on earlier\nversions, with certain restrictions. You might try setting\nrandom_page_cost to a much lower value, to see if that allows the\nplanner to use such a plan with your real query.\n\nIn my experience it's very unlikely that the planner will do that,\nthough, even when coaxed. At least when there are this many IN()\nconstants. So you probably really will need to upgrade to 17.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 18 Aug 2024 22:08:24 -0400",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Performance is pretty good anyway, and I'm only running 5 r7.large readers\non this service, I was just looking at the query planner and it surprised\nme.\n\n\nOn Sun, 18 Aug 2024 at 21:08, Peter Geoghegan <[email protected]> wrote:\n\n> On Sun, Aug 18, 2024 at 10:01 PM Stephen Samuel (Sam) <[email protected]>\n> wrote:\n> > Oh as simple as upgrading!\n> > Ok great, appreciate the quick reply. Will have to wait for AWS to\n> support 17 :)\n>\n> It is possible to use index quals for both a and b on earlier\n> versions, with certain restrictions. You might try setting\n> random_page_cost to a much lower value, to see if that allows the\n> planner to use such a plan with your real query.\n>\n> In my experience it's very unlikely that the planner will do that,\n> though, even when coaxed. At least when there are this many IN()\n> constants. So you probably really will need to upgrade to 17.\n>\n> --\n> Peter Geoghegan\n>",
"msg_date": "Sun, 18 Aug 2024 21:09:51 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "\"Stephen Samuel (Sam)\" <[email protected]> writes:\n> There is a unique index on (a,b)\n> The query is:\n\n> SELECT b\n> FROM table\n> WHERE a = <id>\n> AND b IN (<ids>)\n\n> The planner says index only scan, but is filtering on b.\n\n> Index Only Scan using pkey on table (cost=0.46..29.09 rows=1\n> width=19) (actual time=0.033..0.053 rows=10 loops=1)\n> Index Cond: (a = 662028765)\n> \" Filter: (b = ANY\n> ('{634579987:662028765,561730945:662028765,505555183:662028765,472806302:662028765,401361055:662028765,363587258:662028765,346093772:662028765,314369897:662028765,289498328:662028765,217993946:662028765}'::text[]))\"\n> Rows Removed by Filter: 1\n> Heap Fetches: 11\n> Planning Time: 0.095 ms\n> Execution Time: 0.070 ms\n\nI think it's a good bet that this query would be *slower* if\nit were done the other way. The filter condition is eliminating\nonly one of the 11 rows matching \"a = 662028765\". If we did what\nyou think you want, we'd initiate ten separate index descents\nto find the other ten rows.\n\nWhether the planner is costing this out accurately enough to\nrealize that, or whether it's just accidentally falling into\nthe right plan, I'm not sure; you've not provided nearly\nenough details for anyone to guess what the other cost estimate\nwas.\n\n> And why is it doing heap lookups for every row,.\n\nYeah, that part is a weakness I've wanted to fix for a long\ntime: it could do the filter condition by fetching b from the\nindex, but it doesn't notice that and has to go to the heap\nto get b. (If the other plan does win, it'd likely be because\nof that problem and not because the index scanning strategy\nper se is better.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Aug 2024 22:50:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "On Sun, Aug 18, 2024 at 10:50 PM Tom Lane <[email protected]> wrote:\n> I think it's a good bet that this query would be *slower* if\n> it were done the other way. The filter condition is eliminating\n> only one of the 11 rows matching \"a = 662028765\". If we did what\n> you think you want, we'd initiate ten separate index descents\n> to find the other ten rows.\n\nTrue - on versions prior to Postgres 17.\n\nOn 17 the number of index descents will be minimal. If there are less\nthan a few hundred index tuples with the value a = <whatever>, then\nthere'll only be one descent.\n\n> Yeah, that part is a weakness I've wanted to fix for a long\n> time: it could do the filter condition by fetching b from the\n> index, but it doesn't notice that and has to go to the heap\n> to get b.\n\nIt was fixed? At least on 17.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 18 Aug 2024 23:11:40 -0400",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Sun, Aug 18, 2024 at 10:50 PM Tom Lane <[email protected]> wrote:\n>> Yeah, that part is a weakness I've wanted to fix for a long\n>> time: it could do the filter condition by fetching b from the\n>> index, but it doesn't notice that and has to go to the heap\n>> to get b.\n\n> It was fixed? At least on 17.\n\nOh, sorry, I was thinking of a related problem that doesn't apply\nhere: matching indexes on expressions to fragments of a filter\ncondition. However, the fact that the OP's EXPLAIN shows heap\nfetches from a supposedly all-visible table suggests that his\nIN isn't getting optimized that way. I wonder why --- it seems\nto work for me, even in fairly old versions. Taking a parallel\nexample from the regression database, even v12 can do\n\nregression=# explain analyze select tenthous from tenk1 where thousand=99 and tenthous in (1,4,7,9,11,55,66,88,99,77,8876,9876);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..4.61 rows=1 width=4) (actual time=0.016..0.018 rows=1 loops=1)\n Index Cond: (thousand = 99)\n Filter: (tenthous = ANY ('{1,4,7,9,11,55,66,88,99,77,8876,9876}'::integer[]))\n Rows Removed by Filter: 9\n Heap Fetches: 0\n Planning Time: 0.298 ms\n Execution Time: 0.036 ms\n(7 rows)\n\nNo heap fetches, so it must have done the filter from the index.\nWhy not in the original case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Aug 2024 00:06:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "select all_visible, count(*)\nfrom pg_visibility('table')\ngroup by all_visible\n\nfalse,1614\ntrue,30575\n\nThe table is partitioned if that matters (but same results if I run the\nqueries directly on the partition).\n\nOn Sun, 18 Aug 2024 at 23:06, Tom Lane <[email protected]> wrote:\n\n> Peter Geoghegan <[email protected]> writes:\n> > On Sun, Aug 18, 2024 at 10:50 PM Tom Lane <[email protected]> wrote:\n> >> Yeah, that part is a weakness I've wanted to fix for a long\n> >> time: it could do the filter condition by fetching b from the\n> >> index, but it doesn't notice that and has to go to the heap\n> >> to get b.\n>\n> > It was fixed? At least on 17.\n>\n> Oh, sorry, I was thinking of a related problem that doesn't apply\n> here: matching indexes on expressions to fragments of a filter\n> condition. However, the fact that the OP's EXPLAIN shows heap\n> fetches from a supposedly all-visible table suggests that his\n> IN isn't getting optimized that way. I wonder why --- it seems\n> to work for me, even in fairly old versions. Taking a parallel\n> example from the regression database, even v12 can do\n>\n> regression=# explain analyze select tenthous from tenk1 where thousand=99\n> and tenthous in (1,4,7,9,11,55,66,88,99,77,8876,9876);\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Only Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..4.61\n> rows=1 width=4) (actual time=0.016..0.018 rows=1 loops=1)\n> Index Cond: (thousand = 99)\n> Filter: (tenthous = ANY\n> ('{1,4,7,9,11,55,66,88,99,77,8876,9876}'::integer[]))\n> Rows Removed by Filter: 9\n> Heap Fetches: 0\n> Planning Time: 0.298 ms\n> Execution Time: 0.036 ms\n> (7 rows)\n>\n> No heap fetches, so it must have done the filter from the index.\n> Why not in the original case?\n>\n> regards, tom lane\n>",
"msg_date": "Sun, 18 Aug 2024 23:16:45 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Hello,\n\nThe query's behavior is expected due to how PostgreSQL handles composite\nindexes and MVCC. The index on `(a, b)` is used efficiently for the `a`\ncondition, but the `b IN (<ids>)` filter is more complex, leading to\nadditional filtering rather than direct index usage. Although the\nindex-only scan is utilized, heap fetches still occur to verify tuple\nvisibility, a necessary step when the visibility map doesn’t confirm\nvisibility or to apply the `b` filter accurately. This is standard in\nPostgreSQL’s handling of such queries, ensuring data consistency and\naccuracy. Performance remains good, but these heap fetches could be\noptimized if needed by reconsidering the index structure or query design.\nThank you!\n\n\n\n\nOn Mon, Aug 19, 2024 at 7:26 AM Stephen Samuel (Sam) <[email protected]>\nwrote:\n\n> Hi folks.\n>\n> I have a table with 4.5m rows per partition (16 partitions) (I know, very\n> small, probably didn't need to be partitioned).\n>\n> The table has two columns, a bigint and b text.\n> There is a unique index on (a,b)\n> The query is:\n>\n> SELECT b\n> FROM table\n> WHERE a = <id>\n> AND b IN (<ids>)\n>\n>\n> The visibility map is almost exclusively true.\n> This table gets few updates.\n>\n> The planner says index only scan, but is filtering on b.\n>\n> Index Only Scan using pkey on table (cost=0.46..29.09 rows=1 width=19) (actual time=0.033..0.053 rows=10 loops=1)\n> Index Cond: (a = 662028765)\n> \" Filter: (b = ANY ('{634579987:662028765,561730945:662028765,505555183:662028765,472806302:662028765,401361055:662028765,363587258:662028765,346093772:662028765,314369897:662028765,289498328:662028765,217993946:662028765}'::text[]))\"\n> Rows Removed by Filter: 1\n> Heap Fetches: 11\n> Planning Time: 0.095 ms\n> Execution Time: 0.070 ms\n>\n> My question is, why isn't it using the index for column b? Is this expected? And why is it doing heap lookups for every row,.\n>\n> Performance is still good, but I am curious.\n>\n> Thanks in advance!\n>\n>\n\n-- \n\n*Best Regards *\n*Shiv Iyer *",
"msg_date": "Mon, 19 Aug 2024 13:35:40 +0530",
"msg_from": "Shiv Iyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "On Mon, Aug 19, 2024 at 12:06 AM Tom Lane <[email protected]> wrote:\n> > It was fixed? At least on 17.\n>\n> Oh, sorry, I was thinking of a related problem that doesn't apply\n> here: matching indexes on expressions to fragments of a filter\n> condition. However, the fact that the OP's EXPLAIN shows heap\n> fetches from a supposedly all-visible table suggests that his\n> IN isn't getting optimized that way.\n\nAs you pointed out, the number of tuples filtered out by the filter\nqual is only a small proportion of the total in this particular\nexample (wasn't really paying attention to that aspect myself). I\nguess that that factor makes the Postgres 17 nbtree SAOP work almost\nirrelevant to the exact scenario shown, since even if true index quals\ncould be used they'd only save at most one heap page access.\n\nI would still expect the 17 work to make the query slightly faster,\nsince my testing showed that avoiding expression evaluation is\nslightly faster. Plus it would *definitely* make similar queries\nfaster by avoiding heap access entirely -- cases where the use of true\nindex quals can eliminate most heap page accesses.\n\n> No heap fetches, so it must have done the filter from the index.\n> Why not in the original case?\n\nMy guess is that that's due to some kind of naturally occurring\ncorrelation. The few unset-in-VM pages are disproportionately likely\nto become heap fetches.\n\nThe difficulty at predicting this kind of variation argues for an\napproach that makes as many decisions as possible at runtime. This is\nparticularly true of how we skip within the index scan. I wouldn't\nexpect skipping to be useful in the exact scenario shown, but why not\nbe open to the possibility? If the planner only has one choice then\nthere are no wrong choices.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 19 Aug 2024 11:21:13 -0400",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Just for my own knowledge:\n\nThis index covers both columns needed in the predicate/projection, and the\nvisibility bit is almost always set, why does it need to go to the heap at\nall and doesn't just get what it needs from the index?\nOr does scanning the _vm table count as a heap access in the planner ?\n\nOn Mon, 19 Aug 2024 at 10:21, Peter Geoghegan <[email protected]> wrote:\n\n> On Mon, Aug 19, 2024 at 12:06 AM Tom Lane <[email protected]> wrote:\n> > > It was fixed? At least on 17.\n> >\n> > Oh, sorry, I was thinking of a related problem that doesn't apply\n> > here: matching indexes on expressions to fragments of a filter\n> > condition. However, the fact that the OP's EXPLAIN shows heap\n> > fetches from a supposedly all-visible table suggests that his\n> > IN isn't getting optimized that way.\n>\n> As you pointed out, the number of tuples filtered out by the filter\n> qual is only a small proportion of the total in this particular\n> example (wasn't really paying attention to that aspect myself). I\n> guess that that factor makes the Postgres 17 nbtree SAOP work almost\n> irrelevant to the exact scenario shown, since even if true index quals\n> could be used they'd only save at most one heap page access.\n>\n> I would still expect the 17 work to make the query slightly faster,\n> since my testing showed that avoiding expression evaluation is\n> slightly faster. Plus it would *definitely* make similar queries\n> faster by avoiding heap access entirely -- cases where the use of true\n> index quals can eliminate most heap page accesses.\n>\n> > No heap fetches, so it must have done the filter from the index.\n> > Why not in the original case?\n>\n> My guess is that that's due to some kind of naturally occurring\n> correlation. The few unset-in-VM pages are disproportionately likely\n> to become heap fetches.\n>\n> The difficulty at predicting this kind of variation argues for an\n> approach that makes as many decisions as possible at runtime. This is\n> particularly true of how we skip within the index scan. I wouldn't\n> expect skipping to be useful in the exact scenario shown, but why not\n> be open to the possibility? If the planner only has one choice then\n> there are no wrong choices.\n>\n> --\n> Peter Geoghegan\n>",
"msg_date": "Mon, 19 Aug 2024 18:44:04 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "\"Stephen Samuel (Sam)\" <[email protected]> writes:\n> This index covers both columns needed in the predicate/projection, and the\n> visibility bit is almost always set, why does it need to go to the heap at\n> all and doesn't just get what it needs from the index?\n\nPeter's theory was that the particular tuples you were fetching were\nin not-all-visible pages. That seems plausible to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Aug 2024 19:55:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
},
{
"msg_contents": "Ah, his theory was that I got unlucky in my sample queries.\n\nIf I pick data that's much older in the table, then it would seem to\nconfirm his theory.\n\nIndex Only Scan using xx (cost=0.52..25.07 rows=1 width=19) (actual\ntime=0.032..0.039 rows=34 loops=1)\n Index Cond: (a = 1654)\n\" Filter: (b = ANY\n('{1654:150843999,1654:178559906,1654:196691125,1654:213859809,1654:215290364,1654:232833953,1654:234187139,1654:235553821,1654:2514914,1654:27042020,1654:28414362,1654:290939423,1654:294364845,1654:302084789,1654:308624761,1654:321909343,1654:325450448,1654:333349583,1654:333780122,1654:352705002,1654:357720420,1654:360894242,1654:37357227,1654:38419057,1654:397848555,1654:398104037,1654:414568491,1654:415804877,1654:425839729,1654:428927290,1654:430795031,1654:432428733,1654:485645738,1654:490213252}'::text[]))\"\n Rows Removed by Filter: 8\n Heap Fetches: 1\nPlanning Time: 0.348 ms\nExecution Time: 0.058 ms\n\nOn Mon, 19 Aug 2024 at 18:55, Tom Lane <[email protected]> wrote:\n\n> \"Stephen Samuel (Sam)\" <[email protected]> writes:\n> > This index covers both columns needed in the predicate/projection, and\n> the\n> > visibility bit is almost always set, why does it need to go to the heap\n> at\n> > all and doesn't just get what it needs from the index?\n>\n> Peter's theory was that the particular tuples you were fetching were\n> in not-all-visible pages. 
That seems plausible to me.\n>\n> regards, tom lane\n>\n",
"msg_date": "Mon, 19 Aug 2024 19:00:59 -0500",
"msg_from": "\"Stephen Samuel (Sam)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to understand why a query is filtering when there is a\n composite index"
}
] |
[
{
"msg_contents": "All;\n\nI am running a select from a partitioned table. The table (and all the \npartitions) have an index on contract_date like this:\nCREATE INDEX on part_tab (contract_date) where contract_date > \n'2022-01-01'::date\n\nThe table (including all partitions) has 32 million rows\nThe db server is an aurora postgresql instance with 128GB of ram and 16 \nvcpu's\n\nThe shared buffers is set to 90GB and effective_cache_size is also 90GB\nI set default_statistics_target to 1000 and ran a vacuum analyze on the \ntable\n\nI am selecting a number of columns and specifying this where clause:\n\nWHERE (\n (contract_date IS NULL)\n OR\n (contract_date > '2022-01-01'::date)\n )\n\nThis takes 15 seconds to run and an explain says it's doing a table scan \non all partitions (the query is not specifying the partition key)\nIf I change the where clause to look like this:\n\nWHERE (\n (contract_date > '2022-01-01'::date)\n )\n\nThen it performs index scans on all the partitions and runs in about 600ms\n\nIf I leave the where clause off entirely it performs table scans of the \npartitions and takes approx 18 seconds to run\n\nI am trying to get the performance to less than 2sec,\nI have tried adding indexes on the table and all partitions like this:\nCREATE INDEX ON table (contract_date NULLS FIRST) ;\nbut the performance with the full where clause is the same:\n\nWHERE (\n (contract_date IS NULL)\n OR\n (contract_date > '2022-01-01'::date)\n )\n\nruns in 15 seconds and scans all partitions\n\nI also tried indexes on the table and all partitions like this:\nCREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n\nbut I get the same result, table scans on all partitions and it runs in \n15 seconds\n\nAny help or advice ?\n\nThanks in advance\n\n\n",
"msg_date": "Thu, 22 Aug 2024 15:44:49 -0600",
"msg_from": "Sbob <[email protected]>",
"msg_from_op": true,
"msg_subject": "checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "\n\n> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n> \n> All;\n> \n> I am running a select from a partitioned table. The table (and all the partitions) have an index on contract_date like this:\n> CREATE INDEX on part_tab (contract_date) where contract_date > '2022-01-01'::date\n> \n> The table (including all partitions) has 32million rows\n> The db server is an aurora postgresql instance with 128GB of ram and 16 vcpu's\n> \n> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n> I set default_statistics_target to 1000 and ram a vacuum analyze on the table\n> \n> I am selecting a number of columns and specifying this where clause:\n> \n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n> \n> This takes 15 seconds to run and an explain says it's doing a table scan on all partitions (the query is not specifying the partition key)\n> If I change the where clause to look like this:\n> \n> WHERE (\n> (contract_date > '2022-01-01'::date)\n> )\n> \n> Then it performs index scans on all the partitions and runs in about 600ms\n> \n> If i leave the where clause off entirely it performs table scans of the partitions and takes approx 18 seconds to run\n> \n> I am trying to get the performance to less than 2sec,\n> I have tried adding indexes on the table and all partitions like this:\n> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n> but the performance with the full where clause is the same:\n> \n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n> \n> runs in 15 seconds and scans all partitions\n> \n> I also tried indexes i=on the table and all partitions like this:\n> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n> \n> but I get the same result, table scans on all partitions and it runs in 15 seconds\n> \n> Any help or advice ?\n> \n> Thanks in advance\n> \n> \n\nWhat is contract_date and when will it be null?\n\n",
"msg_date": "Thu, 22 Aug 2024 19:06:10 -0400",
"msg_from": "Rui DeSousa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "\nOn 8/22/24 5:06 PM, Rui DeSousa wrote:\n>\n>> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n>>\n>> All;\n>>\n>> I am running a select from a partitioned table. The table (and all the partitions) have an index on contract_date like this:\n>> CREATE INDEX on part_tab (contract_date) where contract_date > '2022-01-01'::date\n>>\n>> The table (including all partitions) has 32million rows\n>> The db server is an aurora postgresql instance with 128GB of ram and 16 vcpu's\n>>\n>> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n>> I set default_statistics_target to 1000 and ram a vacuum analyze on the table\n>>\n>> I am selecting a number of columns and specifying this where clause:\n>>\n>> WHERE (\n>> (contract_date IS NULL)\n>> OR\n>> (contract_date > '2022-01-01'::date)\n>> )\n>>\n>> This takes 15 seconds to run and an explain says it's doing a table scan on all partitions (the query is not specifying the partition key)\n>> If I change the where clause to look like this:\n>>\n>> WHERE (\n>> (contract_date > '2022-01-01'::date)\n>> )\n>>\n>> Then it performs index scans on all the partitions and runs in about 600ms\n>>\n>> If i leave the where clause off entirely it performs table scans of the partitions and takes approx 18 seconds to run\n>>\n>> I am trying to get the performance to less than 2sec,\n>> I have tried adding indexes on the table and all partitions like this:\n>> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n>> but the performance with the full where clause is the same:\n>>\n>> WHERE (\n>> (contract_date IS NULL)\n>> OR\n>> (contract_date > '2022-01-01'::date)\n>> )\n>>\n>> runs in 15 seconds and scans all partitions\n>>\n>> I also tried indexes i=on the table and all partitions like this:\n>> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>>\n>> but I get the same result, table scans on all partitions and it runs in 15 seconds\n>>\n>> Any help or advice ?\n>>\n>> Thanks 
in advance\n>>\n>>\n> What is contract_date and when will it be null?\n\n\nIt's a date data type and it allows NULLs; not sure why, this is a \nclient's system\n\n\n\n\n",
"msg_date": "Thu, 22 Aug 2024 17:26:22 -0600",
"msg_from": "Sbob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "\nOn 8/22/24 5:26 PM, Sbob wrote:\n>\n> On 8/22/24 5:06 PM, Rui DeSousa wrote:\n>>\n>>> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n>>>\n>>> All;\n>>>\n>>> I am running a select from a partitioned table. The table (and all \n>>> the partitions) have an index on contract_date like this:\n>>> CREATE INDEX on part_tab (contract_date) where contract_date > \n>>> '2022-01-01'::date\n>>>\n>>> The table (including all partitions) has 32million rows\n>>> The db server is an aurora postgresql instance with 128GB of ram and \n>>> 16 vcpu's\n>>>\n>>> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n>>> I set default_statistics_target to 1000 and ram a vacuum analyze on \n>>> the table\n>>>\n>>> I am selecting a number of columns and specifying this where clause:\n>>>\n>>> WHERE (\n>>> (contract_date IS NULL)\n>>> OR\n>>> (contract_date > '2022-01-01'::date)\n>>> )\n>>>\n>>> This takes 15 seconds to run and an explain says it's doing a table \n>>> scan on all partitions (the query is not specifying the partition key)\n>>> If I change the where clause to look like this:\n>>>\n>>> WHERE (\n>>> (contract_date > '2022-01-01'::date)\n>>> )\n>>>\n>>> Then it performs index scans on all the partitions and runs in about \n>>> 600ms\n>>>\n>>> If i leave the where clause off entirely it performs table scans of \n>>> the partitions and takes approx 18 seconds to run\n>>>\n>>> I am trying to get the performance to less than 2sec,\n>>> I have tried adding indexes on the table and all partitions like this:\n>>> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n>>> but the performance with the full where clause is the same:\n>>>\n>>> WHERE (\n>>> (contract_date IS NULL)\n>>> OR\n>>> (contract_date > '2022-01-01'::date)\n>>> )\n>>>\n>>> runs in 15 seconds and scans all partitions\n>>>\n>>> I also tried indexes i=on the table and all partitions like this:\n>>> CREATE INDEX ON table (contract_date) WHERE contract_date IS 
NULL;\n>>>\n>>> but I get the same result, table scans on all partitions and it runs \n>>> in 15 seconds\n>>>\n>>> Any help or advice ?\n>>>\n>>> Thanks in advance\n>>>\n>>>\n>> What is contract_date and when will it be null?\n>\n>\n> it's a date data type and it allows NULL's not sure why, this is a \n> client's system\n>\n>\n29 million of the 32 million rows in the table have NULL for contract_date\n\n\n\n\n\n\n",
"msg_date": "Thu, 22 Aug 2024 17:32:18 -0600",
"msg_from": "Sbob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "On Thu, Aug 22, 2024 at 4:32 PM Sbob <[email protected]> wrote:\n\n>\n> 29 million of the 32 million rows in the table have NULL for contract_date\n>\n>\nYour expectation that this query should use an index is flawed. Indexes\nare for highly selective queries. Finding nulls on that table is not\nselective.\n\nDavid J.\n",
"msg_date": "Thu, 22 Aug 2024 17:01:37 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "Sbob <[email protected]> writes:\n> 29 million of the 32 million rows in the table have NULL for contract_date\n\n[ blink... ] So your query is selecting at least 29/32nds of the\ntable, plus however much matches the contract_date > '2022-01-01'\nalternative. I'm not sure how you expect that to be significantly\ncheaper than scanning the whole table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Aug 2024 20:05:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "> On Aug 22, 2024, at 7:32 PM, Sbob <[email protected]> wrote:\n> \n> \n> On 8/22/24 5:26 PM, Sbob wrote:\n>> \n>> On 8/22/24 5:06 PM, Rui DeSousa wrote:\n>>> \n>>>> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n>>>> \n>>>> All;\n>>>> \n>>>> I am running a select from a partitioned table. The table (and all the partitions) have an index on contract_date like this:\n>>>> CREATE INDEX on part_tab (contract_date) where contract_date > '2022-01-01'::date\n>>>> \n>>>> The table (including all partitions) has 32million rows\n>>>> The db server is an aurora postgresql instance with 128GB of ram and 16 vcpu's\n>>>> \n>>>> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n>>>> I set default_statistics_target to 1000 and ram a vacuum analyze on the table\n>>>> \n>>>> I am selecting a number of columns and specifying this where clause:\n>>>> \n>>>> WHERE (\n>>>> (contract_date IS NULL)\n>>>> OR\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> This takes 15 seconds to run and an explain says it's doing a table scan on all partitions (the query is not specifying the partition key)\n>>>> If I change the where clause to look like this:\n>>>> \n>>>> WHERE (\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> Then it performs index scans on all the partitions and runs in about 600ms\n>>>> \n>>>> If i leave the where clause off entirely it performs table scans of the partitions and takes approx 18 seconds to run\n>>>> \n>>>> I am trying to get the performance to less than 2sec,\n>>>> I have tried adding indexes on the table and all partitions like this:\n>>>> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n>>>> but the performance with the full where clause is the same:\n>>>> \n>>>> WHERE (\n>>>> (contract_date IS NULL)\n>>>> OR\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> runs in 15 seconds and scans all partitions\n>>>> \n>>>> I also tried indexes i=on the table and all 
partitions like this:\n>>>> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>>>> \n>>>> but I get the same result, table scans on all partitions and it runs in 15 seconds\n>>>> \n>>>> Any help or advice ?\n>>>> \n>>>> Thanks in advance\n>>>> \n>>>> \n>>> What is contract_date and when will it be null?\n>> \n>> \n>> it's a date data type and it allows NULL's not sure why, this is a client's system\n>> \n>> \n> 29 million of the 32 million rows in the table have NULL for contract_date\n> \n\nThe NULLs are excluded by your partial index's predicate, thus the OR condition invalidates the use of the index. \n\nSince you are already creating a partial index, just include the NULLs. The index will get used for both of your queries.\n\ncreate index table_idx1 \n on table (contract_date) \n where contract_date > '1/1/2022'\n or contract_date is null\n;\n\n\nThe reason why I asked when contract_date is null is because attributes in a table should be non-nullable. If it’s nullable then that begs the question whether it belongs in that table in the first place; and sometimes the answer is yes. I just see a lot of half-baked schemas out there. I refer to them as organically designed schemas. \n\n-Rui.\n",
"msg_date": "Thu, 22 Aug 2024 20:07:55 -0400",
"msg_from": "Rui DeSousa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "Hi Sbob,\nHave you tried using the following indexes ?\n\nB-tree\n\n -\n\n The default and most commonly used type, ideal for equality, range\n and Pattern queries.\n\nBRIN (Block Range INdex)\n\n -\n\n Compact indexes that are efficient for large tables where the data is\n naturally ordered.\n\n\n\nOn Fri, 23 Aug 2024 at 02:45, Sbob <[email protected]> wrote:\n\n> All;\n>\n> I am running a select from a partitioned table. The table (and all the\n> partitions) have an index on contract_date like this:\n> CREATE INDEX on part_tab (contract_date) where contract_date >\n> '2022-01-01'::date\n>\n> The table (including all partitions) has 32million rows\n> The db server is an aurora postgresql instance with 128GB of ram and 16\n> vcpu's\n>\n> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n> I set default_statistics_target to 1000 and ram a vacuum analyze on the\n> table\n>\n> I am selecting a number of columns and specifying this where clause:\n>\n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> This takes 15 seconds to run and an explain says it's doing a table scan\n> on all partitions (the query is not specifying the partition key)\n> If I change the where clause to look like this:\n>\n> WHERE (\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> Then it performs index scans on all the partitions and runs in about 600ms\n>\n> If i leave the where clause off entirely it performs table scans of the\n> partitions and takes approx 18 seconds to run\n>\n> I am trying to get the performance to less than 2sec,\n> I have tried adding indexes on the table and all partitions like this:\n> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n> but the performance with the full where clause is the same:\n>\n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> runs in 15 seconds and scans all partitions\n>\n> I also tried indexes i=on the table and all 
partitions like this:\n> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>\n> but I get the same result, table scans on all partitions and it runs in\n> 15 seconds\n>\n> Any help or advice ?\n>\n> Thanks in advance\n>\n>\n>\n",
"msg_date": "Fri, 23 Aug 2024 09:40:28 +0500",
"msg_from": "Muhammad Usman Khan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "Sbob schrieb am 22.08.2024 um 23:44:\r\n> \r\n> I am selecting a number of columns and specifying this where clause:\r\n> \r\n> WHERE (\r\n> (contract_date IS NULL)\r\n> OR\r\n> (contract_date > '2022-01-01'::date)\r\n> )\r\n> \r\n\r\nIt's not the check for NULL, it's the OR that makes this perform so badly. \r\n\r\nI typically never set columns used for range queries to NULL. \r\n\r\nWould using infinity instead of NULL be a viable option here? \r\n\r\nThen you can remove the OR condition entirely. \r\n\r\n\r\n",
"msg_date": "Fri, 23 Aug 2024 08:45:44 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "> On Aug 22, 2024, at 8:05 PM, Tom Lane <[email protected]> wrote:\n> \n> Sbob <[email protected]> writes:\n>> 29 million of the 32 million rows in the table have NULL for contract_date\n> \n> [ blink... ] So your query is selecting at least 29/32nds of the\n> table, plus however much matches the contract_date > '2022-01-01'\n> alternative. I'm not sure how you expect that to be significantly\n> cheaper than scanning the whole table.\n> \n> \t\t\tregards, tom lane\n\n\n^^^ This is the best answer; however, what is the actual query and how is it used? I assume it’s an analytical query and not actually extracting the rows. Is it for a dashboard or an ad-hoc query? \n\n\nHere’s a simple example; from 4/1 it uses the index, from 3/1 it does a full table scan. Depending on what you’re using the query for, you could use a covered index or a materialized view.\n\n\nprod=# create index emp_idx3 on emp (contract_date);\nCREATE INDEX\nTime: 6036.030 ms (00:06.036)\nprod=# select sum(site_id) from emp where contract_date > '4/1/2024';\n sum \n-----------\n 927473447\n(1 row)\n\nTime: 711.774 ms\nprod=# select sum(site_id) from emp where contract_date > '3/1/2024';\n sum \n------------\n 1128971203\n(1 row)\n\nTime: 1945.397 ms (00:01.945)\nprod=# select sum(site_id) from emp where contract_date > '3/1/2024' or contract_date is null;\n sum \n------------\n 3823075309\n(1 row)\n\nTime: 1821.284 ms (00:01.821)\nprod=# explain select sum(site_id) from emp where contract_date > '4/1/2024';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=754070.31..754070.32 rows=1 width=8)\n -> Gather (cost=754069.59..754070.30 rows=7 width=8)\n Workers Planned: 7\n -> Partial Aggregate (cost=753069.59..753069.60 rows=1 width=8)\n -> Parallel Bitmap Heap Scan on emp (cost=5343.20..752867.80 rows=80715 width=4)\n Recheck Cond: (contract_date > '2024-04-01'::date)\n -> Bitmap Index Scan on emp_idx3 
(cost=0.00..5201.95 rows=565002 width=0)\n Index Cond: (contract_date > '2024-04-01'::date)\n JIT:\n Functions: 7\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n(11 rows)\n\nTime: 1.196 ms\nprod=# explain select sum(site_id) from emp where contract_date > '3/1/2024';\n QUERY PLAN \n---------------------------------------------------------------------------------------\n Finalize Aggregate (cost=764566.90..764566.91 rows=1 width=8)\n -> Gather (cost=764566.18..764566.89 rows=7 width=8)\n Workers Planned: 7\n -> Partial Aggregate (cost=763566.18..763566.19 rows=1 width=8)\n -> Parallel Seq Scan on emp (cost=0.00..763320.15 rows=98411 width=4)\n Filter: (contract_date > '2024-03-01'::date)\n JIT:\n Functions: 7\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n(9 rows)\n\nTime: 1.172 ms\nprod=# drop index emp_idx3;\nDROP INDEX\nTime: 8.663 ms\nprod=# create index emp_idx3 on emp (contract_date) include (site_id);\nCREATE INDEX\nTime: 7002.860 ms (00:07.003)\nprod=# select sum(site_id) from emp where contract_date > '4/1/2024';\n sum \n-----------\n 927473447\n(1 row)\n\nTime: 56.450 ms\nprod=# select sum(site_id) from emp where contract_date > '3/1/2024';\n sum \n------------\n 1128971203\n(1 row)\n\nTime: 49.115 ms\nprod=# select sum(site_id) from emp where contract_date > '3/1/2024' or contract_date is null;\n sum \n------------\n 3823075309\n(1 row)\n\nTime: 702.962 ms\nprod=# explain select sum(site_id) from emp where contract_date > '3/1/2024' or contract_date is null;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=216035.47..216035.48 rows=1 width=8)\n -> Gather (cost=216034.74..216035.45 rows=7 width=8)\n Workers Planned: 7\n -> Partial Aggregate (cost=215034.74..215034.75 rows=1 width=8)\n -> Parallel Index Only Scan using emp_idx3 on emp (cost=0.44..213686.81 rows=539174 width=4)\n Filter: 
((contract_date > '2024-03-01'::date) OR (contract_date IS NULL))\n JIT:\n Functions: 5\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n(9 rows)\n\nTime: 0.972 ms\nprod=# select count(*) from emp;\n count \n----------\n 16862243\n(1 row)\n\nTime: 629.995 ms\n",
"msg_date": "Fri, 23 Aug 2024 05:54:51 -0400",
"msg_from": "Rui DeSousa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "I don’t see how an index is going to help since virtually all of the rows are null AND contract_date isn’t the partition key. \n\nPerhaps, you could try a UNION ALL with one query selecting the date and the other selecting where the date is null. \n\nYou could try something really ugly where you make a function index that COALESCEs the nulls to 1-1-1900 and use the COALESCE in the query.\n\n\nSent from my iPhone\n\n> On Aug 22, 2024, at 7:43 PM, Sbob <[email protected]> wrote:\n> \n> \n>> On 8/22/24 5:26 PM, Sbob wrote:\n>> \n>>> On 8/22/24 5:06 PM, Rui DeSousa wrote:\n>>> \n>>>> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n>>>> \n>>>> All;\n>>>> \n>>>> I am running a select from a partitioned table. The table (and all the partitions) have an index on contract_date like this:\n>>>> CREATE INDEX on part_tab (contract_date) where contract_date > '2022-01-01'::date\n>>>> \n>>>> The table (including all partitions) has 32million rows\n>>>> The db server is an aurora postgresql instance with 128GB of ram and 16 vcpu's\n>>>> \n>>>> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n>>>> I set default_statistics_target to 1000 and ram a vacuum analyze on the table\n>>>> \n>>>> I am selecting a number of columns and specifying this where clause:\n>>>> \n>>>> WHERE (\n>>>> (contract_date IS NULL)\n>>>> OR\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> This takes 15 seconds to run and an explain says it's doing a table scan on all partitions (the query is not specifying the partition key)\n>>>> If I change the where clause to look like this:\n>>>> \n>>>> WHERE (\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> Then it performs index scans on all the partitions and runs in about 600ms\n>>>> \n>>>> If i leave the where clause off entirely it performs table scans of the partitions and takes approx 18 seconds to run\n>>>> \n>>>> I am trying to get the performance to less than 2sec,\n>>>> I 
have tried adding indexes on the table and all partitions like this:\n>>>> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n>>>> but the performance with the full where clause is the same:\n>>>> \n>>>> WHERE (\n>>>> (contract_date IS NULL)\n>>>> OR\n>>>> (contract_date > '2022-01-01'::date)\n>>>> )\n>>>> \n>>>> runs in 15 seconds and scans all partitions\n>>>> \n>>>> I also tried indexes i=on the table and all partitions like this:\n>>>> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>>>> \n>>>> but I get the same result, table scans on all partitions and it runs in 15 seconds\n>>>> \n>>>> Any help or advice ?\n>>>> \n>>>> Thanks in advance\n>>>> \n>>>> \n>>> What is contract_date and when will it be null?\n>> \n>> \n>> it's a date data type and it allows NULL's not sure why, this is a client's system\n>> \n>> \n> 29 million of the 32 million rows in the table have NULL for contract_date\n> \n> \n> \n> \n> \n> \n\n\n\n",
"msg_date": "Fri, 23 Aug 2024 11:39:05 +0000",
"msg_from": "Doug Reynolds <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "On 8/22/24 6:07 PM, Rui DeSousa wrote:\n>\n>\n>> On Aug 22, 2024, at 7:32 PM, Sbob <[email protected]> wrote:\n>>\n>>\n>> On 8/22/24 5:26 PM, Sbob wrote:\n>>>\n>>> On 8/22/24 5:06 PM, Rui DeSousa wrote:\n>>>>\n>>>>> On Aug 22, 2024, at 5:44 PM, Sbob <[email protected]> wrote:\n>>>>>\n>>>>> All;\n>>>>>\n>>>>> I am running a select from a partitioned table. The table (and all \n>>>>> the partitions) have an index on contract_date like this:\n>>>>> CREATE INDEX on part_tab (contract_date) where contract_date > \n>>>>> '2022-01-01'::date\n>>>>>\n>>>>> The table (including all partitions) has 32million rows\n>>>>> The db server is an aurora postgresql instance with 128GB of ram \n>>>>> and 16 vcpu's\n>>>>>\n>>>>> The shared buffers is set to 90GB and effective_cache_size is also \n>>>>> 90GB\n>>>>> I set default_statistics_target to 1000 and ram a vacuum analyze \n>>>>> on the table\n>>>>>\n>>>>> I am selecting a number of columns and specifying this where clause:\n>>>>>\n>>>>> WHERE (\n>>>>> (contract_date IS NULL)\n>>>>> OR\n>>>>> (contract_date > '2022-01-01'::date)\n>>>>> )\n>>>>>\n>>>>> This takes 15 seconds to run and an explain says it's doing a \n>>>>> table scan on all partitions (the query is not specifying the \n>>>>> partition key)\n>>>>> If I change the where clause to look like this:\n>>>>>\n>>>>> WHERE (\n>>>>> (contract_date > '2022-01-01'::date)\n>>>>> )\n>>>>>\n>>>>> Then it performs index scans on all the partitions and runs in \n>>>>> about 600ms\n>>>>>\n>>>>> If i leave the where clause off entirely it performs table scans \n>>>>> of the partitions and takes approx 18 seconds to run\n>>>>>\n>>>>> I am trying to get the performance to less than 2sec,\n>>>>> I have tried adding indexes on the table and all partitions like this:\n>>>>> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n>>>>> but the performance with the full where clause is the same:\n>>>>>\n>>>>> WHERE (\n>>>>> (contract_date IS NULL)\n>>>>> OR\n>>>>> (contract_date 
> '2022-01-01'::date)\n>>>>> )\n>>>>>\n>>>>> runs in 15 seconds and scans all partitions\n>>>>>\n>>>>> I also tried indexes i=on the table and all partitions like this:\n>>>>> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>>>>>\n>>>>> but I get the same result, table scans on all partitions and it \n>>>>> runs in 15 seconds\n>>>>>\n>>>>> Any help or advice ?\n>>>>>\n>>>>> Thanks in advance\n>>>>>\n>>>>>\n>>>> What is contract_date and when will it be null?\n>>>\n>>>\n>>> it's a date data type and it allows NULL's not sure why, this is a \n>>> client's system\n>>>\n>>>\n>> 29 million of the 32 million rows in the table have NULL for \n>> contract_date\n>>\n>\n> NULLs are not indexed thus the OR predicate invalidate the use of the \n> index.\n>\n> Since you are already creating a partial index just include the NULLs. \n> It index will get used for both of your queries.\n>\n> create index table_idx1\n> on table (contract_date)\n> where contract_date > ‘1/1/2022’\n> or contract_date is null\n> ;\n>\n>\n> The reason why I asked when is contract_date null is because \n> attributes in a table should be non nullable. If it’s nullable then \n> that begs the question if it belong in that table in the first place; \n> and sometimes the answer is yes. I just see a lot of half baked \n> schemas out there. I refer to them as organically designed schemas.\n>\n> -Rui.\n>\nI agree, I will find out from the client",
"msg_date": "Fri, 23 Aug 2024 07:47:00 -0600",
"msg_from": "Sbob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "> On Aug 23, 2024, at 5:39 AM, Doug Reynolds <[email protected]> wrote:\n> \n> You could try something really ugly where you make a function index that COALESCEs the nulls to 1-1-1900 and use the COALESCE in the query.\n\nI don't see how that could be better than just creating a partial index on it WHERE contract_date IS NULL--and anyway I'm sure you're right that no index would help. No matter what, it seems that sequential scans of all partitions will be required since most rows have it null, and it's not even related to the partition key.\n\n",
"msg_date": "Fri, 23 Aug 2024 09:17:19 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "The only difference is that you would be reading from one index instead of two, which could be more efficient. \n\n\nSent from my iPhone\n\n> On Aug 23, 2024, at 11:19 AM, Scott Ribe <[email protected]> wrote:\n> \n> \n>> \n>> On Aug 23, 2024, at 5:39 AM, Doug Reynolds <[email protected]> wrote:\n>> \n>> You could try something really ugly where you make a function index that COALESCEs the nulls to 1-1-1900 and use the COALESCE in the query.\n> \n> I don't see how that could be better than just creating a partial index on it WHERE contract_date IS NULL--and anyway I'm sure you're right that no index would help. No matter what, it seems that sequential scans of all partitions will be required since most rows have it null, and it's not even related to the partition key.\n> \n\n\n\n",
"msg_date": "Fri, 23 Aug 2024 15:42:16 +0000",
"msg_from": "Doug Reynolds <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "I have had this issue in the past.\n\nThe real admin fix to this is to have a NULL replacement character that prevents this.\nThis does a few things:\n\n 1. An index will index on a replacement character ( I use <->)\n 2. A join is easier on a replacement character than NULL (Nulls last/first avoided)\n 3. Stops all evil NULL rules.\n\nWe strive to fix things, but the real solution, IMHO, is better arch design and better fundamental understanding of how NULL works.\n\nPartitioned tables under 500M-750M rows will always have these performance issues.\n\nGreat ideas on the workaround’s though, I do understand sometimes you inherit a bad db.\n\n\n\nFrom: Scott Ribe <[email protected]>\nDate: Friday, August 23, 2024 at 8:17 AM\nTo: Pgsql-admin <[email protected]>\nSubject: Re: checking for a NULL date in a partitioned table kills performance\n\n> On Aug 23, 2024, at 5:39 AM, Doug Reynolds <[email protected]> wrote:\n\n>\n\n> You could try something really ugly where you make a function index that COALESCEs the nulls to 1-1-1900 and use the COALESCE in the query.\n\n\n\nI don't see how that could be better than just creating a partial index on it WHERE contract_date IS NULL--and anyway I'm sure you're right that no index would help. No matter what, it seems that sequential scans of all partitions will be required since most rows have it null, and it's not even related to the partition key.",
"msg_date": "Fri, 23 Aug 2024 15:49:27 +0000",
"msg_from": "\"Wetmore, Matthew (CTR)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
},
{
"msg_contents": "> On Aug 23, 2024, at 9:42 AM, Doug Reynolds <[email protected]> wrote:\n> \n> The only difference is that you would be reading from one index instead of two, which could be more efficient. \n\nAh yes, that's a good point to take into consideration in such a case.\n\nIn the one at hand though, if statistics are correct, neither index is going to be used, given the 90% of rows with NULL values. Using an index would just waste time compared to a simple sequential scan.\n\n",
"msg_date": "Fri, 23 Aug 2024 10:08:49 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance"
}
] |
[
{
"msg_contents": "All;\n\nI am running a select from a partitioned table. The table (and all the \npartitions) have an index on contract_date like this:\nCREATE INDEX on part_tab (contract_date) where contract_date > \n'2022-01-01'::date\n\nThe table (including all partitions) has 32million rows\nThe db server is an aurora postgresql instance with 128GB of ram and 16 \nvcpu's\n\nThe shared buffers is set to 90GB and effective_cache_size is also 90GB\nI set default_statistics_target to 1000 and ran a vacuum analyze on the \ntable\n\nI am selecting a number of columns and specifying this where clause:\n\nWHERE (\n (contract_date IS NULL)\n OR\n (contract_date > '2022-01-01'::date)\n )\n\nThis takes 15 seconds to run and an explain says it's doing a table scan \non all partitions (the query is not specifying the partition key)\nIf I change the where clause to look like this:\n\nWHERE (\n (contract_date > '2022-01-01'::date)\n )\n\nThen it performs index scans on all the partitions and runs in about 600ms\n\nIf I leave the where clause off entirely it performs table scans of the \npartitions and takes approx 18 seconds to run\n\nI am trying to get the performance to less than 2sec,\nI have tried adding indexes on the table and all partitions like this:\nCREATE INDEX ON table (contract_date NULLS FIRST) ;\nbut the performance with the full where clause is the same:\n\nWHERE (\n (contract_date IS NULL)\n OR\n (contract_date > '2022-01-01'::date)\n )\n\nruns in 15 seconds and scans all partitions\n\nI also tried indexes on the table and all partitions like this:\nCREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n\nbut I get the same result, table scans on all partitions and it runs in \n15 seconds\n\nAny help or advice ?\n\nThanks in advance\n\n\n\n",
"msg_date": "Thu, 22 Aug 2024 15:47:58 -0600",
"msg_from": "Sbob <[email protected]>",
"msg_from_op": true,
"msg_subject": "checking for a NULL date in a partitioned table kills performance\n (accidentally sent to the admin list before)"
},
{
"msg_contents": "Can you put the whole thing into the index where clause?\n\n(contract_date IS NULL)\n OR\n(contract_date > '2022-01-01'::date)\n\nBest regards, Vitalii Tymchyshyn\n\nчт, 22 серп. 2024 р. о 14:48 Sbob <[email protected]> пише:\n\n> All;\n>\n> I am running a select from a partitioned table. The table (and all the\n> partitions) have an index on contract_date like this:\n> CREATE INDEX on part_tab (contract_date) where contract_date >\n> '2022-01-01'::date\n>\n> The table (including all partitions) has 32million rows\n> The db server is an aurora postgresql instance with 128GB of ram and 16\n> vcpu's\n>\n> The shared buffers is set to 90GB and effective_cache_size is also 90GB\n> I set default_statistics_target to 1000 and ram a vacuum analyze on the\n> table\n>\n> I am selecting a number of columns and specifying this where clause:\n>\n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> This takes 15 seconds to run and an explain says it's doing a table scan\n> on all partitions (the query is not specifying the partition key)\n> If I change the where clause to look like this:\n>\n> WHERE (\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> Then it performs index scans on all the partitions and runs in about 600ms\n>\n> If i leave the where clause off entirely it performs table scans of the\n> partitions and takes approx 18 seconds to run\n>\n> I am trying to get the performance to less than 2sec,\n> I have tried adding indexes on the table and all partitions like this:\n> CREATE INDEX ON table (contract_date NULLS FIRST) ;\n> but the performance with the full where clause is the same:\n>\n> WHERE (\n> (contract_date IS NULL)\n> OR\n> (contract_date > '2022-01-01'::date)\n> )\n>\n> runs in 15 seconds and scans all partitions\n>\n> I also tried indexes i=on the table and all partitions like this:\n> CREATE INDEX ON table (contract_date) WHERE contract_date IS NULL;\n>\n> but I get the same result, table scans 
on all partitions and it runs in\n> 15 seconds\n>\n> Any help or advice ?\n>\n> Thanks in advance\n>\n>\n>\n>\n",
"msg_date": "Thu, 22 Aug 2024 16:54:19 -0700",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checking for a NULL date in a partitioned table kills performance\n (accidentally sent to the admin list before)"
}
] |
[
{
"msg_contents": "Hi\nI encountered a query plan problem like the following. The plan contains two nodes whose estimates are 1 and 7 rows, yet the resulting join estimate is 7418. The actual result is just 1 row, but because the estimate is too big, it affects the join methods chosen later. I've analyzed the reason, but I think we can do better.\n\n\npostgres=# explain select * from test t1 left join test t2 on t1.b = t2.b and t2.c = 10 where t1.a = 1;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.85..48.16 rows=7418 width=24)\n -> Index Scan using a_idx on test t1 (cost=0.43..8.45 rows=1 width=12)\n Index Cond: (a = 1)\n -> Index Scan using b_idx on test t2 (cost=0.43..39.64 rows=7 width=12)\n Index Cond: (b = t1.b)\n Filter: (c = 10)\n(6 rows)\n\n\npostgres=# \\d test\n Table \"public.test\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\n c | integer | | |\nIndexes:\n \"a_idx\" btree (a)\n \"b_idx\" btree (b)\n\n\nGoing through the source code, I see how it gets this result.\nFirstly, the estimate of 7 comes from the conditions t1.b = t2.b and t2.c = 10: clauselist_selectivity_ext computes the selectivity, and the result is:\n t2.b selectivity * (t2.c = 10) selectivity * ntuples\n(1/134830) * (1002795/1201000) * 1201000 = 7\n\n\nSecondly, the join selectivity is computed by eqjoinsel, and the join size by calc_joinrel_size_estimate:\ncase JOIN_LEFT:\nnrows = outer_rows * inner_rows * fkselec * jselec;\nif (nrows < outer_rows)\nnrows = outer_rows;\nnrows *= pselec;\nbreak;\nouter_rows is 1 and inner_rows is 1002795, which is the estimate for t2.c = 10 alone, not the 7.\nSo, from this analysis, I think the problem is the estimate of inner_rows: only the condition t2.c = 10 is considered, not t1.b = t2.b and t2.c = 10.",
"msg_date": "Sun, 8 Sep 2024 16:21:12 +0800 (CST)",
"msg_from": "陈雁飞 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Estimate of the inner_rows"
},
{
"msg_contents": "陈雁飞 <[email protected]> writes:\n> postgres=# explain select * from test t1 left join test t2 on t1.b = t2.b and t2.c = 10 where t1.a = 1;\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.85..48.16 rows=7418 width=24)\n> -> Index Scan using a_idx on test t1 (cost=0.43..8.45 rows=1 width=12)\n> Index Cond: (a = 1)\n> -> Index Scan using b_idx on test t2 (cost=0.43..39.64 rows=7 width=12)\n> Index Cond: (b = t1.b)\n> Filter: (c = 10)\n> (6 rows)\n\nI tried to reproduce this and could not. What PG version are you\nrunning exactly? Can you provide a self-contained example?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2024 12:50:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimate of the inner_rows"
},
{
"msg_contents": "陈雁飞 <[email protected]> writes:\n> 1. The first problem is found in PG9.2 for our profuction version. But I test in the latest version.\n\nYou're still running 9.2 in production? That's ... inadvisable.\n\n> 2. The data is simply. The data in b has many distinct number, and the data in C=10 has too many numbers. I've dumped it in test.log in the email attachemt.\n\nThanks for the test data. I don't think there is actually anything\nwrong here, or at least nothing readily improvable. I get this\nfor your original query:\n\npostgres=# explain analyze select * from test t1 left join test t2 on t1.b = t2.b and t2.c = 10 where t1.a = 1;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.85..16.90 rows=7567 width=24) (actual time=0.014..0.015 rows=1 loops=1)\n -> Index Scan using a_idx on test t1 (cost=0.43..8.45 rows=1 width=12) (actual time=0.008..0.008 rows=1 loops=1)\n Index Cond: (a = 1)\n -> Index Scan using b_idx on test t2 (cost=0.43..8.45 rows=1 width=12) (actual time=0.003..0.004 rows=1 loops=1)\n Index Cond: (b = t1.b)\n Filter: (c = 10)\n Planning Time: 0.321 ms\n Execution Time: 0.032 ms\n(8 rows)\n\nSlightly different from your estimate, but random sampling or a\ndifferent statistics-target setting would be enough to explain that.\n\nIt is *not* the t2.c = 10 condition that's resulting in the incorrect\nestimate, because taking it out doesn't change things much:\n\npostgres=# explain analyze select * from test t1 left join test t2 on t1.b = t2.b where t1.a = 1;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.85..16.90 rows=9088 width=24) (actual time=0.014..0.015 rows=1 loops=1)\n -> Index Scan using a_idx on test t1 (cost=0.43..8.45 rows=1 width=12) (actual time=0.008..0.009 rows=1 loops=1)\n 
Index Cond: (a = 1)\n -> Index Scan using b_idx on test t2 (cost=0.43..8.45 rows=1 width=12) (actual time=0.002..0.003 rows=1 loops=1)\n Index Cond: (b = t1.b)\n Planning Time: 0.312 ms\n Execution Time: 0.031 ms\n(7 rows)\n\nNext, let's take out the t1.a = 1 condition:\n\npostgres=# explain analyze select * from test t1 left join test t2 on t1.b = t2.b ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.43..602605.00 rows=10914402957 width=24) (actual time=0.023..1872651.044 rows=10914402000 loops=1)\n -> Seq Scan on test t1 (cost=0.00..18506.00 rows=1201000 width=12) (actual time=0.013..62.837 rows=1201000 loops=1)\n -> Index Scan using b_idx on test t2 (cost=0.43..0.48 rows=1 width=12) (actual time=0.001..0.945 rows=9088 loops=1201000)\n Index Cond: (b = t1.b)\n Planning Time: 0.297 ms\n Execution Time: 2062621.438 ms\n(6 rows)\n\nThe fundamental join-size estimate, that is the selectivity of t1.b =\nt2.b, is pretty much dead on here. And note the actual rows=9088 in\nthe inner indexscan. What we see here is that the actual average\nnumber of t2 rows joining to a t1 row is 9088. Now the reason for the\nestimate for the second query becomes clear: the planner doesn't know\nexactly how many rows join to the specific row with t1.a = 1, but it\nknows that the average number of joined rows should be 9088, so that's\nits estimate. In your original query, that is reduced a little bit by\nthe not-very-selective \"c = 10\" condition, but the big picture is the\nsame. Basically, the row with \"a = 1\" has an atypical number of join\npartners and that's why the join size estimate is wrong for this\nparticular query.\n\nYou might nonetheless complain that the join size estimate should\nbe the product of the two input-scan estimates, but that's not\nhow it works. 
(Typically those do match up, but with highly skewed\ndata like this the fact that they're derived differently can become\nobvious.) The size of the b_idx result is based on considering the\nselectivity of \"t2.b = ?\" where the comparison value is not known.\nBecause there are so many b values that are unique, we end up\nestimating the average number of matching rows as 1 even though\nit will be far higher for a few values.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2024 23:30:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimate of the inner_rows"
}
] |
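The arithmetic behind Tom Lane's explanation in the thread above can be sketched numerically. This is a hypothetical Python illustration, not PostgreSQL's actual selectivity code: the row counts are taken from the EXPLAIN ANALYZE output quoted in the thread, and "outer rows times average matches" is a deliberate simplification of how the planner combines per-column statistics.

```python
# Sketch of the join-size arithmetic described in the thread above.
# Figures come from the quoted EXPLAIN ANALYZE output; the formula is a
# simplification of PostgreSQL's real selectivity machinery.

outer_rows = 1_201_000          # Seq Scan on test t1 (rows=1201000)
avg_match_per_row = 9088.0      # average t2 rows joining each t1 row

# The planner doesn't know how many partners the specific "a = 1" row
# has, so it scales the outer size by the average join fan-out:
estimated_join_rows = outer_rows * avg_match_per_row

# This lands very close to the planner's rows=10914402957 estimate.
print(int(estimated_join_rows))
```

The small gap versus the planner's 10,914,402,957 comes from the average fan-out being slightly below 9088 in the real statistics.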
[
{
"msg_contents": "Hi experts,\n we have a Postgresql v14.8 database, almost thousands of backends hang\non MultiXactOffsetSLRU at the same time, all of these sessions running same\nquery \"SELECT ....\", from OS and postgresql slow log, we found all of these\nquery on \"BIND\" stage.\n LOG: duration: 36631.688 ms bind S_813: SELECT\nLOG: duration: 36859.786 ms bind S_1111: SELECT\nLOG: duration: 35868.148 ms bind <unnamed>: SELECT\nLOG: duration: 36906.471 ms bind <unnamed>: SELECT\nLOG: duration: 35955.489 ms bind <unnamed>: SELECT\nLOG: duration: 36833.510 ms bind <unnamed>: SELECT\nLOG: duration: 36839.535 ms bind S_1219: SELECT\n...\n\nthis database hang on MultiXactOffsetSLRU and MultiXactOffsetBuffer long\ntime.\n\ncould you direct me why they are hanging on 'BIND‘ stage with\nMultiXactOffsetSLRU ?\n\nThanks,\n\nJames",
"msg_date": "Tue, 10 Sep 2024 15:33:18 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "many backends hang on MultiXactOffsetSLRU"
},
{
"msg_contents": "Hi,\n\n I encountered this in a project we migrated to PostgreSQL\nbefore, and unfortunately, it’s a situation that completely degrades\nperformance. We identified the cause as savepoints being used excessively\nand without control. Once they reduced the number of savepoints, the issue\nwas resolved. However, the documentation also mentions that it could be\ncaused by foreign keys.\n\n\n Kind regards..\n\nJames Pang <[email protected]>, 10 Eyl 2024 Sal, 10:33 tarihinde şunu\nyazdı:\n\n> Hi experts,\n> we have a Postgresql v14.8 database, almost thousands of backends hang\n> on MultiXactOffsetSLRU at the same time, all of these sessions running same\n> query \"SELECT ....\", from OS and postgresql slow log, we found all of these\n> query on \"BIND\" stage.\n> LOG: duration: 36631.688 ms bind S_813: SELECT\n> LOG: duration: 36859.786 ms bind S_1111: SELECT\n> LOG: duration: 35868.148 ms bind <unnamed>: SELECT\n> LOG: duration: 36906.471 ms bind <unnamed>: SELECT\n> LOG: duration: 35955.489 ms bind <unnamed>: SELECT\n> LOG: duration: 36833.510 ms bind <unnamed>: SELECT\n> LOG: duration: 36839.535 ms bind S_1219: SELECT\n> ...\n>\n> this database hang on MultiXactOffsetSLRU and MultiXactOffsetBuffer long\n> time.\n>\n> could you direct me why they are hanging on 'BIND‘ stage with\n> MultiXactOffsetSLRU ?\n>\n> Thanks,\n>\n> James\n>\n>\n>\n>\n>",
"msg_date": "Tue, 10 Sep 2024 11:12:00 +0300",
"msg_from": "Amine Tengilimoglu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
{
"msg_contents": "On 2024-Sep-10, James Pang wrote:\n\n> Hi experts,\n> we have a Postgresql v14.8 database, almost thousands of backends hang\n> on MultiXactOffsetSLRU at the same time, all of these sessions running same\n> query \"SELECT ....\", from OS and postgresql slow log, we found all of these\n> query on \"BIND\" stage.\n> LOG: duration: 36631.688 ms bind S_813: SELECT\n> LOG: duration: 36859.786 ms bind S_1111: SELECT\n> LOG: duration: 35868.148 ms bind <unnamed>: SELECT\n> LOG: duration: 36906.471 ms bind <unnamed>: SELECT\n> LOG: duration: 35955.489 ms bind <unnamed>: SELECT\n> LOG: duration: 36833.510 ms bind <unnamed>: SELECT\n> LOG: duration: 36839.535 ms bind S_1219: SELECT\n> ...\n> \n> this database hang on MultiXactOffsetSLRU and MultiXactOffsetBuffer long\n> time.\n> \n> could you direct me why they are hanging on 'BIND‘ stage with\n> MultiXactOffsetSLRU ?\n\nVery likely, it's related to this problem\n[1] https://thebuild.com/blog/2023/01/18/a-foreign-key-pathology-to-avoid/\n\nThis is caused by a suboptimal implementation of what we call SLRU,\nwhich multixact uses underneath. For years, many people dodged this\nproblem by recompiling with a changed value for\nNUM_MULTIXACTOFFSET_BUFFERS in src/include/access/multixact.h (it was\noriginally 8 buffers, which is very small); you'll need to do that in\nall releases up to pg16. In pg17 this was improved[2] and you'll be\nable to change the value in postgresql.conf, though the default already\nbeing larger than the original (16 instead of 8), you may not need to.\n\n[2] https://pgconf.in/files/presentations/2023/Dilip_Kumar_RareExtremelyChallengingPostgresPerformanceProblems.pdf\n[3] https://www.pgevents.ca/events/pgconfdev2024/schedule/session/53-problem-in-postgresql-slru-and-how-we-are-optimizing-it/\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La victoria es para quien se atreve a estar solo\"\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:13:51 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
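For reference, the recompile workaround Álvaro describes amounts to editing the buffer-count macro he names before building. The sketch below targets the pre-pg17 header layout as stated in the message; the value 32 is an illustrative choice, not a tested recommendation.

```c
/* src/include/access/multixact.h -- pg16 and earlier */

/* Original definition: only 8 cached SLRU pages for multixact offsets.
 * #define NUM_MULTIXACTOFFSET_BUFFERS   8
 */

/* Illustrative larger value to reduce SLRU cache thrashing: */
#define NUM_MULTIXACTOFFSET_BUFFERS     32
```

On pg17, per Álvaro's note, no rebuild is needed: the equivalent knob is exposed as the `multixact_offset_buffers` server setting in postgresql.conf.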
{
"msg_contents": "On 2024-Sep-10, Amine Tengilimoglu wrote:\n\n> Hi,\n> \n> I encountered this in a project we migrated to PostgreSQL\n> before, and unfortunately, it’s a situation that completely degrades\n> performance. We identified the cause as savepoints being used excessively\n> and without control. Once they reduced the number of savepoints, the issue\n> was resolved. However, the documentation also mentions that it could be\n> caused by foreign keys.\n\nYeah, it's exactly the same problem; when it comes from savepoints the\nissue is pg_subtrans, and when foreign keys are involved, it is\npg_multixact. Both of those use the SLRU subsystem, which was heavily\nmodified in pg17 as I mentioned in my reply to James.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:15:42 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
{
"msg_contents": "There is no foreign keys, but there is one session who did transactions\nto tables with savepoints, one savepoints/per sql in same transaction. But\nsessions with query \"SELECT “ do not use savepoints , just with a lot of\nsessions running same query and hang on MultiXact suddenly. even only one\nsession doing DML with savepoints , and all other queries sessions can see\nthis kind of \"MultiXact\" waiting ,right?\n\n\nJames Pang <[email protected]> 於 2024年9月10日週二 下午4:26寫道:\n\n> There is no foreign keys, but there are several sessions who did\n> transactions to tables with savepoints, one savepoints/per sql in same\n> transaction. But sessions with query \"SELECT “ do not use savepoints , just\n> with a lot of sessions running same query and hang on MultiXact suddenly.\n>\n> Alvaro Herrera <[email protected]> 於 2024年9月10日週二 下午4:15寫道:\n>\n>> On 2024-Sep-10, Amine Tengilimoglu wrote:\n>>\n>> > Hi,\n>> >\n>> > I encountered this in a project we migrated to PostgreSQL\n>> > before, and unfortunately, it’s a situation that completely degrades\n>> > performance. We identified the cause as savepoints being used\n>> excessively\n>> > and without control. Once they reduced the number of savepoints, the\n>> issue\n>> > was resolved. However, the documentation also mentions that it could be\n>> > caused by foreign keys.\n>>\n>> Yeah, it's exactly the same problem; when it comes from savepoints the\n>> issue is pg_subtrans, and when foreign keys are involved, it is\n>> pg_multixact. Both of those use the SLRU subsystem, which was heavily\n>> modified in pg17 as I mentioned in my reply to James.\n>>\n>> --\n>> Álvaro Herrera 48°01'N 7°57'E —\n>> https://www.EnterpriseDB.com/\n>> \"I think my standards have lowered enough that now I think 'good design'\n>> is when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n>>\n>",
"msg_date": "Tue, 10 Sep 2024 16:34:46 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
{
"msg_contents": "I hadn't found a satisfactory explanation about the top limit\nrelated to SLRU, so this document will be useful. It's a nice development\nthat the relevant limit has been increased in pg17; I hope I don't\nencounter a situation where this limit is exceeded in large systems.\n\n\n Kind regards..\n\nJames Pang <[email protected]>, 10 Eyl 2024 Sal, 11:35 tarihinde şunu\nyazdı:\n\n> There is no foreign keys, but there is one session who did\n> transactions to tables with savepoints, one savepoints/per sql in same\n> transaction. But sessions with query \"SELECT “ do not use savepoints , just\n> with a lot of sessions running same query and hang on MultiXact suddenly.\n> even only one session doing DML with savepoints , and all other\n> queries sessions can see this kind of \"MultiXact\" waiting ,right?\n>\n>\n> James Pang <[email protected]> 於 2024年9月10日週二 下午4:26寫道:\n>\n>> There is no foreign keys, but there are several sessions who did\n>> transactions to tables with savepoints, one savepoints/per sql in same\n>> transaction. But sessions with query \"SELECT “ do not use savepoints , just\n>> with a lot of sessions running same query and hang on MultiXact suddenly.\n>>\n>> Alvaro Herrera <[email protected]> 於 2024年9月10日週二 下午4:15寫道:\n>>\n>>> On 2024-Sep-10, Amine Tengilimoglu wrote:\n>>>\n>>> > Hi,\n>>> >\n>>> > I encountered this in a project we migrated to PostgreSQL\n>>> > before, and unfortunately, it’s a situation that completely degrades\n>>> > performance. We identified the cause as savepoints being used\n>>> excessively\n>>> > and without control. Once they reduced the number of savepoints, the\n>>> issue\n>>> > was resolved. However, the documentation also mentions that it could be\n>>> > caused by foreign keys.\n>>>\n>>> Yeah, it's exactly the same problem; when it comes from savepoints the\n>>> issue is pg_subtrans, and when foreign keys are involved, it is\n>>> pg_multixact. Both of those use the SLRU subsystem, which was heavily\n>>> modified in pg17 as I mentioned in my reply to James.\n>>>\n>>> --\n>>> Álvaro Herrera 48°01'N 7°57'E —\n>>> https://www.EnterpriseDB.com/\n>>> \"I think my standards have lowered enough that now I think 'good design'\n>>> is when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n>>>\n>>",
"msg_date": "Tue, 10 Sep 2024 11:53:58 +0300",
"msg_from": "Amine Tengilimoglu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
{
"msg_contents": "On 2024-Sep-10, James Pang wrote:\n\n> There is no foreign keys, but there is one session who did transactions\n> to tables with savepoints, one savepoints/per sql in same transaction. But\n> sessions with query \"SELECT “ do not use savepoints , just with a lot of\n> sessions running same query and hang on MultiXact suddenly. even only one\n> session doing DML with savepoints , and all other queries sessions can see\n> this kind of \"MultiXact\" waiting ,right?\n\nI think SELECT FOR UPDATE combined with savepoints can create\nmultixacts, in absence of foreign keys.\n\nA query that's waiting doesn't need to have *created* the multixact or\nsubtrans -- it is sufficient that it's forced to look it up.\n\nIf thousands of sessions tried to look up different multixact values\n(spread across more than 8 pages), then thrashing of the cache would\nresult, with catastrophic performance. This can probably be caused by\nsome operation that creates one multixact per tuple in a few thousand\ntuples.\n\nMaybe you could ease this by doing VACUUM on the table (perhaps with a\nlow multixact freeze age), which might remove some of the multixacts.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Para tener más hay que desear menos\"\n\n\n",
"msg_date": "Tue, 10 Sep 2024 11:00:14 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
},
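A concrete shape for the VACUUM mitigation Álvaro suggests in the message above might look like the following. This is a hypothetical SQL sketch: `app_table` is a placeholder name, and lowering `vacuum_multixact_freeze_min_age` for the session is one way to express "a low multixact freeze age".

```sql
-- Lower the multixact freeze threshold for this session only, so the
-- VACUUM below freezes recent multixacts instead of leaving them behind.
SET vacuum_multixact_freeze_min_age = 0;

-- Hypothetical table that accumulated roughly one multixact per tuple:
VACUUM (VERBOSE) app_table;
```

Once the multixacts are frozen away, lookups on those tuples no longer have to page through pg_multixact at all.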
{
"msg_contents": "most of query sessions using jdbc connections, the one who use ODBC\none savepoint/per statement, but it does not run any \"select for update;\nsavepoint;update\", since row lock conflict, so not easy to touch same row\nwith update/delete, no idea how that create multixact? a MultiXact may\ncontain an update or delete Xid. ?\n in this server, we see thousands of session hang on\n‘MultixactOffsetSLRU\" but they are in \" bind \" stage instead of \"execute\",\nwhy a backend in \"bind\" need to access Multixact?\n\nThanks,\n\nJames\n\nAlvaro Herrera <[email protected]> 於 2024年9月10日週二 下午5:00寫道:\n\n> On 2024-Sep-10, James Pang wrote:\n>\n> > There is no foreign keys, but there is one session who did\n> transactions\n> > to tables with savepoints, one savepoints/per sql in same transaction.\n> But\n> > sessions with query \"SELECT “ do not use savepoints , just with a lot of\n> > sessions running same query and hang on MultiXact suddenly. even only\n> one\n> > session doing DML with savepoints , and all other queries sessions can\n> see\n> > this kind of \"MultiXact\" waiting ,right?\n>\n> I think SELECT FOR UPDATE combined with savepoints can create\n> multixacts, in absence of foreign keys.\n>\n> A query that's waiting doesn't need to have *created* the multixact or\n> subtrans -- it is sufficient that it's forced to look it up.\n>\n> If thousands of sessions tried to look up different multixact values\n> (spread across more than 8 pages), then thrashing of the cache would\n> result, with catastrophic performance. This can probably be caused by\n> some operation that creates one multixact per tuple in a few thousand\n> tuples.\n>\n> Maybe you could ease this by doing VACUUM on the table (perhaps with a\n> low multixact freeze age), which might remove some of the multixacts.\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"Para tener más hay que desear menos\"\n>",
"msg_date": "Wed, 11 Sep 2024 15:00:38 +0800",
"msg_from": "James Pang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: many backends hang on MultiXactOffsetSLRU"
}
] |
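Álvaro's point in this thread — that lookups spread across more pages than the SLRU can hold cause cache thrashing with catastrophic performance — can be illustrated with a toy cache model. This is hypothetical Python; PostgreSQL's actual SLRU replacement policy differs in detail, but the cliff when the working set exceeds the buffer count is the same shape.

```python
from collections import OrderedDict

def hit_rate(nbuffers, page_refs):
    """Toy LRU cache: fraction of lookups served without a 'disk' read."""
    cache, hits = OrderedDict(), 0
    for page in page_refs:
        if page in cache:
            hits += 1
            cache.move_to_end(page)   # mark as most recently used
        else:
            cache[page] = True
            if len(cache) > nbuffers:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(page_refs)

# Many backends repeatedly looking up multixacts spread over 40 SLRU pages:
refs = list(range(40)) * 100

print(hit_rate(8, refs))    # 8 buffers (the old default): cyclic scans miss
print(hit_rate(64, refs))   # a cache larger than the working set: mostly hits
```

With only 8 buffers and a cyclic reference pattern over 40 pages, strict LRU evicts every page just before it is needed again, so essentially every lookup goes to disk — which is the behavior the blocked BIND-stage sessions in this thread were experiencing.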
[
{
"msg_contents": "I'm getting a bit concerned by the slow performance of generating uuids on\nlatest dev code versus older versions. Here I compare the time to generate\n50k random uuids. Both on the same machine.\nI must be missing something.\n\nAny clues please ?",
"msg_date": "Tue, 10 Sep 2024 14:58:03 +0100",
"msg_from": "David Mullineux <[email protected]>",
"msg_from_op": true,
"msg_subject": "Has gen_random_uuid() gotten much slower in v17?"
},
{
"msg_contents": "On 10.09.24 15:58, David Mullineux wrote:\n> I'm getting a bit concerned by the slow performance of generating uuids \n> on latest dev code versus older versions. Here I compare the time to \n> generate 50k random uuids. Both on the same machine.\n> I must be missing something.\n\nAre you sure that the 18devel installation isn't compiled with \nassertions enabled?\n\nThe underlying code for gen_random_uuid() is virtually unchanged between \nPG14 and current.\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 11:40:16 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Has gen_random_uuid() gotten much slower in v17?"
},
{
"msg_contents": "Good idea. Thanks. I did check. It's not enabled by default but just in\ncase I did another build. This time explicitly defining --disable-debug and\n--disable-cassert. And I tested. Still slower than old versions.\n\nThis feels like a build configuration problem. Just can't put my finger on\nit yet.\n\nOn Wed, 11 Sept 2024, 10:40 Peter Eisentraut, <[email protected]> wrote:\n\n> On 10.09.24 15:58, David Mullineux wrote:\n> > I'm getting a bit concerned by the slow performance of generating uuids\n> > on latest dev code versus older versions. Here I compare the time to\n> > generate 50k random uuids. Both on the same machine.\n> > I must be missing something.\n>\n> Are you sure that the 18devel installation isn't compiled with\n> assertions enabled?\n>\n> The underlying code for gen_random_uuid() is virtually unchanged between\n> PG14 and current.\n>\n>",
"msg_date": "Wed, 11 Sep 2024 11:47:41 +0100",
"msg_from": "David Mullineux <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Has gen_random_uuid() gotten much slower in v17?"
}
] |
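To close the loop on the methodology in this thread: on the server side, `SHOW debug_assertions;` on each installation confirms whether a build has assertions enabled, which is the first thing to rule out before timing anything. As a rough client-side analogue of the "generate 50k random UUIDs and time it" comparison, here is a hypothetical Python sketch using the stdlib's uuid4 — not PostgreSQL's gen_random_uuid(), but the same benchmark shape.

```python
import time
import uuid

N = 50_000  # same batch size as the comparison in the thread

start = time.perf_counter()
ids = [uuid.uuid4() for _ in range(N)]
elapsed = time.perf_counter() - start

print(f"generated {len(ids)} UUIDs in {elapsed:.3f}s")
```

Running the same loop against each server version (e.g. `SELECT gen_random_uuid() FROM generate_series(1, 50000)`) under identical build flags is what makes the numbers comparable.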