[
{
"msg_contents": "Excuse me if this is a silly question. I am trying to fiddle with\nshared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n\nI have this at bottom of my config file:\nshared_buffers = 1GB\n\nYet when I check the setting from pg_setting I see something quite different:\n\npostgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n name | setting\n----------------+---------\n shared_buffers | 131072\n\nIs this a question of units? It looks like 128M. Note when I change\nthe setting to 2GB in conf file I see 262144 from pg_setting. I am\nnow unsure what the actual shared_buffers allocation is. I cant see\nanything in the docs which tells me how to interpret the integer.\n\nAny clarification welcome.\n\nRegards\nBob\n\n",
"msg_date": "Tue, 29 Jan 2019 12:32:55 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interpreting shared_buffers setting"
},
{
"msg_contents": "Hi,\n\ncheck for blocksize (8k) as factor.\n\n8k*131072=1G\n\nregards\nThomas\n\nAm 29.01.19 um 13:32 schrieb Bob Jolliffe:\n> Excuse me if this is a silly question. I am trying to fiddle with\n> shared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n>\n> I have this at bottom of my config file:\n> shared_buffers = 1GB\n>\n> Yet when I check the setting from pg_setting I see something quite different:\n>\n> postgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n> name | setting\n> ----------------+---------\n> shared_buffers | 131072\n>\n> Is this a question of units? It looks like 128M. Note when I change\n> the setting to 2GB in conf file I see 262144 from pg_setting. I am\n> now unsure what the actual shared_buffers allocation is. I cant see\n> anything in the docs which tells me how to interpret the integer.\n>\n> Any clarification welcome.\n>\n> Regards\n> Bob\n>\n\n\n",
"msg_date": "Tue, 29 Jan 2019 13:35:42 +0100",
"msg_from": "Thomas Markus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting shared_buffers setting"
},
{
"msg_contents": ">>>>> \"Bob\" == Bob Jolliffe <[email protected]> writes:\n\n Bob> Excuse me if this is a silly question. I am trying to fiddle with\n Bob> shared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n\n Bob> I have this at bottom of my config file:\n Bob> shared_buffers = 1GB\n\n Bob> Yet when I check the setting from pg_setting I see something quite\n Bob> different:\n\n Bob> postgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n Bob> name | setting\n Bob> ----------------+---------\n Bob> shared_buffers | 131072\n\npg_settings can tell you more than you asked for:\n\npostgres=# select * from pg_settings where name='shared_buffers';\n-[ RECORD 1 ]---+-------------------------------------------------------------\nname | shared_buffers\nsetting | 16384\nunit | 8kB\ncategory | Resource Usage / Memory\nshort_desc | Sets the number of shared memory buffers used by the server.\nextra_desc | \ncontext | postmaster\nvartype | integer\nsource | configuration file\nmin_val | 16\nmax_val | 1073741823\nenumvals | \nboot_val | 1024\nreset_val | 16384\nsourcefile | /home/andrew/work/pgsql/inst/9.5/data/postgresql.conf\nsourceline | 113\npending_restart | f\n\nnotice that \"unit 8kB\" line; the displayed integer value is in units of\n8kB (which is the block size your server was compiled with, which you\ncan also see as the block_size parameter).\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Tue, 29 Jan 2019 13:07:03 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting shared_buffers setting"
},
{
"msg_contents": "Thank you Andrew and Thomas. All is now clear :-)\n\nOn Tue, 29 Jan 2019 at 13:07, Andrew Gierth <[email protected]> wrote:\n>\n> >>>>> \"Bob\" == Bob Jolliffe <[email protected]> writes:\n>\n> Bob> Excuse me if this is a silly question. I am trying to fiddle with\n> Bob> shared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n>\n> Bob> I have this at bottom of my config file:\n> Bob> shared_buffers = 1GB\n>\n> Bob> Yet when I check the setting from pg_setting I see something quite\n> Bob> different:\n>\n> Bob> postgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n> Bob> name | setting\n> Bob> ----------------+---------\n> Bob> shared_buffers | 131072\n>\n> pg_settings can tell you more than you asked for:\n>\n> postgres=# select * from pg_settings where name='shared_buffers';\n> -[ RECORD 1 ]---+-------------------------------------------------------------\n> name | shared_buffers\n> setting | 16384\n> unit | 8kB\n> category | Resource Usage / Memory\n> short_desc | Sets the number of shared memory buffers used by the server.\n> extra_desc |\n> context | postmaster\n> vartype | integer\n> source | configuration file\n> min_val | 16\n> max_val | 1073741823\n> enumvals |\n> boot_val | 1024\n> reset_val | 16384\n> sourcefile | /home/andrew/work/pgsql/inst/9.5/data/postgresql.conf\n> sourceline | 113\n> pending_restart | f\n>\n> notice that \"unit 8kB\" line; the displayed integer value is in units of\n> 8kB (which is the block size your server was compiled with, which you\n> can also see as the block_size parameter).\n>\n> --\n> Andrew (irc:RhodiumToad)\n\n",
"msg_date": "Tue, 29 Jan 2019 15:42:57 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interpreting shared_buffers setting"
},
{
"msg_contents": "Bob Jolliffe <[email protected]> writes:\n\n> Excuse me if this is a silly question. I am trying to fiddle with\n> shared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n>\n> I have this at bottom of my config file:\n> shared_buffers = 1GB\n>\n> Yet when I check the setting from pg_setting I see something quite different:\n>\n> postgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n> name | setting\n> ----------------+---------\n> shared_buffers | 131072\n\nWhy not use the show command which is good about output in human\nterms...\n\npsql (11.1 (Ubuntu 11.1-1.pgdg16.04+1))\nType \"help\" for help.\n\nmeta_a:postgres# select name, setting from pg_settings where name = 'shared_buffers');\nERROR: syntax error at or near \")\"\nLINE 1: ...me, setting from pg_settings where name = 'shared_buffers');\n ^\nmeta_a:postgres# \n\n>\n> Is this a question of units? It looks like 128M. Note when I change\n> the setting to 2GB in conf file I see 262144 from pg_setting. I am\n> now unsure what the actual shared_buffers allocation is. I cant see\n> anything in the docs which tells me how to interpret the integer.\n>\n> Any clarification welcome.\n>\n> Regards\n> Bob\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\n\n",
"msg_date": "Tue, 29 Jan 2019 12:31:51 -0600",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting shared_buffers setting"
},
{
"msg_contents": "Jerry Sievers <[email protected]> writes:\n\n> Bob Jolliffe <[email protected]> writes:\n>\n>> Excuse me if this is a silly question. I am trying to fiddle with\n>> shared_buffers setting on postgresql 10.6 on ubuntu 18.04 server.\n>>\n>> I have this at bottom of my config file:\n>> shared_buffers = 1GB\n>>\n>> Yet when I check the setting from pg_setting I see something quite different:\n>>\n>> postgres=# SELECT name, setting FROM pg_settings where name = 'shared_buffers';\n>> name | setting\n>> ----------------+---------\n>> shared_buffers | 131072\n\nAaaaay! Pasted junk... Here's what I meant.\n\n\nmeta_a:postgres# select setting from pg_settings where name='shared_buffers';\n setting \n---------\n 131072\n(1 row)\n\nmeta_a:postgres# show shared_buffers;\n shared_buffers \n----------------\n 1GB\n(1 row)\n\n\n>\n> Why not use the show command which is good about output in human\n> terms...\n>\n> psql (11.1 (Ubuntu 11.1-1.pgdg16.04+1))\n> Type \"help\" for help.\n>\n> meta_a:postgres# select name, setting from pg_settings where name = 'shared_buffers');\n> ERROR: syntax error at or near \")\"\n> LINE 1: ...me, setting from pg_settings where name = 'shared_buffers');\n> ^\n> meta_a:postgres# \n>\n>>\n>> Is this a question of units? It looks like 128M. Note when I change\n>> the setting to 2GB in conf file I see 262144 from pg_setting. I am\n>> now unsure what the actual shared_buffers allocation is. I cant see\n>> anything in the docs which tells me how to interpret the integer.\n>>\n>> Any clarification welcome.\n>>\n>> Regards\n>> Bob\n>>\n>>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\n\n",
"msg_date": "Tue, 29 Jan 2019 12:40:01 -0600",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting shared_buffers setting"
}
]
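To recap the answer in SQL terms (a minimal sketch using the standard pg_settings columns; shared_buffers is assumed to be reported in blocks of block_size bytes, 8kB here, as Thomas and Andrew explain above):

SELECT name, setting, unit,
       pg_size_pretty(setting::bigint * current_setting('block_size')::bigint) AS actual_size
FROM pg_settings
WHERE name = 'shared_buffers';
-- 131072 blocks x 8192 bytes = 1GB

SHOW shared_buffers;   -- applies the unit for you and prints "1GB"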
[
{
"msg_contents": "The following is output from analyzing a simple query on a table of\n13436 rows on postgresql 10, ubuntu 18.04.\n\n explain analyze select * from chart order by name;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Sort (cost=1470.65..1504.24 rows=13436 width=725) (actual\ntime=224340.949..224343.499 rows=13436 loops=1)\n Sort Key: name\n Sort Method: quicksort Memory: 4977kB\n -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=725)\n(actual time=0.015..1.395 rows=13436 loops=1)\n Planning time: 0.865 ms\n Execution time: 224344.281 ms\n(6 rows)\n\nThe planner has predictably done a sequential scan followed by a sort.\nThough it might have wished it hadn't and just used the index (there\nis an index on name). The sort is taking a mind boggling 224 seconds,\nnearly 2 minutes.\n\nThis is on a cloud vps server.\n\nInteresting when I run the same query on my laptop it completes in\nwell under one second.\n\nI wonder what can cause such a massive discrepancy in the sort time.\nCan it be that the VPS server has heavily over committed CPU. Note I\nhave tried this with 2 different company's servers with similar\nresults.\n\nI am baffled. The sort seems to be all done in memory (only 5MB).\nTested when nothing else was going on at the time. I can expect some\ndifference between the VPS and my laptop, but almost 1000x seems odd.\nThe CPUs are different but not that different.\n\nAny theories?\n\nRegards\nBob\n\n",
"msg_date": "Tue, 29 Jan 2019 18:29:25 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can sort performance be so different"
},
{
"msg_contents": "út 29. 1. 2019 v 19:29 odesílatel Bob Jolliffe <[email protected]>\nnapsal:\n\n> The following is output from analyzing a simple query on a table of\n> 13436 rows on postgresql 10, ubuntu 18.04.\n>\n> explain analyze select * from chart order by name;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Sort (cost=1470.65..1504.24 rows=13436 width=725) (actual\n> time=224340.949..224343.499 rows=13436 loops=1)\n> Sort Key: name\n> Sort Method: quicksort Memory: 4977kB\n> -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=725)\n> (actual time=0.015..1.395 rows=13436 loops=1)\n> Planning time: 0.865 ms\n> Execution time: 224344.281 ms\n> (6 rows)\n>\n> The planner has predictably done a sequential scan followed by a sort.\n> Though it might have wished it hadn't and just used the index (there\n> is an index on name). The sort is taking a mind boggling 224 seconds,\n> nearly 2 minutes.\n>\n> This is on a cloud vps server.\n>\n> Interesting when I run the same query on my laptop it completes in\n> well under one second.\n>\n> I wonder what can cause such a massive discrepancy in the sort time.\n> Can it be that the VPS server has heavily over committed CPU. Note I\n> have tried this with 2 different company's servers with similar\n> results.\n>\n> I am baffled. The sort seems to be all done in memory (only 5MB).\n> Tested when nothing else was going on at the time. I can expect some\n> difference between the VPS and my laptop, but almost 1000x seems odd.\n> The CPUs are different but not that different.\n>\n> Any theories?\n>\n\nI am sure so sort of 10K rows cannot be 224sec. Really looks like VPS issue.\n\nRegards\n\nPavel\n\n\n\n> Regards\n> Bob\n>\n>\n\nút 29. 1. 2019 v 19:29 odesílatel Bob Jolliffe <[email protected]> napsal:The following is output from analyzing a simple query on a table of\n13436 rows on postgresql 10, ubuntu 18.04.\n\n explain analyze select * from chart order by name;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Sort (cost=1470.65..1504.24 rows=13436 width=725) (actual\ntime=224340.949..224343.499 rows=13436 loops=1)\n Sort Key: name\n Sort Method: quicksort Memory: 4977kB\n -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=725)\n(actual time=0.015..1.395 rows=13436 loops=1)\n Planning time: 0.865 ms\n Execution time: 224344.281 ms\n(6 rows)\n\nThe planner has predictably done a sequential scan followed by a sort.\nThough it might have wished it hadn't and just used the index (there\nis an index on name). The sort is taking a mind boggling 224 seconds,\nnearly 2 minutes.\n\nThis is on a cloud vps server.\n\nInteresting when I run the same query on my laptop it completes in\nwell under one second.\n\nI wonder what can cause such a massive discrepancy in the sort time.\nCan it be that the VPS server has heavily over committed CPU. Note I\nhave tried this with 2 different company's servers with similar\nresults.\n\nI am baffled. The sort seems to be all done in memory (only 5MB).\nTested when nothing else was going on at the time. I can expect some\ndifference between the VPS and my laptop, but almost 1000x seems odd.\nThe CPUs are different but not that different.\n\nAny theories?I am sure so sort of 10K rows cannot be 224sec. Really looks like VPS issue.RegardsPavel\n\nRegards\nBob",
"msg_date": "Tue, 29 Jan 2019 19:33:53 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Run https://github.com/n-st/nench and benchmark the underlying vps first.\n\nOn Tue 29 Jan, 2019, 11:59 PM Bob Jolliffe <[email protected] wrote:\n\n> The following is output from analyzing a simple query on a table of\n> 13436 rows on postgresql 10, ubuntu 18.04.\n>\n> explain analyze select * from chart order by name;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Sort (cost=1470.65..1504.24 rows=13436 width=725) (actual\n> time=224340.949..224343.499 rows=13436 loops=1)\n> Sort Key: name\n> Sort Method: quicksort Memory: 4977kB\n> -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=725)\n> (actual time=0.015..1.395 rows=13436 loops=1)\n> Planning time: 0.865 ms\n> Execution time: 224344.281 ms\n> (6 rows)\n>\n> The planner has predictably done a sequential scan followed by a sort.\n> Though it might have wished it hadn't and just used the index (there\n> is an index on name). The sort is taking a mind boggling 224 seconds,\n> nearly 2 minutes.\n>\n> This is on a cloud vps server.\n>\n> Interesting when I run the same query on my laptop it completes in\n> well under one second.\n>\n> I wonder what can cause such a massive discrepancy in the sort time.\n> Can it be that the VPS server has heavily over committed CPU. Note I\n> have tried this with 2 different company's servers with similar\n> results.\n>\n> I am baffled. The sort seems to be all done in memory (only 5MB).\n> Tested when nothing else was going on at the time. I can expect some\n> difference between the VPS and my laptop, but almost 1000x seems odd.\n> The CPUs are different but not that different.\n>\n> Any theories?\n>\n> Regards\n> Bob\n>\n>\n\nRun https://github.com/n-st/nench and benchmark the underlying vps first. On Tue 29 Jan, 2019, 11:59 PM Bob Jolliffe <[email protected] wrote:The following is output from analyzing a simple query on a table of\n13436 rows on postgresql 10, ubuntu 18.04.\n\n explain analyze select * from chart order by name;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Sort (cost=1470.65..1504.24 rows=13436 width=725) (actual\ntime=224340.949..224343.499 rows=13436 loops=1)\n Sort Key: name\n Sort Method: quicksort Memory: 4977kB\n -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=725)\n(actual time=0.015..1.395 rows=13436 loops=1)\n Planning time: 0.865 ms\n Execution time: 224344.281 ms\n(6 rows)\n\nThe planner has predictably done a sequential scan followed by a sort.\nThough it might have wished it hadn't and just used the index (there\nis an index on name). The sort is taking a mind boggling 224 seconds,\nnearly 2 minutes.\n\nThis is on a cloud vps server.\n\nInteresting when I run the same query on my laptop it completes in\nwell under one second.\n\nI wonder what can cause such a massive discrepancy in the sort time.\nCan it be that the VPS server has heavily over committed CPU. Note I\nhave tried this with 2 different company's servers with similar\nresults.\n\nI am baffled. The sort seems to be all done in memory (only 5MB).\nTested when nothing else was going on at the time. I can expect some\ndifference between the VPS and my laptop, but almost 1000x seems odd.\nThe CPUs are different but not that different.\n\nAny theories?\n\nRegards\nBob",
"msg_date": "Wed, 30 Jan 2019 00:06:04 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Bob Jolliffe <[email protected]> writes:\n> I wonder what can cause such a massive discrepancy in the sort time.\n\nAre you using the same locale (LC_COLLATE) setting on both machines?\nSome locales sort way slower than C locale does. That's not enough\nto explain a 1000X discrepancy --- I concur with the other opinions\nthat there's something wrong with your VPS --- but it might account\nfor something like 10X of it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 29 Jan 2019 13:47:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Hi Tom\n\nAfter much performance measuring of VPS I believe you are right in\nyour suspicion about locale.\n\nThe database is full of Laos characters (it is a government system in\nLaos). When I tested on my VPS (en_US.UTF-8) I get the crazy slow\nperformance, whereas my laptop postgresql is C.UTF-8.\n\nModifying the query from :\n\nexplain analyze select * from chart order by name;\n\nto\n\nexplain analyze select * from chart order by name COLLATE \"C\";\n\nand the same query runs like a rocket. Amazing, yes 1000 times faster.\n\nWhat I don't know yet is\n(i) whether the sort order makes sense for the Laos names; and\n(ii) what the locale settings are on the production server where the\nproblem was first reported.\n\nThere will be some turnaround before I get this information. I am\nguessing that the database is using \"en_US\" rather than anything Laos\nspecific. In which case \"C\" would probably be no worse re sort order.\nBut will know better soon.\n\nThis has been a long but very fruitful investigation. Thank you all for input.\n\nRegards\nBob\n\nOn Tue, 29 Jan 2019 at 18:47, Tom Lane <[email protected]> wrote:\n>\n> Bob Jolliffe <[email protected]> writes:\n> > I wonder what can cause such a massive discrepancy in the sort time.\n>\n> Are you using the same locale (LC_COLLATE) setting on both machines?\n> Some locales sort way slower than C locale does. That's not enough\n> to explain a 1000X discrepancy --- I concur with the other opinions\n> that there's something wrong with your VPS --- but it might account\n> for something like 10X of it.\n>\n> regards, tom lane\n\n",
"msg_date": "Wed, 30 Jan 2019 11:57:10 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Wed, Jan 30, 2019 at 3:57 AM Bob Jolliffe <[email protected]> wrote:\n> (i) whether the sort order makes sense for the Laos names; and\n> (ii) what the locale settings are on the production server where the\n> problem was first reported.\n>\n> There will be some turnaround before I get this information. I am\n> guessing that the database is using \"en_US\" rather than anything Laos\n> specific. In which case \"C\" would probably be no worse re sort order.\n> But will know better soon.\n>\n> This has been a long but very fruitful investigation. Thank you all for input.\n\nIf you can find a way to use an ICU collation, it may be possible to\nget Laotian sort order with performance that's a lot closer to the\nperformance you see with the C locale. The difference that you're\nseeing is obviously explainable in large part by the C locale using\nthe abbreviated keys technique. The system glibc's collations cannot\nuse this optimization.\n\nI believe that some locales have inherently more expensive\nnormalization processes (comparisons) than others, but if you can\neffective amortize the cost per key by building an abbreviated key, it\nmay not matter that much. And, ICU may be faster than glibc anyway.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 30 Jan 2019 15:54:08 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Hi Peter\n\nI did check out using ICU and the performance does indeed seem\ncomparable with C locale:\n\nEXPLAIN ANALYZE select * from chart order by name COLLATE \"lo-x-icu\";\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Sort (cost=1470.65..1504.24 rows=13436 width=1203) (actual\ntime=82.752..85.723 rows=13436 loops=1)\n Sort Key: name COLLATE \"lo-x-icu\"\n Sort Method: quicksort Memory: 6253kB\n -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=1203)\n(actual time=0.043..12.634 rows=13436 loops=1)\n Planning time: 1.610 ms\n Execution time: 96.060 ms\n(6 rows)\n\nThe Laos folk have confirmed that the sort order with C locale was not\ncorrect. So setting the ICU locale seems to be the way forward.\n\nThe problem is that this is a large java application with a great\nnumber of tables and queries. Also it is used in 60+ countries not\njust Laos. So we cannot simply modify the queries or table creation\nscripts directly such as in the manner above. I was hoping the\nsolution would just be to set a default locale on the database\n(perhaps even und-x-icu) but I see now that this doesn't seem to be\ncurrently possible with postgresql 10 ie. set the locale on database\ncreation to a *-icu locale.\n\nIs this also a limitation on postgresql 11? (Upgrading would be possible)\n\nAny other workarounds worth trying? The magnitude of this issue is\nsignificant - 1000x slower on these basic sorts is crippling the\napplication, probably also in a number of other queries.\n\nRegards\nBob\n\nOn Wed, 30 Jan 2019 at 23:54, Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Jan 30, 2019 at 3:57 AM Bob Jolliffe <[email protected]> wrote:\n> > (i) whether the sort order makes sense for the Laos names; and\n> > (ii) what the locale settings are on the production server where the\n> > problem was first reported.\n> >\n> > There will be some turnaround before I get this information. I am\n> > guessing that the database is using \"en_US\" rather than anything Laos\n> > specific. In which case \"C\" would probably be no worse re sort order.\n> > But will know better soon.\n> >\n> > This has been a long but very fruitful investigation. Thank you all for input.\n>\n> If you can find a way to use an ICU collation, it may be possible to\n> get Laotian sort order with performance that's a lot closer to the\n> performance you see with the C locale. The difference that you're\n> seeing is obviously explainable in large part by the C locale using\n> the abbreviated keys technique. The system glibc's collations cannot\n> use this optimization.\n>\n> I believe that some locales have inherently more expensive\n> normalization processes (comparisons) than others, but if you can\n> effective amortize the cost per key by building an abbreviated key, it\n> may not matter that much. And, ICU may be faster than glibc anyway.\n>\n> --\n> Peter Geoghegan\n\n",
"msg_date": "Thu, 31 Jan 2019 13:29:59 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Thu, Jan 31, 2019 at 7:30 AM Bob Jolliffe <[email protected]> wrote:\n>\n> Hi Peter\n>\n> I did check out using ICU and the performance does indeed seem\n> comparable with C locale:\n>\n> EXPLAIN ANALYZE select * from chart order by name COLLATE \"lo-x-icu\";\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Sort (cost=1470.65..1504.24 rows=13436 width=1203) (actual\n> time=82.752..85.723 rows=13436 loops=1)\n> Sort Key: name COLLATE \"lo-x-icu\"\n> Sort Method: quicksort Memory: 6253kB\n> -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=1203)\n> (actual time=0.043..12.634 rows=13436 loops=1)\n> Planning time: 1.610 ms\n> Execution time: 96.060 ms\n> (6 rows)\n>\n> The Laos folk have confirmed that the sort order with C locale was not\n> correct. So setting the ICU locale seems to be the way forward.\n>\n> The problem is that this is a large java application with a great\n> number of tables and queries. Also it is used in 60+ countries not\n> just Laos. So we cannot simply modify the queries or table creation\n> scripts directly such as in the manner above. I was hoping the\n> solution would just be to set a default locale on the database\n> (perhaps even und-x-icu) but I see now that this doesn't seem to be\n> currently possible with postgresql 10 ie. set the locale on database\n> creation to a *-icu locale.\n>\n> Is this also a limitation on postgresql 11? (Upgrading would be possible)\n\nyeah, probably. Having said that, I'm really struggling that it can\ntake take several minutes to sort such a small number of rows even\nwith location issues. I can sort rocks faster than that :-).\n\nSwitching between various european collations, I'm seeing subsecond\nsort responses for 44k records on my test box. I don't have the laos\ncollation installed unfortunately. Are you seeing kind of penalty in\nother conversions?\n\nmerlin\n\n",
"msg_date": "Tue, 5 Feb 2019 14:30:36 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Merlin Moncure wrote:\n> yeah, probably. Having said that, I'm really struggling that it can\n> take take several minutes to sort such a small number of rows even\n> with location issues. I can sort rocks faster than that :-).\n> \n> Switching between various european collations, I'm seeing subsecond\n> sort responses for 44k records on my test box. I don't have the laos\n> collation installed unfortunately. Are you seeing kind of penalty in\n> other conversions?\n\nI find that it makes a lot of difference what you sort:\n\nCREATE TABLE sort(t text);\n\nINSERT INTO sort SELECT 'ຕົວອັກສອນລາວ... ງ່າຍຂື້ນ' || i FROM generate_series(1, 100000) AS i;\n\nSET work_mem = '1GB';\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort ORDER BY t COLLATE \"C\";\n[...]\n Execution Time: 288.752 ms\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort ORDER BY t COLLATE \"lo_LA.utf8\";\n[...]\n Execution Time: 47006.683 ms\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort ORDER BY t COLLATE \"en_US.utf8\";\n[...]\n Execution Time: 73962.934 ms\n\n\nCREATE TABLE sort2(t text);\n\nINSERT INTO sort2 SELECT 'this is plain old English' || i FROM generate_series(1, 100000) AS i;\n\nSET work_mem = '1GB';\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort2 ORDER BY t COLLATE \"C\";\n[...]\n Execution Time: 237.615 ms\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort2 ORDER BY t COLLATE \"lo_LA.utf8\";\n[...]\n Execution Time: 2467.848 ms\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort2 ORDER BY t COLLATE \"en_US.utf8\";\n[...]\n Execution Time: 2927.667 ms\n\n\nThis is on my x86_64 Fedora 29 system, kernel 4.20.6, glibc 2.28.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 06 Feb 2019 09:38:42 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "Sorry Merlin for not replying earlier. The difference is indeed hard\nto understand but it is certainly there. We altered the collation to\nuse on the name field in that table and the problem has gone. Having\nhaving solved the immediate problem we haven't investigated much\nfurther yet.\n\nNot sure what exactly you mean by \"other conversions\"?\n\nOn Tue, 5 Feb 2019 at 20:28, Merlin Moncure <[email protected]> wrote:\n>\n> On Thu, Jan 31, 2019 at 7:30 AM Bob Jolliffe <[email protected]> wrote:\n> >\n> > Hi Peter\n> >\n> > I did check out using ICU and the performance does indeed seem\n> > comparable with C locale:\n> >\n> > EXPLAIN ANALYZE select * from chart order by name COLLATE \"lo-x-icu\";\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------------------------------\n> > Sort (cost=1470.65..1504.24 rows=13436 width=1203) (actual\n> > time=82.752..85.723 rows=13436 loops=1)\n> > Sort Key: name COLLATE \"lo-x-icu\"\n> > Sort Method: quicksort Memory: 6253kB\n> > -> Seq Scan on chart (cost=0.00..549.36 rows=13436 width=1203)\n> > (actual time=0.043..12.634 rows=13436 loops=1)\n> > Planning time: 1.610 ms\n> > Execution time: 96.060 ms\n> > (6 rows)\n> >\n> > The Laos folk have confirmed that the sort order with C locale was not\n> > correct. So setting the ICU locale seems to be the way forward.\n> >\n> > The problem is that this is a large java application with a great\n> > number of tables and queries. Also it is used in 60+ countries not\n> > just Laos. So we cannot simply modify the queries or table creation\n> > scripts directly such as in the manner above. I was hoping the\n> > solution would just be to set a default locale on the database\n> > (perhaps even und-x-icu) but I see now that this doesn't seem to be\n> > currently possible with postgresql 10 ie. set the locale on database\n> > creation to a *-icu locale.\n> >\n> > Is this also a limitation on postgresql 11? (Upgrading would be possible)\n>\n> yeah, probably. Having said that, I'm really struggling that it can\n> take take several minutes to sort such a small number of rows even\n> with location issues. I can sort rocks faster than that :-).\n>\n> Switching between various european collations, I'm seeing subsecond\n> sort responses for 44k records on my test box. I don't have the laos\n> collation installed unfortunately. Are you seeing kind of penalty in\n> other conversions?\n>\n> merlin\n\n",
"msg_date": "Mon, 18 Feb 2019 15:49:34 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 9:49 AM Bob Jolliffe <[email protected]> wrote:\n>\n> Sorry Merlin for not replying earlier. The difference is indeed hard\n> to understand but it is certainly there. We altered the collation to\n> use on the name field in that table and the problem has gone. Having\n> having solved the immediate problem we haven't investigated much\n> further yet.\n>\n> Not sure what exactly you mean by \"other conversions\"?\n\n\nI hand tested similar query for other (generally western) collations.\nDid not observe anything nearly so bad. What I'm hoping is that this\nis some kind of weird performance issue specific to your installation;\nin the worst (unfortunately likely) case we are looking at something\nspecific to your specific sort collation :(.\n\nmerlin\n\n",
"msg_date": "Wed, 20 Feb 2019 15:35:42 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 21:35, Merlin Moncure <[email protected]> wrote:\n>\n> On Mon, Feb 18, 2019 at 9:49 AM Bob Jolliffe <[email protected]> wrote:\n> >\n> > Sorry Merlin for not replying earlier. The difference is indeed hard\n> > to understand but it is certainly there. We altered the collation to\n> > use on the name field in that table and the problem has gone. Having\n> > having solved the immediate problem we haven't investigated much\n> > further yet.\n> >\n> > Not sure what exactly you mean by \"other conversions\"?\n>\n>\n> I hand tested similar query for other (generally western) collations.\n> Did not observe anything nearly so bad. What I'm hoping is that this\n> is some kind of weird performance issue specific to your installation;\n> in the worst (unfortunately likely) case we are looking at something\n> specific to your specific sort collation :(.\n>\n\nIt seems not to be (completely) particular to the installation.\nTesting on different platforms we found variable speed difference\nbetween 100x and 1000x slower, but always a considerable order of\nmagnitiude. The very slow performance comes from sorting Lao\ncharacters using en_US.UTF-8 collation.\n\n",
"msg_date": "Wed, 20 Feb 2019 21:42:15 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 1:42 PM Bob Jolliffe <[email protected]> wrote:\n> It seems not to be (completely) particular to the installation.\n> Testing on different platforms we found variable speed difference\n> between 100x and 1000x slower, but always a considerable order of\n> magnitiude. The very slow performance comes from sorting Lao\n> characters using en_US.UTF-8 collation.\n\nI knew that some collations were slower, generally for reasons that\nmake some sense. For example, I was aware that ICU's use of Japanese\nstandard JIS X 4061 is particularly complicated and expensive, but\nproduces the most useful possible result from the point of view of a\nJapanese speaker. Apparently glibc does not use that algorithm, and so\noffers less useful sort order (though it may actually be faster in\nthat particular case).\n\nI suspect that the reasons why the Lao locale sorts so much slower may\nalso have something to do with the intrinsic cost of supporting more\ncomplicated rules. However, it's such a ridiculously large difference\nthat it also seems likely that somebody was disinclined to go to the\neffort of optimizing it. The ICU people found that to be a tractable\ngoal, but they may have had to work at it. I also have a vague notion\nthat there are special cases that are more or less only useful for\nsorting French. These complicate the implementation of UCA style\nalgorithms.\n\nI am only speculating, based on what I've heard about other cases --\nperhaps this explanation is totally wrong. I know a lot more about\nthis stuff than most people on this mailing list, but I'm still far\nfrom being an expert.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 20 Feb 2019 14:25:01 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 2:25 PM Peter Geoghegan <[email protected]> wrote:\n> I suspect that the reasons why the Lao locale sorts so much slower may\n> also have something to do with the intrinsic cost of supporting more\n> complicated rules.\n\nI strongly suspect that it has something to do with the issue\ndescribed here specifically:\n\nhttp://userguide.icu-project.org/collation/concepts#TOC-Thai-Lao-reordering\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 20 Feb 2019 16:54:52 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can sort performance be so different"
}
]
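A sketch of the per-column workaround Bob describes above (it assumes PostgreSQL 10+ built with ICU support; the table and column names are the ones from the thread, and the predefined "lo-x-icu" collation is the one shown in Bob's EXPLAIN):

ALTER TABLE chart ALTER COLUMN name TYPE text COLLATE "lo-x-icu";

-- Plain ORDER BY on the column now uses the ICU collation without any
-- change to the application's queries, matching the ~96 ms plan above.
EXPLAIN ANALYZE SELECT * FROM chart ORDER BY name;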
[
{
"msg_contents": "Hey,\nI'm using postgresql 9.6.11. I wanted to ask something about the functions\nI mentioned in the title :\nI created the next table :\npostgres=# \\d students;\n Table \"public. students \"\n Column | Type | Modifiers\n----------+---------+-----------\n id| integer |\n name| text |\n age| integer |\n data | jsonb |\n\nI inserted one row. When I query the table`s size with\npg_total_relation_size I see that the data occupies 2 pages :\n\npostgres=# select pg_total_relation_size(' students ');\n pg_total_relation_size\n------------------------\n 16384\n(1 row)\n\npostgres=# select pg_relation_size(' students ');\n pg_relation_size\n------------------\n 8192\n(1 row)\n\nWhen I used pgstattuple :\npostgres=# select * from pgstattuple('pg_toast.pg_toast_1187222');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\ndead_tuple_len | dead_tuple_percent | free_space | free_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 0 | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 0\n(1 row)\n\npostgres=# select * from pgstattuple('students');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\ndead_tuple_len | dead_tuple_percent | free_space | free_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 8192 | 1 | 1221 | 14.9 | 0 |\n 0 | 0 | 6936 | 84.67\n(1 row)\n\nWhich means, the toasted table is empty and you can see that the row I\ninserted should occupy only one page(8K in my system).\n\nThen, why the pg_total_relation_size shows another page ?(16KB in total)\n\nHey,I'm using postgresql 9.6.11. I wanted to ask something about the functions I mentioned in the title : I created the next table : postgres=# \\d students; Table \"public.\n\nstudents \" Column | Type | Modifiers----------+---------+----------- id| integer | name| text | age| integer | data | jsonb |I inserted one row. When I query the table`s size with pg_total_relation_size I see that the data occupies 2 pages : postgres=# select pg_total_relation_size('\n\nstudents '); pg_total_relation_size------------------------ 16384(1 row)postgres=# select pg_relation_size('\n\nstudents '); pg_relation_size------------------ 8192(1 row)When I used pgstattuple : postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0(1 row)postgres=# select * from pgstattuple('students'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 8192 | 1 | 1221 | 14.9 | 0 | 0 | 0 | 6936 | 84.67(1 row)Which means, the toasted table is empty and you can see that the row I inserted should occupy only one page(8K in my system).Then, why the pg_total_relation_size shows another page ?(16KB in total)",
"msg_date": "Wed, 30 Jan 2019 12:41:55 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgstattupple vs pg_total_relation_size"
},
{
"msg_contents": "According to the doc [1],\npg_total_relation_size add toasted data *and* indexes to the mix.\nAny index, unique constraint, or primary key on your table ?\n\n[1]\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE\n\nLe mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <\[email protected]> a écrit :\n\n> Hey,\n> I'm using postgresql 9.6.11. I wanted to ask something about the functions\n> I mentioned in the title :\n> I created the next table :\n> postgres=# \\d students;\n> Table \"public. students \"\n> Column | Type | Modifiers\n> ----------+---------+-----------\n> id| integer |\n> name| text |\n> age| integer |\n> data | jsonb |\n>\n> I inserted one row. When I query the table`s size with\n> pg_total_relation_size I see that the data occupies 2 pages :\n>\n> postgres=# select pg_total_relation_size(' students ');\n> pg_total_relation_size\n> ------------------------\n> 16384\n> (1 row)\n>\n> postgres=# select pg_relation_size(' students ');\n> pg_relation_size\n> ------------------\n> 8192\n> (1 row)\n>\n> When I used pgstattuple :\n> postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222');\n> table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> dead_tuple_len | dead_tuple_percent | free_space | free_percent\n>\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> 0 | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 0\n> (1 row)\n>\n> postgres=# select * from pgstattuple('students');\n> table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> dead_tuple_len | dead_tuple_percent | free_space | free_percent\n>\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> 8192 | 1 | 1221 | 14.9 | 0 |\n> 0 | 0 | 6936 | 84.67\n> (1 row)\n>\n> Which means, the toasted table is empty and you can see that the row I\n> inserted should occupy only one page(8K in my system).\n>\n> Then, why the pg_total_relation_size shows another page ?(16KB in total)\n>\n>\n>\n>\n>\n\nAccording to the doc [1],pg_total_relation_size add toasted data *and* indexes to the mix.Any index, unique constraint, or primary key on your table ?[1] https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZELe mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <[email protected]> a écrit :Hey,I'm using postgresql 9.6.11. I wanted to ask something about the functions I mentioned in the title : I created the next table : postgres=# \\d students; Table \"public.\n\nstudents \" Column | Type | Modifiers----------+---------+----------- id| integer | name| text | age| integer | data | jsonb |I inserted one row. 
When I query the table`s size with pg_total_relation_size I see that the data occupies 2 pages : postgres=# select pg_total_relation_size('\n\nstudents '); pg_total_relation_size------------------------ 16384(1 row)postgres=# select pg_relation_size('\n\nstudents '); pg_relation_size------------------ 8192(1 row)When I used pgstattuple : postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0(1 row)postgres=# select * from pgstattuple('students'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 8192 | 1 | 1221 | 14.9 | 0 | 0 | 0 | 6936 | 84.67(1 row)Which means, the toasted table is empty and you can see that the row I inserted should occupy only one page(8K in my system).Then, why the pg_total_relation_size shows another page ?(16KB in total)",
"msg_date": "Wed, 30 Jan 2019 14:19:52 +0100",
"msg_from": "Tumasgiu Rossini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgstattupple vs pg_total_relation_size"
},
{
"msg_contents": "On Wed, 30 Jan 2019 14:19:52 +0100\nTumasgiu Rossini <[email protected]> wrote:\n\n> According to the doc [1],\n> pg_total_relation_size add toasted data *and* indexes to the mix.\n\n*and* FSM *and* VM.\n\n> Any index, unique constraint, or primary key on your table ?\n> \n> [1]\n> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE\n> \n> Le mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <\n> [email protected]> a écrit : \n> \n> > Hey,\n> > I'm using postgresql 9.6.11. I wanted to ask something about the functions\n> > I mentioned in the title :\n> > I created the next table :\n> > postgres=# \\d students;\n> > Table \"public. students \"\n> > Column | Type | Modifiers\n> > ----------+---------+-----------\n> > id| integer |\n> > name| text |\n> > age| integer |\n> > data | jsonb |\n> >\n> > I inserted one row. When I query the table`s size with\n> > pg_total_relation_size I see that the data occupies 2 pages :\n> >\n> > postgres=# select pg_total_relation_size(' students ');\n> > pg_total_relation_size\n> > ------------------------\n> > 16384\n> > (1 row)\n> >\n> > postgres=# select pg_relation_size(' students ');\n> > pg_relation_size\n> > ------------------\n> > 8192\n> > (1 row)\n> >\n> > When I used pgstattuple :\n> > postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222');\n> > table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> >\n> > -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > 0 | 0 | 0 | 0 | 0 |\n> > 0 | 0 | 0 | 0\n> > (1 row)\n> >\n> > postgres=# select * from pgstattuple('students');\n> > table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> >\n> > -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > 8192 | 1 | 1221 | 14.9 | 0 |\n> > 0 | 0 | 6936 | 84.67\n> > (1 row)\n> >\n> > Which means, the toasted table is empty and you can see that the row I\n> > inserted should occupy only one page(8K in my system).\n> >\n> > Then, why the pg_total_relation_size shows another page ?(16KB in total)\n> >\n> >\n> >\n> >\n> > \n\n\n\n-- \nJehan-Guillaume de Rorthais\nDalibo\n\n",
"msg_date": "Wed, 30 Jan 2019 14:27:01 +0100",
"msg_from": "\"Jehan-Guillaume (ioguix) de Rorthais\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgstattupple vs pg_total_relation_size"
},
{
"msg_contents": "There aren't any constraint or indexes, just a regular table. I didn't see\nthe fsm and vm files in the base dir. Were they created immediately for\nevery table or after some updates/deletes ?\n\nOn Wed, Jan 30, 2019, 3:27 PM Jehan-Guillaume (ioguix) de Rorthais <\[email protected] wrote:\n\n> On Wed, 30 Jan 2019 14:19:52 +0100\n> Tumasgiu Rossini <[email protected]> wrote:\n>\n> > According to the doc [1],\n> > pg_total_relation_size add toasted data *and* indexes to the mix.\n>\n> *and* FSM *and* VM.\n>\n> > Any index, unique constraint, or primary key on your table ?\n> >\n> > [1]\n> >\n> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE\n> >\n> > Le mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <\n> > [email protected]> a écrit :\n> >\n> > > Hey,\n> > > I'm using postgresql 9.6.11. I wanted to ask something about the\n> functions\n> > > I mentioned in the title :\n> > > I created the next table :\n> > > postgres=# \\d students;\n> > > Table \"public. students \"\n> > > Column | Type | Modifiers\n> > > ----------+---------+-----------\n> > > id| integer |\n> > > name| text |\n> > > age| integer |\n> > > data | jsonb |\n> > >\n> > > I inserted one row. When I query the table`s size with\n> > > pg_total_relation_size I see that the data occupies 2 pages :\n> > >\n> > > postgres=# select pg_total_relation_size(' students ');\n> > > pg_total_relation_size\n> > > ------------------------\n> > > 16384\n> > > (1 row)\n> > >\n> > > postgres=# select pg_relation_size(' students ');\n> > > pg_relation_size\n> > > ------------------\n> > > 8192\n> > > (1 row)\n> > >\n> > > When I used pgstattuple :\n> > > postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222');\n> > > table_len | tuple_count | tuple_len | tuple_percent |\n> dead_tuple_count |\n> > > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> > >\n> > >\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > > 0 | 0 | 0 | 0 |\n> 0 |\n> > > 0 | 0 | 0 | 0\n> > > (1 row)\n> > >\n> > > postgres=# select * from pgstattuple('students');\n> > > table_len | tuple_count | tuple_len | tuple_percent |\n> dead_tuple_count |\n> > > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> > >\n> > >\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > > 8192 | 1 | 1221 | 14.9 |\n> 0 |\n> > > 0 | 0 | 6936 | 84.67\n> > > (1 row)\n> > >\n> > > Which means, the toasted table is empty and you can see that the row I\n> > > inserted should occupy only one page(8K in my system).\n> > >\n> > > Then, why the pg_total_relation_size shows another page ?(16KB in\n> total)\n> > >\n> > >\n> > >\n> > >\n> > >\n>\n>\n>\n> --\n> Jehan-Guillaume de Rorthais\n> Dalibo\n>\n\nThere aren't any constraint or indexes, just a regular table. I didn't see the fsm and vm files in the base dir. 
Were they created immediately for every table or after some updates/deletes ?On Wed, Jan 30, 2019, 3:27 PM Jehan-Guillaume (ioguix) de Rorthais <[email protected] wrote:On Wed, 30 Jan 2019 14:19:52 +0100\nTumasgiu Rossini <[email protected]> wrote:\n\n> According to the doc [1],\n> pg_total_relation_size add toasted data *and* indexes to the mix.\n\n*and* FSM *and* VM.\n\n> Any index, unique constraint, or primary key on your table ?\n> \n> [1]\n> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE\n> \n> Le mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <\n> [email protected]> a écrit : \n> \n> > Hey,\n> > I'm using postgresql 9.6.11. I wanted to ask something about the functions\n> > I mentioned in the title :\n> > I created the next table :\n> > postgres=# \\d students;\n> > Table \"public. students \"\n> > Column | Type | Modifiers\n> > ----------+---------+-----------\n> > id| integer |\n> > name| text |\n> > age| integer |\n> > data | jsonb |\n> >\n> > I inserted one row. When I query the table`s size with\n> > pg_total_relation_size I see that the data occupies 2 pages :\n> >\n> > postgres=# select pg_total_relation_size(' students ');\n> > pg_total_relation_size\n> > ------------------------\n> > 16384\n> > (1 row)\n> >\n> > postgres=# select pg_relation_size(' students ');\n> > pg_relation_size\n> > ------------------\n> > 8192\n> > (1 row)\n> >\n> > When I used pgstattuple :\n> > postgres=# select * from pgstattuple('pg_toast.pg_toast_1187222');\n> > table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> >\n> > -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > 0 | 0 | 0 | 0 | 0 |\n> > 0 | 0 | 0 | 0\n> > (1 row)\n> >\n> > postgres=# select * from pgstattuple('students');\n> > table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\n> > dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> >\n> > -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > 8192 | 1 | 1221 | 14.9 | 0 |\n> > 0 | 0 | 6936 | 84.67\n> > (1 row)\n> >\n> > Which means, the toasted table is empty and you can see that the row I\n> > inserted should occupy only one page(8K in my system).\n> >\n> > Then, why the pg_total_relation_size shows another page ?(16KB in total)\n> >\n> >\n> >\n> >\n> > \n\n\n\n-- \nJehan-Guillaume de Rorthais\nDalibo",
"msg_date": "Wed, 30 Jan 2019 16:32:01 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgstattupple vs pg_total_relation_size"
},
{
"msg_contents": "\"Jehan-Guillaume (ioguix) de Rorthais\" <[email protected]> writes:\n> On Wed, 30 Jan 2019 14:19:52 +0100\n> Tumasgiu Rossini <[email protected]> wrote:\n>> According to the doc [1],\n>> pg_total_relation_size add toasted data *and* indexes to the mix.\n\n> *and* FSM *and* VM.\n\nYeah. In this particular case, the other page presumably belongs to the\ntoast table's index, which will have a metapage even if the table is\nempty.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 30 Jan 2019 09:46:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgstattupple vs pg_total_relation_size"
}
]
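A sketch of how to see where the extra page lives, using only the standard size functions (the 'students' table is the one from the thread; pg_total_relation_size is documented as pg_table_size plus pg_indexes_size):

SELECT pg_relation_size('students', 'main') AS heap,
       pg_relation_size('students', 'fsm')  AS free_space_map,
       pg_relation_size('students', 'vm')   AS visibility_map,
       pg_table_size('students')            AS table_incl_toast,
       pg_indexes_size('students')          AS indexes,
       pg_total_relation_size('students')   AS total;
-- Here heap is 8192 bytes; per Tom's explanation, the remaining 8kB of the
-- total is the metapage of the (empty) TOAST table's index.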
[
{
"msg_contents": "Hi,\n\nAccording to https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server :\n\n> effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what's used by the OS itself and other applications.\n\nI intend to run a java application and postgres server in the same\nserver machine. The java application requires 2 GB RAM max.\n\nConsidering that our server machine has 4 GB RAM, should I reduce the\neffective_cache_size to say 768 MB or am I better off with the default\n4 GB value?\n\nThis is particularly confusing because in this thread Tom Lane says\nthe following\n\n> I see no problem with a value of say 4GB;\n> that's very unlikely to be worse than the pre-9.4 default (128MB) on any modern machine.\n\nPS : I got the value 768 MB from https://pgtune.leopard.in.ua/#/ by\ngiving 1 GB as the amount of memory postgres can use.\n\n\nRegards,\nNanda\n\n",
"msg_date": "Thu, 31 Jan 2019 13:00:36 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting effective_cache size"
},
{
"msg_contents": "On Thu, Jan 31, 2019 at 1:00 PM Nandakumar M <[email protected]> wrote:\n> This is particularly confusing because in this thread Tom Lane says\n> the following\n>\nMissed to link the thread..\nhttps://postgrespro.com/list/thread-id/1813920\n\nRegards,\nNanda\n\n",
"msg_date": "Thu, 31 Jan 2019 13:04:35 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting effective_cache size"
},
{
"msg_contents": "Nandakumar M wrote:\n> According to https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server :\n> \n> > effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what's used by the OS itself and other applications.\n> \n> I intend to run a java application and postgres server in the same\n> server machine. The java application requires 2 GB RAM max.\n> \n> Considering that our server machine has 4 GB RAM, should I reduce the\n> effective_cache_size to say 768 MB or am I better off with the default\n> 4 GB value?\n> \n> This is particularly confusing because in this thread Tom Lane says\n> the following\n> \n> > I see no problem with a value of say 4GB;\n> > that's very unlikely to be worse than the pre-9.4 default (128MB) on any modern machine.\n> \n> PS : I got the value 768 MB from https://pgtune.leopard.in.ua/#/ by\n> giving 1 GB as the amount of memory postgres can use.\n\nI would set effective_cache_size to 2GB or a little lower.\n\nThis is a number that tells the optimizer how likely it is to\nfind index data cached if the index is used repeatedly, so it\nis not important to get the value exactly right.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 31 Jan 2019 12:57:39 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_cache size"
}
]
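A minimal sketch of applying Laurenz's suggestion (the 2GB figure is his estimate for a 4GB machine that also runs the 2GB Java application): effective_cache_size is only a planner estimate, it allocates no memory, so a rough value is fine and no restart is needed.

ALTER SYSTEM SET effective_cache_size = '2GB';
SELECT pg_reload_conf();
SHOW effective_cache_size;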
[
{
"msg_contents": "Hi ,\n\n\nI need know how to calculate hardware sizing for database or query\n\n\nRAM\n\nCPU\n\nConfig tuning\n\n\nRequirement :\n\n\n1100 concurrent connection\n\n1600 column of table\n\n1GB of data can be select and dynamic aggregation will happen\n\n\nRegards\n\nSuganthiSekar\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi ,\n\n\nI need know how to calculate hardware sizing for database or query\n\n\nRAM\nCPU\nConfig tuning\n\n\nRequirement : \n\n\n1100 concurrent connection\n1600 column of table\n1GB of data can be select and dynamic aggregation will happen\n\n\nRegards\nSuganthiSekar",
"msg_date": "Mon, 4 Feb 2019 09:57:31 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: server hardware tuning."
},
{
"msg_contents": "Hi Suganthi,\nI can give you a start, some pro users can suggest you better.\n\n1. Don't use this much of connections on a single postgres server. Use a\nconnection pooler in front of it.\n2. RAM: Depends upon how much data you want to be cached.\n3. Use PCIe SATA SSD with RAID10, Postgres uses a lot of IO for its\noperations.\n4. For config tuning: https://pgtune.leopard.in.ua/#/ Though please go\nthrough all params for more understanding\n\nHappy to help :)\nPrince Pathria Systems Architect Intern Evive +91 9478670472 goevive.com\n\n\nOn Mon, Feb 4, 2019 at 6:07 PM suganthi Sekar <[email protected]> wrote:\n\n>\n> Hi ,\n>\n>\n> I need know how to calculate hardware sizing for database or query\n>\n>\n> RAM\n>\n> CPU\n>\n> Config tuning\n>\n>\n> Requirement :\n>\n>\n> 1100 concurrent connection\n>\n> 1600 column of table\n>\n> 1GB of data can be select and dynamic aggregation will happen\n>\n>\n> Regards\n>\n> SuganthiSekar\n>\n\nHi Suganthi,I can give you a start, some pro users can suggest you better.1. Don't use this much of connections on a single postgres server. Use a connection pooler in front of it.2. RAM: Depends upon how much data you want to be cached.3. Use PCIe SATA SSD with RAID10, Postgres uses a lot of IO for its operations. 4. For config tuning: https://pgtune.leopard.in.ua/#/ Though please go through all params for more understandingHappy to help :)Prince Pathria\nSystems Architect Intern\nEvive\n+91 9478670472\ngoevive.comOn Mon, Feb 4, 2019 at 6:07 PM suganthi Sekar <[email protected]> wrote:\n\n\n\n\n\n\n\n\n\n\nHi ,\n\n\nI need know how to calculate hardware sizing for database or query\n\n\nRAM\nCPU\nConfig tuning\n\n\nRequirement : \n\n\n1100 concurrent connection\n1600 column of table\n1GB of data can be select and dynamic aggregation will happen\n\n\nRegards\nSuganthiSekar",
"msg_date": "Mon, 4 Feb 2019 20:15:30 +0530",
"msg_from": "Prince Pathria <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: server hardware tuning."
},
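A hedged sketch for checking how many of the existing connections actually do work at any moment, which helps size the pool the reply above recommends; pg_stat_activity and max_connections are standard:

    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

    SHOW max_connections;

One commonly cited starting point is a pool of roughly (2 x CPU cores) + effective spindle count server connections, with the 1100 clients multiplexed onto that smaller set by the pooler.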
{
"msg_contents": "På mandag 04. februar 2019 kl. 15:45:30, skrev Prince Pathria <\[email protected] <mailto:[email protected]>>: Hi Suganthi, I \ncan give you a start, some pro users can suggest you better. 1. Don't use \nthis much of connections on a single postgres server. Use a connection pooler \nin front of it. 2. RAM: Depends upon how much data you want to be cached. 3. \nUse PCIe SATA SSD with RAID10, Postgres uses a lot of IO for its operations. \n4. For config tuning: https://pgtune.leopard.in.ua/#/ \n<https://pgtune.leopard.in.ua/#/> Though please go through all params for more \nunderstanding Happy to help :) Prince Pathria Systems Architect Intern Evive \n+91 9478670472goevive.com <http://goevive.com> There's no such thing as PCIe \nSATA, use PCIe or NVMe in RAID-10, it's quite affordable these days and \nmeaningless not to use. --\n Andreas Joseph Krogh",
"msg_date": "Mon, 4 Feb 2019 15:56:45 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Sv: Re: Fw: server hardware tuning."
},
{
"msg_contents": "Hi Team ,\n\n i am using Postgresql 11, i have 2 partition table , when i joined both table in query\na table its goes exact partition table , but other table scan all partition\n\nplease clarify on this .\n\ni have enabled below parameter on in configuration file\nNote : alter system set enable_partitionwise_join to 'on';\n\n\nExample :\n\nexplain analyze\nselect * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n where a.call_created_date ='2017-11-01' and '2017-11-30'\n\n\n\n \"Hash Right Join (cost=8.19..50.47 rows=2 width=3635) (actual time=0.426..0.447 rows=7 loops=1)\"\n\" Hash Cond: (b.call_id = a.call_id)\"\n\" -> Append (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 rows=104 loops=1)\"\n\" -> Seq Scan on call_report2 b (cost=0.00..0.00 rows=1 width=528) (actual time=0.010..0.010 rows=0 loops=1)\"\n\" -> Seq Scan on call_report2_201803 b_1 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.029..0.031 rows=14 loops=1)\"\n\" -> Seq Scan on call_report2_201711 b_2 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.014..0.015 rows=7 loops=1)\"\n\" -> Seq Scan on call_report2_201712 b_3 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.047 rows=34 loops=1)\"\n\" -> Seq Scan on call_report2_201801 b_4 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.058 rows=49 loops=1)\"\n\" -> Hash (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 rows=7 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 12kB\"\n\" -> Append (cost=0.00..8.17 rows=2 width=1314) (actual time=0.053..0.060 rows=7 loops=1)\"\n\" -> Seq Scan on call_report1 a (cost=0.00..0.00 rows=1 width=437) (actual time=0.022..0.022 rows=0 loops=1)\"\n\" Filter: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n\" -> Index Scan using idx_call_report1_201711_ccd on call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual time=0.029..0.034 rows=7 loops=1)\"\n\" Index Cond: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n\"Planning Time: 20.866 ms\"\n\"Execution Time: 1.205 ms\"\n\n\n________________________________\nFrom: suganthi Sekar\nSent: 04 February 2019 15:27:31\nTo: [email protected]\nSubject: Fw: server hardware tuning.\n\n\n\nHi ,\n\n\nI need know how to calculate hardware sizing for database or query\n\n\nRAM\n\nCPU\n\nConfig tuning\n\n\nRequirement :\n\n\n1100 concurrent connection\n\n1600 column of table\n\n1GB of data can be select and dynamic aggregation will happen\n\n\nRegards\n\nSuganthiSekar\n\n\n\n\n\n\n\n\n\n\nHi Team , \n\n\n\n\n i am using Postgresql 11, i have 2 partition table , when i joined both table in\n query \n\na table its goes exact partition table , but other table scan all partition\n\n\n\n\nplease clarify on this .\n\n\n\n\ni have enabled below parameter on in configuration file\n\nNote : alter system set enable_partitionwise_join to 'on';\n\n\n\n\n\n\n\n\nExample : \n\n\n\n\nexplain analyze\n\nselect * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n\n where a.call_created_date ='2017-11-01' and '2017-11-30'\n\n\n\n\n\n\n\n\n\n\n\n\n \"Hash Right Join (cost=8.19..50.47 rows=2 width=3635) (actual time=0.426..0.447 rows=7 loops=1)\"\n\n\" Hash Cond: (b.call_id = a.call_id)\"\n\n\" -> Append (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 rows=104 loops=1)\"\n\n\" -> Seq Scan on call_report2 b (cost=0.00..0.00 rows=1 width=528) (actual time=0.010..0.010 rows=0 
loops=1)\"\n\n\" -> Seq Scan on call_report2_201803 b_1 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.029..0.031 rows=14 loops=1)\"\n\n\" -> Seq Scan on call_report2_201711 b_2 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.014..0.015 rows=7 loops=1)\"\n\n\" -> Seq Scan on call_report2_201712 b_3 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.047 rows=34 loops=1)\"\n\n\" -> Seq Scan on call_report2_201801 b_4 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.058 rows=49 loops=1)\"\n\n\" -> Hash (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 rows=7 loops=1)\"\n\n\" Buckets: 1024 Batches: 1 Memory Usage: 12kB\"\n\n\" -> Append (cost=0.00..8.17 rows=2 width=1314) (actual time=0.053..0.060 rows=7 loops=1)\"\n\n\" -> Seq Scan on call_report1 a (cost=0.00..0.00 rows=1 width=437) (actual time=0.022..0.022 rows=0 loops=1)\"\n\n\" Filter: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n\n\" -> Index Scan using idx_call_report1_201711_ccd on call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual time=0.029..0.034 rows=7 loops=1)\"\n\n\" Index Cond: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n\n\"Planning Time: 20.866 ms\"\n\n\"Execution Time: 1.205 ms\"\n\n\n\n\nFrom: suganthi Sekar\nSent: 04 February 2019 15:27:31\nTo: [email protected]\nSubject: Fw: server hardware tuning.\n \n\n\n\n\n\n\n\n\n\n\n\n\nHi ,\n\n\nI need know how to calculate hardware sizing for database or query\n\n\nRAM\nCPU\nConfig tuning\n\n\nRequirement : \n\n\n1100 concurrent connection\n1600 column of table\n1GB of data can be select and dynamic aggregation will happen\n\n\nRegards\nSuganthiSekar",
"msg_date": "Thu, 14 Feb 2019 09:38:52 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: server hardware tuning."
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 09:38:52AM +0000, suganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query\n> a table its goes exact partition table , but other table scan all partition\n> \n> please clarify on this .\n> \n> Example :\n> \n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and '2017-11-30'\n\nLooks like this query waas manally editted and should say:\n> where a.call_created_date >='2017-11-01' AND a.call_created_date<'2017-11-30'\nRight?\n\nThe issue is described well here:\nhttps://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA%40gmail.com\nhttps://www.postgresql.org/message-id/499.1496696552%40sss.pgh.pa.us\n\nYou can work around it by specifying the same condition on b.call_created_date:\n> AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\nJustin\n\n",
"msg_date": "Thu, 14 Feb 2019 04:05:33 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
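The workaround described above, spelled out against the table names used in this thread; the date literals are the ones from the original query and are only illustrative:

    EXPLAIN (ANALYZE)
    SELECT *
    FROM   call_report1 AS a
    JOIN   call_report2 AS b ON a.call_id = b.call_id
    WHERE  a.call_created_date >= '2017-11-01' AND a.call_created_date < '2017-11-30'
      AND  b.call_created_date >= '2017-11-01' AND b.call_created_date < '2017-11-30';

With the range repeated on b.call_created_date, the planner can prune the partitions of call_report2 as well instead of appending and scanning all of them.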
{
"msg_contents": "HI,\n\n\nu mean the below parameter need to set on . its already on only.\n\n\n alter system set constraint_exclusion to 'on';\n\n\nRegards,\n\nSuganthi Sekar\n\n________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: 14 February 2019 15:35:33\nTo: suganthi Sekar\nCc: [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 09:38:52AM +0000, suganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query\n> a table its goes exact partition table , but other table scan all partition\n>\n> please clarify on this .\n>\n> Example :\n>\n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and '2017-11-30'\n\nLooks like this query waas manally editted and should say:\n> where a.call_created_date >='2017-11-01' AND a.call_created_date<'2017-11-30'\nRight?\n\nThe issue is described well here:\nhttps://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA%40gmail.com\nhttps://www.postgresql.org/message-id/499.1496696552%40sss.pgh.pa.us\n\nYou can work around it by specifying the same condition on b.call_created_date:\n> AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\nJustin\n\n\n\n\n\n\n\n\nHI,\n\n\nu mean the below parameter need to set on . its already on only.\n\n\n alter system set constraint_exclusion to 'on';\n\n\n\nRegards,\nSuganthi Sekar\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: 14 February 2019 15:35:33\nTo: suganthi Sekar\nCc: [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n \n\n\nOn Thu, Feb 14, 2019 at 09:38:52AM +0000, suganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query\n> a table its goes exact partition table , but other table scan all partition\n> \n> please clarify on this .\n> \n> Example :\n> \n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and '2017-11-30'\n\nLooks like this query waas manally editted and should say:\n> where a.call_created_date >='2017-11-01' AND a.call_created_date<'2017-11-30'\nRight?\n\nThe issue is described well here:\nhttps://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA%40gmail.com\nhttps://www.postgresql.org/message-id/499.1496696552%40sss.pgh.pa.us\n\nYou can work around it by specifying the same condition on b.call_created_date:\n> AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\nJustin",
"msg_date": "Thu, 14 Feb 2019 10:38:36 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\n",
"msg_date": "Thu, 14 Feb 2019 04:40:01 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "Hi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\n\n\n\n\n\n\n\nHi,\n\n\nThanks, i know if explicitly we give in where condition it is working.\n\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n\n enable_partitionwise_join to 'on';\n\n\n\n what is the use of enable_partitionwise_join to 'on';\n\n\n\nThanks for your response.\n\n\nRegards\nSuganthi Sekar\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n \n\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'",
"msg_date": "Thu, 14 Feb 2019 10:49:16 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
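For context, enable_partitionwise_join and partition pruning solve different problems. Partition-wise join lets the planner join matching partitions pair by pair, but only when both tables are partitioned the same way and the join condition covers the partition key; it never infers a missing filter on the second table. A hedged sketch of the shape of query the setting can help with (the extra join clause is purely illustrative, not a recommendation for this schema):

    SET enable_partitionwise_join = on;   -- off by default in PostgreSQL 11

    -- the setting applies when the partition key is part of the join, e.g.
    --   ... ON a.call_id = b.call_id
    --      AND a.call_created_date = b.call_created_date

Pruning call_report2's partitions still requires a WHERE clause on b.call_created_date.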
{
"msg_contents": "suganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query \n> a table its goes exact partition table , but other table scan all partition\n> \n> please clarify on this .\n> \n> i have enabled below parameter on in configuration file\n> Note : alter system set enable_partitionwise_join to 'on';\n> \n> \n> Example : \n> \n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and '2017-11-30'\n> \n> \n> \n> \"Hash Right Join (cost=8.19..50.47 rows=2 width=3635) (actual time=0.426..0.447 rows=7 loops=1)\"\n> \" Hash Cond: (b.call_id = a.call_id)\"\n> \" -> Append (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 rows=104 loops=1)\"\n> \" -> Seq Scan on call_report2 b (cost=0.00..0.00 rows=1 width=528) (actual time=0.010..0.010 rows=0 loops=1)\"\n> \" -> Seq Scan on call_report2_201803 b_1 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.029..0.031 rows=14 loops=1)\"\n> \" -> Seq Scan on call_report2_201711 b_2 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.014..0.015 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report2_201712 b_3 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.047 rows=34 loops=1)\"\n> \" -> Seq Scan on call_report2_201801 b_4 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.058 rows=49 loops=1)\"\n> \" -> Hash (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 rows=7 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 12kB\"\n> \" -> Append (cost=0.00..8.17 rows=2 width=1314) (actual time=0.053..0.060 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report1 a (cost=0.00..0.00 rows=1 width=437) (actual time=0.022..0.022 rows=0 loops=1)\"\n> \" Filter: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \" -> Index Scan using idx_call_report1_201711_ccd on call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual time=0.029..0.034 rows=7 loops=1)\"\n> \" Index Cond: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \"Planning Time: 20.866 ms\"\n> \"Execution Time: 1.205 ms\"\n\nThere is no condition on the table \"call_report2\" in your query,\nso it is not surprising that all partitions are scanned, right?\n\nYou have to add a WHERE condition that filters on the partitioning\ncolumn(s) of \"call_report2\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 14 Feb 2019 13:37:49 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning"
},
{
"msg_contents": "HI ,\n\n\nOk thanks.\n\n\nRegards,\n\nSuganthi Sekar\n\n\n________________________________\nFrom: Laurenz Albe <[email protected]>\nSent: 14 February 2019 18:07:49\nTo: suganthi Sekar; [email protected]\nSubject: Re: partition pruning\n\nsuganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query\n> a table its goes exact partition table , but other table scan all partition\n>\n> please clarify on this .\n>\n> i have enabled below parameter on in configuration file\n> Note : alter system set enable_partitionwise_join to 'on';\n>\n>\n> Example :\n>\n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and '2017-11-30'\n>\n>\n>\n> \"Hash Right Join (cost=8.19..50.47 rows=2 width=3635) (actual time=0.426..0.447 rows=7 loops=1)\"\n> \" Hash Cond: (b.call_id = a.call_id)\"\n> \" -> Append (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 rows=104 loops=1)\"\n> \" -> Seq Scan on call_report2 b (cost=0.00..0.00 rows=1 width=528) (actual time=0.010..0.010 rows=0 loops=1)\"\n> \" -> Seq Scan on call_report2_201803 b_1 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.029..0.031 rows=14 loops=1)\"\n> \" -> Seq Scan on call_report2_201711 b_2 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.014..0.015 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report2_201712 b_3 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.047 rows=34 loops=1)\"\n> \" -> Seq Scan on call_report2_201801 b_4 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.058 rows=49 loops=1)\"\n> \" -> Hash (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 rows=7 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 12kB\"\n> \" -> Append (cost=0.00..8.17 rows=2 width=1314) (actual time=0.053..0.060 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report1 a (cost=0.00..0.00 rows=1 width=437) (actual time=0.022..0.022 rows=0 loops=1)\"\n> \" Filter: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \" -> Index Scan using idx_call_report1_201711_ccd on call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual time=0.029..0.034 rows=7 loops=1)\"\n> \" Index Cond: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \"Planning Time: 20.866 ms\"\n> \"Execution Time: 1.205 ms\"\n\nThere is no condition on the table \"call_report2\" in your query,\nso it is not surprising that all partitions are scanned, right?\n\nYou have to add a WHERE condition that filters on the partitioning\ncolumn(s) of \"call_report2\".\n\nYours,\nLaurenz Albe\n--\nCybertec | https://www.cybertec-postgresql.com\n\n\n\n\n\n\n\n\n\nHI ,\n\n\nOk thanks.\n\n\nRegards,\n\nSuganthi Sekar\n\n\n\n\nFrom: Laurenz Albe <[email protected]>\nSent: 14 February 2019 18:07:49\nTo: suganthi Sekar; [email protected]\nSubject: Re: partition pruning\n \n\n\nsuganthi Sekar wrote:\n> i am using Postgresql 11, i have 2 partition table , when i joined both table in query\n\n> a table its goes exact partition table , but other table scan all partition\n> \n> please clarify on this .\n> \n> i have enabled below parameter on in configuration file\n> Note : alter system set enable_partitionwise_join to 'on';\n> \n> \n> Example : \n> \n> explain analyze\n> select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> where a.call_created_date ='2017-11-01' and 
'2017-11-30'\n> \n> \n> \n> \"Hash Right Join (cost=8.19..50.47 rows=2 width=3635) (actual time=0.426..0.447 rows=7 loops=1)\"\n> \" Hash Cond: (b.call_id = a.call_id)\"\n> \" -> Append (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 rows=104 loops=1)\"\n> \" -> Seq Scan on call_report2 b (cost=0.00..0.00 rows=1 width=528) (actual time=0.010..0.010 rows=0 loops=1)\"\n> \" -> Seq Scan on call_report2_201803 b_1 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.029..0.031 rows=14 loops=1)\"\n> \" -> Seq Scan on call_report2_201711 b_2 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.014..0.015 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report2_201712 b_3 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.047 rows=34 loops=1)\"\n> \" -> Seq Scan on call_report2_201801 b_4 (cost=0.00..10.30 rows=30 width=2334) (actual time=0.017..0.058 rows=49 loops=1)\"\n> \" -> Hash (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 rows=7 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 12kB\"\n> \" -> Append (cost=0.00..8.17 rows=2 width=1314) (actual time=0.053..0.060 rows=7 loops=1)\"\n> \" -> Seq Scan on call_report1 a (cost=0.00..0.00 rows=1 width=437) (actual time=0.022..0.022 rows=0 loops=1)\"\n> \" Filter: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \" -> Index Scan using idx_call_report1_201711_ccd on call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual time=0.029..0.034 rows=7 loops=1)\"\n> \" Index Cond: ((call_created_date >= '2017-11-01'::date) AND (call_created_date <= '2017-11-30'::date))\"\n> \"Planning Time: 20.866 ms\"\n> \"Execution Time: 1.205 ms\"\n\nThere is no condition on the table \"call_report2\" in your query,\nso it is not surprising that all partitions are scanned, right?\n\nYou have to add a WHERE condition that filters on the partitioning\ncolumn(s) of \"call_report2\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Thu, 14 Feb 2019 12:41:56 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partition pruning"
},
{
"msg_contents": "What are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected] wrote:\n\n> Hi,\n>\n> Thanks, i know if explicitly we give in where condition it is working.\n>\n> i thought with below parameter in Postgresq11 this issue is fixed ?\n>\n> * enable_partitionwise_join to 'on';*\n>\n> * what is the use of enable_partitionwise_join to 'on';*\n>\n> *Thanks for your response.*\n>\n> *Regards*\n> *Suganthi Sekar*\n> ------------------------------\n> *From:* Justin Pryzby <[email protected]>\n> *Sent:* 14 February 2019 16:10:01\n> *To:* suganthi Sekar\n> *Cc:* [email protected]\n> *Subject:* Re: constraint exclusion with ineq condition (Re: server\n> hardware tuning.)\n>\n> On Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> > u mean the below parameter need to set on . its already on only.\n> > alter system set constraint_exclusion to 'on';\n>\n> No, I said:\n> > You can work around it by specifying the same condition on\n> b.call_created_date:\n> > > AND b.call_created_date >='2017-11-01' AND\n> b.call_created_date<'2017-11-30'\n>\n\nWhat are these two tables partitioned by?On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected] wrote:\n\n\nHi,\n\n\nThanks, i know if explicitly we give in where condition it is working.\n\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n\n enable_partitionwise_join to 'on';\n\n\n\n what is the use of enable_partitionwise_join to 'on';\n\n\n\nThanks for your response.\n\n\nRegards\nSuganthi Sekar\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n \n\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'",
"msg_date": "Thu, 14 Feb 2019 07:05:48 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "Both table Portion by same column call_created_date\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: 14 February 2019 19:35:48\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nWhat are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]<mailto:[email protected]> wrote:\nHi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]<mailto:[email protected]>>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\n",
"msg_date": "Thu, 14 Feb 2019 14:35:12 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 01:37:49PM +0100, Laurenz Albe wrote:\n> There is no condition on the table \"call_report2\" in your query,\n> so it is not surprising that all partitions are scanned, right?\n\nSome people find it surprising, since: a.call_id=b.call_id\n\nsuganthi Sekar wrote:\n> > explain analyze\n> > select * from call_report1 as a inner join call_report2 as b on a.call_id=b.call_id\n> > where a.call_created_date ='2017-11-01' and '2017-11-30'\n\nJustin\n\n",
"msg_date": "Thu, 14 Feb 2019 08:36:11 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning"
},
{
"msg_contents": "Yeah, the planner doesn't know that call_created_date can be limited on\nboth tables unless you tell it specify it in the where condition as Laurenz\nsaid on another thread.\n\n\n*Michael Lewis*\n\nOn Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]>\nwrote:\n\n> Both table Portion by same column call_created_date\n> ________________________________\n> From: Michael Lewis <[email protected]>\n> Sent: 14 February 2019 19:35:48\n> To: suganthi Sekar\n> Cc: Justin Pryzby; [email protected]\n> Subject: Re: constraint exclusion with ineq condition (Re: server hardware\n> tuning.)\n>\n> What are these two tables partitioned by?\n>\n> On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]\n> <mailto:[email protected]> wrote:\n> Hi,\n>\n> Thanks, i know if explicitly we give in where condition it is working.\n>\n> i thought with below parameter in Postgresq11 this issue is fixed ?\n>\n> enable_partitionwise_join to 'on';\n>\n> what is the use of enable_partitionwise_join to 'on';\n>\n> Thanks for your response.\n>\n> Regards\n> Suganthi Sekar\n> ________________________________\n> From: Justin Pryzby <[email protected]<mailto:[email protected]>>\n> Sent: 14 February 2019 16:10:01\n> To: suganthi Sekar\n> Cc: [email protected]<mailto:\n> [email protected]>\n> Subject: Re: constraint exclusion with ineq condition (Re: server hardware\n> tuning.)\n>\n> On Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> > u mean the below parameter need to set on . its already on only.\n> > alter system set constraint_exclusion to 'on';\n>\n> No, I said:\n> > You can work around it by specifying the same condition on\n> b.call_created_date:\n> > > AND b.call_created_date >='2017-11-01' AND\n> b.call_created_date<'2017-11-30'\n>\n\nYeah, the planner doesn't know that call_created_date can be limited on both tables unless you tell it specify it in the where condition as Laurenz said on another thread.Michael LewisOn Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]> wrote:Both table Portion by same column call_created_date\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: 14 February 2019 19:35:48\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nWhat are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]<mailto:[email protected]> wrote:\nHi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]<mailto:[email protected]>>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'",
"msg_date": "Thu, 14 Feb 2019 11:50:00 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "Hi,\n\n\nyes i accept , but when i will do for existing tables, i am facing issue.\n\n\n\nI have created 100 Function , all the function having five table join(now all partition by date) , now its not possible to change where condition in all 100 Function.\n\n\nso that i am trying any other possibilities are there.\n\n\n\nRegards,\n\nSuganthi Sekar\n\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: 15 February 2019 00:20:00\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nYeah, the planner doesn't know that call_created_date can be limited on both tables unless you tell it specify it in the where condition as Laurenz said on another thread.\n\n\nMichael Lewis\n\nOn Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]<mailto:[email protected]>> wrote:\nBoth table Portion by same column call_created_date\n________________________________\nFrom: Michael Lewis <[email protected]<mailto:[email protected]>>\nSent: 14 February 2019 19:35:48\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]<mailto:[email protected]>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nWhat are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]<mailto:[email protected]><mailto:[email protected]<mailto:[email protected]>> wrote:\nHi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]<mailto:[email protected]><mailto:[email protected]<mailto:[email protected]>>>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]<mailto:[email protected]><mailto:[email protected]<mailto:[email protected]>>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'\n\n\n\n\n\n\n\n\nHi,\n\n\nyes i accept , but when i will do for existing tables, i am facing issue.\n\n\n\n\nI have created 100 Function , all the function having five table join(now all partition by date) , now its not possible to change where condition in all 100 Function.\n\n\nso that i am trying any other possibilities are there. 
\n\n\n\n\n\nRegards,\nSuganthi Sekar\n\n\nFrom: Michael Lewis <[email protected]>\nSent: 15 February 2019 00:20:00\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n \n\n\n\nYeah, the planner doesn't know that call_created_date can be limited on both tables unless you tell it specify it in the where condition as Laurenz said on another thread.\n\n\n\n\n\n\n\n\n\n\nMichael\n Lewis\n\n\n\n\n\n\n\n\n\n\nOn Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]> wrote:\n\n\nBoth table Portion by same column call_created_date\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: 14 February 2019 19:35:48\nTo: suganthi Sekar\nCc: Justin Pryzby; \[email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nWhat are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]<mailto:[email protected]> wrote:\nHi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]<mailto:[email protected]>>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'",
"msg_date": "Fri, 15 Feb 2019 05:09:36 +0000",
"msg_from": "suganthi Sekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "Hi Sugathi,\n\nThat sounds like a perfect task for a view if the joins are all the same.\n\n~Ben\n\n\n\nOn Fri, Feb 15, 2019 at 9:36 AM suganthi Sekar <[email protected]>\nwrote:\n\n> Hi,\n>\n>\n> yes i accept , but when i will do for existing tables, i am facing issue.\n>\n>\n>\n> I have created 100 Function , all the function having five table join(*now\n> all partition by date*) , now its not possible to change where condition\n> in all 100 Function.\n>\n> so that i am trying any other possibilities are there.\n>\n>\n>\n> Regards,\n>\n> Suganthi Sekar\n> ------------------------------\n> *From:* Michael Lewis <[email protected]>\n> *Sent:* 15 February 2019 00:20:00\n> *To:* suganthi Sekar\n> *Cc:* Justin Pryzby; [email protected]\n> *Subject:* Re: constraint exclusion with ineq condition (Re: server\n> hardware tuning.)\n>\n> Yeah, the planner doesn't know that call_created_date can be limited on\n> both tables unless you tell it specify it in the where condition as Laurenz\n> said on another thread.\n>\n>\n> *Michael Lewis*\n>\n> On Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]>\n> wrote:\n>\n> Both table Portion by same column call_created_date\n> ________________________________\n> From: Michael Lewis <[email protected]>\n> Sent: 14 February 2019 19:35:48\n> To: suganthi Sekar\n> Cc: Justin Pryzby; [email protected]\n> Subject: Re: constraint exclusion with ineq condition (Re: server hardware\n> tuning.)\n>\n> What are these two tables partitioned by?\n>\n> On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]\n> <mailto:[email protected]> wrote:\n> Hi,\n>\n> Thanks, i know if explicitly we give in where condition it is working.\n>\n> i thought with below parameter in Postgresq11 this issue is fixed ?\n>\n> enable_partitionwise_join to 'on';\n>\n> what is the use of enable_partitionwise_join to 'on';\n>\n> Thanks for your response.\n>\n> Regards\n> Suganthi Sekar\n> ________________________________\n> From: Justin Pryzby <[email protected]<mailto:[email protected]>>\n> Sent: 14 February 2019 16:10:01\n> To: suganthi Sekar\n> Cc: [email protected]<mailto:\n> [email protected]>\n> Subject: Re: constraint exclusion with ineq condition (Re: server hardware\n> tuning.)\n>\n> On Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> > u mean the below parameter need to set on . its already on only.\n> > alter system set constraint_exclusion to 'on';\n>\n> No, I said:\n> > You can work around it by specifying the same condition on\n> b.call_created_date:\n> > > AND b.call_created_date >='2017-11-01' AND\n> b.call_created_date<'2017-11-30'\n>\n>\n\nHi Sugathi, That sounds like a perfect task for a view if the joins are all the same.~BenOn Fri, Feb 15, 2019 at 9:36 AM suganthi Sekar <[email protected]> wrote:\n\n\nHi,\n\n\nyes i accept , but when i will do for existing tables, i am facing issue.\n\n\n\n\nI have created 100 Function , all the function having five table join(now all partition by date) , now its not possible to change where condition in all 100 Function.\n\n\nso that i am trying any other possibilities are there. 
\n\n\n\n\n\nRegards,\nSuganthi Sekar\n\n\nFrom: Michael Lewis <[email protected]>\nSent: 15 February 2019 00:20:00\nTo: suganthi Sekar\nCc: Justin Pryzby; [email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n \n\n\n\nYeah, the planner doesn't know that call_created_date can be limited on both tables unless you tell it specify it in the where condition as Laurenz said on another thread.\n\n\n\n\n\n\n\n\n\n\nMichael\n Lewis\n\n\n\n\n\n\n\n\n\n\nOn Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar <[email protected]> wrote:\n\n\nBoth table Portion by same column call_created_date\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: 14 February 2019 19:35:48\nTo: suganthi Sekar\nCc: Justin Pryzby; \[email protected]\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nWhat are these two tables partitioned by?\n\nOn Thu, Feb 14, 2019, 5:03 AM suganthi Sekar <[email protected]<mailto:[email protected]> wrote:\nHi,\n\nThanks, i know if explicitly we give in where condition it is working.\n\ni thought with below parameter in Postgresq11 this issue is fixed ?\n\n enable_partitionwise_join to 'on';\n\n what is the use of enable_partitionwise_join to 'on';\n\nThanks for your response.\n\nRegards\nSuganthi Sekar\n________________________________\nFrom: Justin Pryzby <[email protected]<mailto:[email protected]>>\nSent: 14 February 2019 16:10:01\nTo: suganthi Sekar\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: constraint exclusion with ineq condition (Re: server hardware tuning.)\n\nOn Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> u mean the below parameter need to set on . its already on only.\n> alter system set constraint_exclusion to 'on';\n\nNo, I said:\n> You can work around it by specifying the same condition on b.call_created_date:\n> > AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'",
"msg_date": "Fri, 15 Feb 2019 12:24:18 -0500",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 12:24:18PM -0500, Benedict Holland wrote:\n> That sounds like a perfect task for a view if the joins are all the same.\n\nBut note that either the view itself needs to have both where clauses (with\nhardcoded dates?), or otherwise the view needs to be on only one table, and the\ntoplevel query needs to have where clause on each view, or else one of the\ntables won't get constraint exclusion.\n\nOn Fri, Feb 15, 2019 at 9:36 AM suganthi Sekar <[email protected]> wrote:\n> > yes i accept , but when i will do for existing tables, i am facing issue.\n> >\n> > I have created 100 Function , all the function having five table join(*now\n> > all partition by date*) , now its not possible to change where condition\n> > in all 100 Function.\n> >\n> > so that i am trying any other possibilities are there.\n\n> > From: Justin Pryzby <[email protected]<mailto:[email protected]>>\n> > Sent: 14 February 2019 16:10:01\n> > To: suganthi Sekar\n> > Cc: [email protected]<mailto:\n> > [email protected]>\n> > Subject: Re: constraint exclusion with ineq condition (Re: server hardware\n> > tuning.)\n> >\n> > On Thu, Feb 14, 2019 at 10:38:36AM +0000, suganthi Sekar wrote:\n> > > u mean the below parameter need to set on . its already on only.\n> > > alter system set constraint_exclusion to 'on';\n> >\n> > No, I said:\n> > > You can work around it by specifying the same condition on\n> > b.call_created_date:\n> > > > AND b.call_created_date >='2017-11-01' AND\n> > b.call_created_date<'2017-11-30'\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n",
"msg_date": "Sat, 16 Feb 2019 16:04:06 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constraint exclusion with ineq condition (Re: server hardware\n tuning.)"
}
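A hedged sketch of the view idea together with the caveat above; the view name and trimmed-down column list are hypothetical. The point is that the partition key of each partitioned table must still be constrained, either inside the view or by every caller, because the planner does not propagate a range condition through the call_id equality:

    CREATE OR REPLACE VIEW call_report_joined AS
    SELECT a.call_id,
           a.call_created_date AS a_created,
           b.call_created_date AS b_created
    FROM   call_report1 AS a
    JOIN   call_report2 AS b ON a.call_id = b.call_id;

    -- each caller (e.g. the existing functions) filters both partition keys:
    SELECT *
    FROM   call_report_joined
    WHERE  a_created >= '2017-11-01' AND a_created < '2017-11-30'
      AND  b_created >= '2017-11-01' AND b_created < '2017-11-30';

Alternatively, each function could take the date range as parameters and apply it to both tables, which avoids hardcoding dates in the view.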
] |
[
{
"msg_contents": "Hi,\nI have a table with a bytea column and its size is huge and thats why\npostgres created a toasted table for that column. The original table\ncontains about 1K-10K rows but the toasted can contain up to 20M rows. I\nassigned the next two settings for the toasted table :\n\n alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);\n\n alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);\n\n\nTherefore I hoped that after deletion of 10K rows from the toasted table\nautovacuum will launch vacuum on the toasted table.\n\n From the logs I see that sometimes the autovacuum is running once in a few\nhours (3-4 hours) and sometimes it runs even every few minutes.\n\nNow I wanted to check if only depends on the thresholds and on the\nfrequency of the deletes/updates on the table ? In some cases the\nautovacuum is taking a few hours (4+) it finishes and then immediatly is\nstarting to run vacuum again on the table :\n\n2019-01-29 *07:10:58* EST 14083 LOG: automatic vacuum of table\n\"db.pg_toast.pg_toast_14430\": index scans: 3\n\npages: 1672 removed, 7085056 remain\n\ntuples: 6706885 removed, 2023847 remain\n\nbuffer usage: 4808221 hits, 6404148 misses, 6152603 dirtied\n\navg read rate: 2.617 MiB/s, avg write rate: 2.514 MiB/s\n\nsystem usage: CPU 148.65s/70.06u sec elapsed 19119.55 sec\n\nThis run took 19119 sec ~ 5 hours\n\n\n2019-01-29 *10:05:45* EST 11985 LOG: automatic vacuum of table\n\"db.pg_toast.pg_toast_14430\": index scans: 2\n\npages: 2752 removed, 7082304 remain\n\ntuples: 3621620 removed, 1339540 remain\n\nbuffer usage: 2655076 hits, 3506964 misses, 3333423 dirtied\n\navg read rate: 2.638 MiB/s, avg write rate: 2.508 MiB/s\n\nsystem usage: CPU 71.22s/37.65u sec elapsed 10384.93 sec\n\n\nthis run took 10384 sec ~ 2.88 hours.\n\n\nthe diff between the summaries is 3 hours and the second run took 2.88\nhours which means that the autovacuum launched vacuum on the table a few\nminutes after the first vacuum has finished.\n\n\nIn addition, as I said sometimes if runs very often :\n\n2019-02-04 09:26:23 EST 14735 LOG: automatic vacuum of table\n\"db.pg_toast.pg_toast_14430\": index scans: 1\n\npages: 1760 removed, 11149568 remain\n\ntuples: 47870 removed, 4929452 remain\n\nbuffer usage: 200575 hits, 197394 misses, 24264 dirtied\n\navg read rate: 5.798 MiB/s, avg write rate: 0.713 MiB/s\n\nsystem usage: CPU 1.55s/1.38u sec elapsed 265.96 sec\n\n\n2019-02-04 09:32:57 EST 26171 LOG: automatic vacuum of table\n\"db.pg_toast.pg_toast_14430\": index scans: 1\n\npages: 2144 removed, 11147424 remain\n\ntuples: 55484 removed, 4921526 remain\n\nbuffer usage: 196811 hits, 209267 misses, 34471 dirtied\n\navg read rate: 5.459 MiB/s, avg write rate: 0.899 MiB/s\n\nsystem usage: CPU 1.73s/1.54u sec elapsed 299.50 sec\n\n\nNow the question is how to handle or tune it ? Is there any change that I\nneed to increase the cost_limit / cost_delay ?\n\nHi,I have a table with a bytea column and its size is huge and thats why postgres created a toasted table for that column. The original table contains about 1K-10K rows but the toasted can contain up to 20M rows. 
I assigned the next two settings for the toasted table : alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0); alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);Therefore I hoped that after deletion of 10K rows from the toasted table autovacuum will launch vacuum on the toasted table.From the logs I see that sometimes the autovacuum is running once in a few hours (3-4 hours) and sometimes it runs even every few minutes.Now I wanted to check if only depends on the thresholds and on the frequency of the deletes/updates on the table ? In some cases the autovacuum is taking a few hours (4+) it finishes and then immediatly is starting to run vacuum again on the table : 2019-01-29 07:10:58 EST 14083 LOG: automatic vacuum of table \"db.pg_toast.pg_toast_14430\": index scans: 3 pages: 1672 removed, 7085056 remain tuples: 6706885 removed, 2023847 remain buffer usage: 4808221 hits, 6404148 misses, 6152603 dirtied avg read rate: 2.617 MiB/s, avg write rate: 2.514 MiB/s system usage: CPU 148.65s/70.06u sec elapsed 19119.55 sec This run took 19119 sec ~ 5 hours2019-01-29 10:05:45 EST 11985 LOG: automatic vacuum of table \"db.pg_toast.pg_toast_14430\": index scans: 2 pages: 2752 removed, 7082304 remain tuples: 3621620 removed, 1339540 remain buffer usage: 2655076 hits, 3506964 misses, 3333423 dirtied avg read rate: 2.638 MiB/s, avg write rate: 2.508 MiB/s system usage: CPU 71.22s/37.65u sec elapsed 10384.93 secthis run took 10384 sec ~ 2.88 hours.the diff between the summaries is 3 hours and the second run took 2.88 hours which means that the autovacuum launched vacuum on the table a few minutes after the first vacuum has finished.In addition, as I said sometimes if runs very often : 2019-02-04 09:26:23 EST 14735 LOG: automatic vacuum of table \"db.pg_toast.pg_toast_14430\": index scans: 1 pages: 1760 removed, 11149568 remain tuples: 47870 removed, 4929452 remain buffer usage: 200575 hits, 197394 misses, 24264 dirtied avg read rate: 5.798 MiB/s, avg write rate: 0.713 MiB/s system usage: CPU 1.55s/1.38u sec elapsed 265.96 sec2019-02-04 09:32:57 EST 26171 LOG: automatic vacuum of table \"db.pg_toast.pg_toast_14430\": index scans: 1 pages: 2144 removed, 11147424 remain tuples: 55484 removed, 4921526 remain buffer usage: 196811 hits, 209267 misses, 34471 dirtied avg read rate: 5.459 MiB/s, avg write rate: 0.899 MiB/s system usage: CPU 1.73s/1.54u sec elapsed 299.50 secNow the question is how to handle or tune it ? Is there any change that I need to increase the cost_limit / cost_delay ?",
"msg_date": "Wed, 6 Feb 2019 12:29:06 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:\n> Hi,\n> I have a table with a bytea column and its size is huge and thats why postgres created a toasted table for that column.\n> The original table contains about 1K-10K rows but the toasted can contain up to 20M rows.\n> I assigned the next two settings for the toasted table : \n> alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);\n> alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);\n> \n> Therefore I hoped that after deletion of 10K rows from the toasted table autovacuum will launch vacuum on the toasted table.\n> From the logs I see that sometimes the autovacuum is running once in a few hours (3-4 hours) and sometimes it runs even every few minutes.\n> Now I wanted to check if only depends on the thresholds and on the frequency of the deletes/updates on the table ?\n> In some cases the autovacuum is taking a few hours (4+) it finishes and then immediatly is starting to run vacuum again on the table : \n> \n> Now the question is how to handle or tune it ? Is there any change that I need to increase the cost_limit / cost_delay ?\n\nMaybe configuring autovacuum to run faster will help:\n\nalter table orig_table set (toast.autovacuum_vacuum_cost_limit = 2000);\n\nOr, more extreme:\n\nalter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 06 Feb 2019 12:16:54 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
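A minimal sketch of applying and verifying the per-table settings suggested above. "orig_table" is the name used in this thread, and pg_toast_14430 is taken from the posted log lines; both would differ on another system:

    ALTER TABLE orig_table SET (toast.autovacuum_vacuum_cost_limit = 2000);
    ALTER TABLE orig_table SET (toast.autovacuum_vacuum_cost_delay = 0);

    -- toast.* options end up on the toast relation itself:
    SELECT relname, reloptions
    FROM   pg_class
    WHERE  relname = 'pg_toast_14430';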
{
"msg_contents": "On Thu, 7 Feb 2019 at 00:17, Laurenz Albe <[email protected]> wrote:\n>\n> On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:\n> > Now the question is how to handle or tune it ? Is there any change that I need to increase the cost_limit / cost_delay ?\n>\n> Maybe configuring autovacuum to run faster will help:\n>\n> alter table orig_table set (toast.autovacuum_vacuum_cost_limit = 2000);\n>\n> Or, more extreme:\n>\n> alter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);\n\nGoing by the block hits/misses/dirtied and the mentioned vacuum times,\nit looks like auto-vacuum is set to the standard settings and if so it\nspent about 100% of its time sleeping on the job.\n\nIt might be a better idea to consider changing the vacuum settings\nglobally rather than just for one table.\n\nRunning a vacuum_cost_limit of 200 is likely something you'd not want\nto ever do with modern hardware... well maybe unless you just bought\nthe latest Raspberry PI, or something. You should be tuning that\nvalue to something that runs your vacuums to a speed you're happy with\nbut leaves enough IO and CPU for queries running on the database.\n\nIf you see that all auto-vacuum workers are busy more often than not,\nthen they're likely running too slowly and should be set to run more\nquickly.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 02:05:42 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
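A hedged sketch of the global tuning and of checking whether the autovacuum workers are busy, as suggested above; 2000 is only an example value. Autovacuum workers show up in pg_stat_activity with queries starting with "autovacuum:", and pg_stat_progress_vacuum (available since 9.6) shows how far a running vacuum has progressed:

    -- autovacuum_vacuum_cost_limit defaults to -1, i.e. it falls back to vacuum_cost_limit (200)
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
    SELECT pg_reload_conf();

    SELECT pid, query_start, query
    FROM   pg_stat_activity
    WHERE  query LIKE 'autovacuum:%';

    SELECT * FROM pg_stat_progress_vacuum;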
{
"msg_contents": "Hey,\nAs I said, I set the next settings for the toasted table :\n\n alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);\n\n alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);\n\nCan you explain a little bit more why you decided that the autovacuum spent\nit time on sleeping ?\n\nI see the autovacuum statistics from the logs, how can I check that the\nworkers are busy very often ?\n\nMy vacuum limit is 200 right now, basically If vacuum runs on my toasted\ntable and reached 200 but it didnt finish to clean all the dead tuples,\nafter the nap, should it continue cleaning it or wait until the\nvacuum_threshold hit again ?\n\nבתאריך יום ד׳, 6 בפבר׳ 2019 ב-15:05 מאת David Rowley <\[email protected]>:\n\n> On Thu, 7 Feb 2019 at 00:17, Laurenz Albe <[email protected]>\n> wrote:\n> >\n> > On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:\n> > > Now the question is how to handle or tune it ? Is there any change\n> that I need to increase the cost_limit / cost_delay ?\n> >\n> > Maybe configuring autovacuum to run faster will help:\n> >\n> > alter table orig_table set (toast.autovacuum_vacuum_cost_limit = 2000);\n> >\n> > Or, more extreme:\n> >\n> > alter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);\n>\n> Going by the block hits/misses/dirtied and the mentioned vacuum times,\n> it looks like auto-vacuum is set to the standard settings and if so it\n> spent about 100% of its time sleeping on the job.\n>\n> It might be a better idea to consider changing the vacuum settings\n> globally rather than just for one table.\n>\n> Running a vacuum_cost_limit of 200 is likely something you'd not want\n> to ever do with modern hardware... well maybe unless you just bought\n> the latest Raspberry PI, or something. You should be tuning that\n> value to something that runs your vacuums to a speed you're happy with\n> but leaves enough IO and CPU for queries running on the database.\n>\n> If you see that all auto-vacuum workers are busy more often than not,\n> then they're likely running too slowly and should be set to run more\n> quickly.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nHey,As I said, I set the next settings for the toasted table : alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0); alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);Can you explain a little bit more why you decided that the autovacuum spent it time on sleeping ?I see the autovacuum statistics from the logs, how can I check that the workers are busy very often ?My vacuum limit is 200 right now, basically If vacuum runs on my toasted table and reached 200 but it didnt finish to clean all the dead tuples, after the nap, should it continue cleaning it or wait until the vacuum_threshold hit again ?בתאריך יום ד׳, 6 בפבר׳ 2019 ב-15:05 מאת David Rowley <[email protected]>:On Thu, 7 Feb 2019 at 00:17, Laurenz Albe <[email protected]> wrote:\n>\n> On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:\n> > Now the question is how to handle or tune it ? 
Is there any change that I need to increase the cost_limit / cost_delay ?\n>\n> Maybe configuring autovacuum to run faster will help:\n>\n> alter table orig_table set (toast.autovacuum_vacuum_cost_limit = 2000);\n>\n> Or, more extreme:\n>\n> alter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);\n\nGoing by the block hits/misses/dirtied and the mentioned vacuum times,\nit looks like auto-vacuum is set to the standard settings and if so it\nspent about 100% of its time sleeping on the job.\n\nIt might be a better idea to consider changing the vacuum settings\nglobally rather than just for one table.\n\nRunning a vacuum_cost_limit of 200 is likely something you'd not want\nto ever do with modern hardware... well maybe unless you just bought\nthe latest Raspberry PI, or something. You should be tuning that\nvalue to something that runs your vacuums to a speed you're happy with\nbut leaves enough IO and CPU for queries running on the database.\n\nIf you see that all auto-vacuum workers are busy more often than not,\nthen they're likely running too slowly and should be set to run more\nquickly.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 6 Feb 2019 15:34:06 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
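To check whether those per-table thresholds are actually being crossed, one option is to watch the dead-tuple counters (a sketch; the toast relation name is taken from the log excerpts later in the thread and is otherwise hypothetical):

    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count
    FROM pg_stat_all_tables
    WHERE relname = 'pg_toast_1958391';   -- substitute the toast table for your own table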
{
"msg_contents": "Would it be nice to start changing those values found in the default\npostgres.conf so low?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Wed, 6 Feb 2019 06:36:14 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "which one you mean ? I changed the threshold and the scale for the specific\ntable...\n\nבתאריך יום ד׳, 6 בפבר׳ 2019 ב-15:36 מאת dangal <\[email protected]>:\n\n> Would it be nice to start changing those values found in the default\n> postgres.conf so low?\n>\n>\n>\n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\n\nwhich one you mean ? I changed the threshold and the scale for the specific table...בתאריך יום ד׳, 6 בפבר׳ 2019 ב-15:36 מאת dangal <[email protected]>:Would it be nice to start changing those values found in the default\npostgres.conf so low?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Wed, 6 Feb 2019 15:41:04 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "Hi all,\n\nIn the myriad of articles written about autovacuum tuning, I really like \nthis article by Tomas Vondra of 2ndQuadrant:\nhttps://blog.2ndquadrant.com/autovacuum-tuning-basics/\n\nIt is a concise article that touches on all the major aspects of \nautovacuuming tuning: thresholds, scale factors, throttling, etc.\n\nRegards and happy vacuuming to yas!\nMichael Vitale\n\n> Mariel Cherkassky <mailto:[email protected]>\n> Wednesday, February 6, 2019 8:41 AM\n> which one you mean ? I changed the threshold and the scale for the \n> specific table...\n>\n> dangal <mailto:[email protected]>\n> Wednesday, February 6, 2019 8:36 AM\n> Would it be nice to start changing those values found in the default\n> postgres.conf so low?\n>\n>\n>\n> --\n> Sent from: \n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n> David Rowley <mailto:[email protected]>\n> Wednesday, February 6, 2019 8:05 AM\n>\n> Going by the block hits/misses/dirtied and the mentioned vacuum times,\n> it looks like auto-vacuum is set to the standard settings and if so it\n> spent about 100% of its time sleeping on the job.\n>\n> It might be a better idea to consider changing the vacuum settings\n> globally rather than just for one table.\n>\n> Running a vacuum_cost_limit of 200 is likely something you'd not want\n> to ever do with modern hardware... well maybe unless you just bought\n> the latest Raspberry PI, or something. You should be tuning that\n> value to something that runs your vacuums to a speed you're happy with\n> but leaves enough IO and CPU for queries running on the database.\n>\n> If you see that all auto-vacuum workers are busy more often than not,\n> then they're likely running too slowly and should be set to run more\n> quickly.\n>\n\n\n\n\nHi all,\n\nIn the myriad of articles written about autovacuum tuning, I really like\n this article by Tomas Vondra of 2ndQuadrant:\nhttps://blog.2ndquadrant.com/autovacuum-tuning-basics/\n\nIt is a concise article that touches on all the major aspects of \nautovacuuming tuning: thresholds, scale factors, throttling, etc.\n\nRegards and happy vacuuming to yas!\nMichael Vitale\n\n\n\n \nMariel Cherkassky Wednesday,\n February 6, 2019 8:41 AM \nwhich one you mean ? I changed the threshold and the scale for \nthe specific table...\n\n \ndangal Wednesday,\n February 6, 2019 8:36 AM \nWould it be nice to start \nchanging those values found in the defaultpostgres.conf so low?--Sent\n from: \nhttp://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n \nDavid Rowley Wednesday,\n February 6, 2019 8:05 AM \nGoing by the \nblock hits/misses/dirtied and the mentioned vacuum times,it looks \nlike auto-vacuum is set to the standard settings and if so itspent \nabout 100% of its time sleeping on the job.It might be a better \nidea to consider changing the vacuum settingsglobally rather than \njust for one table.Running a vacuum_cost_limit of 200 is likely \nsomething you'd not wantto ever do with modern hardware... well \nmaybe unless you just boughtthe latest Raspberry PI, or something. \nYou should be tuning thatvalue to something that runs your vacuums \nto a speed you're happy withbut leaves enough IO and CPU for queries\n running on the database.If you see that all auto-vacuum workers\n are busy more often than not,then they're likely running too slowly\n and should be set to run morequickly.",
"msg_date": "Wed, 06 Feb 2019 09:11:40 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <\[email protected]> wrote:\n\n\n> Now the question is how to handle or tune it ? Is there any change that I\n> need to increase the cost_limit / cost_delay ?\n>\n\nSometimes vacuum has more work to do, so it takes more time to do it.\n\nThere is no indication of a problem. Or at least, you haven't described\none. So, there is nothing to handle or to tune.\n\nIf there is a problem, those log entries might help identify it. But in\nthe absence of a problem, they are just log spam.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <[email protected]> wrote: Now the question is how to handle or tune it ? Is there any change that I need to increase the cost_limit / cost_delay ?Sometimes vacuum has more work to do, so it takes more time to do it.There is no indication of a problem. Or at least, you haven't described one. So, there is nothing to handle or to tune.If there is a problem, those log entries might help identify it. But in the absence of a problem, they are just log spam.Cheers,Jeff",
"msg_date": "Wed, 6 Feb 2019 09:12:43 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "Well, basically I'm trying to tune it because the table still keep growing.\nI thought that by setting the scale and the threshold it will be enough but\nits seems that it wasnt. I attached some of the logs output to hear what\nyou guys think about it ..\n\nבתאריך יום ד׳, 6 בפבר׳ 2019 ב-16:12 מאת Jeff Janes <\[email protected]>:\n\n> On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>\n>> Now the question is how to handle or tune it ? Is there any change that I\n>> need to increase the cost_limit / cost_delay ?\n>>\n>\n> Sometimes vacuum has more work to do, so it takes more time to do it.\n>\n> There is no indication of a problem. Or at least, you haven't described\n> one. So, there is nothing to handle or to tune.\n>\n> If there is a problem, those log entries might help identify it. But in\n> the absence of a problem, they are just log spam.\n>\n> Cheers,\n>\n> Jeff\n>\n\nWell, basically I'm trying to tune it because the table still keep growing. I thought that by setting the scale and the threshold it will be enough but its seems that it wasnt. I attached some of the logs output to hear what you guys think about it ..בתאריך יום ד׳, 6 בפבר׳ 2019 ב-16:12 מאת Jeff Janes <[email protected]>:On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <[email protected]> wrote: Now the question is how to handle or tune it ? Is there any change that I need to increase the cost_limit / cost_delay ?Sometimes vacuum has more work to do, so it takes more time to do it.There is no indication of a problem. Or at least, you haven't described one. So, there is nothing to handle or to tune.If there is a problem, those log entries might help identify it. But in the absence of a problem, they are just log spam.Cheers,Jeff",
"msg_date": "Wed, 6 Feb 2019 16:42:24 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "On Wed, Feb 6, 2019 at 9:42 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Well, basically I'm trying to tune it because the table still keep\n> growing. I thought that by setting the scale and the threshold it will be\n> enough but its seems that it wasnt. I attached some of the logs output to\n> hear what you guys think about it ..\n>\n\nAre all four log entries from well after you made the change? My first\ninclination is to think that the first 2 are from either before the change,\nor just after the change when it is still settling into the new regime.\nAlso, is the table still continuing to grow, or is at a new steady-state of\nbloat which isn't growing but also isn't shrinking back to where you want\nit to be? More aggressive vacuuming alone should stop the bloat, but is\nnot likely to reverse it.\n\nI habitually set vacuum_cost_page_hit and vacuum_cost_page_miss to zero.\nPage reads are self-limiting (vacuum is single threaded, so you can't have\nmore than one read (times autovacuum_max_workers) going on at a time) so I\ndon't see a need to throttle them intentionally as well--unless your entire\ndb is sitting on one spindle. Based on the high ratio of read rates to\nwrite rates in the last two log entries, this change alone should be enough\ngreatly speed up the run time of the vacuum.\n\nIf you need to speed it up beyond that, I don't think it matters much\nwhether you decrease cost_delay or increase cost_limit, it is the ratio\nthat mostly matters.\n\nAnd if these latter measures do work, you should consider undoing changes\nto autovacuum_vacuum_scale_factor. Reading the entire index just to remove\n10,000 rows from the table is a lot of extra work that might be\nunnecessary. Although that extra work might not be on anyone's critical\npath.\n\n>\n\nOn Wed, Feb 6, 2019 at 9:42 AM Mariel Cherkassky <[email protected]> wrote:Well, basically I'm trying to tune it because the table still keep growing. I thought that by setting the scale and the threshold it will be enough but its seems that it wasnt. I attached some of the logs output to hear what you guys think about it ..Are all four log entries from well after you made the change? My first inclination is to think that the first 2 are from either before the change, or just after the change when it is still settling into the new regime. Also, is the table still continuing to grow, or is at a new steady-state of bloat which isn't growing but also isn't shrinking back to where you want it to be? More aggressive vacuuming alone should stop the bloat, but is not likely to reverse it.I habitually set vacuum_cost_page_hit and vacuum_cost_page_miss to zero. Page reads are self-limiting (vacuum is single threaded, so you can't have more than one read (times autovacuum_max_workers) going on at a time) so I don't see a need to throttle them intentionally as well--unless your entire db is sitting on one spindle. Based on the high ratio of read rates to write rates in the last two log entries, this change alone should be enough greatly speed up the run time of the vacuum.If you need to speed it up beyond that, I don't think it matters much whether you decrease cost_delay or increase cost_limit, it is the ratio that mostly matters.And if these latter measures do work, you should consider undoing changes to autovacuum_vacuum_scale_factor. Reading the entire index just to remove 10,000 rows from the table is a lot of extra work that might be unnecessary. Although that extra work might not be on anyone's critical path.",
"msg_date": "Wed, 6 Feb 2019 13:49:53 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
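A sketch of the page-cost change Jeff Janes describes above (assuming superuser access; zeroing the read-related costs mirrors his stated habit and is not something tested in this thread):

    ALTER SYSTEM SET vacuum_cost_page_hit = 0;    -- default 1
    ALTER SYSTEM SET vacuum_cost_page_miss = 0;   -- default 10
    -- vacuum_cost_page_dirty stays at its default of 20, so throttling is then driven by writes
    SELECT pg_reload_conf();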
{
"msg_contents": "On Thu, 7 Feb 2019 at 02:34, Mariel Cherkassky\n<[email protected]> wrote:\n> As I said, I set the next settings for the toasted table :\n>\n> alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);\n>\n> alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);\n\nThese settings don't control how fast auto-vacuum runs, just when it should run.\n\n> Can you explain a little bit more why you decided that the autovacuum spent it time on sleeping ?\n\nYeah, if you look at the following settings.\n\n vacuum_cost_limit | 200\n vacuum_cost_page_dirty | 20\n vacuum_cost_page_hit | 1\n vacuum_cost_page_miss | 10\n autovacuum_vacuum_cost_delay | 20ms\n\nI've tagged on the default setting for each of these. Both vacuum and\nauto-vacuum keep score of how many points they've accumulated while\nrunning. 20 points for dirtying a page, 10 for a read that's not found\nto be in shared_buffers, 1 for reading a buffer from shared buffers.\nWhen vacuum_cost_limit points is reached (or\nautovacuum_vacuum_cost_limit if not -1) auto-vacuum sleeps for\nautovacuum_vacuum_cost_delay, normal manual vacuums sleep for\nvacuum_cost_delay.\n\nIn one of the log entries you saw:\n\n> buffer usage: 4808221 hits, 6404148 misses, 6152603 dirtied\n> avg read rate: 2.617 MiB/s, avg write rate: 2.514 MiB/s\n> system usage: CPU 148.65s/70.06u sec elapsed 19119.55 sec\n\nDoing a bit of maths to see how much time that vacuum should have slept for:\n\npostgres=# select (4808221 * 1 + 6404148 * 10 + 6152603 * 20) / 200.0\n* 20 / 1000;\n ?column?\n--------------------\n 19190.176100000000\n\nThat's remarkably close to the actual time of 19119.55 sec. If you do\nthe same for the other 3 vacuums then you'll see the same close match.\n\n> I see the autovacuum statistics from the logs, how can I check that the workers are busy very often ?\n\nIt would be nice if there was something better, but periodically doing:\n\nSELECT count(*) FROM pg_stat_activity where query like 'autovacuum%';\n\nwill work.\n\n> My vacuum limit is 200 right now, basically If vacuum runs on my toasted table and reached 200 but it didnt finish to clean all the dead tuples, after the nap, should it continue cleaning it or wait until the vacuum_threshold hit again ?\n\nYou're confusing nap time is something else, Maybe you're confusing\nthat with speed of vacuum? Napping is just the time auto-vacuum will\nwait between checking for new tables to work on. Having the\nauto-vacuum run so slowly is a probable cause of still having dead\ntuples after the vacuum... likely because they became dead after\nvacuum started.\n\nI'd recommend reading the manual or Tomas Vondra's blog about vacuum\ncosts. It's not overly complex, once you understand what each of the\nvacuum settings does.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 11:33:45 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
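One way to double-check which per-table autovacuum overrides are actually in effect (a hedged example; options set via toast.* are stored on the toast relation's own pg_class row):

    SELECT c.oid::regclass AS relation, c.reloptions
    FROM pg_class c
    WHERE c.reloptions IS NOT NULL;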
{
"msg_contents": "Just to make sure that I understood :\n-By increasing the cost_limit or decreasing the cost of the page_cost we\ncan decrease the time it takes the autovacuum process to vacuum a specific\ntable.\n-The vacuum threshold/scale are used to decide how often the table will be\nvacuum and not how long it should take.\n\nI have 3 questions :\n1)To what value do you recommend to increase the vacuum cost_limit ? 2000\nseems reasonable ? Or maybe its better to leave it as default and assign a\nspecific value for big tables ?\n2)When the autovacuum reaches the cost_limit while trying to vacuum a\nspecific table, it wait nap_time seconds and then it continue to work on\nthe same table ?\n3)So in case I have a table that keeps growing (not fast because I set the\nvacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000). If\nthe table keep growing it means I should try to increase the cost right ?\nDo you see any other option ? The table represent sessions of my system so\nbasically from my point of view I should have almost the same amount of\nsessions every day and the table shouldn't grow dramatically but before\nchanging the vacuum threshold/factor it happened. As I mentioned in my\nfirst comment there is a byte column and therefore the toasted table is the\nproblematic here.\n\nבתאריך יום ה׳, 7 בפבר׳ 2019 ב-0:34 מאת David Rowley <\[email protected]>:\n\n> On Thu, 7 Feb 2019 at 02:34, Mariel Cherkassky\n> <[email protected]> wrote:\n> > As I said, I set the next settings for the toasted table :\n> >\n> > alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);\n> >\n> > alter table orig_table set (toast.autovacuum_vacuum_threshold =10000);\n>\n> These settings don't control how fast auto-vacuum runs, just when it\n> should run.\n>\n> > Can you explain a little bit more why you decided that the autovacuum\n> spent it time on sleeping ?\n>\n> Yeah, if you look at the following settings.\n>\n> vacuum_cost_limit | 200\n> vacuum_cost_page_dirty | 20\n> vacuum_cost_page_hit | 1\n> vacuum_cost_page_miss | 10\n> autovacuum_vacuum_cost_delay | 20ms\n>\n> I've tagged on the default setting for each of these. Both vacuum and\n> auto-vacuum keep score of how many points they've accumulated while\n> running. 20 points for dirtying a page, 10 for a read that's not found\n> to be in shared_buffers, 1 for reading a buffer from shared buffers.\n> When vacuum_cost_limit points is reached (or\n> autovacuum_vacuum_cost_limit if not -1) auto-vacuum sleeps for\n> autovacuum_vacuum_cost_delay, normal manual vacuums sleep for\n> vacuum_cost_delay.\n>\n> In one of the log entries you saw:\n>\n> > buffer usage: 4808221 hits, 6404148 misses, 6152603 dirtied\n> > avg read rate: 2.617 MiB/s, avg write rate: 2.514 MiB/s\n> > system usage: CPU 148.65s/70.06u sec elapsed 19119.55 sec\n>\n> Doing a bit of maths to see how much time that vacuum should have slept\n> for:\n>\n> postgres=# select (4808221 * 1 + 6404148 * 10 + 6152603 * 20) / 200.0\n> * 20 / 1000;\n> ?column?\n> --------------------\n> 19190.176100000000\n>\n> That's remarkably close to the actual time of 19119.55 sec. 
If you do\n> the same for the other 3 vacuums then you'll see the same close match.\n>\n> > I see the autovacuum statistics from the logs, how can I check that the\n> workers are busy very often ?\n>\n> It would be nice if there was something better, but periodically doing:\n>\n> SELECT count(*) FROM pg_stat_activity where query like 'autovacuum%';\n>\n> will work.\n>\n> > My vacuum limit is 200 right now, basically If vacuum runs on my toasted\n> table and reached 200 but it didnt finish to clean all the dead tuples,\n> after the nap, should it continue cleaning it or wait until the\n> vacuum_threshold hit again ?\n>\n> You're confusing nap time is something else, Maybe you're confusing\n> that with speed of vacuum? Napping is just the time auto-vacuum will\n> wait between checking for new tables to work on. Having the\n> auto-vacuum run so slowly is a probable cause of still having dead\n> tuples after the vacuum... likely because they became dead after\n> vacuum started.\n>\n> I'd recommend reading the manual or Tomas Vondra's blog about vacuum\n> costs. It's not overly complex, once you understand what each of the\n> vacuum settings does.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>",
"msg_date": "Thu, 7 Feb 2019 13:55:34 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\[email protected]> wrote:\n\nI have 3 questions :\n> 1)To what value do you recommend to increase the vacuum cost_limit ? 2000\n> seems reasonable ? Or maybe its better to leave it as default and assign a\n> specific value for big tables ?\n>\n\nThat depends on your IO hardware, and your workload. You wouldn't want\nbackground vacuum to use so much of your available IO that it starves your\nother processes.\n\n\n\n> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n> specific table, it wait nap_time seconds and then it continue to work on\n> the same table ?\n>\n\nNo, it waits for autovacuum_vacuum_cost_delay before resuming within the\nsame table. During this delay, the table is still open and it still holds a\nlock on it, and holds the transaction open, etc. Naptime is entirely\ndifferent, it controls how often the vacuum scheduler checks to see which\ntables need to be vacuumed again.\n\n\n\n> 3)So in case I have a table that keeps growing (not fast because I set the\n> vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000). If\n> the table keep growing it means I should try to increase the cost right ?\n> Do you see any other option ?\n>\n\n You can use pg_freespacemap to see if the free space is spread evenly\nthroughout the table, or clustered together. That might help figure out\nwhat is going on. And, is it the table itself that is growing, or the\nindex on it?\n\nCheers,\n\nJeff\n\nOn Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <[email protected]> wrote:I have 3 questions : 1)To what value do you recommend to increase the vacuum cost_limit ? 2000 seems reasonable ? Or maybe its better to leave it as default and assign a specific value for big tables ?That depends on your IO hardware, and your workload. You wouldn't want background vacuum to use so much of your available IO that it starves your other processes. 2)When the autovacuum reaches the cost_limit while trying to vacuum a specific table, it wait nap_time seconds and then it continue to work on the same table ? No, it waits for autovacuum_vacuum_cost_delay before resuming within the same table. During this delay, the table is still open and it still holds a lock on it, and holds the transaction open, etc. Naptime is entirely different, it controls how often the vacuum scheduler checks to see which tables need to be vacuumed again. 3)So in case I have a table that keeps growing (not fast because I set the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000). If the table keep growing it means I should try to increase the cost right ? Do you see any other option ? You can use pg_freespacemap to see if the free space is spread evenly throughout the table, or clustered together. That might help figure out what is going on. And, is it the table itself that is growing, or the index on it?Cheers,Jeff",
"msg_date": "Thu, 7 Feb 2019 11:26:40 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
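A possible way to run the pg_freespacemap check Jeff Janes mentions (the extension ships with PostgreSQL; the toast relation name below is the one quoted in later messages and is otherwise an assumption):

    CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
    SELECT count(*)            AS pages,
           round(avg(avail))   AS avg_free_bytes_per_page,
           sum(avail) / (1024 * 1024) AS total_free_mb
    FROM pg_freespace('pg_toast.pg_toast_1958391');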
{
"msg_contents": "I checked in the logs when the autovacuum vacuum my big toasted table\nduring the week and I wanted to confirm with you what I think :\npostgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\nof table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\npostgresql-Fri.log- pages: 2253 removed, 13737828 remain\npostgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\npostgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses, 19274530\ndirtied\npostgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s\n--\npostgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic vacuum of\ntable \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\npostgresql-Mon.log- pages: 0 removed, 23176876 remain\npostgresql-Mon.log- tuples: 62269200 removed, 82958 remain\npostgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses, 38950869\ndirtied\npostgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396 MiB/s\n--\npostgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic vacuum\nof table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\npostgresql-Mon.log- pages: 0 removed, 23176876 remain\npostgresql-Mon.log- tuples: 114573 removed, 57785 remain\npostgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses, 15626466\ndirtied\npostgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470 MiB/s\n--\npostgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic vacuum of\ntable \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\npostgresql-Sat.log- pages: 0 removed, 13737828 remain\npostgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\npostgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses, 22473776\ndirtied\npostgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410 MiB/s\n--\npostgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic vacuum\nof table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\npostgresql-Thu.log- pages: 0 removed, 10290976 remain\npostgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\npostgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses, 19232835\ndirtied\npostgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437 MiB/s\n--\npostgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic vacuum\nof table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\npostgresql-Tue.log- pages: 0 removed, 23176876 remain\npostgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\npostgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses, 25472137\ndirtied\npostgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385 MiB/s\n--\n\n\nLets focus for example on one of the outputs :\npostgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\nof table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\npostgresql-Fri.log- pages: 2253 removed, 13737828 remain\npostgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\npostgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n*dirtied\npostgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s\n\nThe cost_limit is set to 200 (default) and the cost_delay is set to 20ms.\nThe calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n= 61133.8197 seconds ~ 17H\nSo autovacuum was laying down for 17h ? I think that I should increase the\ncost_limit to max specifically on the toasted table. What do you think ? 
Am\nI wrong here ?\n\n\nבתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\[email protected]>:\n\n> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n> I have 3 questions :\n>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>> assign a specific value for big tables ?\n>>\n>\n> That depends on your IO hardware, and your workload. You wouldn't want\n> background vacuum to use so much of your available IO that it starves your\n> other processes.\n>\n>\n>\n>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n>> specific table, it wait nap_time seconds and then it continue to work on\n>> the same table ?\n>>\n>\n> No, it waits for autovacuum_vacuum_cost_delay before resuming within the\n> same table. During this delay, the table is still open and it still holds a\n> lock on it, and holds the transaction open, etc. Naptime is entirely\n> different, it controls how often the vacuum scheduler checks to see which\n> tables need to be vacuumed again.\n>\n>\n>\n>> 3)So in case I have a table that keeps growing (not fast because I set\n>> the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000).\n>> If the table keep growing it means I should try to increase the cost right\n>> ? Do you see any other option ?\n>>\n>\n> You can use pg_freespacemap to see if the free space is spread evenly\n> throughout the table, or clustered together. That might help figure out\n> what is going on. And, is it the table itself that is growing, or the\n> index on it?\n>\n> Cheers,\n>\n> Jeff\n>",
"msg_date": "Thu, 14 Feb 2019 18:09:04 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
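The same sleep-time estimate expressed as a query, using the Friday log numbers quoted above (a sketch of the arithmetic only; 1/10/20 are the default page costs, 200 the default cost limit, and 20ms the default delay):

    SELECT (15031267 * 1 + 21081633 * 10 + 19274530 * 20) / 200.0 * 20 / 1000.0
           AS estimated_sleep_seconds;   -- ≈ 61134 seconds, i.e. roughly 17 hours of cost-based sleeping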
{
"msg_contents": "It is curious to me that the tuples remaining count varies so wildly. Is\nthis expected?\n\n\n*Michael Lewis*\n\nOn Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> I checked in the logs when the autovacuum vacuum my big toasted table\n> during the week and I wanted to confirm with you what I think :\n> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n> 19274530 dirtied\n> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n> MiB/s\n> --\n> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n> 38950869 dirtied\n> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396\n> MiB/s\n> --\n> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n> 15626466 dirtied\n> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470\n> MiB/s\n> --\n> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,\n> 22473776 dirtied\n> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410\n> MiB/s\n> --\n> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,\n> 19232835 dirtied\n> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437\n> MiB/s\n> --\n> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\n> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n> 25472137 dirtied\n> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385\n> MiB/s\n> --\n>\n>\n> Lets focus for example on one of the outputs :\n> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\n> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n> *dirtied\n> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n> MiB/s\n>\n> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms.\n> The 
calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n> = 61133.8197 seconds ~ 17H\n> So autovacuum was laying down for 17h ? I think that I should increase the\n> cost_limit to max specifically on the toasted table. What do you think ? Am\n> I wrong here ?\n>\n>\n> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n> [email protected]>:\n>\n>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>> [email protected]> wrote:\n>>\n>> I have 3 questions :\n>>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>> assign a specific value for big tables ?\n>>>\n>>\n>> That depends on your IO hardware, and your workload. You wouldn't want\n>> background vacuum to use so much of your available IO that it starves your\n>> other processes.\n>>\n>>\n>>\n>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n>>> specific table, it wait nap_time seconds and then it continue to work on\n>>> the same table ?\n>>>\n>>\n>> No, it waits for autovacuum_vacuum_cost_delay before resuming within the\n>> same table. During this delay, the table is still open and it still holds a\n>> lock on it, and holds the transaction open, etc. Naptime is entirely\n>> different, it controls how often the vacuum scheduler checks to see which\n>> tables need to be vacuumed again.\n>>\n>>\n>>\n>>> 3)So in case I have a table that keeps growing (not fast because I set\n>>> the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000).\n>>> If the table keep growing it means I should try to increase the cost right\n>>> ? Do you see any other option ?\n>>>\n>>\n>> You can use pg_freespacemap to see if the free space is spread evenly\n>> throughout the table, or clustered together. That might help figure out\n>> what is going on. And, is it the table itself that is growing, or the\n>> index on it?\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>",
"msg_date": "Thu, 14 Feb 2019 11:37:40 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
{
"msg_contents": "Maybe by explaining the tables purpose it will be cleaner. The original\ntable contains rows for sessions in my app. Every session saves for itself\nsome raw data which is saved in the toasted table. We clean old sessions\n(3+ days) every night. During the day sessions are created so the size of\nthe table should grow during the day and freed in the night after the\nautovacuum run.However, the autovacuums sleeps for alot of time and during\nthat time more sessions are created so maybe this can explain the big size\n? Do you think that by increasing the cost limit and decreasing the cost\ndelay I can solve the issue ?\n\nOn Thu, Feb 14, 2019, 8:38 PM Michael Lewis <[email protected] wrote:\n\n> It is curious to me that the tuples remaining count varies so wildly. Is\n> this expected?\n>\n>\n> *Michael Lewis*\n>\n> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> I checked in the logs when the autovacuum vacuum my big toasted table\n>> during the week and I wanted to confirm with you what I think :\n>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n>> 19274530 dirtied\n>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>> MiB/s\n>> --\n>> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n>> 38950869 dirtied\n>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396\n>> MiB/s\n>> --\n>> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n>> 15626466 dirtied\n>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470\n>> MiB/s\n>> --\n>> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,\n>> 22473776 dirtied\n>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410\n>> MiB/s\n>> --\n>> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,\n>> 19232835 dirtied\n>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437\n>> MiB/s\n>> --\n>> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n>> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 
remain\n>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n>> 25472137 dirtied\n>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385\n>> MiB/s\n>> --\n>>\n>>\n>> Lets focus for example on one of the outputs :\n>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum\n>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n>> *dirtied\n>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>> MiB/s\n>>\n>> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms.\n>> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n>> = 61133.8197 seconds ~ 17H\n>> So autovacuum was laying down for 17h ? I think that I should increase\n>> the cost_limit to max specifically on the toasted table. What do you think\n>> ? Am I wrong here ?\n>>\n>>\n>> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n>> [email protected]>:\n>>\n>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>>> [email protected]> wrote:\n>>>\n>>> I have 3 questions :\n>>>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>>>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>>> assign a specific value for big tables ?\n>>>>\n>>>\n>>> That depends on your IO hardware, and your workload. You wouldn't want\n>>> background vacuum to use so much of your available IO that it starves your\n>>> other processes.\n>>>\n>>>\n>>>\n>>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n>>>> specific table, it wait nap_time seconds and then it continue to work on\n>>>> the same table ?\n>>>>\n>>>\n>>> No, it waits for autovacuum_vacuum_cost_delay before resuming within the\n>>> same table. During this delay, the table is still open and it still holds a\n>>> lock on it, and holds the transaction open, etc. Naptime is entirely\n>>> different, it controls how often the vacuum scheduler checks to see which\n>>> tables need to be vacuumed again.\n>>>\n>>>\n>>>\n>>>> 3)So in case I have a table that keeps growing (not fast because I set\n>>>> the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000).\n>>>> If the table keep growing it means I should try to increase the cost right\n>>>> ? Do you see any other option ?\n>>>>\n>>>\n>>> You can use pg_freespacemap to see if the free space is spread evenly\n>>> throughout the table, or clustered together. That might help figure out\n>>> what is going on. And, is it the table itself that is growing, or the\n>>> index on it?\n>>>\n>>> Cheers,\n>>>\n>>> Jeff\n>>>\n>>",
"msg_date": "Thu, 14 Feb 2019 21:40:54 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
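(For readers re-checking the 17-hour figure discussed in the message above: it follows directly from the cost-based vacuum delay accounting. The query below is only a sketch of that back-of-the-envelope calculation, assuming the default cost weights vacuum_cost_page_hit = 1, vacuum_cost_page_miss = 10 and vacuum_cost_page_dirty = 20, together with the cost_limit of 200 and cost_delay of 20 ms quoted in the thread; the hit/miss/dirtied counts are taken from the Friday log line.)

    -- Rough estimate of how long autovacuum spent sleeping during the Friday run
    SELECT (  1 * 15031267     -- buffer hits
           + 10 * 21081633     -- buffer misses (pages read from disk)
           + 20 * 19274530     -- buffers dirtied
           ) / 200.0           -- a nap is taken every cost_limit (200) points
             * 20 / 1000       -- each nap is cost_delay = 20 ms, converted to seconds
           AS sleep_seconds;   -- about 61134 seconds, i.e. roughly 17 hours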
{
"msg_contents": "Thanks, that context is very enlightening. Do you manually vacuum after\ndoing the big purge of old session data? Is bloat causing issues for you?\nWhy is it a concern that autovacuum's behavior varies?\n\n\n*Michael Lewis*\n\nOn Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <\[email protected]> wrote:\n\n> Maybe by explaining the tables purpose it will be cleaner. The original\n> table contains rows for sessions in my app. Every session saves for itself\n> some raw data which is saved in the toasted table. We clean old sessions\n> (3+ days) every night. During the day sessions are created so the size of\n> the table should grow during the day and freed in the night after the\n> autovacuum run.However, the autovacuums sleeps for alot of time and during\n> that time more sessions are created so maybe this can explain the big size\n> ? Do you think that by increasing the cost limit and decreasing the cost\n> delay I can solve the issue ?\n>\n> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis <[email protected] wrote:\n>\n>> It is curious to me that the tuples remaining count varies so wildly. Is\n>> this expected?\n>>\n>>\n>> *Michael Lewis*\n>>\n>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\n>> [email protected]> wrote:\n>>\n>>> I checked in the logs when the autovacuum vacuum my big toasted table\n>>> during the week and I wanted to confirm with you what I think :\n>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n>>> 19274530 dirtied\n>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>> MiB/s\n>>> --\n>>> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic vacuum\n>>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n>>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n>>> 38950869 dirtied\n>>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396\n>>> MiB/s\n>>> --\n>>> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic\n>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n>>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n>>> 15626466 dirtied\n>>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470\n>>> MiB/s\n>>> --\n>>> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic vacuum\n>>> of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n>>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n>>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,\n>>> 22473776 dirtied\n>>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410\n>>> MiB/s\n>>> --\n>>> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic\n>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n>>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n>>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 
misses,\n>>> 19232835 dirtied\n>>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437\n>>> MiB/s\n>>> --\n>>> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic\n>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n>>> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n>>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\n>>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n>>> 25472137 dirtied\n>>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385\n>>> MiB/s\n>>> --\n>>>\n>>>\n>>> Lets focus for example on one of the outputs :\n>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n>>> *dirtied\n>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>> MiB/s\n>>>\n>>> The cost_limit is set to 200 (default) and the cost_delay is set to\n>>> 20ms.\n>>> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n>>> = 61133.8197 seconds ~ 17H\n>>> So autovacuum was laying down for 17h ? I think that I should increase\n>>> the cost_limit to max specifically on the toasted table. What do you think\n>>> ? Am I wrong here ?\n>>>\n>>>\n>>> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n>>> [email protected]>:\n>>>\n>>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>>>> [email protected]> wrote:\n>>>>\n>>>> I have 3 questions :\n>>>>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>>>>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>>>> assign a specific value for big tables ?\n>>>>>\n>>>>\n>>>> That depends on your IO hardware, and your workload. You wouldn't want\n>>>> background vacuum to use so much of your available IO that it starves your\n>>>> other processes.\n>>>>\n>>>>\n>>>>\n>>>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n>>>>> specific table, it wait nap_time seconds and then it continue to work on\n>>>>> the same table ?\n>>>>>\n>>>>\n>>>> No, it waits for autovacuum_vacuum_cost_delay before resuming within\n>>>> the same table. During this delay, the table is still open and it still\n>>>> holds a lock on it, and holds the transaction open, etc. Naptime is\n>>>> entirely different, it controls how often the vacuum scheduler checks to\n>>>> see which tables need to be vacuumed again.\n>>>>\n>>>>\n>>>>\n>>>>> 3)So in case I have a table that keeps growing (not fast because I set\n>>>>> the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 10000).\n>>>>> If the table keep growing it means I should try to increase the cost right\n>>>>> ? Do you see any other option ?\n>>>>>\n>>>>\n>>>> You can use pg_freespacemap to see if the free space is spread evenly\n>>>> throughout the table, or clustered together. That might help figure out\n>>>> what is going on. And, is it the table itself that is growing, or the\n>>>> index on it?\n>>>>\n>>>> Cheers,\n>>>>\n>>>> Jeff\n>>>>\n>>>\n\nThanks, that context is very enlightening. Do you manually vacuum after doing the big purge of old session data? Is bloat causing issues for you? 
Why is it a concern that autovacuum's behavior varies?",
"msg_date": "Thu, 14 Feb 2019 12:52:24 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
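(To make the bloat question above concrete, here is a sketch of the kind of checks being discussed. The toast relation name comes from the quoted log output, and pg_freespace is part of the pg_freespacemap extension that Jeff Janes suggested earlier in the thread.)

    -- Current size of the toast table and its index
    SELECT pg_size_pretty(pg_relation_size('pg_toast.pg_toast_1958391'))       AS toast_heap,
           pg_size_pretty(pg_total_relation_size('pg_toast.pg_toast_1958391')) AS toast_total;

    -- Dead tuples and last autovacuum according to the statistics collector
    SELECT n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count
    FROM pg_stat_all_tables
    WHERE schemaname = 'pg_toast' AND relname = 'pg_toast_1958391';

    -- Is freed space spread through the table (reusable) or does the table just keep growing?
    CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
    SELECT count(*) AS pages, pg_size_pretty(sum(avail)) AS total_free_space
    FROM pg_freespace('pg_toast.pg_toast_1958391');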
{
"msg_contents": "No I don't run vacuum manually afterwards because the autovacuum should\nrun. This process happens every night. Yes , bloating is an issue because\nthe table grow and take a lot of space on disk. Regarding the autovacuum,\nI think that it sleeps too much time (17h) during it's work, don't you\nthink so?\n\nOn Thu, Feb 14, 2019, 9:52 PM Michael Lewis <[email protected] wrote:\n\n> Thanks, that context is very enlightening. Do you manually vacuum after\n> doing the big purge of old session data? Is bloat causing issues for you?\n> Why is it a concern that autovacuum's behavior varies?\n>\n>\n> *Michael Lewis*\n>\n> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Maybe by explaining the tables purpose it will be cleaner. The original\n>> table contains rows for sessions in my app. Every session saves for itself\n>> some raw data which is saved in the toasted table. We clean old sessions\n>> (3+ days) every night. During the day sessions are created so the size of\n>> the table should grow during the day and freed in the night after the\n>> autovacuum run.However, the autovacuums sleeps for alot of time and during\n>> that time more sessions are created so maybe this can explain the big size\n>> ? Do you think that by increasing the cost limit and decreasing the cost\n>> delay I can solve the issue ?\n>>\n>> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis <[email protected] wrote:\n>>\n>>> It is curious to me that the tuples remaining count varies so wildly. Is\n>>> this expected?\n>>>\n>>>\n>>> *Michael Lewis*\n>>>\n>>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\n>>> [email protected]> wrote:\n>>>\n>>>> I checked in the logs when the autovacuum vacuum my big toasted table\n>>>> during the week and I wanted to confirm with you what I think :\n>>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n>>>> 19274530 dirtied\n>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>>> MiB/s\n>>>> --\n>>>> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n>>>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n>>>> 38950869 dirtied\n>>>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396\n>>>> MiB/s\n>>>> --\n>>>> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n>>>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n>>>> 15626466 dirtied\n>>>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470\n>>>> MiB/s\n>>>> --\n>>>> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n>>>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n>>>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 
misses,\n>>>> 22473776 dirtied\n>>>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410\n>>>> MiB/s\n>>>> --\n>>>> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n>>>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n>>>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,\n>>>> 19232835 dirtied\n>>>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437\n>>>> MiB/s\n>>>> --\n>>>> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n>>>> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n>>>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\n>>>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n>>>> 25472137 dirtied\n>>>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385\n>>>> MiB/s\n>>>> --\n>>>>\n>>>>\n>>>> Lets focus for example on one of the outputs :\n>>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n>>>> *dirtied\n>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>>> MiB/s\n>>>>\n>>>> The cost_limit is set to 200 (default) and the cost_delay is set to\n>>>> 20ms.\n>>>> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n>>>> = 61133.8197 seconds ~ 17H\n>>>> So autovacuum was laying down for 17h ? I think that I should increase\n>>>> the cost_limit to max specifically on the toasted table. What do you think\n>>>> ? Am I wrong here ?\n>>>>\n>>>>\n>>>> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n>>>> [email protected]>:\n>>>>\n>>>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>>>>> [email protected]> wrote:\n>>>>>\n>>>>> I have 3 questions :\n>>>>>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>>>>>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>>>>> assign a specific value for big tables ?\n>>>>>>\n>>>>>\n>>>>> That depends on your IO hardware, and your workload. You wouldn't\n>>>>> want background vacuum to use so much of your available IO that it starves\n>>>>> your other processes.\n>>>>>\n>>>>>\n>>>>>\n>>>>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a\n>>>>>> specific table, it wait nap_time seconds and then it continue to work on\n>>>>>> the same table ?\n>>>>>>\n>>>>>\n>>>>> No, it waits for autovacuum_vacuum_cost_delay before resuming within\n>>>>> the same table. During this delay, the table is still open and it still\n>>>>> holds a lock on it, and holds the transaction open, etc. Naptime is\n>>>>> entirely different, it controls how often the vacuum scheduler checks to\n>>>>> see which tables need to be vacuumed again.\n>>>>>\n>>>>>\n>>>>>\n>>>>>> 3)So in case I have a table that keeps growing (not fast because I\n>>>>>> set the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to\n>>>>>> 10000). If the table keep growing it means I should try to increase the\n>>>>>> cost right ? 
Do you see any other option ?\n>>>>>>\n>>>>>\n>>>>> You can use pg_freespacemap to see if the free space is spread evenly\n>>>>> throughout the table, or clustered together. That might help figure out\n>>>>> what is going on. And, is it the table itself that is growing, or the\n>>>>> index on it?\n>>>>>\n>>>>> Cheers,\n>>>>>\n>>>>> Jeff\n>>>>>\n>>>>\n\nNo I don't run vacuum manually afterwards because the autovacuum should run. This process happens every night. Yes , bloating is an issue because the table grow and take a lot of space on disk. Regarding the autovacuum, I think that it sleeps too much time (17h) during it's work, don't you think so? ",
"msg_date": "Thu, 14 Feb 2019 22:07:46 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
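(Below is a sketch of the per-table settings change being discussed, not a recommendation for these exact values; "sessions" is a placeholder because the parent table is never named in the thread. The toast.* storage parameters are set on the parent table but apply to its TOAST relation.)

    ALTER TABLE sessions SET (
        toast.autovacuum_vacuum_cost_limit   = 2000,  -- 10x more work allowed between naps than the default 200
        toast.autovacuum_vacuum_cost_delay   = 10,    -- and/or shorten each nap (value is in milliseconds)
        toast.autovacuum_vacuum_scale_factor = 0,     -- already set this way per the thread
        toast.autovacuum_vacuum_threshold    = 10000  -- already set this way per the thread
    );

With those example numbers, the Friday run above (611,338,197 cost points) would have slept about 611338197 / 2000 * 0.010 s ≈ 3,057 seconds, i.e. under an hour instead of roughly 17.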
{
"msg_contents": "If there are high number of updates during normal daytime processes, then\nyes you need to ensure autovacuum is handling this table as needed. If the\nnightly delete is the only major source of bloat on this table, then\nperhaps running a manual vacuum keeps things tidy after the big delete.\nGranted, if you are manually going to vacuum, don't use vacuum full as\nthere is not much sense in recovering that disk space if the table is going\nto expected to be similarly sized again by the end of the day.\n\nDo you have a proper number of workers and maintenance_work_mem to get the\njob done?\n\nAs you proposed, it seems likely to be good to significantly increase\nautovacuum_vacuum_cost_limit on this table, and perhaps decrease\nautovacuum_vacuum_scale_factor if it is not being picked up as a candidate\nfor vacuum very frequently.\n\n\n\n*Michael Lewis *\n\n\nOn Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <\[email protected]> wrote:\n\n> No I don't run vacuum manually afterwards because the autovacuum should\n> run. This process happens every night. Yes , bloating is an issue because\n> the table grow and take a lot of space on disk. Regarding the autovacuum,\n> I think that it sleeps too much time (17h) during it's work, don't you\n> think so?\n>\n> On Thu, Feb 14, 2019, 9:52 PM Michael Lewis <[email protected] wrote:\n>\n>> Thanks, that context is very enlightening. Do you manually vacuum after\n>> doing the big purge of old session data? Is bloat causing issues for you?\n>> Why is it a concern that autovacuum's behavior varies?\n>>\n>>\n>> *Michael Lewis*\n>>\n>> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <\n>> [email protected]> wrote:\n>>\n>>> Maybe by explaining the tables purpose it will be cleaner. The original\n>>> table contains rows for sessions in my app. Every session saves for itself\n>>> some raw data which is saved in the toasted table. We clean old sessions\n>>> (3+ days) every night. During the day sessions are created so the size of\n>>> the table should grow during the day and freed in the night after the\n>>> autovacuum run.However, the autovacuums sleeps for alot of time and during\n>>> that time more sessions are created so maybe this can explain the big size\n>>> ? 
Do you think that by increasing the cost limit and decreasing the cost\n>>> delay I can solve the issue ?\n>>>\n>>> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis <[email protected] wrote:\n>>>\n>>>> It is curious to me that the tuples remaining count varies so wildly.\n>>>> Is this expected?\n>>>>\n>>>>\n>>>> *Michael Lewis*\n>>>>\n>>>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\n>>>> [email protected]> wrote:\n>>>>\n>>>>> I checked in the logs when the autovacuum vacuum my big toasted table\n>>>>> during the week and I wanted to confirm with you what I think :\n>>>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n>>>>> 19274530 dirtied\n>>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>>>> MiB/s\n>>>>> --\n>>>>> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n>>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n>>>>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n>>>>> 38950869 dirtied\n>>>>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396\n>>>>> MiB/s\n>>>>> --\n>>>>> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n>>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n>>>>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n>>>>> 15626466 dirtied\n>>>>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470\n>>>>> MiB/s\n>>>>> --\n>>>>> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>>> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n>>>>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n>>>>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,\n>>>>> 22473776 dirtied\n>>>>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410\n>>>>> MiB/s\n>>>>> --\n>>>>> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>>> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n>>>>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n>>>>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,\n>>>>> 19232835 dirtied\n>>>>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437\n>>>>> MiB/s\n>>>>> --\n>>>>> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic\n>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n>>>>> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n>>>>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\n>>>>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n>>>>> 25472137 dirtied\n>>>>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385\n>>>>> MiB/s\n>>>>> --\n>>>>>\n>>>>>\n>>>>> Lets focus for example on one of the outputs :\n>>>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>>> 
vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>>> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses, *19274530\n>>>>> *dirtied\n>>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469\n>>>>> MiB/s\n>>>>>\n>>>>> The cost_limit is set to 200 (default) and the cost_delay is set to\n>>>>> 20ms.\n>>>>> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n>>>>> = 61133.8197 seconds ~ 17H\n>>>>> So autovacuum was laying down for 17h ? I think that I should increase\n>>>>> the cost_limit to max specifically on the toasted table. What do you think\n>>>>> ? Am I wrong here ?\n>>>>>\n>>>>>\n>>>>> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n>>>>> [email protected]>:\n>>>>>\n>>>>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>>>>>> [email protected]> wrote:\n>>>>>>\n>>>>>> I have 3 questions :\n>>>>>>> 1)To what value do you recommend to increase the vacuum cost_limit ?\n>>>>>>> 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>>>>>> assign a specific value for big tables ?\n>>>>>>>\n>>>>>>\n>>>>>> That depends on your IO hardware, and your workload. You wouldn't\n>>>>>> want background vacuum to use so much of your available IO that it starves\n>>>>>> your other processes.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum\n>>>>>>> a specific table, it wait nap_time seconds and then it continue to work on\n>>>>>>> the same table ?\n>>>>>>>\n>>>>>>\n>>>>>> No, it waits for autovacuum_vacuum_cost_delay before resuming within\n>>>>>> the same table. During this delay, the table is still open and it still\n>>>>>> holds a lock on it, and holds the transaction open, etc. Naptime is\n>>>>>> entirely different, it controls how often the vacuum scheduler checks to\n>>>>>> see which tables need to be vacuumed again.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>> 3)So in case I have a table that keeps growing (not fast because I\n>>>>>>> set the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to\n>>>>>>> 10000). If the table keep growing it means I should try to increase the\n>>>>>>> cost right ? Do you see any other option ?\n>>>>>>>\n>>>>>>\n>>>>>> You can use pg_freespacemap to see if the free space is spread\n>>>>>> evenly throughout the table, or clustered together. That might help figure\n>>>>>> out what is going on. And, is it the table itself that is growing, or the\n>>>>>> index on it?\n>>>>>>\n>>>>>> Cheers,\n>>>>>>\n>>>>>> Jeff\n>>>>>>\n>>>>>\n\nIf there are high number of updates during normal daytime processes, then yes you need to ensure autovacuum is handling this table as needed. If the nightly delete is the only major source of bloat on this table, then perhaps running a manual vacuum keeps things tidy after the big delete. 
Granted, if you are manually going to vacuum, don't use vacuum full as there is not much sense in recovering that disk space if the table is going to expected to be similarly sized again by the end of the day.Do you have a proper number of workers and maintenance_work_mem to get the job done?As you proposed, it seems likely to be good to significantly increase autovacuum_vacuum_cost_limit on this table, and perhaps decrease autovacuum_vacuum_scale_factor if it is not being picked up as a candidate for vacuum very frequently.",
"msg_date": "Thu, 14 Feb 2019 14:45:10 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
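(Two sketches tied to the suggestions above; "sessions" and "created_at" are placeholder names, since the real schema is not shown in the thread.)

    -- 1) Plain (not FULL) vacuum right after the nightly purge, so the freed space is
    --    marked reusable before the next day's sessions arrive; vacuuming the parent
    --    table also vacuums its TOAST table.
    DELETE FROM sessions WHERE created_at < now() - interval '3 days';
    VACUUM VERBOSE sessions;

    -- 2) Rough sizing input for maintenance_work_mem / autovacuum_work_mem: vacuum
    --    remembers dead tuples as 6-byte item pointers, and each time that array fills
    --    it must scan all indexes (the "index scans: N" line in the autovacuum log).
    SELECT n_dead_tup,
           pg_size_pretty(n_dead_tup * 6) AS dead_tuple_array_estimate
    FROM pg_stat_all_tables
    WHERE schemaname = 'pg_toast' AND relname = 'pg_toast_1958391';

If the log keeps reporting many index scans per vacuum, the memory actually available to the worker (autovacuum_work_mem, falling back to maintenance_work_mem) is smaller than what a single pass over the table's dead tuples needs.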
{
"msg_contents": "I set the toast.autovacuum_vacuum_scale_factor to 0 and the\ntoast.autovacuum_vacuum threshold to 10000 so it should be enough to force\na vacuum after the nightly deletes. Now , I changed the cost limit and the\ncost delay, my question is if I have anything else to do ? My\nmaintenance_work_mem is about 1gb and I didn't change the default value of\nthe workers. Is there a way to calc what size the maintenance_work_mem\nshould be in order to clean the table ? And what exactly is saved in the\nmaintenance_work_mem ? I mean how it used by the autovacuum..\n\nOn Thu, Feb 14, 2019, 11:45 PM Michael Lewis <[email protected] wrote:\n\n> If there are high number of updates during normal daytime processes, then\n> yes you need to ensure autovacuum is handling this table as needed. If the\n> nightly delete is the only major source of bloat on this table, then\n> perhaps running a manual vacuum keeps things tidy after the big delete.\n> Granted, if you are manually going to vacuum, don't use vacuum full as\n> there is not much sense in recovering that disk space if the table is going\n> to expected to be similarly sized again by the end of the day.\n>\n> Do you have a proper number of workers and maintenance_work_mem to get the\n> job done?\n>\n> As you proposed, it seems likely to be good to significantly increase\n> autovacuum_vacuum_cost_limit on this table, and perhaps decrease\n> autovacuum_vacuum_scale_factor if it is not being picked up as a candidate\n> for vacuum very frequently.\n>\n>\n>\n> *Michael Lewis *\n>\n>\n> On Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> No I don't run vacuum manually afterwards because the autovacuum should\n>> run. This process happens every night. Yes , bloating is an issue because\n>> the table grow and take a lot of space on disk. Regarding the autovacuum,\n>> I think that it sleeps too much time (17h) during it's work, don't you\n>> think so?\n>>\n>> On Thu, Feb 14, 2019, 9:52 PM Michael Lewis <[email protected] wrote:\n>>\n>>> Thanks, that context is very enlightening. Do you manually vacuum after\n>>> doing the big purge of old session data? Is bloat causing issues for you?\n>>> Why is it a concern that autovacuum's behavior varies?\n>>>\n>>>\n>>> *Michael Lewis*\n>>>\n>>> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <\n>>> [email protected]> wrote:\n>>>\n>>>> Maybe by explaining the tables purpose it will be cleaner. The original\n>>>> table contains rows for sessions in my app. Every session saves for itself\n>>>> some raw data which is saved in the toasted table. We clean old sessions\n>>>> (3+ days) every night. During the day sessions are created so the size of\n>>>> the table should grow during the day and freed in the night after the\n>>>> autovacuum run.However, the autovacuums sleeps for alot of time and during\n>>>> that time more sessions are created so maybe this can explain the big size\n>>>> ? 
Do you think that by increasing the cost limit and decreasing the cost\n>>>> delay I can solve the issue ?\n>>>>\n>>>> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis <[email protected] wrote:\n>>>>\n>>>>> It is curious to me that the tuples remaining count varies so wildly.\n>>>>> Is this expected?\n>>>>>\n>>>>>\n>>>>> *Michael Lewis*\n>>>>>\n>>>>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <\n>>>>> [email protected]> wrote:\n>>>>>\n>>>>>> I checked in the logs when the autovacuum vacuum my big toasted table\n>>>>>> during the week and I wanted to confirm with you what I think :\n>>>>>> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,\n>>>>>> 19274530 dirtied\n>>>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate:\n>>>>>> 2.469 MiB/s\n>>>>>> --\n>>>>>> postgresql-Mon.log:2019-02-11 01:11:46 EST 8426 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 23\n>>>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>>>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain\n>>>>>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,\n>>>>>> 38950869 dirtied\n>>>>>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate:\n>>>>>> 2.396 MiB/s\n>>>>>> --\n>>>>>> postgresql-Mon.log:2019-02-11 21:43:19 EST 24323 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 1\n>>>>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain\n>>>>>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain\n>>>>>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,\n>>>>>> 15626466 dirtied\n>>>>>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate:\n>>>>>> 2.470 MiB/s\n>>>>>> --\n>>>>>> postgresql-Sat.log:2019-02-09 04:54:50 EST 1793 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>>>> postgresql-Sat.log- pages: 0 removed, 13737828 remain\n>>>>>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain\n>>>>>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,\n>>>>>> 22473776 dirtied\n>>>>>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate:\n>>>>>> 2.410 MiB/s\n>>>>>> --\n>>>>>> postgresql-Thu.log:2019-02-07 12:08:50 EST 29630 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 13\n>>>>>> postgresql-Thu.log- pages: 0 removed, 10290976 remain\n>>>>>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain\n>>>>>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,\n>>>>>> 19232835 dirtied\n>>>>>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate:\n>>>>>> 2.437 MiB/s\n>>>>>> --\n>>>>>> postgresql-Tue.log:2019-02-12 20:54:44 EST 21464 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 10\n>>>>>> postgresql-Tue.log- pages: 0 removed, 23176876 remain\n>>>>>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain\n>>>>>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,\n>>>>>> 25472137 dirtied\n>>>>>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate:\n>>>>>> 2.385 MiB/s\n>>>>>> --\n>>>>>>\n>>>>>>\n>>>>>> Lets focus for example on one of the outputs :\n>>>>>> 
postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic\n>>>>>> vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n>>>>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain\n>>>>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain\n>>>>>> postgresql-Fri.log- buffer usage: *15031267* hits, *21081633 *misses,\n>>>>>> *19274530 *dirtied\n>>>>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate:\n>>>>>> 2.469 MiB/s\n>>>>>>\n>>>>>> The cost_limit is set to 200 (default) and the cost_delay is set to\n>>>>>> 20ms.\n>>>>>> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000\n>>>>>> = 61133.8197 seconds ~ 17H\n>>>>>> So autovacuum was laying down for 17h ? I think that I should\n>>>>>> increase the cost_limit to max specifically on the toasted table. What do\n>>>>>> you think ? Am I wrong here ?\n>>>>>>\n>>>>>>\n>>>>>> בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת Jeff Janes <\n>>>>>> [email protected]>:\n>>>>>>\n>>>>>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <\n>>>>>>> [email protected]> wrote:\n>>>>>>>\n>>>>>>> I have 3 questions :\n>>>>>>>> 1)To what value do you recommend to increase the vacuum cost_limit\n>>>>>>>> ? 2000 seems reasonable ? Or maybe its better to leave it as default and\n>>>>>>>> assign a specific value for big tables ?\n>>>>>>>>\n>>>>>>>\n>>>>>>> That depends on your IO hardware, and your workload. You wouldn't\n>>>>>>> want background vacuum to use so much of your available IO that it starves\n>>>>>>> your other processes.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum\n>>>>>>>> a specific table, it wait nap_time seconds and then it continue to work on\n>>>>>>>> the same table ?\n>>>>>>>>\n>>>>>>>\n>>>>>>> No, it waits for autovacuum_vacuum_cost_delay before resuming within\n>>>>>>> the same table. During this delay, the table is still open and it still\n>>>>>>> holds a lock on it, and holds the transaction open, etc. Naptime is\n>>>>>>> entirely different, it controls how often the vacuum scheduler checks to\n>>>>>>> see which tables need to be vacuumed again.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>>> 3)So in case I have a table that keeps growing (not fast because I\n>>>>>>>> set the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to\n>>>>>>>> 10000). If the table keep growing it means I should try to increase the\n>>>>>>>> cost right ? Do you see any other option ?\n>>>>>>>>\n>>>>>>>\n>>>>>>> You can use pg_freespacemap to see if the free space is spread\n>>>>>>> evenly throughout the table, or clustered together. That might help figure\n>>>>>>> out what is going on. And, is it the table itself that is growing, or the\n>>>>>>> index on it?\n>>>>>>>\n>>>>>>> Cheers,\n>>>>>>>\n>>>>>>> Jeff\n>>>>>>>\n>>>>>>\n\nI set the toast.autovacuum_vacuum_scale_factor to 0 and the toast.autovacuum_vacuum threshold to 10000 so it should be enough to force a vacuum after the nightly deletes. Now , I changed the cost limit and the cost delay, my question is if I have anything else to do ? My maintenance_work_mem is about 1gb and I didn't change the default value of the workers. Is there a way to calc what size the maintenance_work_mem should be in order to clean the table ? And what exactly is saved in the maintenance_work_mem ? 
I mean how it used by the autovacuum..",
"msg_date": "Fri, 15 Feb 2019 08:23:28 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
},
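Regarding the maintenance_work_mem question in the message above: in the PostgreSQL versions discussed here, what (auto)vacuum keeps in that memory is essentially an array of dead tuple identifiers (TIDs), 6 bytes per dead tuple, capped at 1 GB. When the array fills, vacuum pauses the heap scan and runs one full cycle over every index, which is what the "index scans: N" figure in the log counts. A rough sizing sketch, using only the tuple count from the quoted Friday log line and the 6-bytes-per-TID rule:

    -- memory needed to remember all dead tuples of that run in a single pass
    SELECT 21759258 * 6 AS bytes_needed,
           pg_size_pretty((21759258 * 6)::bigint) AS approx;   -- ~125 MB

So roughly 125 MB would have allowed a single index pass for this run; seeing 8 index scans suggests the memory actually available to the autovacuum worker (autovacuum_work_mem if set, otherwise maintenance_work_mem) was much smaller than 1 GB at the time, which is worth double-checking.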
{
"msg_contents": "Mariel Cherkassky wrote:\n> Lets focus for example on one of the outputs :\n> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum of table \"myDB.pg_toast.pg_toast_1958391\": index scans: 8\n> postgresql-Fri.log-\tpages: 2253 removed, 13737828 remain\n> postgresql-Fri.log-\ttuples: 21759258 removed, 27324090 remain\n> postgresql-Fri.log-\tbuffer usage: 15031267 hits, 21081633 misses, 19274530 dirtied\n> postgresql-Fri.log-\tavg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s\n> \n> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms. \n> The calculation I did : (1*15031267+10*21081633+20*19274530)/200*20/1000 = 61133.8197 seconds ~ 17H\n> So autovacuum was laying down for 17h ? I think that I should increase the cost_limit to max specifically on the toasted table. What do you think ? Am I wrong here ?\n\nIncreasing cost_limit or reducing cost_delay improves the situation.\n\ncost_delay = 0 makes autovacuum as fast as possible.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 15 Feb 2019 09:06:13 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum big table taking hours and sometimes seconds"
}
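To make the arithmetic in the messages above easy to re-run, here is the same calculation as a query; the buffer counts come from the quoted Friday log line, and 1/10/20 are the default vacuum_cost_page_hit / vacuum_cost_page_miss / vacuum_cost_page_dirty weights in this release:

    -- estimated total cost-based sleep time for that autovacuum run
    SELECT (1 * 15031267 + 10 * 21081633 + 20 * 19274530)::numeric
           / 200          -- autovacuum_vacuum_cost_limit
           * 20 / 1000.0  -- autovacuum_vacuum_cost_delay (20 ms), converted to seconds
           AS sleep_seconds;   -- ~61134 s, i.e. about 17 hours

If the intent is to speed up vacuum only for this table's TOAST data rather than globally, the limits can be set as per-table storage parameters. The table name below is a placeholder for the table that owns pg_toast_1958391, and the values are only an example, not a recommendation:

    ALTER TABLE myschema.mytable SET (
        toast.autovacuum_vacuum_cost_limit = 2000,
        toast.autovacuum_vacuum_cost_delay = 5
    );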
] |
[
{
"msg_contents": "Hello,\n\nWe use the plugin Wal2Json in order to catch every modification on database. We’ve got an issue : WAL were growing very fast, the state of pg_stat_replication still on ‘catchup’ , an error:pg_recvlogical: unexpected termination of replication stream: ERROR: out of memory DETAIL: Cannot enlarge string buffer. It seems Wal2Json can not handle very big transactions ( more than 1 Gb).\nHow could we measure the size of a transaction ?\nCould we increase this limitation ? \nThank you \nMai\n",
"msg_date": "Thu, 7 Feb 2019 11:38:01 +0100",
"msg_from": "Mai Peng <[email protected]>",
"msg_from_op": true,
"msg_subject": "Transaction size and Wal2Json"
},
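One way to answer the "how big is the transaction" question, assuming the workload can be reproduced from a psql session: compare WAL positions before and after the transaction. pg_current_wal_lsn() and pg_wal_lsn_diff() exist from PostgreSQL 10 on; on a busy server the difference also includes WAL written by other sessions, so treat it as an upper bound rather than an exact size:

    SELECT pg_current_wal_lsn() AS before_lsn \gset
    BEGIN;
    -- ... run the suspect batch update/delete here ...
    COMMIT;
    SELECT pg_size_pretty(
             pg_wal_lsn_diff(pg_current_wal_lsn(), :'before_lsn')
           ) AS wal_written;

As for the error itself: "Cannot enlarge string buffer" is PostgreSQL's 1 GB limit on a single allocated string, which the output plugin hits when it serializes a whole transaction as one JSON document. The usual workarounds are to keep individual transactions well under that size or, if the installed wal2json version supports it, to use an option such as write-in-chunks so the document is not built in one piece.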
{
"msg_contents": "On 07/02/2019 11:38, Mai Peng wrote:\n> We use the plugin Wal2Json in order to catch every modification on database. We’ve got an issue : WAL were growing very fast, the state of pg_stat_replication still on ‘catchup’ , an error:pg_recvlogical: unexpected termination of replication stream: ERROR: out of memory DETAIL: Cannot enlarge string buffer. It seems Wal2Json can not handle very big transactions ( more than 1 Gb).\n> How could we measure the size of a transaction ?\n> Could we increase this limitation ? \n\nYou should send a bug report to wal2json.\n\nIt's plausible that some naive coding would run into the limitation that\nyou describe, but a bit of effort can probably solve it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 11:45:58 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Transaction size and Wal2Json"
}
] |
[
{
"msg_contents": "I am using Postgres on a large system (recording approximately 20million transactions per day). We use partitioning by date to assist with both vacuum processing time and to archive old data. At the core of the system are records in 2 different tables detailing different types of activity for monetary transactions (e.g. money in and money out) -> a single transaction has entries in both tables, so to retrieve all details for a single transaction we need to join the 2 tables.\n\nThe use of partitioning however has a significant impact on the performance of retrieving this data. Being relatively new to Postgres I wanted to share my findings and understand how others address them. We run postgres version 9.6 on CentOS, but the same behaviour is apparent in postgres 10.6. The test case outputs are from version 10.6 running on my Ubuntu machine with default postgres configuration.\nBelow is an example script to populate the test data:\n=======================drop table if exists tablea cascade;drop table if exists tableb cascade;\nCREATE TABLE tablea ( id serial, reference int not null, created date not null) PARTITION BY RANGE (created);\n\nCREATE TABLE tablea_part1 PARTITION OF tablea FOR VALUES FROM ('2018-01-01') TO ('2018-01-02');CREATE TABLE tablea_part2 PARTITION OF tablea FOR VALUES FROM ('2018-01-02') TO ('2018-01-03');CREATE TABLE tablea_part3 PARTITION OF tablea FOR VALUES FROM ('2018-01-03') TO ('2018-01-04');CREATE TABLE tablea_part4 PARTITION OF tablea FOR VALUES FROM ('2018-01-04') TO ('2018-01-05');CREATE TABLE tablea_part5 PARTITION OF tablea FOR VALUES FROM ('2018-01-05') TO ('2018-01-06');\nCREATE INDEX tablea_id_1 ON tablea_part1 (id);CREATE INDEX tablea_id_2 ON tablea_part2 (id);CREATE INDEX tablea_id_3 ON tablea_part3 (id);CREATE INDEX tablea_id_4 ON tablea_part4 (id);CREATE INDEX tablea_id_5 ON tablea_part5 (id);CREATE INDEX tablea_reference_1 ON tablea_part1 (reference);CREATE INDEX tablea_reference_2 ON tablea_part2 (reference);CREATE INDEX tablea_reference_3 ON tablea_part3 (reference);CREATE INDEX tablea_reference_4 ON tablea_part4 (reference);CREATE INDEX tablea_reference_5 ON tablea_part5 (reference);CREATE INDEX tablea_created_1 ON tablea_part1 (created);CREATE INDEX tablea_created_2 ON tablea_part2 (created);CREATE INDEX tablea_created_3 ON tablea_part3 (created);CREATE INDEX tablea_created_4 ON tablea_part4 (created);CREATE INDEX tablea_created_5 ON tablea_part5 (created);alter table tablea_part1 add CHECK ( created >= DATE '2018-01-01' AND created < DATE '2018-01-02');alter table tablea_part2 add CHECK ( created >= DATE '2018-01-02' AND created < DATE '2018-01-03');alter table tablea_part3 add CHECK ( created >= DATE '2018-01-03' AND created < DATE '2018-01-04');alter table tablea_part4 add CHECK ( created >= DATE '2018-01-04' AND created < DATE '2018-01-05');alter table tablea_part5 add CHECK ( created >= DATE '2018-01-05' AND created < DATE '2018-01-06');\ncreate or replace function populate_tablea() RETURNS integer AS$BODY$ DECLARE i integer; v_created date;BEGIN i := 0; WHILE (i < 50000) loop i := i + 1; IF (mod(i,5) = 1) THEN v_created = '2018-01-01'; ELSIF (mod(i,5) = 2) THEN v_created = '2018-01-02'; ELSIF (mod(i,5) = 3) THEN v_created = '2018-01-03'; ELSIF (mod(i,5) = 4) THEN v_created = '2018-01-04'; ELSIF (mod(i,5) = 0) THEN v_created = '2018-01-05'; END IF; insert into tablea values (i, i, v_created);\n end loop; RETURN i;END;$BODY$LANGUAGE plpgsql VOLATILE COST 100;\nCREATE TABLE tableb( id serial, reference int not null, created 
date not null) PARTITION BY RANGE (created);\n\nCREATE TABLE tableb_part1 PARTITION OF tableb FOR VALUES FROM ('2018-01-01') TO ('2018-01-02');CREATE TABLE tableb_part2 PARTITION OF tableb FOR VALUES FROM ('2018-01-02') TO ('2018-01-03');CREATE TABLE tableb_part3 PARTITION OF tableb FOR VALUES FROM ('2018-01-03') TO ('2018-01-04');CREATE TABLE tableb_part4 PARTITION OF tableb FOR VALUES FROM ('2018-01-04') TO ('2018-01-05');CREATE TABLE tableb_part5 PARTITION OF tableb FOR VALUES FROM ('2018-01-05') TO ('2018-01-06');\n\nCREATE INDEX tableb_id_1 ON tableb_part1 (id);CREATE INDEX tableb_id_2 ON tableb_part2 (id);CREATE INDEX tableb_id_3 ON tableb_part3 (id);CREATE INDEX tableb_id_4 ON tableb_part4 (id);CREATE INDEX tableb_id_5 ON tableb_part5 (id);CREATE INDEX tableb_reference_1 ON tableb_part1 (reference);CREATE INDEX tableb_reference_2 ON tableb_part2 (reference);CREATE INDEX tableb_reference_3 ON tableb_part3 (reference);CREATE INDEX tableb_reference_4 ON tableb_part4 (reference);CREATE INDEX tableb_reference_5 ON tableb_part5 (reference);CREATE INDEX tableb_created_1 ON tableb_part1 (created);CREATE INDEX tableb_created_2 ON tableb_part2 (created);CREATE INDEX tableb_created_3 ON tableb_part3 (created);CREATE INDEX tableb_created_4 ON tableb_part4 (created);CREATE INDEX tableb_created_5 ON tableb_part5 (created);alter table tableb_part1 add CHECK ( created >= DATE '2018-01-01' AND created < DATE '2018-01-02');alter table tableb_part2 add CHECK ( created >= DATE '2018-01-02' AND created < DATE '2018-01-03');alter table tableb_part3 add CHECK ( created >= DATE '2018-01-03' AND created < DATE '2018-01-04');alter table tableb_part4 add CHECK ( created >= DATE '2018-01-04' AND created < DATE '2018-01-05');alter table tableb_part5 add CHECK ( created >= DATE '2018-01-05' AND created < DATE '2018-01-06');\n\ncreate or replace function populate_tableb() RETURNS integer AS$BODY$DECLARE i integer; v_created date;BEGIN i := 0; WHILE (i < 50000) loop i := i + 1; IF (mod(i,5) = 0) THEN v_created = '2018-01-01'; ELSIF (mod(i,5) = 1) THEN v_created = '2018-01-02'; ELSIF (mod(i,5) = 2) THEN v_created = '2018-01-03'; ELSIF (mod(i,5) = 3) THEN v_created = '2018-01-04'; ELSIF (mod(i,5) = 4) THEN v_created = '2018-01-05'; END IF; insert into tableb values (i, i, v_created); end loop; RETURN i;END;$BODY$LANGUAGE plpgsql VOLATILE COST 100;\nselect populate_tablea();select populate_tableb();vacuum analyze;==================================\n\nSo it creates 2 tables, both with 5 partitions (using range partitioning on the created column). Each partition has 10000 rows in it.\nBelow are some example queries I have run, the outputs of explain analyze for each and notes on each of my findings/questions:\n\n============\n-- NOTICE IN THE BELOW THAT WE USE A SINGLE ID (ESSENTIALLY THE PRIMARY KEY) BUT WE HAVE ESTIMATED 5 ROWS RETURNED. WE SEEM TO BE BASING-- ON PARTITION STATS ONLY AND SUMMING. 
SO EACH PARTITION ASSUMES ID IS UNIQUE, BUT WITH 5 PARTITIONS, THE TOTAL ROWS IS 5.\nexplain analyze select * from tablea where id = 101; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------- Append (cost=0.29..41.51 rows=5 width=12) (actual time=0.027..0.066 rows=1 loops=1) -> Index Scan using tablea_id_1 on tablea_part1 (cost=0.29..8.30 rows=1 width=12) (actual time=0.026..0.029 rows=1 loops=1) Index Cond: (id = 101) -> Index Scan using tablea_id_2 on tablea_part2 (cost=0.29..8.30 rows=1 width=12) (actual time=0.010..0.010 rows=0 loops=1) Index Cond: (id = 101) -> Index Scan using tablea_id_3 on tablea_part3 (cost=0.29..8.30 rows=1 width=12) (actual time=0.008..0.009 rows=0 loops=1) Index Cond: (id = 101) -> Index Scan using tablea_id_4 on tablea_part4 (cost=0.29..8.30 rows=1 width=12) (actual time=0.008..0.008 rows=0 loops=1) Index Cond: (id = 101) -> Index Scan using tablea_id_5 on tablea_part5 (cost=0.29..8.30 rows=1 width=12) (actual time=0.007..0.007 rows=0 loops=1) Index Cond: (id = 101) Planning time: 0.875 ms Execution time: 0.176 ms\n\n============\n-- IF WE USE AN IN WITH 10 ID'S WE ESTIMATE 50 ROWS RETURNED INSTEAD OF THE ACTUAL 10. AGAIN SEEMS TO BE AGGREGATING PARTITION STATISTICS.\nexplain analyze select * from tablea where id in (101,102,103,104,105,106,107,108,109,110); QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------- Append (cost=0.29..215.13 rows=50 width=12) (actual time=0.040..0.283 rows=10 loops=1) -> Index Scan using tablea_id_1 on tablea_part1 (cost=0.29..43.03 rows=10 width=12) (actual time=0.039..0.079 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_2 on tablea_part2 (cost=0.29..43.03 rows=10 width=12) (actual time=0.021..0.052 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_3 on tablea_part3 (cost=0.29..43.03 rows=10 width=12) (actual time=0.022..0.048 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_4 on tablea_part4 (cost=0.29..43.03 rows=10 width=12) (actual time=0.026..0.049 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_5 on tablea_part5 (cost=0.29..43.03 rows=10 width=12) (actual time=0.028..0.048 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) Planning time: 1.526 ms Execution time: 0.397 ms \n===========\n-- IF WE USE A RANGE INSTEAD OF INDIVIDUAL ID'S, WE GET ESTIMATED 10 ROWS RETURNED (GOOD). -- IS THIS USING THE GLOBAL TABLE STATISTICS INSTEAD? 
WHY DOES IT DIFFER FROM DISTINCT ID'S?\nexplain analyze select * from tablea where id >= 101 and id <= 110; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------- Append (cost=0.29..41.62 rows=10 width=12) (actual time=0.022..0.074 rows=10 loops=1) -> Index Scan using tablea_id_1 on tablea_part1 (cost=0.29..8.32 rows=2 width=12) (actual time=0.021..0.026 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_2 on tablea_part2 (cost=0.29..8.32 rows=2 width=12) (actual time=0.010..0.012 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_3 on tablea_part3 (cost=0.29..8.32 rows=2 width=12) (actual time=0.009..0.010 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_4 on tablea_part4 (cost=0.29..8.32 rows=2 width=12) (actual time=0.009..0.010 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_5 on tablea_part5 (cost=0.29..8.32 rows=2 width=12) (actual time=0.008..0.010 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) Planning time: 1.845 ms Execution time: 0.196 ms\n==========\n-- HERE ARE THE TABLE STATS, SHOWING THAT POSTGRES IS AWARE THAT ID'S ARE GLOBALLY UNIQUE IN THE TABLEA TABLE. CAN IT USE THEM?\nselect tablename,n_distinct from pg_stats where tablename like '%tablea%' and attname = 'id'; tablename | n_distinct --------------+------------ tablea_part3 | -1 tablea | -1 tablea_part2 | -1 tablea_part4 | -1 tablea_part5 | -1 tablea_part1 | -1\n==========\n-- WHEN I JOIN, THE NUMBER OF ROWS MULTIPLIES. NOTICE THE SEQUENTIAL SCAN OF THE TABLEB PARTITION. THIS IS CAUSED BY THE OVERESTIMATION-- OF ROWS RETURNED BY TABLEA\nexplain analyze select * from tablea a, tableb b where a.reference = b.reference and a.id in (101,102,103,104,105,106,107,108,109,110); QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=215.75..1178.75 rows=50 width=24) (actual time=0.386..46.845 rows=10 loops=1) Hash Cond: (b.reference = a.reference) -> Append (cost=0.00..775.00 rows=50000 width=12) (actual time=0.024..26.527 rows=50000 loops=1) -> Seq Scan on tableb_part1 b (cost=0.00..155.00 rows=10000 width=12) (actual time=0.022..4.006 rows=10000 loops=1) -> Seq Scan on tableb_part2 b_1 (cost=0.00..155.00 rows=10000 width=12) (actual time=0.023..4.039 rows=10000 loops=1) -> Seq Scan on tableb_part3 b_2 (cost=0.00..155.00 rows=10000 width=12) (actual time=0.023..3.247 rows=10000 loops=1) -> Seq Scan on tableb_part4 b_3 (cost=0.00..155.00 rows=10000 width=12) (actual time=0.016..1.421 rows=10000 loops=1) -> Seq Scan on tableb_part5 b_4 (cost=0.00..155.00 rows=10000 width=12) (actual time=0.007..1.113 rows=10000 loops=1) -> Hash (cost=215.13..215.13 rows=50 width=12) (actual time=0.316..0.316 rows=10 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Append (cost=0.29..215.13 rows=50 width=12) (actual time=0.034..0.301 rows=10 loops=1) -> Index Scan using tablea_id_1 on tablea_part1 a (cost=0.29..43.03 rows=10 width=12) (actual time=0.033..0.074 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_2 on tablea_part2 a_1 (cost=0.29..43.03 rows=10 width=12) (actual time=0.020..0.051 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan 
using tablea_id_3 on tablea_part3 a_2 (cost=0.29..43.03 rows=10 width=12) (actual time=0.021..0.048 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_4 on tablea_part4 a_3 (cost=0.29..43.03 rows=10 width=12) (actual time=0.025..0.072 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) -> Index Scan using tablea_id_5 on tablea_part5 a_4 (cost=0.29..43.03 rows=10 width=12) (actual time=0.028..0.049 rows=2 loops=1) Index Cond: (id = ANY ('{101,102,103,104,105,106,107,108,109,110}'::integer[])) Planning time: 2.642 ms Execution time: 47.005 ms\n===========\n-- REPEAT, BUT USING A RANGE QUERY. NO LONGER SEQUENTIAL SCAN AS THE TABLEA ROW ESTIMATE DROPS TO 10 FROM 50. -- QUERY EXECUTION TIME DROPS FROM 47MS TO 0.7MS\nexplain analyze select * from tablea a, tableb b where a.reference = b.reference and a.id >= 101 and a. id <= 110; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop (cost=0.57..437.25 rows=10 width=24) (actual time=0.063..0.543 rows=10 loops=1) -> Append (cost=0.29..41.62 rows=10 width=12) (actual time=0.027..0.091 rows=10 loops=1) -> Index Scan using tablea_id_1 on tablea_part1 a (cost=0.29..8.32 rows=2 width=12) (actual time=0.026..0.031 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_2 on tablea_part2 a_1 (cost=0.29..8.32 rows=2 width=12) (actual time=0.010..0.013 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_3 on tablea_part3 a_2 (cost=0.29..8.32 rows=2 width=12) (actual time=0.009..0.011 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_4 on tablea_part4 a_3 (cost=0.29..8.32 rows=2 width=12) (actual time=0.009..0.012 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Index Scan using tablea_id_5 on tablea_part5 a_4 (cost=0.29..8.32 rows=2 width=12) (actual time=0.015..0.018 rows=2 loops=1) Index Cond: ((id >= 101) AND (id <= 110)) -> Append (cost=0.29..39.51 rows=5 width=12) (actual time=0.021..0.041 rows=1 loops=10) -> Index Scan using tableb_reference_1 on tableb_part1 b (cost=0.29..7.90 rows=1 width=12) (actual time=0.007..0.007 rows=0 loops=10) Index Cond: (reference = a.reference) -> Index Scan using tableb_reference_2 on tableb_part2 b_1 (cost=0.29..7.90 rows=1 width=12) (actual time=0.006..0.006 rows=0 loops=10) Index Cond: (reference = a.reference) -> Index Scan using tableb_reference_3 on tableb_part3 b_2 (cost=0.29..7.90 rows=1 width=12) (actual time=0.009..0.010 rows=0 loops=10) Index Cond: (reference = a.reference) -> Index Scan using tableb_reference_4 on tableb_part4 b_3 (cost=0.29..7.90 rows=1 width=12) (actual time=0.006..0.007 rows=0 loops=10) Index Cond: (reference = a.reference) -> Index Scan using tableb_reference_5 on tableb_part5 b_4 (cost=0.29..7.90 rows=1 width=12) (actual time=0.006..0.006 rows=0 loops=10) Index Cond: (reference = a.reference) Planning time: 3.629 ms Execution time: 0.762 ms\n===========\n\nSo to summarise the findings/questions from above:\n- It seems like the Postgres optimizer sometimes uses the partition level statistics, and sometimes the global table level statistics? 
Or is it using something else?- With partitioning tables with unique identifier and retrieving explicitly on those identifiers, at present the optimizer will always understimate the selectivity and overestimate the rows returned. This inaccuracy increases in proportion to the number of partitions.- As a result, when joining to other tables, you are liable to hitting sequential scans. This becomes more likely as you have more partitions or if join to more partitioned tables (note I am aware I could try and tune random_page_cost to try and prevent this).- To me in the examples queries described above, it makes sense to use the partition statistics for the partition level access strategy, but the global statistics when estimating the actual rows returned by all the individual partition queries. Is there a reason not to do this? Or do others believe the optimizer is doing the right thing here?\nAnd then some general questions:\n- How do other people use partitioning but without a significant performance disadvantage on reading the data? Is there something else I should be doing here to achieve the same thing without the overhead? At present my reads have increased optimization cost (as it needs to optimize access to each partition) and also execution cost (access the index on every partition). Even without the optimizer issues described above, the cost of reading simple data is extremely high relative to non-partitioned data (unless you use the partition key as a filter for each table to eliminate those partitions).- Is there any chance/plan to add global indexes to postgres? If so would that impact significantly the cost of the partition drop e.g. to clean up the index.\nThanks in advance for any feedback/support,\nKeith\n",
"msg_date": "Fri, 8 Feb 2019 11:13:51 +0000 (UTC)",
"msg_from": "keith anderson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning Optimizer Questions and Issues"
},
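The per-partition "rows=1" (and hence the Append total of 5) in the first plan above can be reproduced from the statistics directly. For an equality comparison against a value that is not in the MCV list, the planner's estimate is roughly reltuples / ndistinct per partition; the query below is a simplified model of that rule (it ignores null_frac and MCVs), not the exact eqsel code path:

    -- rough per-partition estimate for "id = <const>"
    SELECT c.relname,
           c.reltuples,
           s.n_distinct,
           round(c.reltuples /
                 CASE WHEN s.n_distinct < 0            -- negative means a fraction of reltuples
                      THEN -s.n_distinct * c.reltuples
                      ELSE s.n_distinct END) AS est_rows_equality
    FROM   pg_class c
    JOIN   pg_stats s ON s.tablename = c.relname AND s.attname = 'id'
    WHERE  c.relname LIKE 'tablea_part%';

With reltuples = 10000 and n_distinct = -1 in each partition this gives 1 row per partition, and the Append node simply sums its five children, which is where the estimate of 5 rows for a globally unique id comes from.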
{
"msg_contents": "On Fri, Feb 08, 2019 at 11:13:51AM +0000, keith anderson wrote:\n> So to summarise the findings/questions from above:\n> - It seems like the Postgres optimizer sometimes uses the partition level statistics, and sometimes the global table level statistics? Or is it using something else?- With partitioning tables with unique identifier and retrieving explicitly on those identifiers, at present the optimizer will always understimate the selectivity and overestimate the rows returned. This inaccuracy increases in proportion to the number of partitions.- As a result, when joining to other tables, you are liable to hitting sequential scans. This becomes more likely as you have more partitions or if join to more partitioned tables (note I am aware I could try and tune random_page_cost to try and prevent this).- To me in the examples queries described above, it makes sense to use the partition statistics for the partition level access strategy, but the global statistics when estimating the actual rows returned by all the individual partition queries. Is there a reason not to do this? Or do others believe the optimizer is doing the right thing here?\n> And then some general questions:\n> - How do other people use partitioning but without a significant performance disadvantage on reading the data? Is there something else I should be doing here to achieve the same thing without the overhead? At present my reads have increased optimization cost (as it needs to optimize access to each partition) and also execution cost (access the index on every partition). Even without the optimizer issues described above, the cost of reading simple data is extremely high relative to non-partitioned data (unless you use the partition key as a filter for each table to eliminate those partitions).- Is there any chance/plan to add global indexes to postgres? If so would that impact significantly the cost of the partition drop e.g. to clean up the index.\n> Thanks in advance for any feedback/support,\n\nAn equality or IN() query will use the pg_stats most-common-values list,\nwhereas a range query will use the histogram.\n\nThe tables probably doesn't have complete MCV list. By default, that's limited\nto 100 entries. Since the maximum allowed by ALTER..SET STATISTICS is 10k, I\ndon't think it'll help to change it (at least for your production use case).\nEach partition's rowcount appears to be estimated from its ndistinct, and not\nfrom its content, so each is estimated as having about the same rowcount.\n\nYour partitions are sharing a sequence for their ID column, which causes the\nDEFAULT IDs to be unique...but their global uniqueness isn't enforced nor\nguaranteed.\n\nNote, in postgres11, it's possible to create an index on the parent table.\nIt's NOT a global index, but it can be unique if it includes the partition key.\nI don't know how closely your example describes your real use case, but I don't\nthink that helps with your example; it doesn't seems useful to partition on a\nserial column.\n\nYou seem to be adding unnecessary CHECK constraints that duplicate the\npartition bounds. 
Note, it's still useful to include CHECK constraints on key\ncolumn if you're planning on DETACHing and re-ATTACHing the partitions, in\norder to avoid seqscan to verify tuples don't violate specified bounds.\n\nYou might need to rethink your partitioning scheme - you should choose one that\ncauses performance to improve, and probably naturally includes the partition\nkey in most queries.\n\nPerhaps you'd use 2 levels of partitioning: a RANGE partition by date, which\nallows for archiving, and a HASH partition by ID, which allows for partition\npruning. Note that it's also possible to partition on multiple keys, like\nRANGE(id,date) - I don't think that's useful here, though. PG11 also allows a\n\"default\" partition.\n\nOr perhaps you could partition by RANGE(date) but add CHECK constraints on ID,\nafter the table is fully populated, to optimize queries by allowing for\npartition pruning.\n\nOr you could maybe change the ID column to include the timestamp (like BIND\nzonesfiles YYYYMMDDNNNNNNNN). You'd set a bigint sequence on each partition's\nID as default to the beginning of the month. A bigint is enough to handle\n5*10^4 times your volume: 20190401000020111222. (I think this is trying to be\nunnecessarily clever, unless there's some reason the other two ideas don't\nwork.)\n\nJustin\n\n",
"msg_date": "Fri, 8 Feb 2019 07:04:44 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning Optimizer Questions and Issues"
},
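A minimal sketch of the two-level layout suggested above, using PostgreSQL 11 syntax and the column names from the test case; the 4-way hash modulus and the partition names are arbitrary choices for illustration:

    CREATE TABLE tablea (
        id        bigint NOT NULL,
        reference int    NOT NULL,
        created   date   NOT NULL
    ) PARTITION BY RANGE (created);

    -- each daily range partition is itself split by hash of id,
    -- so a query on id alone touches one small leaf per day
    CREATE TABLE tablea_20180101 PARTITION OF tablea
        FOR VALUES FROM ('2018-01-01') TO ('2018-01-02')
        PARTITION BY HASH (id);

    CREATE TABLE tablea_20180101_h0 PARTITION OF tablea_20180101 FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE tablea_20180101_h1 PARTITION OF tablea_20180101 FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE tablea_20180101_h2 PARTITION OF tablea_20180101 FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE tablea_20180101_h3 PARTITION OF tablea_20180101 FOR VALUES WITH (MODULUS 4, REMAINDER 3);

    -- in PG11 an index created on the partitioned parent cascades to all leaves
    CREATE INDEX tablea_id_idx ON tablea (id);

Archiving a day is still a matter of detaching or dropping that one daily partition (with its hash children), while a query filtering only on id scans just the matching hash leaf within each day instead of one large per-day index.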
{
"msg_contents": "Thanks for the feedback Justin.\n\nYou are right, the most-common-values list is empty for my test case and so it is using n_distinct for the 'id IN()' scenario.And I can see that with the pg_class.reltuples and the pg_stats.histogram_bounds values how the optimizer can conclude with my range query that only 1 in 5 entries in my range query are in each individual partition.\nHowever, I can also see that the pg_stats.n_distinct value for tablea shows -1, as do all the individual child partitions. In my opinion it makes sense for the optimizer when using n_distinct on partitioned tables to use the n_distinct value of the parent table level when estimating row counts rather than a sum of the partition level statistics. Or can someone come up with a good reason to avoid this? \nA couple of examples of different data in a partitioned table:\n- Unique identifier -> if providing a single value in a query -> using n_distinct from parent will estimate 1, using child tables will be 1 * (number of partitions). Use of parent table would be correct.- Date of activity, with 1000 records per day -> if providing a single day to the query -> using n_distinct from parent would show 1000 rows returned, using child tables will be 1000 * (number of partitions). Use of parent table n_distinct is correct.\nPerhaps when querying on columns that are part of the partition logic you could use the partition level stats, but I think the vast majority of the time, using the parent statistics would be much more reliable/accurate than summing across partitions.\nIn terms of the partition strategy, I agree that it should be done with a view to helping performance improve. I will look into more detail at your suggestions, but in general it is very hard to use effectively as there are competing priorities:\n- I would like to not have to manage massive numbers of partitions- I would like to be able to archive data easily using date (a big plus point to the existing date partitioning strategy)- It is hard in most cases to come up with a partition strategy that allows for partition elimination e.g. consider a common 'transaction record' table with a primary key, an account identifier, and a date -> it is natural to want to be able to query on any one of these, but as things stand it cannot be achieved performantly with partitioning.\nGlobal index support feels like it has potential to resolve many of the issues I have with partitioning (beyond the optimizer concern above). I assume this has been discussed and rejected though by the community?\nI've attached as a file the original test script.\nKeith\n\n\n On Friday, 8 February 2019, 13:05:04 GMT, Justin Pryzby <[email protected]> wrote: \n \n On Fri, Feb 08, 2019 at 11:13:51AM +0000, keith anderson wrote:\n> So to summarise the findings/questions from above:\n> - It seems like the Postgres optimizer sometimes uses the partition level statistics, and sometimes the global table level statistics? Or is it using something else?- With partitioning tables with unique identifier and retrieving explicitly on those identifiers, at present the optimizer will always understimate the selectivity and overestimate the rows returned. This inaccuracy increases in proportion to the number of partitions.- As a result, when joining to other tables, you are liable to hitting sequential scans. 
This becomes more likely as you have more partitions or if join to more partitioned tables (note I am aware I could try and tune random_page_cost to try and prevent this).- To me in the examples queries described above, it makes sense to use the partition statistics for the partition level access strategy, but the global statistics when estimating the actual rows returned by all the individual partition queries. Is there a reason not to do this? Or do others believe the optimizer is doing the right thing here?\n> And then some general questions:\n> - How do other people use partitioning but without a significant performance disadvantage on reading the data? Is there something else I should be doing here to achieve the same thing without the overhead? At present my reads have increased optimization cost (as it needs to optimize access to each partition) and also execution cost (access the index on every partition). Even without the optimizer issues described above, the cost of reading simple data is extremely high relative to non-partitioned data (unless you use the partition key as a filter for each table to eliminate those partitions).- Is there any chance/plan to add global indexes to postgres? If so would that impact significantly the cost of the partition drop e.g. to clean up the index.\n> Thanks in advance for any feedback/support,\n\nAn equality or IN() query will use the pg_stats most-common-values list,\nwhereas a range query will use the histogram.\n\nThe tables probably doesn't have complete MCV list. By default, that's limited\nto 100 entries. Since the maximum allowed by ALTER..SET STATISTICS is 10k, I\ndon't think it'll help to change it (at least for your production use case).\nEach partition's rowcount appears to be estimated from its ndistinct, and not\nfrom its content, so each is estimated as having about the same rowcount.\n\nYour partitions are sharing a sequence for their ID column, which causes the\nDEFAULT IDs to be unique...but their global uniqueness isn't enforced nor\nguaranteed.\n\nNote, in postgres11, it's possible to create an index on the parent table.\nIt's NOT a global index, but it can be unique if it includes the partition key.\nI don't know how closely your example describes your real use case, but I don't\nthink that helps with your example; it doesn't seems useful to partition on a\nserial column.\n\nYou seem to be adding unnecessary CHECK constraints that duplicate the\npartition bounds. Note, it's still useful to include CHECK constraints on key\ncolumn if you're planning on DETACHing and re-ATTACHing the partitions, in\norder to avoid seqscan to verify tuples don't violate specified bounds.\n\nYou might need to rethink your partitioning scheme - you should choose one that\ncauses performance to improve, and probably naturally includes the partition\nkey in most queries.\n\nPerhaps you'd use 2 levels of partitioning: a RANGE partition by date, which\nallows for archiving, and a HASH partition by ID, which allows for partition\npruning. Note that it's also possible to partition on multiple keys, like\nRANGE(id,date) - I don't think that's useful here, though. PG11 also allows a\n\"default\" partition.\n\nOr perhaps you could partition by RANGE(date) but add CHECK constraints on ID,\nafter the table is fully populated, to optimize queries by allowing for\npartition pruning.\n\nOr you could maybe change the ID column to include the timestamp (like BIND\nzonesfiles YYYYMMDDNNNNNNNN). 
You'd set a bigint sequence on each partition's\nID as default to the beginning of the month. A bigint is enough to handle\n5*10^4 times your volume: 20190401000020111222. (I think this is trying to be\nunnecessarily clever, unless there's some reason the other two ideas don't\nwork.)\n\nJustin",
"msg_date": "Mon, 11 Feb 2019 08:05:50 +0000 (UTC)",
"msg_from": "keith anderson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning Optimizer Questions and Issues"
}
] |
[
{
"msg_contents": "I have a report that takes about 20 minutes to generate. It is generated\nfrom 3 tables: according to image.\nThe report input parameter is a date range. So to generate it I select all\nrecords in Table A and run them\nin loop-for. For each record in Table A I make a query Table B join with\nTable C where I filter the records through the date field and make the sum\nof the value field.\n\nGiven this scenario, I would like your help in finding a solution that can\nreduce the generation time of this report. System developed in PHP /\nLaravel.\n\nPostgreSQL\nmax_connections = 50\nshared_buffers = 4GB\neffective_cache_size = 12GB\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.7\nwal_buffers = 16MB\ndefault_statistics_target = 100\nrandom_page_cost = 4\neffective_io_concurrency = 2\nwork_mem = 83886kB\nmin_wal_size = 1GB\nmax_wal_size = 2GB\n\nLinux Server CentOS 7, Single Xeon 4-Core E3-1230 v5 3.4Ghz w / HT, 16GB\nRAM.\n\nI've already created indexes in the fields that are involved in the queries.\nDatabase schema\n[image: Untitled2.png]Report result\n[image: Untitled.png]\n\n-- \nAtenciosamente,\n\nEvandro Abreu.\nEngenheiro de Sistemas at STS Informática Ltda.\nGoogle Talk: evandro.abreu\nTwitter: http://twitter.com/abreu_evandro\nSkype: evandro_abreu\nFacebook: Evandro Abreu <http://www.facebook.com/evandro.abreu.9>\nWhatsApp: +55 86 99929-1788\nPhone: +55 86 98835-0468",
"msg_date": "Sat, 9 Feb 2019 13:45:50 -0300",
"msg_from": "Evandro Abreu <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Hi,\n\nPlease don't send images to the list, you can send a link to one of the image\nhost websites if you need to describe something graphical.\n\nBut here, you could just send the queries and \\d for the tables.\n\nOn Sat, Feb 09, 2019 at 01:45:50PM -0300, Evandro Abreu wrote:\n> I have a report that takes about 20 minutes to generate. It is generated\n> from 3 tables: according to image.\n> The report input parameter is a date range. So to generate it I select all\n> records in Table A and run them\n> in loop-for. For each record in Table A I make a query Table B join with\n> Table C where I filter the records through the date field and make the sum\n> of the value field.\n\nSo you're running query 5000 times ?\n\nDo you really need a for loop ? Could you just join the 3 tables together and GROUP BY a.id ?\n\nPlease send \"explain analyze\" for the queries, or a link to the output on\ndepesz site.\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Information_You_Need_To_Include\n\nAlso, are they all taking about the same amount of time ?\n\nJustin\n\n",
"msg_date": "Sat, 9 Feb 2019 11:16:33 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow to run query 5000 times"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-09 11:16:33 -0600, Justin Pryzby wrote:\n> Please don't send images to the list, you can send a link to one of the image\n> host websites if you need to describe something graphical.\n\nFWIW, I don't agree with that policy. I plenty of time process email\nwhile withoug internet, so I appreciate self contained email.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sat, 9 Feb 2019 12:23:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow to run query 5000 times"
},
{
"msg_contents": "Hi,\nDo you have an index in the date field?\n\nObtener Outlook para Android<https://aka.ms/ghei36>\n\n________________________________\nFrom: Andres Freund <[email protected]>\nSent: Saturday, February 9, 2019 5:23:14 PM\nTo: Justin Pryzby\nCc: Evandro Abreu; [email protected]\nSubject: Re: slow to run query 5000 times\n\nHi,\n\nOn 2019-02-09 11:16:33 -0600, Justin Pryzby wrote:\n> Please don't send images to the list, you can send a link to one of the image\n> host websites if you need to describe something graphical.\n\nFWIW, I don't agree with that policy. I plenty of time process email\nwhile withoug internet, so I appreciate self contained email.\n\nGreetings,\n\nAndres Freund\n\n\n\n\n\n\n\n\n\n\n\nHi, \n\n\nDo you have an index in the date field?\n\n\n\n\nObtener Outlook para Android\n\n\n\nFrom: Andres Freund <[email protected]>\nSent: Saturday, February 9, 2019 5:23:14 PM\nTo: Justin Pryzby\nCc: Evandro Abreu; [email protected]\nSubject: Re: slow to run query 5000 times\n \n\n\n\nHi,\n\nOn 2019-02-09 11:16:33 -0600, Justin Pryzby wrote:\n> Please don't send images to the list, you can send a link to one of the image\n> host websites if you need to describe something graphical.\n\nFWIW, I don't agree with that policy. I plenty of time process email\nwhile withoug internet, so I appreciate self contained email.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 10 Feb 2019 00:23:59 +0000",
"msg_from": "Ricardo Martin Gomez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow to run query 5000 times"
},
{
"msg_contents": "Hi,\n\nit will be good if you could post the queries you use + the explain output.\n\nThanks\n\nLe sam. 9 févr. 2019 à 17:46, Evandro Abreu <[email protected]> a\nécrit :\n\n> I have a report that takes about 20 minutes to generate. It is generated\n> from 3 tables: according to image.\n> The report input parameter is a date range. So to generate it I select all\n> records in Table A and run them\n> in loop-for. For each record in Table A I make a query Table B join with\n> Table C where I filter the records through the date field and make the sum\n> of the value field.\n>\n> Given this scenario, I would like your help in finding a solution that can\n> reduce the generation time of this report. System developed in PHP /\n> Laravel.\n>\n> PostgreSQL\n> max_connections = 50\n> shared_buffers = 4GB\n> effective_cache_size = 12GB\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.7\n> wal_buffers = 16MB\n> default_statistics_target = 100\n> random_page_cost = 4\n> effective_io_concurrency = 2\n> work_mem = 83886kB\n> min_wal_size = 1GB\n> max_wal_size = 2GB\n>\n> Linux Server CentOS 7, Single Xeon 4-Core E3-1230 v5 3.4Ghz w / HT, 16GB\n> RAM.\n>\n> I've already created indexes in the fields that are involved in the\n> queries.\n> Database schema\n> [image: Untitled2.png]Report result\n> [image: Untitled.png]\n>\n> --\n> Atenciosamente,\n>\n> Evandro Abreu.\n> Engenheiro de Sistemas at STS Informática Ltda.\n> Google Talk: evandro.abreu\n> Twitter: http://twitter.com/abreu_evandro\n> Skype: evandro_abreu\n> Facebook: Evandro Abreu <http://www.facebook.com/evandro.abreu.9>\n> WhatsApp: +55 86 99929-1788\n> Phone: +55 86 98835-0468\n>\n>",
"msg_date": "Sun, 10 Feb 2019 19:43:55 +0100",
"msg_from": "Tumasgiu Rossini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are developing a tool called sqlfuzz for automatically finding performance regressions in PostgreSQL. sqlfuzz performs mutational fuzzing to generate SQL queries that take more time to execute on the latest version of PostgreSQL compared to prior versions. We hope that these queries would help further increase the utility of the regression test suite.\n\nWe would greatly appreciate feedback from the community regarding the queries found by the tool so far. We have already incorporated prior feedback from the community in the latest version of sqlfuzz.\n\nWe are sharing four SQL queries that exhibit regressions in this report. These queries have an average size of 245 bytes. Here’s an illustrative query:\n\nEXAMPLE:\n\nselect distinct\n ref_0.i_im_id as c0,\n ref_1.ol_dist_info as c1\nfrom\n public.item as ref_0\n right join public.order_line as ref_1\n on (cast(null as \"numeric\") <> 1)\n\nTime taken on PostgreSQL v9.5: 15.1 (seconds)\nTime taken on PostgreSQL v11: 64.8 (seconds)\n\nHere are the steps for reproducing our observations:\n\n[Test environment]\n* Ubuntu 16.04 machine \"Linux sludge 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\"\n* Postgres installed via APT package manager\n* Database: TPC-C benchmark (with three scale factors)\n\n[Setup Test Environment]\n\n1. Install PostgreSQL v11 and v9.5\n\n $ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -\n $ sudo sh -c 'echo \"deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main\" > /etc/apt/sources.list.d/pgdg_xenial.list'\n $ sudo apt update\n $ sudo sudo apt-get install postgresql-11\n $ sudo sudo apt-get install postgresql-9.5\n\n* set password of postgres user (with your desirable one)\n $ sudo passwd postgres\n\n* change port number of two version of DBs\n $ sudo vim /etc/postgresql/11/main/postgresql.conf\n => change \"port = ????\" ==> \"port = 5435\"\n $ sudo vim /etc/postgresql/9.5/main/postgresql.conf\n => change \"port = ????\" ==> \"port = 5432\"\n\n* restart DB\n $ sudo pg_ctlcluster 9.5 main restart\n $ sudo pg_ctlcluster 11 main restart\n\n => check you have opened ports at 5432 and 5435\n\n* setup privilege\n $ sudo -i -u postgres\n $ psql -p 5432 (then copy and run the below query to setup password)\n # ALTER USER postgres PASSWORD 'mysecretpassword';\n # \\q\n\n $ psql -p 5435 (then copy and run the below query to setup password)\n # ALTER USER postgres PASSWORD 'mysecretpassword';\n # \\q\n\n $ exit (to original user)\n $ sudo -u postgres createuser -s $(whoami); createdb $(whoami)\n\n* test your setting by showing DB version\n < old version >\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5432 -U postgres -c \"select version();\"\n\n < new version >\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5435 -U postgres -c \"select version();\"\n\n\n2. 
Set up TPC-C test benchmark\n\n* Download TPC-C (scale factor of 1) and extract it\n $ wget https://gts3.org/~/jjung/tpcc/tpcc1.tar.gz\n $ wget https://gts3.org/~/jjung/tpcc/tpcc10.tar.gz\n $ wget https://gts3.org/~/jjung/tpcc/tpcc50.tar.gz\n\n $ tar xzvf tpcc1.tar.gz\n $ tar xzvf tpcc10.tar.gz\n $ tar xzvf tpcc50.tar.gz\n\n* Create DB (example of TPC-C scale factor 1)\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5432 -U postgres -c \"create database test_bd;\"\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5435 -U postgres -c \"create database test_bd;\"\n\n* Import benchmark (example of TPC-C scale factor 1)\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5432 -U postgres -d test_bd -f ./tpcc_host.pgsql\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5435 -U postgres -d test_bd -f ./tpcc_host.pgsql\n\n* (Optional) Deleting databases\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5432 -U postgres -c \"drop database test_bd;\"\n $ PGPASSWORD=mysecretpassword psql -h 127.0.0.1 -p 5435 -U postgres -c \"drop database test_bd;\"\n\n3. Test SQL queries that exhibit performance regressions\n\nWe are sharing four queries in this report. We vary the scale-factor of the TPC-C benchmark from 1 through 50 to demonstrate that the performance regressions are more prominent on larger databases.\n\n* Download queries\n $ wget https://gts3.org/~/jjung/tpcc/case.tar.gz\n $ tar xzvf case.tar.gz\n\n* Execute the queries\n $ PGPASSWORD=mysecretpassword psql -t -A -F\",\" -h 127.0.0.1 -p 5432 -U postgres -d test_bd -f case-1.sql\n $ PGPASSWORD=mysecretpassword psql -t -A -F\",\" -h 127.0.0.1 -p 5435 -U postgres -d test_bd -f case-1.sql\n\nHere’s the time taken to execute four SQL queries on old (v9.5) and newer version (v11) of PostgreSQL (in milliseconds):\n\n+----------------------+--------+---------+---------+\n| | scale1 | scale10 | scale50 |\n+----------------------+--------+---------+---------+\n| Case-1 (v9.5) | 28 | 273 | 1459 |\n| Case-1 (v11) | 90 | 854 | 4818 |\n+----------------------+--------+---------+---------+\n| Case-2 (v9.5) | 229 | 2793 | 15096 |\n| Case-2 (v11) | 838 | 11276 | 64808 |\n+----------------------+--------+---------+---------+\n| Case-3 (v9.5) | 28 | 248 | 1231 |\n| Case-3 (v11) | 74 | 677 | 3345 |\n+----------------------+--------+---------+---------+\n| Case-4 (v9.5) | 0.03 | 0.03 | 0.04 |\n| Case-4 (v11) | 0.04 | 0.04 | 632 |\n+----------------------+--------+---------+---------+\n\n1) CASE-1 shares same plan but shows different execution time. Execution time increases on larger databases.\n\n2) CASE-2 shows different cost estimation and it causes performance regression. Execution time increases on larger databases.\n\n3) CASE-3 uses different executor. Newer version (PG11.1) uses parallel seq scan but shows slower execution time. Execution time increases on larger databases.\n\n4) CASE-4 shows performance regression only in TPC-C with scale factor 50. Instead of using index scan, newer version (PG11.1) applies filter, thereby increasing the time taken to execute the query.\n\nWe would greatly appreciate feedback from the community regarding these queries and are looking forward to improving the tool based on the community’s feedback.\n\nThanks.\n\n\nJinho Jung",
"msg_date": "Mon, 11 Feb 2019 22:29:36 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance regressions found using sqlfuzz"
},
{
"msg_contents": "Re: Jung, Jinho 2019-02-11 <BN6PR07MB3409922471073F2B619A8CA4EE640@BN6PR07MB3409.namprd07.prod.outlook.com>\n> We are developing a tool called sqlfuzz for automatically finding performance regressions in PostgreSQL. sqlfuzz performs mutational fuzzing to generate SQL queries that take more time to execute on the latest version of PostgreSQL compared to prior versions. We hope that these queries would help further increase the utility of the regression test suite.\n\nHi,\n\nis sqlfuzz available anywhere? Does it have anything to do with\nsqlsmith? The query formatting looks very much like sqlsmith's.\n\nChristoph\n\n",
"msg_date": "Tue, 12 Feb 2019 13:00:18 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 4:23 AM Jung, Jinho <[email protected]> wrote:\n\n>\n> Hello,\n>\n> We are developing a tool called sqlfuzz for automatically finding\n> performance regressions in PostgreSQL. sqlfuzz performs mutational fuzzing\n> to generate SQL queries that take more time to execute on the latest\n> version of PostgreSQL compared to prior versions. We hope that these\n> queries would help further increase the utility of the regression test\n> suite.\n>\n> We would greatly appreciate feedback from the community regarding the\n> queries found by the tool so far. We have already incorporated prior\n> feedback from the community in the latest version of sqlfuzz.\n>\n\nThis approach doesn't seem very exciting to me as-is, because optimization\nis a very pragmatic endeavor. We make decisions all the time that might\nmake some queries better and others worse. If the queries that get better\nare natural/common ones, and the ones that get worse are weird/uncommon\nones (like generated by a fuzzer), then making that change is an\nimprovement even if there are some performance (as opposed to correctness)\nregressions.\n\nI would be more interested in investigating some of these if the report\nwould:\n\n1) include the exact commit in which the regression was introduced (i.e.\nautomate \"git bisect\").\n2) verify that the regression still exists in the dev HEAD and report which\ncommit it was verified in (since HEAD changes frequently).\n3) report which queries (if any) in your corpus were made better by the\nsame commit which made the victim query worse.\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Feb 12, 2019 at 4:23 AM Jung, Jinho <[email protected]> wrote:\n\n\n\n\n\nHello,\n\n\nWe are developing a tool called sqlfuzz for automatically finding performance regressions in PostgreSQL. sqlfuzz performs mutational fuzzing to generate SQL queries that take more time to execute on the latest version of PostgreSQL compared to prior versions.\n We hope that these queries would help further increase the utility of the regression test suite.\n\n\nWe would greatly appreciate feedback from the community regarding the queries found by the tool so far. We have already incorporated prior feedback from the community in the latest version of sqlfuzz.This approach doesn't seem very exciting to me as-is, because optimization is a very pragmatic endeavor. We make decisions all the time that might make some queries better and others worse. If the queries that get better are natural/common ones, and the ones that get worse are weird/uncommon ones (like generated by a fuzzer), then making that change is an improvement even if there are some performance (as opposed to correctness) regressions. I would be more interested in investigating some of these if the report would:1) include the exact commit in which the regression was introduced (i.e. automate \"git bisect\").2) verify that the regression still exists in the dev HEAD and report which commit it was verified in (since HEAD changes frequently).3) report which queries (if any) in your corpus were made better by the same commit which made the victim query worse.Cheers,Jeff",
"msg_date": "Tue, 12 Feb 2019 13:03:48 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
{
"msg_contents": "Hi Jeff,\n\n\nThanks for the feedback! The git bisect idea was particularly helpful.\n\n\nWe use query complexity constraints in sqlfuzz to ensure that the constructed queries are realistic (e.g., limit the query size, automatically minimize the query, avoid esoteric expressions and functions, restrict number of joins, etc.).\n\n\nOur goal is to augment the test suite with queries that will assist developers with more comprehensively evaluating the impact of new optimization heuristics, query processing strategies etc. We are working on improving the utility of the tool and your feedback on these reports will be super helpful. Thanks.\n\n\nFor each regression, we share:\n\n\n1) the associated query,\n\n2) the commit that activated it,\n\n3) our high-level analysis, and\n\n4) query execution plans in old and new versions of PostgreSQL.\n\n\nAll these regressions are observed on the latest version (dev HEAD).\n\n\n####### QUERY 2:\n\n select distinct\n ref_0.i_im_id as c0,\n ref_1.ol_dist_info as c1\n from\n public.item as ref_0 right join\n public.order_line as ref_1\n on (ref_0.i_id = 5)\n\n- Commit: 84f9a35 (Improve estimate of distinct values in estimate_num_groups())\n\n- Our analysis: We believe that this regression is related to the new logic for estimating the number of distinct values in the optimizer. This is affecting even queries with point lookups (ref_0.i_id = 5) in the TPC-C benchmark.\n\n- Query Execution Plans\n\n [OLD version]\n HashAggregate (cost=11972.38..12266.20 rows=29382 width=29) (actual time=233.543..324.973 rows=300144 loops=1)\n Group Key: ref_0.i_im_id, ref_1.ol_dist_info\n -> Nested Loop Left Join (cost=0.29..10471.64 rows=300148 width=29) (actual time=0.012..114.955 rows=300148 loops=1)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=25) (actual time=0.004..25.061 rows=300148 loops=1)\n -> Materialize (cost=0.29..8.32 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=300148)\n -> Index Scan using item_pkey on item ref_0 (cost=0.29..8.31 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)\n Index Cond: (i_id = 10)\n Planning time: 0.267 ms\n Execution time: 338.027 ms\n\n\n [NEW version]\n Unique (cost=44960.08..47211.19 rows=300148 width=29) (actual time=646.545..885.502 rows=300144 loops=1)\n -> Sort (cost=44960.08..45710.45 rows=300148 width=29) (actual time=646.544..838.933 rows=300148 loops=1)\n Sort Key: ref_0.i_im_id, ref_1.ol_dist_info\n Sort Method: external merge Disk: 11480kB\n -> Nested Loop Left Join (cost=0.29..10471.64 rows=300148 width=29) (actual time=0.016..111.889 rows=300148 loops=1)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=25) (actual time=0.004..24.612 rows=300148 loops=1)\n -> Materialize (cost=0.29..8.32 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=300148)\n -> Index Scan using item_pkey on item ref_0 (cost=0.29..8.31 rows=1 width=4) (actual time=0.008..0.008 rows=1 loops=1)\n Index Cond: (i_id = 10)\n Planning Time: 0.341 ms\n Execution Time: 896.662 ms\n\n\n####### QUERY 3:\n\n select\n cast(ref_1.ol_i_id as int4) as c0\n from\n public.stock as ref_0\n left join public.order_line as ref_1\n on (ref_1.ol_number is not null)\n where ref_1.ol_number is null\n\n- Commit: 77cd477 (Enable parallel query by default.)\n\n- Our analysis: We believe that this regression is due to parallel queries being enabled by default. 
Surprisingly, we found that even on a larger TPC-C database (scale factor of 50, roughly 4GB), parallel scan is still slower than the non-parallel one in the old version, when the query is not returning any tuples.\n\n- Query Execution Plans\n\n [OLD version]\n Nested Loop Anti Join (cost=0.00..18006.81 rows=1 width=4) (actual time=28.689..28.689 rows=0 loops=1)\n -> Seq Scan on stock ref_0 (cost=0.00..5340.00 rows=100000 width=0) (actual time=0.028..15.722 rows=100000 loops=1)\n -> Materialize (cost=0.00..9385.22 rows=300148 width=4) (actual time=0.000..0.000 rows=1 loops=100000)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=4) (actual time=0.004..0.004 rows=1 loops=1)\n Filter: (ol_number IS NOT NULL)\n Planning time: 0.198 ms\n Execution time: 28.745 ms\n\n [NEW version]\n Gather (cost=1000.00..15164.93 rows=1 width=4) (actual time=91.022..92.634 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Anti Join (cost=0.00..14164.83 rows=1 width=4) (actual time=88.889..88.889 rows=0 loops=3)\n -> Parallel Seq Scan on stock ref_0 (cost=0.00..4756.67 rows=41667 width=0) (actual time=0.025..7.331 rows=33333 loops=3)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=4) (actual time=0.002..0.002 rows=1 loops=100000)\n Filter: (ol_number IS NOT NULL)\n Planning Time: 0.258 ms\n Execution Time: 92.699 ms\n\n\n####### QUERY 4:\n\n select\n ref_0.s_dist_06 as c0\n from\n public.stock as ref_0\n where (ref_0.s_w_id < cast(least(0, 1) as int8))\n\n- Commit: 5edc63b (Account for the effect of lossy pages when costing bitmap scans)\n\n- Our analysis: We believe that this regression has to do with two factors: 1) conditional expression (e.g., LEAST or NULLIF) are not reduced to constants unlike string functions (e.g., CHAR_LENGTH) 2) change in the cost estimation function for bitmap scan. Execution time grows by 3 orders of magnitude. We note that this regression is only observed on large databases (e.g., scale factor of 50).\n\n- Query Execution Plans\n\n [OLD version]\n Bitmap Heap Scan on stock ref_0 (cost=31201.11..273173.13 rows=1666668 width=25) (actual time=0.005..0.005 rows=0 loops=1)\n Recheck Cond: (s_w_id < (LEAST(0, 1))::bigint)\n -> Bitmap Index Scan on stock_pkey (cost=0.00..30784.44 rows=1666668 width=0) (actual time=0.005..0.005 rows=0 loops=1)\n Index Cond: (s_w_id < (LEAST(0, 1))::bigint)\n Planning time: 0.228 ms\n Execution time: 0.107 ms\n\n [NEW version]\n Seq Scan on stock ref_0 (cost=0.00..304469.17 rows=1666613 width=25) (actual time=716.397..716.397 rows=0 loops=1)\n Filter: (s_w_id < (LEAST(0, 1))::bigint)\n Rows Removed by Filter: 5000000\n Planning Time: 0.221 ms\n Execution Time: 716.513 ms\n\n\n####### QUERY 1:\n\n select\n ref_0.o_d_id as c0\n from\n public.oorder as ref_0\n where EXISTS (\n select\n 1\n from\n (select distinct\n ref_0.o_entry_d as c0,\n ref_1.c_credit as c1\n from\n public.customer as ref_1\n where (false)\n ) as subq_1\n );\n\n- Commit: bf6c614 (Do execGrouping.c via expression eval machinery, take two)\n\n- Our analysis: We are not sure about the root cause of this regression. 
This might have to do with grouping logic.\n\n- Query Execution Plans\n\n [OLD version]\n Seq Scan on oorder ref_0 (cost=0.00..77184338.54 rows=15022 width=4) (actual time=34.173..34.173 rows=0 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 30044\n SubPlan 1\n -> Subquery Scan on subq_1 (cost=2569.01..2569.03 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=30044)\n -> HashAggregate (cost=2569.01..2569.02 rows=1 width=3) (actual time=0.000..0.000 rows=0 loops=30044)\n Group Key: ref_0.o_entry_d, ref_1.c_credit\n -> Result (cost=0.00..2569.00 rows=1 width=3) (actual time=0.000..0.000 rows=0 loops=30044)\n One-Time Filter: false\n -> Seq Scan on customer ref_1 (cost=0.00..2569.00 rows=1 width=3) (never executed)\n Planning time: 0.325 ms\n Execution time: 34.234 ms\n\n [NEW version]\n Seq Scan on oorder ref_0 (cost=0.00..1152.32 rows=15022 width=4) (actual time=74.799..74.799 rows=0 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 30044\n SubPlan 1\n -> Subquery Scan on subq_1 (cost=0.00..0.02 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=30044)\n -> HashAggregate (cost=0.00..0.01 rows=1 width=20) (actual time=0.000..0.000 rows=0 loops=30044)\n Group Key: ref_0.o_entry_d, c_credit\n -> Result (cost=0.00..0.00 rows=0 width=20) (actual time=0.000..0.000 rows=0 loops=30044)\n One-Time Filter: false\n Planning Time: 0.350 ms\n Execution Time: 79.237 ms\n\n________________________________\nFrom: Jeff Janes <[email protected]>\nSent: Tuesday, February 12, 2019 1:03 PM\nTo: Jung, Jinho\nCc: [email protected]\nSubject: Re: Performance regressions found using sqlfuzz\n\nOn Tue, Feb 12, 2019 at 4:23 AM Jung, Jinho <[email protected]<mailto:[email protected]>> wrote:\n\n\nHello,\n\nWe are developing a tool called sqlfuzz for automatically finding performance regressions in PostgreSQL. sqlfuzz performs mutational fuzzing to generate SQL queries that take more time to execute on the latest version of PostgreSQL compared to prior versions. We hope that these queries would help further increase the utility of the regression test suite.\n\nWe would greatly appreciate feedback from the community regarding the queries found by the tool so far. We have already incorporated prior feedback from the community in the latest version of sqlfuzz.\n\nThis approach doesn't seem very exciting to me as-is, because optimization is a very pragmatic endeavor. We make decisions all the time that might make some queries better and others worse. If the queries that get better are natural/common ones, and the ones that get worse are weird/uncommon ones (like generated by a fuzzer), then making that change is an improvement even if there are some performance (as opposed to correctness) regressions.\n\nI would be more interested in investigating some of these if the report would:\n\n1) include the exact commit in which the regression was introduced (i.e. automate \"git bisect\").\n2) verify that the regression still exists in the dev HEAD and report which commit it was verified in (since HEAD changes frequently).\n3) report which queries (if any) in your corpus were made better by the same commit which made the victim query worse.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 14 Feb 2019 17:27:40 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
{
"msg_contents": ">>>>> \"Jung\" == Jung, Jinho <[email protected]> writes:\n\n Jung> select distinct\n Jung> ref_0.i_im_id as c0,\n Jung> ref_1.ol_dist_info as c1\n Jung> from\n Jung> public.item as ref_0 right join\n Jung> public.order_line as ref_1\n Jung> on (ref_0.i_id = 5)\n\n Jung> - Commit: 84f9a35 (Improve estimate of distinct values in estimate_num_groups())\n\n Jung> - Our analysis: We believe that this regression is related to the\n Jung> new logic for estimating the number of distinct values in the\n Jung> optimizer. This is affecting even queries with point lookups\n Jung> (ref_0.i_id = 5) in the TPC-C benchmark.\n\nSo what's happening here is that the old plan was mis-estimating the\nresult, believed incorrectly that it would fit into work_mem, and\ngenerated a hashaggregate plan accordingly; it ran fast because\nhashaggregate doesn't spill to disk but silently overflows work_mem.\n\nThe new plan correctly estimates the result size, and therefore is\nforbidden from generating the hashaggregate plan at the default work_mem\nsetting; it generates a sort plan, and the sort of course spills to disk\nsince work_mem is exceeded.\n\nHad the value of work_mem been set to something appropriate for the\nworkload, then the query plan would not have changed.\n\nSo the problem (from an automated testing perspective) is that an actual\n_improvement_ in the code is being reported as a regression.\n\n Jung> ####### QUERY 3:\n\n Jung> select\n Jung> cast(ref_1.ol_i_id as int4) as c0\n Jung> from\n Jung> public.stock as ref_0\n Jung> left join public.order_line as ref_1\n Jung> on (ref_1.ol_number is not null)\n Jung> where ref_1.ol_number is null\n\n Jung> - Commit: 77cd477 (Enable parallel query by default.)\n\n Jung> - Our analysis: We believe that this regression is due to\n Jung> parallel queries being enabled by default. Surprisingly, we found\n Jung> that even on a larger TPC-C database (scale factor of 50, roughly\n Jung> 4GB), parallel scan is still slower than the non-parallel one in\n Jung> the old version, when the query is not returning any tuples.\n\nThe problem here is not actually with parallel scans as such, but rather\nthe omission of a Materialize node in the parallel plan, and what looks\nlike some rather serious mis-costing of the nestloop antijoin.\n\n Jung> ####### QUERY 4:\n\n Jung> select\n Jung> ref_0.s_dist_06 as c0\n Jung> from\n Jung> public.stock as ref_0\n Jung> where (ref_0.s_w_id < cast(least(0, 1) as int8))\n\n Jung> - Commit: 5edc63b (Account for the effect of lossy pages when costing bitmap scans)\n\n Jung> - Our analysis: We believe that this regression has to do with\n Jung> two factors: 1) conditional expression (e.g., LEAST or NULLIF)\n Jung> are not reduced to constants unlike string functions (e.g.,\n Jung> CHAR_LENGTH) 2) change in the cost estimation function for bitmap\n Jung> scan. Execution time grows by 3 orders of magnitude. We note that\n Jung> this regression is only observed on large databases (e.g., scale\n Jung> factor of 50).\n\nAgain, this is showing up because of a large database and a small\nwork_mem. 
The bitmap scan on stock only becomes lossy if the number of\nrows matched in the index is very large relative to work_mem; the lack\nof plan-time evaluation of LEAST means that the planner doesn't have any\ngood way to estimate the selectivity, so it's taking a default estimate.\n\n Jung> ####### QUERY 1:\n\n Jung> select\n Jung> ref_0.o_d_id as c0\n Jung> from\n Jung> public.oorder as ref_0\n Jung> where EXISTS (\n Jung> select\n Jung> 1\n Jung> from\n Jung> (select distinct\n Jung> ref_0.o_entry_d as c0,\n Jung> ref_1.c_credit as c1\n Jung> from\n Jung> public.customer as ref_1\n Jung> where (false)\n Jung> ) as subq_1\n Jung> );\n\n Jung> - Commit: bf6c614 (Do execGrouping.c via expression eval machinery, take two)\n\n Jung> - Our analysis: We are not sure about the root cause of this\n Jung> regression. This might have to do with grouping logic.\n\nWhat this query is basically exercising is how fast one can do\nExecReScan on a DISTINCT query, without also considering the performance\neffects of actually doing the grouping (the constant-false qual here\nmeans that the grouping comparison is never actually performed). An\noptimization tradeoff that speeds up comparisons within a scan at the\ncost of a fixed overhead for the scan will therefore make this query\nslower, but it still seems a good tradeoff to make (of course it would\nbe even better to make the overhead per-query rather than per-scan, and\nthere were other issues with this commit that should have been caught at\nthe time).\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
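The work_mem interaction is easy to confirm by hand; the 256MB value below is only an illustration, and the right setting depends on the workload and available RAM:

    SET work_mem = '256MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT ref_0.i_im_id AS c0, ref_1.ol_dist_info AS c1
    FROM public.item AS ref_0
    RIGHT JOIN public.order_line AS ref_1 ON (ref_0.i_id = 5);

With enough memory the planner is again allowed to choose HashAggregate, and the "Sort Method: external merge Disk: ..." line disappears from the plan.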
"msg_date": "Fri, 15 Feb 2019 16:20:07 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 17:27:40 +0000, Jung, Jinho wrote:\n> ####### QUERY 2:\n> \n> select distinct\n> ref_0.i_im_id as c0,\n> ref_1.ol_dist_info as c1\n> from\n> public.item as ref_0 right join\n> public.order_line as ref_1\n> on (ref_0.i_id = 5)\n> \n> - Commit: 84f9a35 (Improve estimate of distinct values in estimate_num_groups())\n> \n> - Our analysis: We believe that this regression is related to the new logic for estimating the number of distinct values in the optimizer. This is affecting even queries with point lookups (ref_0.i_id = 5) in the TPC-C benchmark.\n> \n> - Query Execution Plans\n> \n> [OLD version]\n> HashAggregate (cost=11972.38..12266.20 rows=29382 width=29) (actual time=233.543..324.973 rows=300144 loops=1)\n> Group Key: ref_0.i_im_id, ref_1.ol_dist_info\n> -> Nested Loop Left Join (cost=0.29..10471.64 rows=300148 width=29) (actual time=0.012..114.955 rows=300148 loops=1)\n> -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=25) (actual time=0.004..25.061 rows=300148 loops=1)\n> -> Materialize (cost=0.29..8.32 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=300148)\n> -> Index Scan using item_pkey on item ref_0 (cost=0.29..8.31 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)\n> Index Cond: (i_id = 10)\n> Planning time: 0.267 ms\n> Execution time: 338.027 ms\n> \n> \n> [NEW version]\n> Unique (cost=44960.08..47211.19 rows=300148 width=29) (actual time=646.545..885.502 rows=300144 loops=1)\n> -> Sort (cost=44960.08..45710.45 rows=300148 width=29) (actual time=646.544..838.933 rows=300148 loops=1)\n> Sort Key: ref_0.i_im_id, ref_1.ol_dist_info\n> Sort Method: external merge Disk: 11480kB\n> -> Nested Loop Left Join (cost=0.29..10471.64 rows=300148 width=29) (actual time=0.016..111.889 rows=300148 loops=1)\n> -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=25) (actual time=0.004..24.612 rows=300148 loops=1)\n> -> Materialize (cost=0.29..8.32 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=300148)\n> -> Index Scan using item_pkey on item ref_0 (cost=0.29..8.31 rows=1 width=4) (actual time=0.008..0.008 rows=1 loops=1)\n> Index Cond: (i_id = 10)\n> Planning Time: 0.341 ms\n> Execution Time: 896.662 ms\n\nThis seems perfectly alright - the old version used more memory than\nwork_mem actually should have allowed. I'd bet you get the performance\nback if you set work mem to a bigger value.\n\n\n> ####### QUERY 3:\n> \n> select\n> cast(ref_1.ol_i_id as int4) as c0\n> from\n> public.stock as ref_0\n> left join public.order_line as ref_1\n> on (ref_1.ol_number is not null)\n> where ref_1.ol_number is null\n> \n> - Commit: 77cd477 (Enable parallel query by default.)\n> \n> - Our analysis: We believe that this regression is due to parallel queries being enabled by default. 
Surprisingly, we found that even on a larger TPC-C database (scale factor of 50, roughly 4GB), parallel scan is still slower than the non-parallel one in the old version, when the query is not returning any tuples.\n> \n> - Query Execution Plans\n> \n> [OLD version]\n> Nested Loop Anti Join (cost=0.00..18006.81 rows=1 width=4) (actual time=28.689..28.689 rows=0 loops=1)\n> -> Seq Scan on stock ref_0 (cost=0.00..5340.00 rows=100000 width=0) (actual time=0.028..15.722 rows=100000 loops=1)\n> -> Materialize (cost=0.00..9385.22 rows=300148 width=4) (actual time=0.000..0.000 rows=1 loops=100000)\n> -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=4) (actual time=0.004..0.004 rows=1 loops=1)\n> Filter: (ol_number IS NOT NULL)\n> Planning time: 0.198 ms\n> Execution time: 28.745 ms\n> \n> [NEW version]\n> Gather (cost=1000.00..15164.93 rows=1 width=4) (actual time=91.022..92.634 rows=0 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Nested Loop Anti Join (cost=0.00..14164.83 rows=1 width=4) (actual time=88.889..88.889 rows=0 loops=3)\n> -> Parallel Seq Scan on stock ref_0 (cost=0.00..4756.67 rows=41667 width=0) (actual time=0.025..7.331 rows=33333 loops=3)\n> -> Seq Scan on order_line ref_1 (cost=0.00..6711.48 rows=300148 width=4) (actual time=0.002..0.002 rows=1 loops=100000)\n> Filter: (ol_number IS NOT NULL)\n> Planning Time: 0.258 ms\n> Execution Time: 92.699 ms\n\nI'm not particularly bothered - this is a pretty small difference. Most\nof the time here is likely spent starting the workers, the cost of which\nis hard to predict/model accurately.\n\n\n> ####### QUERY 4:\n> \n> select\n> ref_0.s_dist_06 as c0\n> from\n> public.stock as ref_0\n> where (ref_0.s_w_id < cast(least(0, 1) as int8))\n> \n> - Commit: 5edc63b (Account for the effect of lossy pages when costing bitmap scans)\n> \n> - Our analysis: We believe that this regression has to do with two factors: 1) conditional expression (e.g., LEAST or NULLIF) are not reduced to constants unlike string functions (e.g., CHAR_LENGTH) 2) change in the cost estimation function for bitmap scan. Execution time grows by 3 orders of magnitude. We note that this regression is only observed on large databases (e.g., scale factor of 50).\n> \n> - Query Execution Plans\n> \n> [OLD version]\n> Bitmap Heap Scan on stock ref_0 (cost=31201.11..273173.13 rows=1666668 width=25) (actual time=0.005..0.005 rows=0 loops=1)\n> Recheck Cond: (s_w_id < (LEAST(0, 1))::bigint)\n> -> Bitmap Index Scan on stock_pkey (cost=0.00..30784.44 rows=1666668 width=0) (actual time=0.005..0.005 rows=0 loops=1)\n> Index Cond: (s_w_id < (LEAST(0, 1))::bigint)\n> Planning time: 0.228 ms\n> Execution time: 0.107 ms\n> \n> [NEW version]\n> Seq Scan on stock ref_0 (cost=0.00..304469.17 rows=1666613 width=25) (actual time=716.397..716.397 rows=0 loops=1)\n> Filter: (s_w_id < (LEAST(0, 1))::bigint)\n> Rows Removed by Filter: 5000000\n> Planning Time: 0.221 ms\n> Execution Time: 716.513 ms\n\n\nHm. The primary problem here is that the estimation both before and\nafter are really bad. So I don't think the commit you point out here is\nreally to blame. 
I'm not that bothered by the query not being great,\ngiven the weird construction with LEAST(), but we probably could fix\nthat pretty easily.\n\n\n> ####### QUERY 1:\n> \n> select\n> ref_0.o_d_id as c0\n> from\n> public.oorder as ref_0\n> where EXISTS (\n> select\n> 1\n> from\n> (select distinct\n> ref_0.o_entry_d as c0,\n> ref_1.c_credit as c1\n> from\n> public.customer as ref_1\n> where (false)\n> ) as subq_1\n> );\n> \n> - Commit: bf6c614 (Do execGrouping.c via expression eval machinery, take two)\n> \n> - Our analysis: We are not sure about the root cause of this regression. This might have to do with grouping logic.\n> \n> - Query Execution Plans\n> \n> [OLD version]\n> Seq Scan on oorder ref_0 (cost=0.00..77184338.54 rows=15022 width=4) (actual time=34.173..34.173 rows=0 loops=1)\n> Filter: (SubPlan 1)\n> Rows Removed by Filter: 30044\n> SubPlan 1\n> -> Subquery Scan on subq_1 (cost=2569.01..2569.03 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=30044)\n> -> HashAggregate (cost=2569.01..2569.02 rows=1 width=3) (actual time=0.000..0.000 rows=0 loops=30044)\n> Group Key: ref_0.o_entry_d, ref_1.c_credit\n> -> Result (cost=0.00..2569.00 rows=1 width=3) (actual time=0.000..0.000 rows=0 loops=30044)\n> One-Time Filter: false\n> -> Seq Scan on customer ref_1 (cost=0.00..2569.00 rows=1 width=3) (never executed)\n> Planning time: 0.325 ms\n> Execution time: 34.234 ms\n> \n> [NEW version]\n> Seq Scan on oorder ref_0 (cost=0.00..1152.32 rows=15022 width=4) (actual time=74.799..74.799 rows=0 loops=1)\n> Filter: (SubPlan 1)\n> Rows Removed by Filter: 30044\n> SubPlan 1\n> -> Subquery Scan on subq_1 (cost=0.00..0.02 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=30044)\n> -> HashAggregate (cost=0.00..0.01 rows=1 width=20) (actual time=0.000..0.000 rows=0 loops=30044)\n> Group Key: ref_0.o_entry_d, c_credit\n> -> Result (cost=0.00..0.00 rows=0 width=20) (actual time=0.000..0.000 rows=0 loops=30044)\n> One-Time Filter: false\n> Planning Time: 0.350 ms\n> Execution Time: 79.237 ms\n\nI think that might be fixed in the latest point release. I screwed up\nand made resets of tuple hash tables (and there's 30044 of those here)\nmore expensive. It's fixed in the latest minor release however.\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 15 Feb 2019 08:44:34 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2019-02-14 17:27:40 +0000, Jung, Jinho wrote:\n>> - Our analysis: We believe that this regression has to do with two factors: 1) conditional expression (e.g., LEAST or NULLIF) are not reduced to constants unlike string functions (e.g., CHAR_LENGTH) 2) change in the cost estimation function for bitmap scan. Execution time grows by 3 orders of magnitude. We note that this regression is only observed on large databases (e.g., scale factor of 50).\n\n> Hm. The primary problem here is that the estimation both before and\n> after are really bad. So I don't think the commit you point out here is\n> really to blame. I'm not that bothered by the query not being great,\n> given the weird construction with LEAST(), but we probably could fix\n> that pretty easily.\n\nWe already did:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master [6f19a8c41] 2018-12-30 13:42:04 -0500\n\n Teach eval_const_expressions to constant-fold LEAST/GREATEST expressions.\n \n Doing this requires an assumption that the invoked btree comparison\n function is immutable. We could check that explicitly, but in other\n places such as contain_mutable_functions we just assume that it's true,\n so we may as well do likewise here. (If the comparison function's\n behavior isn't immutable, the sort order in indexes built with it would\n be unstable, so it seems certainly wrong for it not to be so.)\n \n Vik Fearing\n \n Discussion: https://postgr.es/m/[email protected]\n\nBTW, const-folding NULLIF would not be a similarly tiny fix, because\nit would need to check for immutability of the underlying operator\n(since it is possibly a cross-type comparison, we can't get\naway with just assuming it's immutable). I'm not convinced that\ncase is worth carrying extra code for.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Feb 2019 17:37:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
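The effect of that const-folding change is easiest to see with EXPLAIN. A minimal sketch, assuming a hypothetical table t with an integer column a and up-to-date statistics (the names here are illustrative, not taken from the thread):

    EXPLAIN SELECT * FROM t WHERE a <= LEAST(10, 1000000);
    -- With the folding in place, the qual is displayed as (a <= 10) and the
    -- row estimate comes from the column statistics for that constant.
    -- Without it, the qual stays as (a <= LEAST(10, 1000000)) and the planner
    -- falls back to a default selectivity guess.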
{
"msg_contents": "Andres, Andrew, and Tom:\n\nThanks for your insightful comments! We conducted additional analysis based on your comments and would like to share the results. We would also like to get your feedback on a few design decisions to increase the utility of our performance regression reports.\n\n####### QUERY 2:\n\nAs Andres and Andrew correctly pointed out, this regression is not observed after we increase the work_mem parameter.\n\n * work_mem = 256MB\n * shared_buffers = 1024MB\n * temp_buffers = 64MB\n\nWe wanted to check if these settings look good to you. If so, we will use them for validating the regressions identified by sqlfuzz.\n\n####### QUERY 3:\n\nSurprisingly, the performance impact of this regression is more prominent on larger databases. We concur with Andrew in that this might be related to the lack of a Materialize node and mis-costing of the Nested Loop Anti-Join.\n\nWhen we increased the scale-factor of TPC-C to 300 (~30 GB), this query ran three times slower on v11 (24 seconds) in comparison to v9.5 (7 seconds). We also found more than 15 regressions related to the same commit and share a couple of them below.\n\nCommit: 77cd477 (Enable parallel query by default.)\n\nSummary: Execution Time (milliseconds)\n\n+----------------------+--------+---------+---------+-----------+\n| | scale1 | scale10 | scale50 | scale 300 |\n+----------------------+--------+---------+---------+-----------+\n| Case-3 (v9.5) | 28 | 248 | 1231 | 7265 |\n| Case-3 (v11) | 74 | 677 | 3345 | 24581 |\n+----------------------+--------+---------+---------+-----------+\n| Case-3A (v9.5) | 88 | 937 | 4721 | 27241 |\n| Case-3A (v11) | 288 | 2822 | 13838 | 85081 |\n+----------------------+--------+---------+---------+-----------+\n| Case-3B (v9.5) | 101 | 934 | 4824 | 29363 |\n| Case-3B (v11) | 200 | 2331 | 12327 | 74110 |\n+----------------------+--------+---------+---------+-----------+\n\n\n###### QUERY 3A:\n\nselect\n ref_0.ol_delivery_d as c1\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_1.i_im_id as c0\n from\n public.item as ref_1\n where ref_0.ol_d_id <= ref_1.i_im_id\n )\n\n Execution plan:\n\n [old]\nNested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8) (actual time=0.034..24981.895 rows=90017507 loops=1)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504 width=12) (actual time=0.022..7145.811 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual time=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.006..0.006 rows=1 loops=1)\nPlanning time: 0.290 ms\nExecution time: 27241.239 ms\n\n [new]\nGather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual time=0.265..82355.289 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431 width=8) (actual time=0.033..68529.259 rows=30005836 loops=3)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93 rows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.001..0.001 rows=1 loops=90017507)\nPlanning Time: 0.319 ms\nExecution Time: 85081.158 ms\n\n###### QUERY 3B:\n\nselect\n ref_0.ol_i_id as c0\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_0.ol_delivery_d as c0\n from\n 
public.order_line as ref_1\n where ref_1.ol_d_id <= cast(nullif(ref_1.ol_o_id, ref_0.ol_i_id) as int4))\n\n Execution plan:\n\n[old]\nNested Loop Semi Join (cost=0.00..115638730740936.53 rows=30005835 width=4) (actual time=0.017..27009.302 rows=90017507 loops=1)\n Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n Rows Removed by Join Filter: 11557\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504 width=4) (actual time=0.009..7199.540 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2813221.56 rows=90017504 width=8) (actual time=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on order_line ref_1 (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.002 rows=14 loops=1)\nPlanning time: 0.252 ms\nExecution time: 29363.737 ms\n\n[new]\nGather (cost=1000.00..84060490326155.39 rows=30005835 width=4) (actual time=0.272..71712.491 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..84060487324571.89 rows=12502431 width=4) (actual time=0.046..60153.472 rows=30005836 loops=3)\n Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n Rows Removed by Join Filter: 1717\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93 rows=37507293 width=4) (actual time=0.023..2819.361 rows=30005836 loops=3)\n -> Seq Scan on order_line ref_1 (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.001 rows=1 loops=90017507)\nPlanning Time: 0.334 ms\nExecution Time: 74110.942 ms\n\n####### QUERY 4:\n\nAs Tom pointed out, this regression is not present in DEV head. We will not report regressions related to NULLIF in the future.\n\n####### QUERY 1:\n\nThis regression is also not present in DEV head. We will validate our regressions on DEV head in the future report.\n\nBest,\n\nJinho Jung\n\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Saturday, February 16, 2019 5:37:49 PM\nTo: Andres Freund\nCc: Jung, Jinho; Jeff Janes; [email protected]\nSubject: Re: Performance regressions found using sqlfuzz\n\nAndres Freund <[email protected]> writes:\n> On 2019-02-14 17:27:40 +0000, Jung, Jinho wrote:\n>> - Our analysis: We believe that this regression has to do with two factors: 1) conditional expression (e.g., LEAST or NULLIF) are not reduced to constants unlike string functions (e.g., CHAR_LENGTH) 2) change in the cost estimation function for bitmap scan. Execution time grows by 3 orders of magnitude. We note that this regression is only observed on large databases (e.g., scale factor of 50).\n\n> Hm. The primary problem here is that the estimation both before and\n> after are really bad. So I don't think the commit you point out here is\n> really to blame. I'm not that bothered by the query not being great,\n> given the weird construction with LEAST(), but we probably could fix\n> that pretty easily.\n\nWe already did:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master [6f19a8c41] 2018-12-30 13:42:04 -0500\n\n Teach eval_const_expressions to constant-fold LEAST/GREATEST expressions.\n\n Doing this requires an assumption that the invoked btree comparison\n function is immutable. We could check that explicitly, but in other\n places such as contain_mutable_functions we just assume that it's true,\n so we may as well do likewise here. 
(If the comparison function's\n behavior isn't immutable, the sort order in indexes built with it would\n be unstable, so it seems certainly wrong for it not to be so.)\n\n Vik Fearing\n\n Discussion: https://postgr.es/m/[email protected]\n\nBTW, const-folding NULLIF would not be a similarly tiny fix, because\nit would need to check for immutability of the underlying operator\n(since it is possibly a cross-type comparison, we can't get\naway with just assuming it's immutable). I'm not convinced that\ncase is worth carrying extra code for.\n\n            regards, tom lane",
"msg_date": "Mon, 18 Feb 2019 21:08:52 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
},
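For anyone reproducing the comparison above, one way to apply the settings being proposed; a sketch (ALTER SYSTEM exists on 9.4 and later, and shared_buffers only takes effect after a server restart):

    ALTER SYSTEM SET work_mem = '256MB';
    ALTER SYSTEM SET shared_buffers = '1024MB';   -- needs a restart
    ALTER SYSTEM SET temp_buffers = '64MB';
    SELECT pg_reload_conf();                      -- picks up the non-restart settings

    -- To separate the effect of commit 77cd477 (parallel query on by default)
    -- from other planner changes, force a serial plan on the newer server:
    SET max_parallel_workers_per_gather = 0;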
{
"msg_contents": "Hi Andres:\n\nCould you please share your thoughts on QUERY 3?\n\nThe performance impact of this regression increases *linearly* on larger databases. We concur with Andrew in that this is related to the lack of a Materialize node and mis-costing of the Nested Loop Anti-Join.\n\nWe found more than 20 regressions related to this commit. We have shared two illustrative examples (QUERIES 3A and 3B) below.\n\n- Commit: 77cd477 (Enable parallel query by default.)\n\n- Summary: Execution Time (milliseconds)\n\nWhen we increased the scale-factor of TPC-C to 300 (~30 GB), this query ran three times slower on v11 (24 seconds) in comparison to v9.5 (7 seconds). We also found more than 15 regressions related to the same commit and share a couple of them below.\n\n+-----------------------+--------+---------+---------+-----------+\n| | scale1 | scale10 | scale50 | scale 300 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3 (v9.5) | 28 | 248 | 1231 | 7265 |\n| Query 3 (v11) | 74 | 677 | 3345 | 24581 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3A (v9.5) | 88 | 937 | 4721 | 27241 |\n| Query 3A (v11) | 288 | 2822 | 13838 | 85081 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3B (v9.5) | 101 | 934 | 4824 | 29363 |\n| Query 3B (v11) | 200 | 2331 | 12327 | 74110 |\n+-----------------------+--------+---------+---------+-----------+\n\n\n###### QUERY 3:\n\nselect\n cast(ref_1.ol_i_id as int4) as c0\nfrom\n public.stock as ref_0\n left join public.order_line as ref_1\n on (ref_1.ol_number is not null)\nwhere ref_1.ol_number is null\n\n\n###### QUERY 3A:\n\nselect\n ref_0.ol_delivery_d as c1\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_1.i_im_id as c0\n from\n public.item as ref_1\n where ref_0.ol_d_id <= ref_1.i_im_id\n )\n\n Execution plan:\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8) (actual time=0.034..24981.895 rows=90017507 loops=1)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504 width=12) (actual time=0.022..7145.811 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual time=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.006..0.006 rows=1 loops=1)\n\nPlanning time: 0.290 ms\nExecution time: 27241.239 ms\n\n[NEW version]\nGather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual time=0.265..82355.289 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431 width=8) (actual time=0.033..68529.259 rows=30005836 loops=3)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93 rows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.319 ms\nExecution Time: 85081.158 ms\n\n\n###### QUERY 3B:\n\n\nselect\n ref_0.ol_i_id as c0\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_0.ol_delivery_d as c0\n from\n public.order_line as ref_1\n where ref_1.ol_d_id <= cast(nullif(ref_1.ol_o_id, ref_0.ol_i_id) as int4))\n\nExecution plan:\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..115638730740936.53 rows=30005835 width=4) (actual time=0.017..27009.302 
rows=90017507 loops=1)\n  Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n  Rows Removed by Join Filter: 11557\n  -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504 width=4) (actual time=0.009..7199.540 rows=90017507 loops=1)\n  -> Materialize (cost=0.00..2813221.56 rows=90017504 width=8) (actual time=0.000..0.000 rows=1 loops=90017507)\n        -> Seq Scan on order_line ref_1 (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.002 rows=14 loops=1)\n\nPlanning time: 0.252 ms\nExecution time: 29363.737 ms\n\n[NEW version]\nGather (cost=1000.00..84060490326155.39 rows=30005835 width=4) (actual time=0.272..71712.491 rows=90017507 loops=1)\n  Workers Planned: 2\n  Workers Launched: 2\n  -> Nested Loop Semi Join (cost=0.00..84060487324571.89 rows=12502431 width=4) (actual time=0.046..60153.472 rows=30005836 loops=3)\n        Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n        Rows Removed by Join Filter: 1717\n        -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93 rows=37507293 width=4) (actual time=0.023..2819.361 rows=30005836 loops=3)\n        -> Seq Scan on order_line ref_1 (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.334 ms\nExecution Time: 74110.942 ms",
"msg_date": "Thu, 28 Feb 2019 16:43:26 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance regressions found using sqlfuzz"
}
] |
[
{
"msg_contents": "I stumbled upon this question:\n\n https://dba.stackexchange.com/questions/229413\n\nin a nutshell: the bloom index is not used with the example from the manual. \n\nThe bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my Windows laptop). \n\nIf parallel execution is disabled, then the bloom index is only used if the random_page_cost is lower than 4. \n\nThis does not use the index:\n\n set random_page_cost = 4; \n set max_parallel_workers_per_gather=0;\n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nThis uses the bloom index:\n\n set random_page_cost = 3.5; \n set max_parallel_workers_per_gather=0;\n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nAnd this uses the index also: \n\n set random_page_cost = 1; \n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nThis is the plan with when the index is used (either through \"enable_seqscan = off\" or \"random_page_cost = 1\")\n\nBitmap Heap Scan on tbloom (cost=138436.69..138440.70 rows=1 width=24) (actual time=42.444..42.444 rows=0 loops=1) \n Recheck Cond: ((i2 = 898732) AND (i5 = 123451)) \n Rows Removed by Index Recheck: 2400 \n Heap Blocks: exact=2365 \n Buffers: shared hit=21973 \n -> Bitmap Index Scan on bloomidx (cost=0.00..138436.69 rows=1 width=0) (actual time=40.756..40.756 rows=2400 loops=1)\n Index Cond: ((i2 = 898732) AND (i5 = 123451)) \n Buffers: shared hit=19608 \nPlanning Time: 0.075 ms \nExecution Time: 42.531 ms \n\nAnd this is the plan when everything left at default settings:\n\nSeq Scan on tbloom (cost=0.00..133695.80 rows=1 width=24) (actual time=1220.116..1220.116 rows=0 loops=1)\n Filter: ((i2 = 898732) AND (i5 = 123451)) \n Rows Removed by Filter: 10000000 \n Buffers: shared hit=4697 read=58998 \n I/O Timings: read=354.670 \nPlanning Time: 0.075 ms \nExecution Time: 1220.144 ms \n\nCan this be considered a bug in the cost model of the bloom index implementation? \nOr is it expected that this is only used if random access is really cheap? \n\nThomas\n\n\n\n",
"msg_date": "Tue, 12 Feb 2019 16:08:25 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bloom index cost model seems to be wrong"
},
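For context, the tbloom table and bloomidx index in the plans above are essentially the example from the bloom documentation; roughly (a sketch from memory, so the exact row count and column list may differ slightly from what was actually used):

    CREATE EXTENSION bloom;

    CREATE TABLE tbloom AS
      SELECT
        (random() * 1000000)::int AS i1,
        (random() * 1000000)::int AS i2,
        (random() * 1000000)::int AS i3,
        (random() * 1000000)::int AS i4,
        (random() * 1000000)::int AS i5,
        (random() * 1000000)::int AS i6
      FROM generate_series(1, 10000000);

    CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3, i4, i5, i6);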
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> The bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my Windows laptop). \n\nHm. blcostestimate is using the default cost calculation, except for\n\n\t/* We have to visit all index tuples anyway */\n\tcosts.numIndexTuples = index->tuples;\n\nwhich essentially tells genericcostestimate to assume that every index\ntuple will be visited. This obviously is going to increase the cost\nestimate; maybe there's something wrong with that?\n\nI notice that the bloom regression test script is only testing queries\nwhere it forces the choice of plan type, so it really doesn't prove\nanything about whether the cost estimates are sane.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2019 10:41:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <[email protected]> wrote:\n\n> Thomas Kellerer <[email protected]> writes:\n> > The bloom index is only used if either Seq Scan is disabled or if the\n> random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my\n> Windows laptop).\n>\n> Hm. blcostestimate is using the default cost calculation, except for\n>\n> /* We have to visit all index tuples anyway */\n> costs.numIndexTuples = index->tuples;\n>\n> which essentially tells genericcostestimate to assume that every index\n> tuple will be visited. This obviously is going to increase the cost\n> estimate; maybe there's something wrong with that?\n>\n\nI assumed (without investigating yet) that genericcostestimate is applying\na cpu_operator_cost (or a few of them) on each index tuple, while the\npremise of a bloom index is that you do very fast bit-fiddling, not more\nexpense SQL operators, for each tuple and then do the recheck only on what\nsurvives to the table tuple part.\n\nCheers,\n\nJeff\n\nOn Tue, Feb 12, 2019 at 10:42 AM Tom Lane <[email protected]> wrote:Thomas Kellerer <[email protected]> writes:\n> The bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my Windows laptop). \n\nHm. blcostestimate is using the default cost calculation, except for\n\n /* We have to visit all index tuples anyway */\n costs.numIndexTuples = index->tuples;\n\nwhich essentially tells genericcostestimate to assume that every index\ntuple will be visited. This obviously is going to increase the cost\nestimate; maybe there's something wrong with that?I assumed (without investigating yet) that genericcostestimate is applying a cpu_operator_cost (or a few of them) on each index tuple, while the premise of a bloom index is that you do very fast bit-fiddling, not more expense SQL operators, for each tuple and then do the recheck only on what survives to the table tuple part.Cheers,Jeff",
"msg_date": "Tue, 12 Feb 2019 11:58:08 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
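The cost parameters in question can be inspected, and varied per session, to see which charge is driving the estimate; a sketch (treat the SET as an experiment, not a recommendation):

    SELECT name, setting FROM pg_settings
    WHERE name IN ('seq_page_cost', 'random_page_cost',
                   'cpu_index_tuple_cost', 'cpu_operator_cost');

    -- Does removing the per-tuple operator charge make the planner pick bloomidx?
    SET cpu_operator_cost = 0;
    EXPLAIN SELECT * FROM tbloom WHERE i2 = 898732 AND i5 = 123451;
    RESET cpu_operator_cost;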
{
"msg_contents": "On Tue, Feb 12, 2019 at 11:58 AM Jeff Janes <[email protected]> wrote:\n\n>\n> On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <[email protected]> wrote:\n>\n>>\n>> Hm. blcostestimate is using the default cost calculation, except for\n>>\n>> /* We have to visit all index tuples anyway */\n>> costs.numIndexTuples = index->tuples;\n>>\n>> which essentially tells genericcostestimate to assume that every index\n>> tuple will be visited. This obviously is going to increase the cost\n>> estimate; maybe there's something wrong with that?\n>>\n>\n> I assumed (without investigating yet) that genericcostestimate is applying\n> a cpu_operator_cost (or a few of them) on each index tuple, while the\n> premise of a bloom index is that you do very fast bit-fiddling, not more\n> expense SQL operators, for each tuple and then do the recheck only on what\n> survives to the table tuple part.\n>\n\nIn order for bloom (or any other users of CREATE ACCESS METHOD, if there\nare any) to have a fighting chance to do better, I think many of selfuncs.c\ncurrently private functions would have to be declared in some header file,\nperhaps utils/selfuncs.h. But that then requires a cascade of other\ninclusions. Perhaps that is why it was not done.\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Feb 12, 2019 at 11:58 AM Jeff Janes <[email protected]> wrote:On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <[email protected]> wrote:\nHm. blcostestimate is using the default cost calculation, except for\n\n /* We have to visit all index tuples anyway */\n costs.numIndexTuples = index->tuples;\n\nwhich essentially tells genericcostestimate to assume that every index\ntuple will be visited. This obviously is going to increase the cost\nestimate; maybe there's something wrong with that?I assumed (without investigating yet) that genericcostestimate is applying a cpu_operator_cost (or a few of them) on each index tuple, while the premise of a bloom index is that you do very fast bit-fiddling, not more expense SQL operators, for each tuple and then do the recheck only on what survives to the table tuple part.In order for bloom (or any other users of CREATE ACCESS METHOD, if there are any) to have a fighting chance to do better, I think many of selfuncs.c currently private functions would have to be declared in some header file, perhaps utils/selfuncs.h. But that then requires a cascade of other inclusions. Perhaps that is why it was not done.Cheers,Jeff",
"msg_date": "Tue, 12 Feb 2019 14:56:40 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> In order for bloom (or any other users of CREATE ACCESS METHOD, if there\n> are any) to have a fighting chance to do better, I think many of selfuncs.c\n> currently private functions would have to be declared in some header file,\n> perhaps utils/selfuncs.h. But that then requires a cascade of other\n> inclusions. Perhaps that is why it was not done.\n\nI'm just in the midst of refactoring that stuff, so if you have\nsuggestions, let's hear 'em.\n\nIt's possible that a good cost model for bloom is so far outside\ngenericcostestimate's ideas that trying to use it is not a good\nidea anyway.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2019 16:17:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 4:17 PM Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > In order for bloom (or any other users of CREATE ACCESS METHOD, if there\n> > are any) to have a fighting chance to do better, I think many of\n> selfuncs.c\n> > currently private functions would have to be declared in some header\n> file,\n> > perhaps utils/selfuncs.h. But that then requires a cascade of other\n> > inclusions. Perhaps that is why it was not done.\n>\n> I'm just in the midst of refactoring that stuff, so if you have\n> suggestions, let's hear 'em.\n>\n\nThe goal would be that I can copy the entire definition of\ngenericcostestimate into blcost.c, change the function's name, and get it\nto compile. I don't know the correct way accomplish that. Maybe\nutils/selfuncs.h can be expanded to work, or if there should be a new\nheader file like \"utils/index_am_cost.h\"\n\nWhat I've done for now is:\n\n#include \"../../src/backend/utils/adt/selfuncs.c\"\n\nwhich I assume is not acceptable as a real solution.\n\n\nIt's possible that a good cost model for bloom is so far outside\n> genericcostestimate's ideas that trying to use it is not a good\n> idea anyway.\n>\n\nI think that might be the case. I don't know what the right answer would\nlook like, but I think it will likely end up needing to access everything\nthat genericcostestimate currently needs to access. Or if bloom doesn't,\nsome other extension implementing an ACCESS METHOD will.\n\nCheers,\n\nJeff\n\nOn Tue, Feb 12, 2019 at 4:17 PM Tom Lane <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> In order for bloom (or any other users of CREATE ACCESS METHOD, if there\n> are any) to have a fighting chance to do better, I think many of selfuncs.c\n> currently private functions would have to be declared in some header file,\n> perhaps utils/selfuncs.h. But that then requires a cascade of other\n> inclusions. Perhaps that is why it was not done.\n\nI'm just in the midst of refactoring that stuff, so if you have\nsuggestions, let's hear 'em.The goal would be that I can copy the entire definition of genericcostestimate into blcost.c, change the function's name, and get it to compile. I don't know the correct way accomplish that. Maybe utils/selfuncs.h can be expanded to work, or if there should be a new header file like \"utils/index_am_cost.h\"What I've done for now is: #include \"../../src/backend/utils/adt/selfuncs.c\"which I assume is not acceptable as a real solution.It's possible that a good cost model for bloom is so far outside\ngenericcostestimate's ideas that trying to use it is not a good\nidea anyway.I think that might be the case. I don't know what the right answer would look like, but I think it will likely end up needing to access everything that genericcostestimate currently needs to access. Or if bloom doesn't, some other extension implementing an ACCESS METHOD will.Cheers,Jeff",
"msg_date": "Tue, 12 Feb 2019 17:38:28 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "I've moved this to the hackers list, and added Teodor and Alexander of the\nbloom extension, as I would like to hear their opinions on the costing.\n\nOn Tue, Feb 12, 2019 at 4:17 PM Tom Lane <[email protected]> wrote:\n\n>\n> It's possible that a good cost model for bloom is so far outside\n> genericcostestimate's ideas that trying to use it is not a good\n> idea anyway.\n>\n>\nI'm attaching a crude patch based over your refactored header files.\n\nI just copied genericcostestimate into bloom, and made a few changes.\n\nI think one change should be conceptually uncontroversial, which is to\nchange the IO cost from random_page_cost to seq_page_cost. Bloom indexes\nare always scanned in their entirety.\n\nThe other change is not to charge any cpu_operator_cost per tuple. Bloom\ndoesn't go through the ADT machinery, it just does very fast\nbit-twiddling. I could assign a fraction of a cpu_operator_cost,\nmultiplied by bloomLength rather than list_length(indexQuals), to this\nbit-twiddling. But I think that that fraction would need to be quite close\nto zero, so I just removed it.\n\nWhen everything is in memory, Bloom still gets way overcharged for CPU\nusage even without the cpu_operator_cost. This patch doesn't do anything\nabout that. I don't know if the default value of cpu_index_tuple_cost is\nway too high, or if Bloom just shouldn't be charging the full value of it\nin the first place given the way it accesses index tuples. For comparison,\nwhen using a Btree as an equality filter on a non-leading column, most of\nthe time goes to index_getattr. Should the time spent there be loaded on\ncpu_operator_cost or onto cpu_index_tuple_cost? It is not strictly spent\nin the operator, but fetching the parameter to be used in an operator is\nmore closely a per-operator problem than a per-tuple problem.\n\nMost of genericcostestimate still applies. For example, ScalarArrayOpExpr\nhandling, and Mackert-Lohman formula. It is a shame that all of that has\nto be copied.\n\nThere are some other parts of genericcostestimate that probably don't apply\n(OrderBy, for example) but I left them untouched for now to make it easier\nto reconcile changes to the real genericcostestimate with the copy.\n\nFor ScalarArrayOpExpr, it would be nice to scan the index once and add to\nthe bitmap all branches of the =ANY in one index scan, but I don't see the\nmachinery to do that. It would be a matter for another patch anyway, other\nthan the way it would change the cost estimate.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 24 Feb 2019 11:09:50 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Sun, Feb 24, 2019 at 11:09 AM Jeff Janes <[email protected]> wrote:\n\n> I've moved this to the hackers list, and added Teodor and Alexander of the\n> bloom extension, as I would like to hear their opinions on the costing.\n>\n\nMy previous patch had accidentally included a couple lines of a different\nthing I was working on (memory leak, now-committed), so this patch removes\nthat diff.\n\nI'm adding it to the commitfest targeting v13. I'm more interested in\nfeedback on the conceptual issues rather than stylistic ones, as I would\nprobably merge the two functions together before proposing something to\nactually be committed.\n\nShould we be trying to estimate the false positive rate and charging\ncpu_tuple_cost and cpu_operator_cost the IO costs for visiting the table to\nrecheck and reject those? I don't think other index types do that, and I'm\ninclined to think the burden should be on the user not to create silly\nindexes that produce an overwhelming number of false positives.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 28 Feb 2019 13:11:16 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> Should we be trying to estimate the false positive rate and charging\n> cpu_tuple_cost and cpu_operator_cost the IO costs for visiting the table to\n> recheck and reject those? I don't think other index types do that, and I'm\n> inclined to think the burden should be on the user not to create silly\n> indexes that produce an overwhelming number of false positives.\n\nHeap-access costs are added on in costsize.c, not in the index\ncost estimator. I don't remember at the moment whether there's\nany explicit accounting for lossy indexes (i.e. false positives).\nUp to now, there haven't been cases where we could estimate the\nfalse-positive rate with any accuracy, so we may just be ignoring\nthe effect. But if we decide to account for it, I'd rather have\ncostsize.c continue to add on the actual cost, perhaps based on\na false-positive-rate fraction returned by the index estimator.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 13:30:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Fri, Mar 1, 2019 at 7:11 AM Jeff Janes <[email protected]> wrote:\n> I'm adding it to the commitfest targeting v13. I'm more interested in feedback on the conceptual issues rather than stylistic ones, as I would probably merge the two functions together before proposing something to actually be committed.\n\n From the trivialities department, this patch shows up as a CI failure\nwith -Werror, because there is no declaration for\ngenericcostestimate2(). I realise that's just a temporary name in a\nWIP patch anyway so this isn't useful feedback, but for the benefit of\nanyone going through CI failures in bulk looking for things to\ncomplain about: this isn't a real one.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:57:37 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "It's not clear to me what the next action should be on this patch. I\nthink Jeff got some feedback from Tom, but was that enough to expect a\nnew version to be posted? That was in February; should we now (in late\nSeptember) close this as Returned with Feedback?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:12:26 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 05:12:26PM -0300, Alvaro Herrera wrote:\n> It's not clear to me what the next action should be on this patch. I\n> think Jeff got some feedback from Tom, but was that enough to expect a\n> new version to be posted? That was in February; should we now (in late\n> September) close this as Returned with Feedback?\n\nThat sounds rather right to me.\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 16:00:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
}
] |
[
{
"msg_contents": "Hey,\nI'm trying to understand the logic behind all of these so I would be happy\nif you can confirm what I understood or correct me if I'm wrong :\n-The commit command writes all the data in the wal_buffers is written into\nthe wal files.\n-Checkpoints writes the data itself (blocks that were changed) into the\ndata files in the base dir. Just to make sure, as part of the checkpoint,\nit needs to read the wal files that were generated since the last\ncheckpoint right ?\n-max_wal_size is a soft limit for the total size of all the wals that were\ngenerated. When the total_size of the pg_xlog dir reaches max_wal_size(can\nincrease it because of peaks and some other issues..) the db will force a\ncheckpoint to write the changes from the wals into the disk and then it\nwill start recycling old wals (all of them ? or only those who were written\n?).\n-wal_keep_segments is meant to help standbys that didn't receive the wals,\nso it allow us to keep wal_keep_segments wals in our pg_xlog dir.\n- in case we have a collision between wal_keep_segments and max_wal_size\nthe wal_keep_segments will be the one that be used right ?. For example,\nlets say my wal_size is default(16MB). I set max_wal_size to 1GB which is\n1600/16=100 wals. However, my wal_keep_segments is set to 300. It means\nthat when the total_size of the pg_xlog directory will reach 1GB,\ncheckpoint will be forced but old wal files wont be recycled/deleted ?\n\nThanks.\n\nHey,I'm trying to understand the logic behind all of these so I would be happy if you can confirm what I understood or correct me if I'm wrong :-The commit command writes all the data in the wal_buffers is written into the wal files.-Checkpoints writes the data itself (blocks that were changed) into the data files in the base dir. Just to make sure, as part of the checkpoint, it needs to read the wal files that were generated since the last checkpoint right ?-max_wal_size is a soft limit for the total size of all the wals that were generated. When the total_size of the pg_xlog dir reaches max_wal_size(can increase it because of peaks and some other issues..) the db will force a checkpoint to write the changes from the wals into the disk and then it will start recycling old wals (all of them ? or only those who were written ?).-wal_keep_segments is meant to help standbys that didn't receive the wals, so it allow us to keep wal_keep_segments wals in our pg_xlog dir.- in case we have a collision between wal_keep_segments and max_wal_size the wal_keep_segments will be the one that be used right ?. For example, lets say my wal_size is default(16MB). I set max_wal_size to 1GB which is 1600/16=100 wals. However, my wal_keep_segments is set to 300. It means that when the total_size of the pg_xlog directory will reach 1GB, checkpoint will be forced but old wal files wont be recycled/deleted ?Thanks.",
"msg_date": "Wed, 13 Feb 2019 10:34:17 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "understanding max_wal_size,wal_keep_segments and checkpoints"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> I'm trying to understand the logic behind all of these so I would be happy\n> if you can confirm what I understood or correct me if I'm wrong :\n> -The commit command writes all the data in the wal_buffers is written into the wal files.\n\nAll the transaction log for the transaction has to be written to file, and the\nfiles have to be sync'ed to storage before COMMIT completes.\nThat way the transaction can be replayed in case of a crash.\n\n> -Checkpoints writes the data itself (blocks that were changed) into the data files\n> in the base dir. Just to make sure, as part of the checkpoint, it needs to read the\n> wal files that were generated since the last checkpoint right ?\n\nNo WAL file has to be read during a checkpoint.\n\nWhen data in the database ar modified, they are modified in the \"shared buffers\"\nRAM cache. Later, these \"direty blocks\" are written to disk by the background\nwriter process or the checkpoint.\n\n> -max_wal_size is a soft limit for the total size of all the wals that were generated.\n> When the total_size of the pg_xlog dir reaches max_wal_size(can increase it because\n> of peaks and some other issues..) the db will force a checkpoint to write the changes\n> from the wals into the disk and then it will start recycling old wals (all of them ?\n> or only those who were written ?).\n\nA checkpoint is forced when more than max_wal_size WAL has been written since the\nlast checkpoint.\n\nAfter a checkpoint, unneeded WAL segments are either recycled (renamed and reused)\nor deleted (if max_wal_size has been exceeded).\n\nWAL segments are unneeded if they are older than the checkpoint, have been archived\n(if archiving is configured), don't need to be kept around because of wal_keep_segments\nand are older than the position of any active replication slot.\n\n> -wal_keep_segments is meant to help standbys that didn't receive the wals, so it allow\n> us to keep wal_keep_segments wals in our pg_xlog dir.\n\nYes.\n\n> - in case we have a collision between wal_keep_segments and max_wal_size the\n> wal_keep_segments will be the one that be used right ?. For example, lets say my\n> wal_size is default(16MB). I set max_wal_size to 1GB which is 1600/16=100 wals.\n> However, my wal_keep_segments is set to 300. It means that when the total_size of\n> the pg_xlog directory will reach 1GB, checkpoint will be forced but old wal files\n> wont be recycled/deleted ?\n\nCheckpoints are not forced by the size of pg_xlog, but by the amount of WAL\ncreated since the last checkpoint.\n\nThe last wal_keep_segments WAL segments are always kept around, even if that\nexceeds max_wal_size.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 13 Feb 2019 10:43:36 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding max_wal_size,wal_keep_segments and checkpoints"
},
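A quick way to see the values being described here on a given server, plus any replication slots that might be holding WAL back; a sketch (assumes 9.5 or later, where max_wal_size exists):

    SELECT name, setting, unit FROM pg_settings
    WHERE name IN ('max_wal_size', 'wal_keep_segments',
                   'checkpoint_timeout', 'checkpoint_completion_target');

    -- WAL needed by any slot (from its restart_lsn onwards) is never removed:
    SELECT slot_name, active, restart_lsn FROM pg_replication_slots;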
{
"msg_contents": "> > I'm trying to understand the logic behind all of these so I would be\n> happy\n> > if you can confirm what I understood or correct me if I'm wrong :\n> > -The commit command writes all the data in the wal_buffers is written\n> into the wal files.\n>\n> All the transaction log for the transaction has to be written to file, and\n> the\n> files have to be sync'ed to storage before COMMIT completes.\n> That way the transaction can be replayed in case of a crash.\n>\n> *Yeah, so basically if we open a transaction and we do some insert\nqueries, until the transaction is commited the changes**(the wal data and\nnot the blocked that are chaned)** are kept in the wal buffers ? . When\nthe user commits the transaction, the wal buffer(only the transaction log\nof that specific transaction ?) is written to wal files. When the database\ncompletes saving the content of the transaction log into the wal files, the\ncommit completes. Did I got it right ?*\n\n> -Checkpoints writes the data itself (blocks that were changed) into the\n> data files\n> > in the base dir. Just to make sure, as part of the checkpoint, it needs\n> to read the\n> > wal files that were generated since the last checkpoint right ?\n>\n> No WAL file has to be read during a checkpoint.\n>\n> When data in the database ar modified, they are modified in the \"shared\n> buffers\"\n> RAM cache. Later, these \"direty blocks\" are written to disk by the\n> background\n> writer process or the checkpoint.\n>\n\n*What I meant, when checkpoint occurs, it reads the wal files created since\nlast checkpoint, and does those changing on the data blocks on the disk ? I\nwas not talking about dirty blocks from shared_buffer.*\n\n>\n> > -max_wal_size is a soft limit for the total size of all the wals that\n> were generated.\n> > When the total_size of the pg_xlog dir reaches max_wal_size(can\n> increase it because\n> > of peaks and some other issues..) the db will force a checkpoint to\n> write the changes\n> > from the wals into the disk and then it will start recycling old wals\n> (all of them ?\n> > or only those who were written ?).\n>\n> A checkpoint is forced when more than max_wal_size WAL has been written\n> since the\n> last checkpoint.\n>\n> After a checkpoint, unneeded WAL segments are either recycled (renamed and\n> reused)\n> or deleted (if max_wal_size has been exceeded).\n>\n> WAL segments are unneeded if they are older than the checkpoint, have been\n> archived\n> (if archiving is configured), don't need to be kept around because of\n> wal_keep_segments\n> and are older than the position of any active replication slot.\n> *so I'f I want have replication slot and wal_keep_segment is 0 after the\n> archiving of the wal it should be recycled/deleted ?*\n>\n\n\n> > -wal_keep_segments is meant to help standbys that didn't receive the\n> wals, so it allow\n> > us to keep wal_keep_segments wals in our pg_xlog dir.\n>\n> Yes.\n>\n> > - in case we have a collision between wal_keep_segments and max_wal_size\n> the\n> > wal_keep_segments will be the one that be used right ?. For example,\n> lets say my\n> > wal_size is default(16MB). I set max_wal_size to 1GB which is\n> 1600/16=100 wals.\n> > However, my wal_keep_segments is set to 300. 
It means that when the\n> total_size of\n> > the pg_xlog directory will reach 1GB, checkpoint will be forced but old\n> wal files\n> > wont be recycled/deleted ?\n>\n> Checkpoints are not forced by the size of pg_xlog, but by the amount of WAL\n> created since the last checkpoint.\n>\n\n> The last wal_keep_segments WAL segments are always kept around, even if\n> that\n> exceeds max_wal_size.\n> *So basically having wal_keep_segments and replication slot configured\n> together is a mistake right ? In that case, if you have both configured,\n> and you set wal_keep_segments to 0, the db should delete all the unused\n> wals ?*\n>\n\n\n\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n> I'm trying to understand the logic behind all of these so I would be happy> if you can confirm what I understood or correct me if I'm wrong :> -The commit command writes all the data in the wal_buffers is written into the wal files.\nAll the transaction log for the transaction has to be written to file, and thefiles have to be sync'ed to storage before COMMIT completes.That way the transaction can be replayed in case of a crash.\n Yeah, so basically if we open a transaction and we do some insert queries, until the transaction is commited the changes(the wal data and not the blocked that are chaned) are kept in the wal buffers ? . When the user commits the transaction, the wal buffer(only the transaction log of that specific transaction ?) is written to wal files. When the database completes saving the content of the transaction log into the wal files, the commit completes. Did I got it right ?> -Checkpoints writes the data itself (blocks that were changed) into the data files> in the base dir. Just to make sure, as part of the checkpoint, it needs to read the> wal files that were generated since the last checkpoint right ?\nNo WAL file has to be read during a checkpoint.\nWhen data in the database ar modified, they are modified in the \"shared buffers\"RAM cache. Later, these \"direty blocks\" are written to disk by the backgroundwriter process or the checkpoint.What I meant, when checkpoint occurs, it reads the wal files created since last checkpoint, and does those changing on the data blocks on the disk ? I was not talking about dirty blocks from shared_buffer.\n> -max_wal_size is a soft limit for the total size of all the wals that were generated.> When the total_size of the pg_xlog dir reaches max_wal_size(can increase it because> of peaks and some other issues..) the db will force a checkpoint to write the changes> from the wals into the disk and then it will start recycling old wals (all of them ?> or only those who were written ?).\nA checkpoint is forced when more than max_wal_size WAL has been written since thelast checkpoint.\nAfter a checkpoint, unneeded WAL segments are either recycled (renamed and reused)or deleted (if max_wal_size has been exceeded).\nWAL segments are unneeded if they are older than the checkpoint, have been archived(if archiving is configured), don't need to be kept around because of wal_keep_segmentsand are older than the position of any active replication slot.\nso I'f I want have replication slot and wal_keep_segment is 0 after the archiving of the wal it should be recycled/deleted ? 
> -wal_keep_segments is meant to help standbys that didn't receive the wals, so it allow> us to keep wal_keep_segments wals in our pg_xlog dir.\nYes.\n> - in case we have a collision between wal_keep_segments and max_wal_size the> wal_keep_segments will be the one that be used right ?. For example, lets say my> wal_size is default(16MB). I set max_wal_size to 1GB which is 1600/16=100 wals.> However, my wal_keep_segments is set to 300. It means that when the total_size of> the pg_xlog directory will reach 1GB, checkpoint will be forced but old wal files> wont be recycled/deleted ?\nCheckpoints are not forced by the size of pg_xlog, but by the amount of WALcreated since the last checkpoint. \nThe last wal_keep_segments WAL segments are always kept around, even if thatexceeds max_wal_size.\nSo basically having wal_keep_segments and replication slot configured together is a mistake right ? In that case, if you have both configured, and you set wal_keep_segments to 0, the db should delete all the unused wals ? Yours,Laurenz Albe-- Cybertec | https://www.cybertec-postgresql.com",
"msg_date": "Wed, 13 Feb 2019 12:45:58 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: understanding max_wal_size,wal_keep_segments and checkpoints"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> Yeah, so basically if we open a transaction and we do some insert queries, until the transaction\n> is commited the changes(the wal data and not the blocked that are chaned) are kept in the wal buffers ?\n> . When the user commits the transaction, the wal buffer(only the transaction log of that specific\n> transaction ?) is written to wal files. When the database completes saving the content of the\n> transaction log into the wal files, the commit completes. Did I got it right ?\n\nWAL can be written to file before the transaction commits.\nOtherwise the size of a transaction would be limited.\nOnly at commit time, it has to be written out and flushed to disk.\n\n> What I meant, when checkpoint occurs, it reads the wal files created since last checkpoint,\n> and does those changing on the data blocks on the disk ? I was not talking about dirty blocks\n> from shared_buffer.\n\nNo, PostgreSQL does not read the WAL files when it performs a checkpoint.\nWhen data are modified, first WAL is written, then it is written to shared buffers.\nThe checkpoint flushes dirty pages in shared buffers to disk.\n\n> > so I'f I want have replication slot and wal_keep_segment is 0 after the archiving of\n> > the wal it should be recycled/deleted ?\n\nOnly if it is older than the position of the replication slot.\n\n> > So basically having wal_keep_segments and replication slot configured together is a mistake right ?\n> > In that case, if you have both configured, and you set wal_keep_segments to 0, the db should \n> > delete all the unused wals ?\n\nIt is pointless to have both a replication slot and wal_keep_segments, yes.\nSetting wal_keep_segments to 0 is the right move in that case and should\nreduce pg_xlog size in time.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 13 Feb 2019 12:23:53 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding max_wal_size,wal_keep_segments and checkpoints"
}
] |
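A minimal sketch of the situation discussed in the thread above, assuming a standard PostgreSQL 9.x/10.x server: the queries below are one way to check whether old WAL is being pinned by a replication slot or by wal_keep_segments, which is the trade-off Laurenz describes. The slot name in the commented-out line is hypothetical.

```sql
-- WAL-retention settings on the running server (values vary per install).
SHOW max_wal_size;        -- soft cap that drives checkpoint scheduling
SHOW wal_keep_segments;   -- extra segments kept around for standbys (0 = none)

-- An active replication slot keeps every segment at or after its restart_lsn,
-- even past max_wal_size, so look for forgotten slots:
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots;

-- If a slot is no longer needed, dropping it lets the retained WAL be
-- recycled or removed at the next checkpoint (slot name is hypothetical):
-- SELECT pg_drop_replication_slot('standby1_slot');
```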
[
{
"msg_contents": "Hey,\nI have a very big toasted table in my db(9.2.5). Autovacuum doesnt gather\nstatistics on it because the analyze_scale/threshold are default and as a\nresult autoanalyze is never run and the statistics are wrong :\n\nselect * from pg_stat_all_Tables where relname='pg_toast_13488395';\n-[ RECORD 1 ]-----+------------------------------\nrelid | 13388396\nschemaname | pg_toast\nrelname | pg_toast_13488395\nseq_scan | 42\nseq_tup_read | 71163925\nidx_scan | 5374497\nidx_tup_fetch | 2530272449\nn_tup_ins | 1253680014\nn_tup_upd | 0\nn_tup_del | 1253658363\nn_tup_hot_upd | 0\nn_live_tup | 49425717 *wrong*\nn_dead_tup | 7822920 *wrong*\nlast_vacuum |\nlast_autovacuum | 2019-02-12 20:54:44.247083-05\nlast_analyze |\n*last_autoanalyze |*\nvacuum_count | 0\nautovacuum_count | 3747\nanalyze_count | 0\nautoanalyze_count | 0\n\nWhen I try to set the both the scale_factor / threshold I'm getting the\nnext error :\nalter table orig_table set (toast.autovacuum_analyze_scale_factor=0);\nERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\"\n\nAny idea why ? I didn't find in the release documentations a note for a\nsimilar bug.\n\nHey,I have a very big toasted table in my db(9.2.5). Autovacuum doesnt gather statistics on it because the analyze_scale/threshold are default and as a result autoanalyze is never run and the statistics are wrong : select * from pg_stat_all_Tables where relname='pg_toast_13488395';-[ RECORD 1 ]-----+------------------------------relid | 13388396schemaname | pg_toastrelname | pg_toast_13488395seq_scan | 42seq_tup_read | 71163925idx_scan | 5374497idx_tup_fetch | 2530272449n_tup_ins | 1253680014n_tup_upd | 0n_tup_del | 1253658363n_tup_hot_upd | 0n_live_tup | 49425717 wrongn_dead_tup | 7822920 wronglast_vacuum |last_autovacuum | 2019-02-12 20:54:44.247083-05last_analyze |last_autoanalyze |vacuum_count | 0autovacuum_count | 3747analyze_count | 0autoanalyze_count | 0When I try to set the both the scale_factor / threshold I'm getting the next error : alter table orig_table set (toast.autovacuum_analyze_scale_factor=0);ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\"Any idea why ? I didn't find in the release documentations a note for a similar bug.",
"msg_date": "Wed, 13 Feb 2019 17:43:08 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "On 2019-Feb-13, Mariel Cherkassky wrote:\n\n> Hey,\n> I have a very big toasted table in my db(9.2.5).\n\nSix years of bugfixes missing there ... you need to think about an\nupdate.\n\n> Autovacuum doesnt gather\n> statistics on it because the analyze_scale/threshold are default and as a\n> result autoanalyze is never run and the statistics are wrong :\n\nanalyze doesn't process toast tables anyway.\n\nI think the best you could do is manually vacuum this table.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 13 Feb 2019 13:13:33 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "To be honest, it isnt my db, but I just have access to it ...\nEither way, so I need to change the vacuum_Analyze_scale/threshold for the\noriginal table ? But the value will be too high/low for the original table.\nFor example if my original table has 30,000 rows and my toasted has\n100,000,000 rows. I want to analyze every 50K records in the toasted (by\nthe way, sounds legit ?) which is 0.05% of 100m. With this value it means\nthat every 0.05*30,000=1500 updated/deletes on the original table it will\nrun analyze on the original table which is very often...\nDoesn't it seems a little bit problematic ?\n\n\nבתאריך יום ד׳, 13 בפבר׳ 2019 ב-18:13 מאת Alvaro Herrera <\[email protected]>:\n\n> On 2019-Feb-13, Mariel Cherkassky wrote:\n>\n> > Hey,\n> > I have a very big toasted table in my db(9.2.5).\n>\n> Six years of bugfixes missing there ... you need to think about an\n> update.\n>\n> > Autovacuum doesnt gather\n> > statistics on it because the analyze_scale/threshold are default and as a\n> > result autoanalyze is never run and the statistics are wrong :\n>\n> analyze doesn't process toast tables anyway.\n>\n> I think the best you could do is manually vacuum this table.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nTo be honest, it isnt my db, but I just have access to it ...Either way, so I need to change the vacuum_Analyze_scale/threshold for the original table ? But the value will be too high/low for the original table. For example if my original table has 30,000 rows and my toasted has 100,000,000 rows. I want to analyze every 50K records in the toasted (by the way, sounds legit ?) which is 0.05% of 100m. With this value it means that every 0.05*30,000=1500 updated/deletes on the original table it will run analyze on the original table which is very often...Doesn't it seems a little bit problematic ?בתאריך יום ד׳, 13 בפבר׳ 2019 ב-18:13 מאת Alvaro Herrera <[email protected]>:On 2019-Feb-13, Mariel Cherkassky wrote:\n\n> Hey,\n> I have a very big toasted table in my db(9.2.5).\n\nSix years of bugfixes missing there ... you need to think about an\nupdate.\n\n> Autovacuum doesnt gather\n> statistics on it because the analyze_scale/threshold are default and as a\n> result autoanalyze is never run and the statistics are wrong :\n\nanalyze doesn't process toast tables anyway.\n\nI think the best you could do is manually vacuum this table.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 13 Feb 2019 18:41:15 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "On 2019-Feb-13, Mariel Cherkassky wrote:\n\n> To be honest, it isnt my db, but I just have access to it ...\n\nWell, I suggest you forget the password then :-)\n\n> Either way, so I need to change the vacuum_Analyze_scale/threshold for the\n> original table ? But the value will be too high/low for the original table.\n> For example if my original table has 30,000 rows and my toasted has\n> 100,000,000 rows. I want to analyze every 50K records in the toasted (by\n> the way, sounds legit ?) which is 0.05% of 100m. With this value it means\n> that every 0.05*30,000=1500 updated/deletes on the original table it will\n> run analyze on the original table which is very often...\n> Doesn't it seems a little bit problematic ?\n\nAutovacuum considers main table and toast table separately for\nvacuuming, so nothing you do to the parameters for the main table will\naffect the vacuuming schedule of the toast table.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 13 Feb 2019 13:54:45 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "I meant the anaylze, if anaylze will run very often on the original table,\narent there disadvantages for it ?\n\nבתאריך יום ד׳, 13 בפבר׳ 2019 ב-18:54 מאת Alvaro Herrera <\[email protected]>:\n\n> On 2019-Feb-13, Mariel Cherkassky wrote:\n>\n> > To be honest, it isnt my db, but I just have access to it ...\n>\n> Well, I suggest you forget the password then :-)\n>\n> > Either way, so I need to change the vacuum_Analyze_scale/threshold for\n> the\n> > original table ? But the value will be too high/low for the original\n> table.\n> > For example if my original table has 30,000 rows and my toasted has\n> > 100,000,000 rows. I want to analyze every 50K records in the toasted (by\n> > the way, sounds legit ?) which is 0.05% of 100m. With this value it means\n> > that every 0.05*30,000=1500 updated/deletes on the original table it will\n> > run analyze on the original table which is very often...\n> > Doesn't it seems a little bit problematic ?\n>\n> Autovacuum considers main table and toast table separately for\n> vacuuming, so nothing you do to the parameters for the main table will\n> affect the vacuuming schedule of the toast table.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nI meant the anaylze, if anaylze will run very often on the original table, arent there disadvantages for it ?בתאריך יום ד׳, 13 בפבר׳ 2019 ב-18:54 מאת Alvaro Herrera <[email protected]>:On 2019-Feb-13, Mariel Cherkassky wrote:\n\n> To be honest, it isnt my db, but I just have access to it ...\n\nWell, I suggest you forget the password then :-)\n\n> Either way, so I need to change the vacuum_Analyze_scale/threshold for the\n> original table ? But the value will be too high/low for the original table.\n> For example if my original table has 30,000 rows and my toasted has\n> 100,000,000 rows. I want to analyze every 50K records in the toasted (by\n> the way, sounds legit ?) which is 0.05% of 100m. With this value it means\n> that every 0.05*30,000=1500 updated/deletes on the original table it will\n> run analyze on the original table which is very often...\n> Doesn't it seems a little bit problematic ?\n\nAutovacuum considers main table and toast table separately for\nvacuuming, so nothing you do to the parameters for the main table will\naffect the vacuuming schedule of the toast table.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 14 Feb 2019 09:56:36 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "On 2019-Feb-14, Mariel Cherkassky wrote:\n\n> I meant the anaylze, if anaylze will run very often on the original table,\n> arent there disadvantages for it ?\n\nIt'll waste time and resources pointlessly. Don't do it -- it won't do\nany good.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 10:39:55 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
},
{
"msg_contents": "'but then I don't have accurate statistics on my toasted table..\n\nOn Fri, Feb 15, 2019, 3:39 PM Alvaro Herrera <[email protected]\nwrote:\n\n> On 2019-Feb-14, Mariel Cherkassky wrote:\n>\n> > I meant the anaylze, if anaylze will run very often on the original\n> table,\n> > arent there disadvantages for it ?\n>\n> It'll waste time and resources pointlessly. Don't do it -- it won't do\n> any good.\n>\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n'but then I don't have accurate statistics on my toasted table..On Fri, Feb 15, 2019, 3:39 PM Alvaro Herrera <[email protected] wrote:On 2019-Feb-14, Mariel Cherkassky wrote:\n\n> I meant the anaylze, if anaylze will run very often on the original table,\n> arent there disadvantages for it ?\n\nIt'll waste time and resources pointlessly. Don't do it -- it won't do\nany good.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 15 Feb 2019 20:00:24 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\""
}
] |
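A hedged sketch of the alternatives to the failing ALTER TABLE in the thread above, assuming that TOAST tables accept the autovacuum vacuum reloptions but not the analyze ones (which is consistent with the error shown). The scale-factor and threshold values are examples only, and the toast relation name is the one from the pg_stat_all_tables output earlier in the thread.

```sql
-- Make autovacuum treat the toast table more aggressively via the parent
-- table; only toast.autovacuum_vacuum_* options are recognized here:
ALTER TABLE orig_table SET (toast.autovacuum_vacuum_scale_factor = 0.01);
ALTER TABLE orig_table SET (toast.autovacuum_vacuum_threshold = 50000);

-- Or vacuum the toast table by hand, as suggested in the thread:
VACUUM pg_toast.pg_toast_13488395;
```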
[
{
"msg_contents": "Hi,\nI have a table with json col : R(object int, data jsonb).\nExample for content :\n object | data\n----------------+---------------------------------------\n 50 | {\"ranges\": [[1, 1]]}\n 51 | {\"ranges\": [[5, 700],[1,5],[9,10}\n 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]}\n 53 | {\"ranges\": [[2, 2]]}\n 54 | {\"ranges\": [[5, 10]]}\n\nNow I tried to query for all the objects that contains a specific range,\nfor example [2,2] :\nexplain analyze SELECT *\nFROM R d\nWHERE EXISTS (\n SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng\n WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2\n );\n\nI saw that the gin index isnt suitable for this type of comparison.\nHowever, I saw that the gist index is suitable to handle ranges. Any idea\nof I can implement a gist index here ?\n\nIn addition, I saved the same data in relational table\nR2(object,range_first,range_last).\n The previous data in this format :\nobject range_first range_last\n50 1 1\n51 5 700\n51 1 5\n51 9 10\n\ni compared the first query with :\n explain analyze select * from R2 where range_first <=2 and range_last\n>= 2; (I have an index on range_first,range_last that is used)\n\nThe query on the jsonb column was 100x slower (700 m/s vs 7m/s). The\nquestion is, Am I missing an index or the jsonb datatype isnt suitable for\nthis structure of data. The R2 table contains 500K records while the R\ntable contains about 200K records.\n\nHi,I have a table with json col : R(object int, data jsonb).Example for content : object | data----------------+--------------------------------------- 50 | {\"ranges\": [[1, 1]]} 51 | {\"ranges\": [[5, 700],[1,5],[9,10} 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]} 53 | {\"ranges\": [[2, 2]]} 54 | {\"ranges\": [[5, 10]]}Now I tried to query for all the objects that contains a specific range, for example [2,2] : explain analyze SELECT *FROM R dWHERE EXISTS ( SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2 );I saw that the gin index isnt suitable for this type of comparison. However, I saw that the gist index is suitable to handle ranges. Any idea of I can implement a gist index here ? In addition, I saved the same data in relational table R2(object,range_first,range_last). The previous data in this format : object range_first range_last50 1 151 5 700 51 1 551 9 10i compared the first query with : explain analyze select * from R2 where \n\nrange_first <=2 and range_last >= 2; (I have an index on range_first,range_last that is used)The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The question is, Am I missing an index or the jsonb datatype isnt suitable for this structure of data. The R2 table contains 500K records while the R table contains about 200K records.",
"msg_date": "Tue, 19 Feb 2019 17:59:01 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "index on jsonb col with 2D array inside the json"
},
{
"msg_contents": "Is your JSON data getting toasted? I wouldn't assume so if it is remaining\nsmall but something to check. Regardless, if an index exists and isn't\nbeing used, then that would be the primary concern. You didn't share what\nthe definition of the index on R.data is... what do you already have?\n\nYou have an array of ranges stored as the value of key \"ranges\" in jsonb\nfield data. If you created a table like R2, but with a single \"range\"\ncolumn that is int4range type, then I would expect that you could add a\nGiST and then use overlaps &&, or another operator. I would not expect that\nyou could index (unnest data->>'ranges' for instance) to get the separated\nout range values.\n\n\n\n*Michael Lewis *\n\n\nOn Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hi,\n> I have a table with json col : R(object int, data jsonb).\n> Example for content :\n> object | data\n> ----------------+---------------------------------------\n> 50 | {\"ranges\": [[1, 1]]}\n> 51 | {\"ranges\": [[5, 700],[1,5],[9,10}\n> 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]}\n> 53 | {\"ranges\": [[2, 2]]}\n> 54 | {\"ranges\": [[5, 10]]}\n>\n> Now I tried to query for all the objects that contains a specific range,\n> for example [2,2] :\n> explain analyze SELECT *\n> FROM R d\n> WHERE EXISTS (\n> SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng\n> WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2\n> );\n>\n> I saw that the gin index isnt suitable for this type of comparison.\n> However, I saw that the gist index is suitable to handle ranges. Any idea\n> of I can implement a gist index here ?\n>\n> In addition, I saved the same data in relational table\n> R2(object,range_first,range_last).\n> The previous data in this format :\n> object range_first range_last\n> 50 1 1\n> 51 5 700\n> 51 1 5\n> 51 9 10\n>\n> i compared the first query with :\n> explain analyze select * from R2 where range_first <=2 and\n> range_last >= 2; (I have an index on range_first,range_last that is used)\n>\n> The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The\n> question is, Am I missing an index or the jsonb datatype isnt suitable for\n> this structure of data. The R2 table contains 500K records while the R\n> table contains about 200K records.\n>\n>\n>\n>\n\nIs your JSON data getting toasted? I wouldn't assume so if it is remaining small but something to check. Regardless, if an index exists and isn't being used, then that would be the primary concern. You didn't share what the definition of the index on R.data is... what do you already have?You have an array of ranges stored as the value of key \"ranges\" in jsonb field data. If you created a table like R2, but with a single \"range\" column that is int4range type, then I would expect that you could add a GiST and then use overlaps &&, or another operator. 
I would not expect that you could index (unnest data->>'ranges' for instance) to get the separated out range values.Michael Lewis On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <[email protected]> wrote:Hi,I have a table with json col : R(object int, data jsonb).Example for content : object | data----------------+--------------------------------------- 50 | {\"ranges\": [[1, 1]]} 51 | {\"ranges\": [[5, 700],[1,5],[9,10} 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]} 53 | {\"ranges\": [[2, 2]]} 54 | {\"ranges\": [[5, 10]]}Now I tried to query for all the objects that contains a specific range, for example [2,2] : explain analyze SELECT *FROM R dWHERE EXISTS ( SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2 );I saw that the gin index isnt suitable for this type of comparison. However, I saw that the gist index is suitable to handle ranges. Any idea of I can implement a gist index here ? In addition, I saved the same data in relational table R2(object,range_first,range_last). The previous data in this format : object range_first range_last50 1 151 5 700 51 1 551 9 10i compared the first query with : explain analyze select * from R2 where \n\nrange_first <=2 and range_last >= 2; (I have an index on range_first,range_last that is used)The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The question is, Am I missing an index or the jsonb datatype isnt suitable for this structure of data. The R2 table contains 500K records while the R table contains about 200K records.",
"msg_date": "Tue, 19 Feb 2019 09:28:25 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on jsonb col with 2D array inside the json"
},
{
"msg_contents": "I dont have any indexes on R (the table with the jsonb column). I was\nasking if I can create any that can increase this query`s performance.\nIf I understood you correctly I have 3 options right now :\n1)R, without indexes\n2)R2 with an index on first and last\n3)R3 that should contain a single range column (type int4range) with gist\nindex on it.\n\nIn aspect of performance, R<R2<? R3\n\nבתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <\[email protected]>:\n\n> Is your JSON data getting toasted? I wouldn't assume so if it is remaining\n> small but something to check. Regardless, if an index exists and isn't\n> being used, then that would be the primary concern. You didn't share what\n> the definition of the index on R.data is... what do you already have?\n>\n> You have an array of ranges stored as the value of key \"ranges\" in jsonb\n> field data. If you created a table like R2, but with a single \"range\"\n> column that is int4range type, then I would expect that you could add a\n> GiST and then use overlaps &&, or another operator. I would not expect that\n> you could index (unnest data->>'ranges' for instance) to get the separated\n> out range values.\n>\n>\n>\n> *Michael Lewis *\n>\n>\n> On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Hi,\n>> I have a table with json col : R(object int, data jsonb).\n>> Example for content :\n>> object | data\n>> ----------------+---------------------------------------\n>> 50 | {\"ranges\": [[1, 1]]}\n>> 51 | {\"ranges\": [[5, 700],[1,5],[9,10}\n>> 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]}\n>> 53 | {\"ranges\": [[2, 2]]}\n>> 54 | {\"ranges\": [[5, 10]]}\n>>\n>> Now I tried to query for all the objects that contains a specific range,\n>> for example [2,2] :\n>> explain analyze SELECT *\n>> FROM R d\n>> WHERE EXISTS (\n>> SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng\n>> WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2\n>> );\n>>\n>> I saw that the gin index isnt suitable for this type of comparison.\n>> However, I saw that the gist index is suitable to handle ranges. Any idea\n>> of I can implement a gist index here ?\n>>\n>> In addition, I saved the same data in relational table\n>> R2(object,range_first,range_last).\n>> The previous data in this format :\n>> object range_first range_last\n>> 50 1 1\n>> 51 5 700\n>> 51 1 5\n>> 51 9 10\n>>\n>> i compared the first query with :\n>> explain analyze select * from R2 where range_first <=2 and\n>> range_last >= 2; (I have an index on range_first,range_last that is used)\n>>\n>> The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The\n>> question is, Am I missing an index or the jsonb datatype isnt suitable for\n>> this structure of data. The R2 table contains 500K records while the R\n>> table contains about 200K records.\n>>\n>>\n>>\n>>\n\nI dont have any indexes on R (the table with the jsonb column). I was asking if I can create any that can increase this query`s performance. If I understood you correctly I have 3 options right now :1)R, without indexes 2)R2 with an index on first and last3)R3 that should contain a single range column (type int4range) with gist index on it. In aspect of performance, R<R2<? R3 בתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <[email protected]>:Is your JSON data getting toasted? I wouldn't assume so if it is remaining small but something to check. Regardless, if an index exists and isn't being used, then that would be the primary concern. 
You didn't share what the definition of the index on R.data is... what do you already have?You have an array of ranges stored as the value of key \"ranges\" in jsonb field data. If you created a table like R2, but with a single \"range\" column that is int4range type, then I would expect that you could add a GiST and then use overlaps &&, or another operator. I would not expect that you could index (unnest data->>'ranges' for instance) to get the separated out range values.Michael Lewis On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <[email protected]> wrote:Hi,I have a table with json col : R(object int, data jsonb).Example for content : object | data----------------+--------------------------------------- 50 | {\"ranges\": [[1, 1]]} 51 | {\"ranges\": [[5, 700],[1,5],[9,10} 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]} 53 | {\"ranges\": [[2, 2]]} 54 | {\"ranges\": [[5, 10]]}Now I tried to query for all the objects that contains a specific range, for example [2,2] : explain analyze SELECT *FROM R dWHERE EXISTS ( SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2 );I saw that the gin index isnt suitable for this type of comparison. However, I saw that the gist index is suitable to handle ranges. Any idea of I can implement a gist index here ? In addition, I saved the same data in relational table R2(object,range_first,range_last). The previous data in this format : object range_first range_last50 1 151 5 700 51 1 551 9 10i compared the first query with : explain analyze select * from R2 where \n\nrange_first <=2 and range_last >= 2; (I have an index on range_first,range_last that is used)The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The question is, Am I missing an index or the jsonb datatype isnt suitable for this structure of data. The R2 table contains 500K records while the R table contains about 200K records.",
"msg_date": "Tue, 19 Feb 2019 18:33:54 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index on jsonb col with 2D array inside the json"
},
{
"msg_contents": "I would expect that R2 vs R3 would be negligible but perhaps gist works\nmuch better and would be an improvement. When you are down to 7ms already,\nI wouldn't hope for any big change. I assume you used btree for the\nmulti-column index on R2 range_first, range_last but am not familiar with\ngist on range vs btree on two int columns.\n\nIt seems a little odd to have a jsonb value to hold multiple range values.\nA range is already a complex type so separating out into the association\ntable like R3 would make sense to me.\n\n*Michael Lewis*\n\nOn Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> I dont have any indexes on R (the table with the jsonb column). I was\n> asking if I can create any that can increase this query`s performance.\n> If I understood you correctly I have 3 options right now :\n> 1)R, without indexes\n> 2)R2 with an index on first and last\n> 3)R3 that should contain a single range column (type int4range) with gist\n> index on it.\n>\n> In aspect of performance, R<R2<? R3\n>\n> בתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <\n> [email protected]>:\n>\n>> Is your JSON data getting toasted? I wouldn't assume so if it is\n>> remaining small but something to check. Regardless, if an index exists and\n>> isn't being used, then that would be the primary concern. You didn't share\n>> what the definition of the index on R.data is... what do you already have?\n>>\n>> You have an array of ranges stored as the value of key \"ranges\" in jsonb\n>> field data. If you created a table like R2, but with a single \"range\"\n>> column that is int4range type, then I would expect that you could add a\n>> GiST and then use overlaps &&, or another operator. I would not expect that\n>> you could index (unnest data->>'ranges' for instance) to get the separated\n>> out range values.\n>>\n>>\n>>\n>> *Michael Lewis *\n>>\n>>\n>> On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <\n>> [email protected]> wrote:\n>>\n>>> Hi,\n>>> I have a table with json col : R(object int, data jsonb).\n>>> Example for content :\n>>> object | data\n>>> ----------------+---------------------------------------\n>>> 50 | {\"ranges\": [[1, 1]]}\n>>> 51 | {\"ranges\": [[5, 700],[1,5],[9,10}\n>>> 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]}\n>>> 53 | {\"ranges\": [[2, 2]]}\n>>> 54 | {\"ranges\": [[5, 10]]}\n>>>\n>>> Now I tried to query for all the objects that contains a specific range,\n>>> for example [2,2] :\n>>> explain analyze SELECT *\n>>> FROM R d\n>>> WHERE EXISTS (\n>>> SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng\n>>> WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2\n>>> );\n>>>\n>>> I saw that the gin index isnt suitable for this type of comparison.\n>>> However, I saw that the gist index is suitable to handle ranges. Any idea\n>>> of I can implement a gist index here ?\n>>>\n>>> In addition, I saved the same data in relational table\n>>> R2(object,range_first,range_last).\n>>> The previous data in this format :\n>>> object range_first range_last\n>>> 50 1 1\n>>> 51 5 700\n>>> 51 1 5\n>>> 51 9 10\n>>>\n>>> i compared the first query with :\n>>> explain analyze select * from R2 where range_first <=2 and\n>>> range_last >= 2; (I have an index on range_first,range_last that is used)\n>>>\n>>> The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The\n>>> question is, Am I missing an index or the jsonb datatype isnt suitable for\n>>> this structure of data. 
The R2 table contains 500K records while the R\n>>> table contains about 200K records.\n>>>\n>>>\n>>>\n>>>\n\nI would expect that R2 vs R3 would be negligible but perhaps gist works much better and would be an improvement. When you are down to 7ms already, I wouldn't hope for any big change. I assume you used btree for the multi-column index on R2 range_first, range_last but am not familiar with gist on range vs btree on two int columns.It seems a little odd to have a jsonb value to hold multiple range values. A range is already a complex type so separating out into the association table like R3 would make sense to me.Michael LewisOn Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <[email protected]> wrote:I dont have any indexes on R (the table with the jsonb column). I was asking if I can create any that can increase this query`s performance. If I understood you correctly I have 3 options right now :1)R, without indexes 2)R2 with an index on first and last3)R3 that should contain a single range column (type int4range) with gist index on it. In aspect of performance, R<R2<? R3 בתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <[email protected]>:Is your JSON data getting toasted? I wouldn't assume so if it is remaining small but something to check. Regardless, if an index exists and isn't being used, then that would be the primary concern. You didn't share what the definition of the index on R.data is... what do you already have?You have an array of ranges stored as the value of key \"ranges\" in jsonb field data. If you created a table like R2, but with a single \"range\" column that is int4range type, then I would expect that you could add a GiST and then use overlaps &&, or another operator. I would not expect that you could index (unnest data->>'ranges' for instance) to get the separated out range values.Michael Lewis On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <[email protected]> wrote:Hi,I have a table with json col : R(object int, data jsonb).Example for content : object | data----------------+--------------------------------------- 50 | {\"ranges\": [[1, 1]]} 51 | {\"ranges\": [[5, 700],[1,5],[9,10} 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]} 53 | {\"ranges\": [[2, 2]]} 54 | {\"ranges\": [[5, 10]]}Now I tried to query for all the objects that contains a specific range, for example [2,2] : explain analyze SELECT *FROM R dWHERE EXISTS ( SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2 );I saw that the gin index isnt suitable for this type of comparison. However, I saw that the gist index is suitable to handle ranges. Any idea of I can implement a gist index here ? In addition, I saved the same data in relational table R2(object,range_first,range_last). The previous data in this format : object range_first range_last50 1 151 5 700 51 1 551 9 10i compared the first query with : explain analyze select * from R2 where \n\nrange_first <=2 and range_last >= 2; (I have an index on range_first,range_last that is used)The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The question is, Am I missing an index or the jsonb datatype isnt suitable for this structure of data. The R2 table contains 500K records while the R table contains about 200K records.",
"msg_date": "Tue, 19 Feb 2019 09:41:52 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on jsonb col with 2D array inside the json"
},
{
"msg_contents": "Thanks for the feedback!\n\nOn Tue, Feb 19, 2019, 6:42 PM Michael Lewis <[email protected] wrote:\n\n> I would expect that R2 vs R3 would be negligible but perhaps gist works\n> much better and would be an improvement. When you are down to 7ms already,\n> I wouldn't hope for any big change. I assume you used btree for the\n> multi-column index on R2 range_first, range_last but am not familiar with\n> gist on range vs btree on two int columns.\n>\n> It seems a little odd to have a jsonb value to hold multiple range values.\n> A range is already a complex type so separating out into the association\n> table like R3 would make sense to me.\n>\n> *Michael Lewis*\n>\n> On Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> I dont have any indexes on R (the table with the jsonb column). I was\n>> asking if I can create any that can increase this query`s performance.\n>> If I understood you correctly I have 3 options right now :\n>> 1)R, without indexes\n>> 2)R2 with an index on first and last\n>> 3)R3 that should contain a single range column (type int4range) with gist\n>> index on it.\n>>\n>> In aspect of performance, R<R2<? R3\n>>\n>> בתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <\n>> [email protected]>:\n>>\n>>> Is your JSON data getting toasted? I wouldn't assume so if it is\n>>> remaining small but something to check. Regardless, if an index exists and\n>>> isn't being used, then that would be the primary concern. You didn't share\n>>> what the definition of the index on R.data is... what do you already have?\n>>>\n>>> You have an array of ranges stored as the value of key \"ranges\" in jsonb\n>>> field data. If you created a table like R2, but with a single \"range\"\n>>> column that is int4range type, then I would expect that you could add a\n>>> GiST and then use overlaps &&, or another operator. I would not expect that\n>>> you could index (unnest data->>'ranges' for instance) to get the separated\n>>> out range values.\n>>>\n>>>\n>>>\n>>> *Michael Lewis *\n>>>\n>>>\n>>> On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <\n>>> [email protected]> wrote:\n>>>\n>>>> Hi,\n>>>> I have a table with json col : R(object int, data jsonb).\n>>>> Example for content :\n>>>> object | data\n>>>> ----------------+---------------------------------------\n>>>> 50 | {\"ranges\": [[1, 1]]}\n>>>> 51 | {\"ranges\": [[5, 700],[1,5],[9,10}\n>>>> 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]}\n>>>> 53 | {\"ranges\": [[2, 2]]}\n>>>> 54 | {\"ranges\": [[5, 10]]}\n>>>>\n>>>> Now I tried to query for all the objects that contains a specific\n>>>> range, for example [2,2] :\n>>>> explain analyze SELECT *\n>>>> FROM R d\n>>>> WHERE EXISTS (\n>>>> SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng\n>>>> WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2\n>>>> );\n>>>>\n>>>> I saw that the gin index isnt suitable for this type of comparison.\n>>>> However, I saw that the gist index is suitable to handle ranges. 
Any idea\n>>>> of I can implement a gist index here ?\n>>>>\n>>>> In addition, I saved the same data in relational table\n>>>> R2(object,range_first,range_last).\n>>>> The previous data in this format :\n>>>> object range_first range_last\n>>>> 50 1 1\n>>>> 51 5 700\n>>>> 51 1 5\n>>>> 51 9 10\n>>>>\n>>>> i compared the first query with :\n>>>> explain analyze select * from R2 where range_first <=2 and\n>>>> range_last >= 2; (I have an index on range_first,range_last that is used)\n>>>>\n>>>> The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The\n>>>> question is, Am I missing an index or the jsonb datatype isnt suitable for\n>>>> this structure of data. The R2 table contains 500K records while the R\n>>>> table contains about 200K records.\n>>>>\n>>>>\n>>>>\n>>>>\n\nThanks for the feedback! On Tue, Feb 19, 2019, 6:42 PM Michael Lewis <[email protected] wrote:I would expect that R2 vs R3 would be negligible but perhaps gist works much better and would be an improvement. When you are down to 7ms already, I wouldn't hope for any big change. I assume you used btree for the multi-column index on R2 range_first, range_last but am not familiar with gist on range vs btree on two int columns.It seems a little odd to have a jsonb value to hold multiple range values. A range is already a complex type so separating out into the association table like R3 would make sense to me.Michael LewisOn Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <[email protected]> wrote:I dont have any indexes on R (the table with the jsonb column). I was asking if I can create any that can increase this query`s performance. If I understood you correctly I have 3 options right now :1)R, without indexes 2)R2 with an index on first and last3)R3 that should contain a single range column (type int4range) with gist index on it. In aspect of performance, R<R2<? R3 בתאריך יום ג׳, 19 בפבר׳ 2019 ב-18:28 מאת Michael Lewis <[email protected]>:Is your JSON data getting toasted? I wouldn't assume so if it is remaining small but something to check. Regardless, if an index exists and isn't being used, then that would be the primary concern. You didn't share what the definition of the index on R.data is... what do you already have?You have an array of ranges stored as the value of key \"ranges\" in jsonb field data. If you created a table like R2, but with a single \"range\" column that is int4range type, then I would expect that you could add a GiST and then use overlaps &&, or another operator. I would not expect that you could index (unnest data->>'ranges' for instance) to get the separated out range values.Michael Lewis On Tue, Feb 19, 2019 at 8:59 AM Mariel Cherkassky <[email protected]> wrote:Hi,I have a table with json col : R(object int, data jsonb).Example for content : object | data----------------+--------------------------------------- 50 | {\"ranges\": [[1, 1]]} 51 | {\"ranges\": [[5, 700],[1,5],[9,10} 52 | {\"ranges\": [[4, 200],[2,4],[3,4]]} 53 | {\"ranges\": [[2, 2]]} 54 | {\"ranges\": [[5, 10]]}Now I tried to query for all the objects that contains a specific range, for example [2,2] : explain analyze SELECT *FROM R dWHERE EXISTS ( SELECT FROM jsonb_array_elements(R.data -> 'ranges') rng WHERE (rng->>0)::bigint <= 2 and (rng->>1)::bigint >= 2 );I saw that the gin index isnt suitable for this type of comparison. However, I saw that the gist index is suitable to handle ranges. Any idea of I can implement a gist index here ? 
In addition, I saved the same data in relational table R2(object,range_first,range_last). The previous data in this format : object range_first range_last50 1 151 5 700 51 1 551 9 10i compared the first query with : explain analyze select * from R2 where \n\nrange_first <=2 and range_last >= 2; (I have an index on range_first,range_last that is used)The query on the jsonb column was 100x slower (700 m/s vs 7m/s). The question is, Am I missing an index or the jsonb datatype isnt suitable for this structure of data. The R2 table contains 500K records while the R table contains about 200K records.",
"msg_date": "Tue, 19 Feb 2019 20:30:56 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index on jsonb col with 2D array inside the json"
}
] |
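A hedged sketch of the "R3" layout suggested in the thread above; the table and index names are made up for illustration. Storing each range as an int4range with a GiST index makes the "contains this value" test indexable, unlike the jsonb_array_elements form in the original post.

```sql
-- One row per (object, range), using a real range type instead of JSONB.
CREATE TABLE r3 (
    object integer   NOT NULL,
    rng    int4range NOT NULL
);
CREATE INDEX r3_rng_gist ON r3 USING gist (rng);

-- "Which objects have a range containing 2?"  DISTINCT because an object
-- may carry several ranges:
SELECT DISTINCT object
FROM r3
WHERE rng @> 2;
```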
[
{
"msg_contents": "*Postgres version: PostgreSQL 10.3 on x86_64-apple-darwin16.7.0*\n*Operating system and version: MacOS v10.12.6*\n*How you installed PostgreSQL: Homebrew*\n\nI have a table as defined below. The table contains 1,027,616 rows, 50,349\nof which have state='open' and closed IS NULL. Since closed IS NULL for all\nrows where state='open', I want to remove the unnecessary state column.\n\n```\nCREATE TABLE tickets (\n id bigserial primary key,\n title character varying,\n description character varying,\n state character varying,\n closed timestamp,\n created timestamp,\n updated timestamp,\n last_comment timestamp,\n size integer NOT NULL,\n comment_count integer NOT NULL\n);\n\nCREATE INDEX \"state_index\" ON \"tickets\" (\"state\") WHERE ((state)::text =\n'open'::text));\n```\n\nAs part of the process of removing the state column, I am trying to index\nthe closed column so I can achieve equal query performance (index scan) as\nwhen I query on the state column as shown below:\n\n```\nEXPLAIN ANALYZE select title, created, closed, updated from tickets where\nstate = 'open';\n\nIndex Scan using state_index on tickets (cost=0.29..23430.20 rows=50349\nwidth=64) (actual time=17.221..52.110 rows=51533 loops=1)\nPlanning time: 0.197 ms\nExecution time: 56.255 ms\n```\n\nHowever, when I index the closed column, a bitmap scan is used instead of\nan index scan, with slightly slower performance. Why isn't an index scan\nbeing used, given that the exact same number of rows are at play as in my\nquery on the state column? How do I index closed in a way where an index\nscan is used?\n\n```\nCREATE INDEX closed_index ON tickets (id) WHERE closed IS NULL;\n\nVACUUM ANALYZE tickets;\n\nEXPLAIN ANALYZE select title, created, closed, updated from tickets where\nclosed IS NULL;\n\nBitmap Heap Scan on tickets (cost=824.62..33955.85 rows=50349 width=64)\n(actual time=10.420..56.095 rows=51537 loops=1)\n Recheck Cond: (closed IS NULL)\n Heap Blocks: exact=17478\n -> Bitmap Index Scan on closed_index (cost=0.00..812.03 rows=50349\nwidth=0) (actual time=6.005..6.005 rows=51537 loops=1)\nPlanning time: 0.145 ms\nExecution time: 60.266 ms\n```\n\nPostgres version: PostgreSQL 10.3 on x86_64-apple-darwin16.7.0Operating system and version: MacOS v10.12.6How you installed PostgreSQL: HomebrewI have a table as defined below. The table contains 1,027,616 rows, 50,349 of which have state='open' and closed IS NULL. Since closed IS NULL for all rows where state='open', I want to remove the unnecessary state column.```CREATE TABLE tickets ( id bigserial primary key, title character varying, description character varying, state character varying, closed timestamp, created timestamp, updated timestamp, last_comment timestamp, size integer NOT NULL, comment_count integer NOT NULL);CREATE INDEX \"state_index\" ON \"tickets\" (\"state\") WHERE ((state)::text = 'open'::text));```As part of the process of removing the state column, I am trying to index the closed column so I can achieve equal query performance (index scan) as when I query on the state column as shown below:```EXPLAIN ANALYZE select title, created, closed, updated from tickets where state = 'open';Index Scan using state_index on tickets (cost=0.29..23430.20 rows=50349 width=64) (actual time=17.221..52.110 rows=51533 loops=1)Planning time: 0.197 msExecution time: 56.255 ms```However, when I index the closed column, a bitmap scan is used instead of an index scan, with slightly slower performance. 
Why isn't an index scan being used, given that the exact same number of rows are at play as in my query on the state column? How do I index closed in a way where an index scan is used?```CREATE INDEX closed_index ON tickets (id) WHERE closed IS NULL;VACUUM ANALYZE tickets;EXPLAIN ANALYZE select title, created, closed, updated from tickets where closed IS NULL;Bitmap Heap Scan on tickets (cost=824.62..33955.85 rows=50349 width=64) (actual time=10.420..56.095 rows=51537 loops=1) Recheck Cond: (closed IS NULL) Heap Blocks: exact=17478 -> Bitmap Index Scan on closed_index (cost=0.00..812.03 rows=50349 width=0) (actual time=6.005..6.005 rows=51537 loops=1)Planning time: 0.145 msExecution time: 60.266 ms```",
"msg_date": "Tue, 19 Feb 2019 17:10:43 -0700",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why isn't an index scan being used?"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 13:11, Abi Noda <[email protected]> wrote:\n> However, when I index the closed column, a bitmap scan is used instead of an index scan, with slightly slower performance. Why isn't an index scan being used, given that the exact same number of rows are at play as in my query on the state column?\n\nThat's down to the planner's cost estimates. Likely it thinks that\neither doing a bitmap scan is cheaper, or close enough that it does\nnot matter.\n\n> How do I index closed in a way where an index scan is used?\n\nThe costing does account for the size of the index. If the\n\"closed_index\" index is large than the \"state_index\", then doing an\nIndex scan on \"closed_index\" is going to be costed higher.\n\nMost of this likely boils down to random_page_cost being a guess. You\nmay want to check your effective_cache_size is set to something like\n75% of the machine's memory, and/or tweak random page cost down, if\nit's set to the standard 4 setting. modern SSDs are pretty fast at\nrandom reads. HDDs, not so much.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 13:51:04 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "Thank you for the help.\n\n> If the \"closed_index\" index is large than the \"state_index\", then doing\nan Index scan on \"closed_index\" is going to be costed higher.\n\nFWIW, both indexes appear to be the same size:\n\nselect pg_size_pretty(pg_relation_size('state_index'));\n1144 kB\n\nselect pg_size_pretty(pg_relation_size('closed_index'));\n1144 kB\n\n> Most of this likely boils down to random_page_cost being a guess. You may\nwant to check your effective_cache_size is set to something like 75% of the\nmachine's memory, and/or tweak random page cost down, if it's set to the\nstandard 4 setting.\n\nOk, let me try this.\n\n\nOn Tue, Feb 19, 2019 at 5:51 PM David Rowley <[email protected]>\nwrote:\n\n> On Wed, 20 Feb 2019 at 13:11, Abi Noda <[email protected]> wrote:\n> > However, when I index the closed column, a bitmap scan is used instead\n> of an index scan, with slightly slower performance. Why isn't an index scan\n> being used, given that the exact same number of rows are at play as in my\n> query on the state column?\n>\n> That's down to the planner's cost estimates. Likely it thinks that\n> either doing a bitmap scan is cheaper, or close enough that it does\n> not matter.\n>\n> > How do I index closed in a way where an index scan is used?\n>\n> The costing does account for the size of the index. If the\n> \"closed_index\" index is large than the \"state_index\", then doing an\n> Index scan on \"closed_index\" is going to be costed higher.\n>\n> Most of this likely boils down to random_page_cost being a guess. You\n> may want to check your effective_cache_size is set to something like\n> 75% of the machine's memory, and/or tweak random page cost down, if\n> it's set to the standard 4 setting. modern SSDs are pretty fast at\n> random reads. HDDs, not so much.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nThank you for the help.> If the \"closed_index\" index is large than the \"state_index\", then doing an Index scan on \"closed_index\" is going to be costed higher.FWIW, both indexes appear to be the same size:select pg_size_pretty(pg_relation_size('state_index'));1144 kBselect pg_size_pretty(pg_relation_size('closed_index'));1144 kB> Most of this likely boils down to random_page_cost being a guess. You may want to check your effective_cache_size is set to something like 75% of the machine's memory, and/or tweak random page cost down, if it's set to the standard 4 setting.Ok, let me try this.On Tue, Feb 19, 2019 at 5:51 PM David Rowley <[email protected]> wrote:On Wed, 20 Feb 2019 at 13:11, Abi Noda <[email protected]> wrote:\n> However, when I index the closed column, a bitmap scan is used instead of an index scan, with slightly slower performance. Why isn't an index scan being used, given that the exact same number of rows are at play as in my query on the state column?\n\nThat's down to the planner's cost estimates. Likely it thinks that\neither doing a bitmap scan is cheaper, or close enough that it does\nnot matter.\n\n> How do I index closed in a way where an index scan is used?\n\nThe costing does account for the size of the index. If the\n\"closed_index\" index is large than the \"state_index\", then doing an\nIndex scan on \"closed_index\" is going to be costed higher.\n\nMost of this likely boils down to random_page_cost being a guess. 
You\nmay want to check your effective_cache_size is set to something like\n75% of the machine's memory, and/or tweak random page cost down, if\nit's set to the standard 4 setting. modern SSDs are pretty fast at\nrandom reads. HDDs, not so much.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 19 Feb 2019 17:59:20 -0700",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": ">>>>> \"Abi\" == Abi Noda <[email protected]> writes:\n\n Abi> However, when I index the closed column, a bitmap scan is used\n Abi> instead of an index scan, with slightly slower performance. Why\n Abi> isn't an index scan being used, given that the exact same number\n Abi> of rows are at play as in my query on the state column?\n\nMost likely difference is the correlation estimate for the conditions.\nThe cost of an index scan includes a factor based on how well correlated\nthe physical position of rows is with the index order, because this\naffects the number of random seeks in the scan. But for nulls this\nestimate cannot be performed, and bitmapscan is cheaper than plain\nindexscan on poorly correlated data.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 20 Feb 2019 03:00:23 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "On Tue, Feb 19, 2019, 8:00 PM Andrew Gierth <[email protected]\nwrote:\n\n> >>>>> \"Abi\" == Abi Noda <[email protected]> writes:\n>\n> Abi> However, when I index the closed column, a bitmap scan is used\n> Abi> instead of an index scan, with slightly slower performance. Why\n> Abi> isn't an index scan being used, given that the exact same number\n> Abi> of rows are at play as in my query on the state column?\n>\n> Most likely difference is the correlation estimate for the conditions.\n> The cost of an index scan includes a factor based on how well correlated\n> the physical position of rows is with the index order, because this\n> affects the number of random seeks in the scan. But for nulls this\n> estimate cannot be performed, and bitmapscan is cheaper than plain\n> indexscan on poorly correlated data.\n>\n\nDoes this imply that the optimizer would always prefer the bitmapscan\nrather than index scan even if random page cost = 1, aka sequential cost,\nwhen the correlation is unknown like a null? Or only when it thinks random\naccess is more expensive by some significant factor?\n\n\n> --\n> Andrew (irc:RhodiumToad)\n>\n>\n\nOn Tue, Feb 19, 2019, 8:00 PM Andrew Gierth <[email protected] wrote:>>>>> \"Abi\" == Abi Noda <[email protected]> writes:\n\n Abi> However, when I index the closed column, a bitmap scan is used\n Abi> instead of an index scan, with slightly slower performance. Why\n Abi> isn't an index scan being used, given that the exact same number\n Abi> of rows are at play as in my query on the state column?\n\nMost likely difference is the correlation estimate for the conditions.\nThe cost of an index scan includes a factor based on how well correlated\nthe physical position of rows is with the index order, because this\naffects the number of random seeks in the scan. But for nulls this\nestimate cannot be performed, and bitmapscan is cheaper than plain\nindexscan on poorly correlated data.Does this imply that the optimizer would always prefer the bitmapscan rather than index scan even if random page cost = 1, aka sequential cost, when the correlation is unknown like a null? Or only when it thinks random access is more expensive by some significant factor?\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Tue, 19 Feb 2019 21:29:46 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 05:10:43PM -0700, Abi Noda wrote:\n> I have a table as defined below. The table contains 1,027,616 rows, 50,349\n> of which have state='open' and closed IS NULL. Since closed IS NULL for all\n> rows where state='open', I want to remove the unnecessary state column.\n> \n> CREATE TABLE tickets (\n> id bigserial primary key,\n> state character varying,\n> closed timestamp,\n...\n> );\n> \n> CREATE INDEX \"state_index\" ON \"tickets\" (\"state\") WHERE ((state)::text =\n> 'open'::text));\n> \n> As part of the process of removing the state column, I am trying to index\n> the closed column so I can achieve equal query performance (index scan) as\n> when I query on the state column as shown below:\n> \n> EXPLAIN ANALYZE select title, created, closed, updated from tickets where state = 'open';\n> Index Scan using state_index on tickets (cost=0.29..23430.20 rows=50349 width=64) (actual time=17.221..52.110 rows=51533 loops=1)\n> \n> However, when I index the closed column, a bitmap scan is used instead of\n> an index scan, with slightly slower performance. Why isn't an index scan\n> being used, given that the exact same number of rows are at play as in my\n> query on the state column? How do I index closed in a way where an index\n> scan is used?\n> \n> CREATE INDEX closed_index ON tickets (id) WHERE closed IS NULL;\n> EXPLAIN ANALYZE select title, created, closed, updated from tickets where closed IS NULL;\n> Bitmap Heap Scan on tickets (cost=824.62..33955.85 rows=50349 width=64) (actual time=10.420..56.095 rows=51537 loops=1)\n> -> Bitmap Index Scan on closed_index (cost=0.00..812.03 rows=50349 width=0) (actual time=6.005..6.005 rows=51537 loops=1)\n\nAre you really concerned about 4ms ? If this is a toy-sized test system,\nplease try on something resembling production, perhaps by loading production or\nfake data, or perhaps on a production system within a transactions (begin; CREATE\nINDEX CONCURRENTLY; explain ...; rollback).\n\nYou can see that most of the estimated cost is from the table (the index scan\naccounts for only 812 of total 33955 cost units). So I'm guessing the planner\nthinks that an index scan will either 1) access the table randomly; and/or, 2)\naccess a large fraction of the table.\n\nIf it was just built, the first (partial/conditional/predicate/where) index\nwill scan table in its \"physical\" order (if not sequentially).\n\nThe 2nd index is going to scan table in order of ID, which I'm guessing is not\n\"correlated\" with its physical order, so an index scan cost is computed as\naccessing a larger fraction of the table (but by using an \"bitmap\" scan it's at\nleast in physical order). In fact: 50349/17478 = ~3 tuples/page is low, so\nyou're accessing a large fraction of the table to return a small fraction of\nits tuples.\n\nYou can check what it thinks here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nYou could try CLUSTERing the table on ID (which requires a non-partial index)\nand ANALYZEing (which might cause this and other queries to be planned and/or\nperform differently). That causes the table to be locked exclusively. Then,\nthe planner knows that scanning index and returning results ordered by IDs\n(which doesn't matter) will also access table in physical order (which\nmatters), and maybe fewer pages need to be read, too.\n\nJustin\n\n",
"msg_date": "Tue, 19 Feb 2019 22:37:48 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "Thanks Justin.\n\nThe 4ms different in the examples isn't an accurate benchmark. I'm seeing\nabout a ~20% difference over a larger sample size. And this is on a fork of\nthe production database.\n\nApart from the end-performance, I'm motivated to figure out why one index\nresults in an index scan whereas the other one does not.\n\nI didn't mention this in my original email but I've separately tested\ndropping the `state` index, running VACUUM FULL on the table, then\nrecreating both indexes. The result was the same where querying on state\nproduced an index scan whereas closed produced a bitmap scan.\n\nAndrew's email and Michael's follow-up has me curious because it suggests\nI'm running into a issue specific to indexing on IS NULL, @Justin what do\nyou think of this?\n\nIn the meantime Justin I'll investigate some more of your suggestions.\n\nOn Tue, Feb 19, 2019 at 9:37 PM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Feb 19, 2019 at 05:10:43PM -0700, Abi Noda wrote:\n> > I have a table as defined below. The table contains 1,027,616 rows,\n> 50,349\n> > of which have state='open' and closed IS NULL. Since closed IS NULL for\n> all\n> > rows where state='open', I want to remove the unnecessary state column.\n> >\n> > CREATE TABLE tickets (\n> > id bigserial primary key,\n> > state character varying,\n> > closed timestamp,\n> ...\n> > );\n> >\n> > CREATE INDEX \"state_index\" ON \"tickets\" (\"state\") WHERE ((state)::text =\n> > 'open'::text));\n> >\n> > As part of the process of removing the state column, I am trying to index\n> > the closed column so I can achieve equal query performance (index scan)\n> as\n> > when I query on the state column as shown below:\n> >\n> > EXPLAIN ANALYZE select title, created, closed, updated from tickets\n> where state = 'open';\n> > Index Scan using state_index on tickets (cost=0.29..23430.20 rows=50349\n> width=64) (actual time=17.221..52.110 rows=51533 loops=1)\n> >\n> > However, when I index the closed column, a bitmap scan is used instead of\n> > an index scan, with slightly slower performance. Why isn't an index scan\n> > being used, given that the exact same number of rows are at play as in my\n> > query on the state column? How do I index closed in a way where an index\n> > scan is used?\n> >\n> > CREATE INDEX closed_index ON tickets (id) WHERE closed IS NULL;\n> > EXPLAIN ANALYZE select title, created, closed, updated from tickets\n> where closed IS NULL;\n> > Bitmap Heap Scan on tickets (cost=824.62..33955.85 rows=50349 width=64)\n> (actual time=10.420..56.095 rows=51537 loops=1)\n> > -> Bitmap Index Scan on closed_index (cost=0.00..812.03 rows=50349\n> width=0) (actual time=6.005..6.005 rows=51537 loops=1)\n>\n> Are you really concerned about 4ms ? If this is a toy-sized test system,\n> please try on something resembling production, perhaps by loading\n> production or\n> fake data, or perhaps on a production system within a transactions (begin;\n> CREATE\n> INDEX CONCURRENTLY; explain ...; rollback).\n>\n> You can see that most of the estimated cost is from the table (the index\n> scan\n> accounts for only 812 of total 33955 cost units). 
So I'm guessing the\n> planner\n> thinks that an index scan will either 1) access the table randomly;\n> and/or, 2)\n> access a large fraction of the table.\n>\n> If it was just built, the first (partial/conditional/predicate/where) index\n> will scan table in its \"physical\" order (if not sequentially).\n>\n> The 2nd index is going to scan table in order of ID, which I'm guessing is\n> not\n> \"correlated\" with its physical order, so an index scan cost is computed as\n> accessing a larger fraction of the table (but by using an \"bitmap\" scan\n> it's at\n> least in physical order). In fact: 50349/17478 = ~3 tuples/page is low, so\n> you're accessing a large fraction of the table to return a small fraction\n> of\n> its tuples.\n>\n> You can check what it thinks here:\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n>\n> You could try CLUSTERing the table on ID (which requires a non-partial\n> index)\n> and ANALYZEing (which might cause this and other queries to be planned\n> and/or\n> perform differently). That causes the table to be locked exclusively.\n> Then,\n> the planner knows that scanning index and returning results ordered by IDs\n> (which doesn't matter) will also access table in physical order (which\n> matters), and maybe fewer pages need to be read, too.\n>\n> Justin\n>\n\nThanks Justin.The 4ms different in the examples isn't an accurate benchmark. I'm seeing about a ~20% difference over a larger sample size. And this is on a fork of the production database.Apart from the end-performance, I'm motivated to figure out why one index results in an index scan whereas the other one does not.I didn't mention this in my original email but I've separately tested dropping the \n`state` index, running VACUUM FULL on the table, then recreating both indexes. The result was the same where querying on state produced an index scan whereas closed produced a bitmap scan.Andrew's email and Michael's follow-up has me curious because it suggests I'm running into a issue specific to indexing on IS NULL, @Justin what do you think of this?In the meantime Justin I'll investigate some more of your suggestions.On Tue, Feb 19, 2019 at 9:37 PM Justin Pryzby <[email protected]> wrote:On Tue, Feb 19, 2019 at 05:10:43PM -0700, Abi Noda wrote:\n> I have a table as defined below. The table contains 1,027,616 rows, 50,349\n> of which have state='open' and closed IS NULL. Since closed IS NULL for all\n> rows where state='open', I want to remove the unnecessary state column.\n> \n> CREATE TABLE tickets (\n> id bigserial primary key,\n> state character varying,\n> closed timestamp,\n...\n> );\n> \n> CREATE INDEX \"state_index\" ON \"tickets\" (\"state\") WHERE ((state)::text =\n> 'open'::text));\n> \n> As part of the process of removing the state column, I am trying to index\n> the closed column so I can achieve equal query performance (index scan) as\n> when I query on the state column as shown below:\n> \n> EXPLAIN ANALYZE select title, created, closed, updated from tickets where state = 'open';\n> Index Scan using state_index on tickets (cost=0.29..23430.20 rows=50349 width=64) (actual time=17.221..52.110 rows=51533 loops=1)\n> \n> However, when I index the closed column, a bitmap scan is used instead of\n> an index scan, with slightly slower performance. Why isn't an index scan\n> being used, given that the exact same number of rows are at play as in my\n> query on the state column? 
How do I index closed in a way where an index\n> scan is used?\n> \n> CREATE INDEX closed_index ON tickets (id) WHERE closed IS NULL;\n> EXPLAIN ANALYZE select title, created, closed, updated from tickets where closed IS NULL;\n> Bitmap Heap Scan on tickets (cost=824.62..33955.85 rows=50349 width=64) (actual time=10.420..56.095 rows=51537 loops=1)\n> -> Bitmap Index Scan on closed_index (cost=0.00..812.03 rows=50349 width=0) (actual time=6.005..6.005 rows=51537 loops=1)\n\nAre you really concerned about 4ms ? If this is a toy-sized test system,\nplease try on something resembling production, perhaps by loading production or\nfake data, or perhaps on a production system within a transactions (begin; CREATE\nINDEX CONCURRENTLY; explain ...; rollback).\n\nYou can see that most of the estimated cost is from the table (the index scan\naccounts for only 812 of total 33955 cost units). So I'm guessing the planner\nthinks that an index scan will either 1) access the table randomly; and/or, 2)\naccess a large fraction of the table.\n\nIf it was just built, the first (partial/conditional/predicate/where) index\nwill scan table in its \"physical\" order (if not sequentially).\n\nThe 2nd index is going to scan table in order of ID, which I'm guessing is not\n\"correlated\" with its physical order, so an index scan cost is computed as\naccessing a larger fraction of the table (but by using an \"bitmap\" scan it's at\nleast in physical order). In fact: 50349/17478 = ~3 tuples/page is low, so\nyou're accessing a large fraction of the table to return a small fraction of\nits tuples.\n\nYou can check what it thinks here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nYou could try CLUSTERing the table on ID (which requires a non-partial index)\nand ANALYZEing (which might cause this and other queries to be planned and/or\nperform differently). That causes the table to be locked exclusively. Then,\nthe planner knows that scanning index and returning results ordered by IDs\n(which doesn't matter) will also access table in physical order (which\nmatters), and maybe fewer pages need to be read, too.\n\nJustin",
"msg_date": "Tue, 19 Feb 2019 21:58:43 -0700",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 09:29:46PM -0700, Michael Lewis wrote:\n> On Tue, Feb 19, 2019, 8:00 PM Andrew Gierth <[email protected]> wrote:\n> \n> > >>>>> \"Abi\" == Abi Noda <[email protected]> writes:\n> > Abi> However, when I index the closed column, a bitmap scan is used\n> > Abi> instead of an index scan, with slightly slower performance. Why\n> > Abi> isn't an index scan being used, given that the exact same number\n> > Abi> of rows are at play as in my query on the state column?\n> >\n> > Most likely difference is the correlation estimate for the conditions.\n> > The cost of an index scan includes a factor based on how well correlated\n> > the physical position of rows is with the index order, because this\n> > affects the number of random seeks in the scan. But for nulls this\n> > estimate cannot be performed, and bitmapscan is cheaper than plain\n> > indexscan on poorly correlated data.\n> \n> Does this imply that the optimizer would always prefer the bitmapscan\n> rather than index scan even if random page cost = 1, aka sequential cost,\n> when the correlation is unknown like a null? Or only when it thinks random\n> access is more expensive by some significant factor?\n\nNo; for one, since for a bitmap scan, the heap scan can't begin until the index\nscan is done, so there's a high(er) initial cost.\n\nOtherwise bitmap scan could always be used and all access could be ordered\n(even if not sequential).\n\nJustin\n\n",
"msg_date": "Wed, 20 Feb 2019 02:05:00 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 11:59 PM Abi Noda <[email protected]> wrote:\n\n> Thanks Justin.\n>\n> The 4ms different in the examples isn't an accurate benchmark. I'm seeing\n> about a ~20% difference over a larger sample size. And this is on a fork of\n> the production database.\n>\n\nPlease show the execution plans from that larger sample, if that is the one\nthat is most relevant.\n\nYou can \"set enable_bitmapscan = off\" to get rid of the bitmap scan in\norder to see the estimated cost and actual performance of the next-best\nplan (which will probably the regular index scan).\n\nCheers,\n\nJeff\n\nOn Tue, Feb 19, 2019 at 11:59 PM Abi Noda <[email protected]> wrote:Thanks Justin.The 4ms different in the examples isn't an accurate benchmark. I'm seeing about a ~20% difference over a larger sample size. And this is on a fork of the production database.Please show the execution plans from that larger sample, if that is the one that is most relevant.You can \"set enable_bitmapscan = off\" to get rid of the bitmap scan in order to see the estimated cost and actual performance of the next-best plan (which will probably the regular index scan).Cheers,Jeff",
"msg_date": "Wed, 20 Feb 2019 09:58:22 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why isn't an index scan being used?"
}
] |
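A minimal SQL sketch of the two checks suggested in the thread above (the pg_stats statistics Justin points to, and Jeff's enable_bitmapscan test). The table and column names (tickets, closed, state) come from the thread; everything else is illustrative only and untested against that schema.

-- Planner statistics for the columns behind the partial indexes:
-- correlation close to +1/-1 favors a plain index scan, close to 0 favors a bitmap scan.
SELECT attname, n_distinct, null_frac, correlation
FROM pg_stats
WHERE tablename = 'tickets' AND attname IN ('state', 'closed');

-- Session-local test of the next-best plan with bitmap scans disabled:
BEGIN;
SET LOCAL enable_bitmapscan = off;
EXPLAIN ANALYZE
SELECT title, created, closed, updated FROM tickets WHERE closed IS NULL;
ROLLBACK;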
[
{
"msg_contents": "Hi, I have an Amazon Linux based Postgresql 11 server here on a \nt2.medium EC2 instance.\n\nIt is serving 24 worker processes that read jobs from a queue (thanks to \nSELECT ... FOR UPDATE SKIP LOCKED!) and do jobs some of which are \nreading and writing business data to the database, others are only \nreading, and some don't hit the business data at all, only the queue.\n\nEverything flows quite nicely. Except, I don't understand why I can't \nmax out the CPU or the IO, instead, IO is almost negligible yet the CPU \nis at 30% hardly hitting 50%.\n\nHere I give you a view of top:\n\ntop - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32, 18.92, \n13.80 Tasks: 338 total, 24 running, 111 sleeping, 0 stopped, 0 zombie \n%Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 68.7 \nst KiB Mem : 4040028 total, 1070368 free, 324460 used, 2645200 \nbuff/cache KiB Swap: 0 total, 0 free, 0 used. 2223720 avail Mem PID USER \nPR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7678 postgres 20 0 1235072 \n509744 506356 R 8.7 12.6 1:14.82 postgres: auser integrator \n172.31.61.242(25783) BIND 7998 postgres 20 0 1235108 516480 512772 R 8.7 \n12.8 1:16.20 postgres: auser integrator 172.31.49.159(51708) SELECT 2183 \npostgres 20 0 1261436 985.8m 982544 R 8.5 25.0 0:44.04 postgres: auser \nintegrator [local] SELECT 7653 postgres 20 0 1235180 501760 497984 R 8.2 \n12.4 1:13.66 postgres: auser integrator 172.31.54.158(47959) SELECT 7677 \npostgres 20 0 1235144 506740 502980 S 8.2 12.5 1:13.54 postgres: auser \nintegrator 172.31.61.242(56510) idle in t+ 7680 postgres 20 0 1234684 \n484356 481100 R 8.2 12.0 1:13.86 postgres: auser integrator \n172.31.61.242(49966) SELECT 2631 postgres 20 0 1235120 937964 934528 R \n7.9 23.2 10:48.39 postgres: auser integrator 172.31.49.159(33522) idle \nin t+ 7664 postgres 20 0 1235104 524664 520976 R 7.9 13.0 1:13.95 \npostgres: auser integrator 172.31.57.147(30036) BIND 7682 postgres 20 0 \n1234660 496188 492956 R 7.9 12.3 1:15.50 postgres: auser integrator \n172.31.61.242(26330) COMMIT 7687 postgres 20 0 1234876 490104 486656 R \n7.9 12.1 1:16.77 postgres: auser integrator 172.31.63.71(25285) BIND \n7660 postgres 20 0 1235100 502004 498596 R 7.6 12.4 1:18.00 postgres: \nauser integrator 172.31.57.147(46051) PARSE 7662 postgres 20 0 1235148 \n503532 500280 R 7.6 12.5 1:14.03 postgres: auser integrator \n172.31.57.147(48852) UPDATE 7681 postgres 20 0 1234688 516864 513596 R \n7.6 12.8 1:17.77 postgres: auser integrator 172.31.61.242(48192) SELECT \n7685 postgres 20 0 1235096 515352 511968 R 7.6 12.8 1:16.17 postgres: \nauser integrator 172.31.63.71(62540) BIND 7689 postgres 20 0 1235100 \n509504 505836 S 7.6 12.6 1:14.78 postgres: auser integrator \n172.31.63.71(12287) idle in tr+ 7684 postgres 20 0 1235052 500336 496916 \nR 7.3 12.4 1:14.83 postgres: auser integrator 172.31.63.71(19633) BIND \n7654 postgres 20 0 1235224 514512 511040 S 7.0 12.7 1:18.89 postgres: \nauser integrator 172.31.57.147(43437) idle in t+ 7656 postgres 20 0 \n1234684 510900 507636 R 7.0 12.6 1:16.19 postgres: auser integrator \n172.31.54.158(30397) idle in t+ 7661 postgres 20 0 1234684 514920 511648 \nS 7.0 12.7 1:16.27 postgres: auser integrator 172.31.57.147(38243) \nSELECT 7679 postgres 20 0 1235112 512228 508544 R 7.0 12.7 1:14.60 \npostgres: auser integrator 172.31.61.242(34261) PARSE 7663 postgres 20 0 \n1234684 517068 513812 R 6.8 12.8 1:17.42 postgres: auser integrator \n172.31.57.147(19711) SELECT 7655 postgres 20 0 1235036 505584 502208 R \n6.5 12.5 1:16.17 
postgres: auser integrator 172.31.54.158(24401) BIND \n7658 postgres 20 0 1235072 509432 506108 R 6.5 12.6 1:15.90 postgres: \nauser integrator 172.31.54.158(48566) idle in t+ 7659 postgres 20 0 \n1234688 497488 494232 S 6.2 12.3 1:15.70 postgres: auser integrator \n172.31.54.158(49841) idle in t+ 7686 postgres 20 0 1234748 508412 505084 \nR 5.9 12.6 1:14.70 postgres: auser integrator 172.31.63.71(54938) BIND \n7688 postgres 20 0 1234708 502416 499204 S 5.9 12.4 1:14.14 postgres: \nauser integrator 172.31.63.71(49857) SELECT 7657 postgres 20 0 1234684 \n513740 510476 R 5.6 12.7 1:15.96 postgres: auser integrator \n172.31.54.158(47300) SELECT 1304 ec2-user 20 0 171400 4648 3840 R 1.7 \n0.1 1:25.66 top -c 8492 root 20 0 0 0 0 R 1.7 0.0 0:00.41 [kworker/1:2]\n\nand of iostat:\n\navg-cpu: %user %nice %system %iowait %steal %idle 36.41 0.00 3.80 0.00 \n59.78 0.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz \nawait r_await w_await svctm %util xvda 0.00 0.00 0.00 7.69 0.00 92.31 \n24.00 0.02 3.00 0.00 3.00 3.00 2.31 xvdf 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdg 0.00 0.00 0.00 11.54 0.00 130.77 \n22.67 0.05 4.00 0.00 4.00 0.33 0.38 xvdh 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdi 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdj 0.00 0.00 5.77 0.96 46.15 7.69 \n16.00 0.01 1.14 0.67 4.00 1.14 0.77 xvdk 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdl 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdm 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdn 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdo 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdp 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdq 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdr 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvds 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdt 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdu 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdv 0.00 0.00 0.00 8.65 0.00 76.92 \n17.78 0.03 4.00 0.00 4.00 0.44 0.38 xvdw 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdx 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdy 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdz 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdaa 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 xvdab 0.00 0.00 0.00 2.88 0.00 30.77 \n21.33 0.01 4.00 0.00 4.00 1.33 0.38\n\npreviously I had hit 100 %util here, that was when I didn't have the \ntables so spread out over so many tablespaces. Now I have it spread out \nlike in the olden days where you spread out your tables over many \n\"spindles\", and I did this here so I could see which tables or indexes \nwould be bottlenecks.\n\nSo how can it be that queries take quite long without the process \nrunning at higher CPU%?\n\nOr is there something wrong with the total CPU% estimated by both top \nand iostat? From the top it looks like I have 24 worker processes use 8% \neach, most of them in R(unning) state, so that would be 192%, which is \ndivided over the 2 CPUs of the t2.medium instance, really 96%. 
So I am \nCPU bound after all?\n\nregards,\n-Gunther\n\n\n\n\n\n\n\nHi, I have an Amazon Linux based Postgresql 11 server here on a\n t2.medium EC2 instance.\nIt is serving 24 worker processes that read jobs from a queue\n (thanks to SELECT ... FOR UPDATE SKIP LOCKED!) and do jobs some of\n which are reading and writing business data to the database,\n others are only reading, and some don't hit the business data at\n all, only the queue. \n\nEverything flows quite nicely. Except, I don't understand why I\n can't max out the CPU or the IO, instead, IO is almost negligible\n yet the CPU is at 30% hardly hitting 50%.\nHere I give you a view of top:\ntop - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32, 18.92, 13.80\nTasks: 338 total, 24 running, 111 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 68.7 st\nKiB Mem : 4040028 total, 1070368 free, 324460 used, 2645200 buff/cache\nKiB Swap: 0 total, 0 free, 0 used. 2223720 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 7678 postgres 20 0 1235072 509744 506356 R 8.7 12.6 1:14.82 postgres: auser integrator 172.31.61.242(25783) BIND\n 7998 postgres 20 0 1235108 516480 512772 R 8.7 12.8 1:16.20 postgres: auser integrator 172.31.49.159(51708) SELECT\n 2183 postgres 20 0 1261436 985.8m 982544 R 8.5 25.0 0:44.04 postgres: auser integrator [local] SELECT\n 7653 postgres 20 0 1235180 501760 497984 R 8.2 12.4 1:13.66 postgres: auser integrator 172.31.54.158(47959) SELECT\n 7677 postgres 20 0 1235144 506740 502980 S 8.2 12.5 1:13.54 postgres: auser integrator 172.31.61.242(56510) idle in t+\n 7680 postgres 20 0 1234684 484356 481100 R 8.2 12.0 1:13.86 postgres: auser integrator 172.31.61.242(49966) SELECT\n 2631 postgres 20 0 1235120 937964 934528 R 7.9 23.2 10:48.39 postgres: auser integrator 172.31.49.159(33522) idle in t+\n 7664 postgres 20 0 1235104 524664 520976 R 7.9 13.0 1:13.95 postgres: auser integrator 172.31.57.147(30036) BIND\n 7682 postgres 20 0 1234660 496188 492956 R 7.9 12.3 1:15.50 postgres: auser integrator 172.31.61.242(26330) COMMIT\n 7687 postgres 20 0 1234876 490104 486656 R 7.9 12.1 1:16.77 postgres: auser integrator 172.31.63.71(25285) BIND\n 7660 postgres 20 0 1235100 502004 498596 R 7.6 12.4 1:18.00 postgres: auser integrator 172.31.57.147(46051) PARSE\n 7662 postgres 20 0 1235148 503532 500280 R 7.6 12.5 1:14.03 postgres: auser integrator 172.31.57.147(48852) UPDATE\n 7681 postgres 20 0 1234688 516864 513596 R 7.6 12.8 1:17.77 postgres: auser integrator 172.31.61.242(48192) SELECT\n 7685 postgres 20 0 1235096 515352 511968 R 7.6 12.8 1:16.17 postgres: auser integrator 172.31.63.71(62540) BIND\n 7689 postgres 20 0 1235100 509504 505836 S 7.6 12.6 1:14.78 postgres: auser integrator 172.31.63.71(12287) idle in tr+\n 7684 postgres 20 0 1235052 500336 496916 R 7.3 12.4 1:14.83 postgres: auser integrator 172.31.63.71(19633) BIND\n 7654 postgres 20 0 1235224 514512 511040 S 7.0 12.7 1:18.89 postgres: auser integrator 172.31.57.147(43437) idle in t+\n 7656 postgres 20 0 1234684 510900 507636 R 7.0 12.6 1:16.19 postgres: auser integrator 172.31.54.158(30397) idle in t+\n 7661 postgres 20 0 1234684 514920 511648 S 7.0 12.7 1:16.27 postgres: auser integrator 172.31.57.147(38243) SELECT\n 7679 postgres 20 0 1235112 512228 508544 R 7.0 12.7 1:14.60 postgres: auser integrator 172.31.61.242(34261) PARSE\n 7663 postgres 20 0 1234684 517068 513812 R 6.8 12.8 1:17.42 postgres: auser integrator 172.31.57.147(19711) SELECT\n 7655 postgres 20 0 1235036 505584 502208 R 6.5 
12.5 1:16.17 postgres: auser integrator 172.31.54.158(24401) BIND\n 7658 postgres 20 0 1235072 509432 506108 R 6.5 12.6 1:15.90 postgres: auser integrator 172.31.54.158(48566) idle in t+\n 7659 postgres 20 0 1234688 497488 494232 S 6.2 12.3 1:15.70 postgres: auser integrator 172.31.54.158(49841) idle in t+\n 7686 postgres 20 0 1234748 508412 505084 R 5.9 12.6 1:14.70 postgres: auser integrator 172.31.63.71(54938) BIND\n 7688 postgres 20 0 1234708 502416 499204 S 5.9 12.4 1:14.14 postgres: auser integrator 172.31.63.71(49857) SELECT\n 7657 postgres 20 0 1234684 513740 510476 R 5.6 12.7 1:15.96 postgres: auser integrator 172.31.54.158(47300) SELECT\n 1304 ec2-user 20 0 171400 4648 3840 R 1.7 0.1 1:25.66 top -c\n 8492 root 20 0 0 0 0 R 1.7 0.0 0:00.41 [kworker/1:2]\n\n and of iostat:\navg-cpu: %user %nice %system %iowait %steal %idle\n 36.41 0.00 3.80 0.00 59.78 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.00 0.00 0.00 7.69 0.00 92.31 24.00 0.02 3.00 0.00 3.00 3.00 2.31\nxvdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdg 0.00 0.00 0.00 11.54 0.00 130.77 22.67 0.05 4.00 0.00 4.00 0.33 0.38\nxvdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdj 0.00 0.00 5.77 0.96 46.15 7.69 16.00 0.01 1.14 0.67 4.00 1.14 0.77\nxvdk 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdl 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdm 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdn 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdp 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdq 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdr 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvds 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdt 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdu 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdv 0.00 0.00 0.00 8.65 0.00 76.92 17.78 0.03 4.00 0.00 4.00 0.44 0.38\nxvdw 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdx 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdy 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdz 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdaa 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nxvdab 0.00 0.00 0.00 2.88 0.00 30.77 21.33 0.01 4.00 0.00 4.00 1.33 0.38\n\npreviously I had hit 100 %util here, that was when I didn't have\n the tables so spread out over so many tablespaces. Now I have it\n spread out like in the olden days where you spread out your tables\n over many \"spindles\", and I did this here so I could see which\n tables or indexes would be bottlenecks.\nSo how can it be that queries take quite long without the process\n running at higher CPU%? \n\nOr is there something wrong with the total CPU% estimated by both\n top and iostat? From the top it looks like I have 24 worker\n processes use 8% each, most of them in R(unning) state, so that\n would be 192%, which is divided over the 2 CPUs of the t2.medium\n instance, really 96%. So I am CPU bound after all?\nregards,\n -Gunther",
"msg_date": "Wed, 20 Feb 2019 18:32:49 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "neither CPU nor IO bound, but throttled performance"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 06:32:49PM -0500, Gunther wrote:\n> Hi, I have an Amazon Linux based Postgresql 11 server here on a t2.medium\n> EC2 instance.\n> \n> Everything flows quite nicely. Except, I don't understand why I can't max\n> out the CPU or the IO, instead, IO is almost negligible yet the CPU is at\n> 30% hardly hitting 50%.\n\n> avg-cpu: %user %nice %system %iowait %steal %idle 36.41 0.00 3.80 0.00 59.78\n> 0.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await\n> r_await w_await svctm %util xvda 0.00 0.00 0.00 7.69 0.00 92.31 24.00 0.02\n\nThis is unreadable, please try to attach it ?\n\n> previously I had hit 100 %util here, that was when I didn't have the tables\n> so spread out over so many tablespaces. Now I have it spread out like in the\n> olden days where you spread out your tables over many \"spindles\", and I did\n> this here so I could see which tables or indexes would be bottlenecks.\n\nWhat was the old storage configuration and what is it now ?\n\n> So how can it be that queries take quite long without the process running at\n> higher CPU%?\n\nYou said everything flows nicely, but take a long time, and \"throttled\", can\nyou show an high-level performance change ? Other than %cpu or %io.\n\nJustin\n\n",
"msg_date": "Wed, 20 Feb 2019 19:32:07 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: neither CPU nor IO bound, but throttled performance"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 12:34 AM Gunther <[email protected]> wrote:\n\n> Hi, I have an Amazon Linux based Postgresql 11 server here on a t2.medium\n> EC2 instance.\n>\n> It is serving 24 worker processes that read jobs from a queue (thanks to\n> SELECT ... FOR UPDATE SKIP LOCKED!) and do jobs some of which are reading\n> and writing business data to the database, others are only reading, and\n> some don't hit the business data at all, only the queue.\n>\n> Everything flows quite nicely. Except, I don't understand why I can't max\n> out the CPU or the IO, instead, IO is almost negligible yet the CPU is at\n> 30% hardly hitting 50%.\n>\n> Here I give you a view of top:\n>\n> top - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32, 18.92, 13.80\n> Tasks: 338 total, 24 running, 111 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 68.7 st\n>\n>\nIf I read that right, it's about 70% \"steal\". The description for this is\n\"the percentage of time spent in involuntary wait by the virtual CPU or\nCPUs while the hypervisor was servicing another virtual processor.\".\n\nSo basically, the CPU is spent dealing with other peoples VMs on the same\nhardware. Welcome to the multi-tenant cloud.\n\nIn particular, I believe T series instances get a limited number of CPU\n\"credits\" per hours. My guess is you've hit this limit and are thus being\nthrottled. T series are not intended for persistent workloads. Either way,\nthis is probably a question better asked at the Amazon EC2 forums rather\nthan PostgreSQL as you'll find more people who know the EC2 interactions\nthere.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Feb 21, 2019 at 12:34 AM Gunther <[email protected]> wrote:\n\nHi, I have an Amazon Linux based Postgresql 11 server here on a\n t2.medium EC2 instance.\nIt is serving 24 worker processes that read jobs from a queue\n (thanks to SELECT ... FOR UPDATE SKIP LOCKED!) and do jobs some of\n which are reading and writing business data to the database,\n others are only reading, and some don't hit the business data at\n all, only the queue. \n\nEverything flows quite nicely. Except, I don't understand why I\n can't max out the CPU or the IO, instead, IO is almost negligible\n yet the CPU is at 30% hardly hitting 50%.\nHere I give you a view of top:\ntop - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32, 18.92, 13.80\nTasks: 338 total, 24 running, 111 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 68.7 stIf I read that right, it's about 70% \"steal\". The description for this is \"the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.\".So basically, the CPU is spent dealing with other peoples VMs on the same hardware. Welcome to the multi-tenant cloud. In particular, I believe T series instances get a limited number of CPU \"credits\" per hours. My guess is you've hit this limit and are thus being throttled. T series are not intended for persistent workloads. Either way, this is probably a question better asked at the Amazon EC2 forums rather than PostgreSQL as you'll find more people who know the EC2 interactions there. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 21 Feb 2019 10:08:19 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: neither CPU nor IO bound, but throttled performance"
},
{
"msg_contents": "Thank you Magnus. 68% steal. Indeed. You probably hit the target. Yes.\n\nThat explains the discrepancy. I need to watch and understand that CPU \ncredits issue.\n\nregards,\n-Gunther\n\nOn 2/21/2019 4:08, Magnus Hagander wrote:\n> On Thu, Feb 21, 2019 at 12:34 AM Gunther <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi, I have an Amazon Linux based Postgresql 11 server here on a\n> t2.medium EC2 instance.\n>\n> It is serving 24 worker processes that read jobs from a queue\n> (thanks to SELECT ... FOR UPDATE SKIP LOCKED!) and do jobs some of\n> which are reading and writing business data to the database,\n> others are only reading, and some don't hit the business data at\n> all, only the queue.\n>\n> Everything flows quite nicely. Except, I don't understand why I\n> can't max out the CPU or the IO, instead, IO is almost negligible\n> yet the CPU is at 30% hardly hitting 50%.\n>\n> Here I give you a view of top:\n>\n> top - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32,\n> 18.92, 13.80 Tasks: 338 total, 24 running, 111 sleeping, 0\n> stopped, 0 zombie %Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0\n> wa, 0.0 hi, 0.0 si, 68.7 st\n>\n>\n> If I read that right, it's about 70% \"steal\". The description for this \n> is \"the percentage of time spent in involuntary wait by the virtual \n> CPU or CPUs while the hypervisor was servicing another virtual \n> processor.\".\n>\n> So basically, the CPU is spent dealing with other peoples VMs on the \n> same hardware. Welcome to the multi-tenant cloud.\n>\n> In particular, I believe T series instances get a limited number of \n> CPU \"credits\" per hours. My guess is you've hit this limit and are \n> thus being throttled. T series are not intended for persistent \n> workloads. Either way, this is probably a question better asked at the \n> Amazon EC2 forums rather than PostgreSQL as you'll find more people \n> who know the EC2 interactions there.\n> -- \n> Magnus Hagander\n> Me: https://www.hagander.net/ <http://www.hagander.net/>\n> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\n\n\n\n\n\nThank you Magnus. 68% steal. Indeed. You probably hit the target.\n Yes. \n\nThat explains the discrepancy. I need to watch and understand\n that CPU credits issue.\nregards,\n -Gunther\n\nOn 2/21/2019 4:08, Magnus Hagander\n wrote:\n\n\n\n\n\nOn Thu, Feb 21, 2019 at 12:34 AM Gunther <[email protected]>\n wrote:\n\n\n\n\nHi, I have an Amazon Linux based Postgresql 11 server\n here on a t2.medium EC2 instance.\nIt is serving 24 worker processes that read jobs from\n a queue (thanks to SELECT ... FOR UPDATE SKIP LOCKED!)\n and do jobs some of which are reading and writing\n business data to the database, others are only\n reading, and some don't hit the business data at all,\n only the queue. \n\nEverything flows quite nicely. Except, I don't\n understand why I can't max out the CPU or the IO,\n instead, IO is almost negligible yet the CPU is at 30%\n hardly hitting 50%.\nHere I give you a view of top:\ntop - 23:17:09 up 45 days, 2:07, 4 users, load average: 20.32, 18.92, 13.80\nTasks: 338 total, 24 running, 111 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 28.7 us, 2.5 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 68.7 st\n\n\n\n\nIf I read that right, it's about 70% \"steal\". 
The\n description for this is \"the percentage of time spent in\n involuntary wait by the virtual CPU or CPUs while the\n hypervisor was servicing another virtual processor.\".\n\n\nSo basically, the CPU is spent dealing with other\n peoples VMs on the same hardware. Welcome to the\n multi-tenant cloud. \n\n\nIn particular, I believe T series instances get a\n limited number of CPU \"credits\" per hours. My guess is\n you've hit this limit and are thus being throttled. T\n series are not intended for persistent workloads. Either\n way, this is probably a question better asked at the\n Amazon EC2 forums rather than PostgreSQL as you'll find\n more people who know the EC2 interactions there.\n \n\n -- \n\n\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 21 Feb 2019 10:59:44 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: neither CPU nor IO bound, but throttled performance"
}
] |
[
{
"msg_contents": "Hi all,\n\nI need to optimize the following query\nhttp://paste.debian.net/hidden/ef08f864/\nI use it to create a materialized view, but I think there is room for\noptimization.\nI tried to\nSET join_collapse_limit TO 15;\nwith to real difference.\n\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\nDo you have any hint how to optimize this ?\nhttps://explain.depesz.com/s/6nf\n\nRegards\nMichaël\n\nHi all,I need to optimize the following queryhttp://paste.debian.net/hidden/ef08f864/I use it to create a materialized view, but I think there is room for optimization.I tried to SET join_collapse_limit TO 15;with to real difference.Explain shows that the GROUP AGGREGATE and needed sort kill the performance.Do you have any hint how to optimize this ?https://explain.depesz.com/s/6nfRegardsMichaël",
"msg_date": "Fri, 22 Feb 2019 16:14:05 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Aggregate and many LEFT JOIN"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 04:14:05PM +0100, kimaidou wrote:\n> Explain shows that the GROUP AGGREGATE and needed sort kill the performance.\n> Do you have any hint how to optimize this ?\n> https://explain.depesz.com/s/6nf\n\nThis is writing 2GB tempfile, perhaps the query would benefit from larger\nwork_mem:\n|Sort (cost=3,014,498.66..3,016,923.15 rows=969,796 width=1,818) (actual time=21,745.193..22,446.561 rows=1,212,419 loops=1)\n| Sort Method: external sort Disk: 1782200kB\n| Buffers: shared hit=5882951, temp read=230958 written=230958\n\nThis is apparently joining without indices:\n|Nested Loop Left Join (cost=1.76..360,977.37 rows=321,583 width=1,404) (actual time=0.080..1,953.007 rows=321,849 loops=1)\n| Join Filter: (tgc1.groupe_nom = t.group1_inpn)\n| Rows Removed by Join Filter: 965547\n| Buffers: shared hit=1486327\n\nThis perhaps should have an index on tgc2.groupe_type ?\n|Index Scan using t_group_categorie_pkey on taxon.t_group_categorie tgc2 (cost=0.14..0.42 rows=1 width=28) (actual time=0.002..0.002 rows=1 loops=321,849)\n| Index Cond: (tgc2.groupe_nom = t.group2_inpn)\n| Filter: (tgc2.groupe_type = 'group2_inpn'::text)\n| Buffers: shared hit=643687\n\nThis would perhaps benefit from an index on tv.cd_ref ?\n|Index Scan using taxref_consolide_non_filtre_cd_nom_idx on taxon.taxref_consolide_non_filtre tv (cost=0.42..0.63 rows=1 width=94) (actual time=0.002..0.002 rows=1 loops=690,785)\n| Index Cond: (tv.cd_nom = t.cd_ref)\n| Filter: (tv.cd_nom = tv.cd_ref)\n| Buffers: shared hit=2764875\n\nI don't think it's causing a significant fraction of the issue, but for some\nreason this is overestimating rowcount by 2000. Do you need to VACUUM ANALYZE\nthe table ?\n|Seq Scan on occtax.personne p_1 (cost=0.00..78.04 ROWS=2,204 width=56) (actual time=0.011..0.011 ROWS=1 loops=1)\n\nJustin\n\n",
"msg_date": "Fri, 22 Feb 2019 09:54:15 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "Curious- Is geqo_threshold still set to 12? Is increasing\njoin_collapse_limit to be higher than geqo_threshold going to have a\nnoticeable impact?\n\nThe disk sorts are the killer as Justin says. I wonder how it performs with\nthat increased significantly. Is the storage SSD or traditional hard disks?\n\n*Michael Lewis*\n\nOn Fri, Feb 22, 2019 at 8:54 AM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Feb 22, 2019 at 04:14:05PM +0100, kimaidou wrote:\n> > Explain shows that the GROUP AGGREGATE and needed sort kill the\n> performance.\n> > Do you have any hint how to optimize this ?\n> > https://explain.depesz.com/s/6nf\n>\n> This is writing 2GB tempfile, perhaps the query would benefit from larger\n> work_mem:\n> |Sort (cost=3,014,498.66..3,016,923.15 rows=969,796 width=1,818) (actual\n> time=21,745.193..22,446.561 rows=1,212,419 loops=1)\n> | Sort Method: external sort Disk: 1782200kB\n> | Buffers: shared hit=5882951, temp read=230958 written=230958\n>\n> This is apparently joining without indices:\n> |Nested Loop Left Join (cost=1.76..360,977.37 rows=321,583 width=1,404)\n> (actual time=0.080..1,953.007 rows=321,849 loops=1)\n> | Join Filter: (tgc1.groupe_nom = t.group1_inpn)\n> | Rows Removed by Join Filter: 965547\n> | Buffers: shared hit=1486327\n>\n> This perhaps should have an index on tgc2.groupe_type ?\n> |Index Scan using t_group_categorie_pkey on taxon.t_group_categorie tgc2\n> (cost=0.14..0.42 rows=1 width=28) (actual time=0.002..0.002 rows=1\n> loops=321,849)\n> | Index Cond: (tgc2.groupe_nom = t.group2_inpn)\n> | Filter: (tgc2.groupe_type = 'group2_inpn'::text)\n> | Buffers: shared hit=643687\n>\n> This would perhaps benefit from an index on tv.cd_ref ?\n> |Index Scan using taxref_consolide_non_filtre_cd_nom_idx on\n> taxon.taxref_consolide_non_filtre tv (cost=0.42..0.63 rows=1 width=94)\n> (actual time=0.002..0.002 rows=1 loops=690,785)\n> | Index Cond: (tv.cd_nom = t.cd_ref)\n> | Filter: (tv.cd_nom = tv.cd_ref)\n> | Buffers: shared hit=2764875\n>\n> I don't think it's causing a significant fraction of the issue, but for\n> some\n> reason this is overestimating rowcount by 2000. Do you need to VACUUM\n> ANALYZE\n> the table ?\n> |Seq Scan on occtax.personne p_1 (cost=0.00..78.04 ROWS=2,204 width=56)\n> (actual time=0.011..0.011 ROWS=1 loops=1)\n>\n> Justin\n>\n>\n\nCurious- Is geqo_threshold still set to 12? Is increasing join_collapse_limit to be higher than geqo_threshold going to have a noticeable impact?The disk sorts are the killer as Justin says. I wonder how it performs with that increased significantly. 
Is the storage SSD or traditional hard disks?Michael LewisOn Fri, Feb 22, 2019 at 8:54 AM Justin Pryzby <[email protected]> wrote:On Fri, Feb 22, 2019 at 04:14:05PM +0100, kimaidou wrote:\n> Explain shows that the GROUP AGGREGATE and needed sort kill the performance.\n> Do you have any hint how to optimize this ?\n> https://explain.depesz.com/s/6nf\n\nThis is writing 2GB tempfile, perhaps the query would benefit from larger\nwork_mem:\n|Sort (cost=3,014,498.66..3,016,923.15 rows=969,796 width=1,818) (actual time=21,745.193..22,446.561 rows=1,212,419 loops=1)\n| Sort Method: external sort Disk: 1782200kB\n| Buffers: shared hit=5882951, temp read=230958 written=230958\n\nThis is apparently joining without indices:\n|Nested Loop Left Join (cost=1.76..360,977.37 rows=321,583 width=1,404) (actual time=0.080..1,953.007 rows=321,849 loops=1)\n| Join Filter: (tgc1.groupe_nom = t.group1_inpn)\n| Rows Removed by Join Filter: 965547\n| Buffers: shared hit=1486327\n\nThis perhaps should have an index on tgc2.groupe_type ?\n|Index Scan using t_group_categorie_pkey on taxon.t_group_categorie tgc2 (cost=0.14..0.42 rows=1 width=28) (actual time=0.002..0.002 rows=1 loops=321,849)\n| Index Cond: (tgc2.groupe_nom = t.group2_inpn)\n| Filter: (tgc2.groupe_type = 'group2_inpn'::text)\n| Buffers: shared hit=643687\n\nThis would perhaps benefit from an index on tv.cd_ref ?\n|Index Scan using taxref_consolide_non_filtre_cd_nom_idx on taxon.taxref_consolide_non_filtre tv (cost=0.42..0.63 rows=1 width=94) (actual time=0.002..0.002 rows=1 loops=690,785)\n| Index Cond: (tv.cd_nom = t.cd_ref)\n| Filter: (tv.cd_nom = tv.cd_ref)\n| Buffers: shared hit=2764875\n\nI don't think it's causing a significant fraction of the issue, but for some\nreason this is overestimating rowcount by 2000. Do you need to VACUUM ANALYZE\nthe table ?\n|Seq Scan on occtax.personne p_1 (cost=0.00..78.04 ROWS=2,204 width=56) (actual time=0.011..0.011 ROWS=1 loops=1)\n\nJustin",
"msg_date": "Fri, 22 Feb 2019 09:14:17 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "Thanks for your answers.\n\nI have tried via\n\n--show work_mem; \"10485kB\" -> initial work_mem for my first post\n-- set session work_mem='100000kB';\n-- set session geqo_threshold = 12;\n-- set session join_collapse_limit = 15;\n\nI have a small machine, with SSD disk and 8GB RAM. I cannot really increase\nwork_mem up to 2GB (or more). There are only 300 000 data in\nocctax.observation, which will increase (and possibly go up to 3\nmillions...)\nI am running PostgreSQL 9.6. I should probably test it against PostgreSQL\n11 as many improvements has been made.\n\nI even tried to remove all non aggregated columns and keep only o.cle_obs\n(the primary key) to have a\nGROUP BY o.cle_obs\nAND the query plan does not show a HASH AGGREGATE, but only a GROUP\nAGGREGATE.\n\nObviously I have already tried to VACUUM ANALYSE\n\nMy current PostgreSQL settings\nmax_connections = 100\nshared_buffers = 2GB\neffective_cache_size = 6GB\nwork_mem = 10485kB\nmaintenance_work_mem = 512MB\nmin_wal_size = 1GB\nmax_wal_size = 2GB\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 100\n\nThanks for your answers.I have tried via --show work_mem; \"10485kB\" -> initial work_mem for my first post-- set session work_mem='100000kB';-- set session geqo_threshold = 12;-- set session join_collapse_limit = 15;I have a small machine, with SSD disk and 8GB RAM. I cannot really increase work_mem up to 2GB (or more). There are only 300 000 data in occtax.observation, which will increase (and possibly go up to 3 millions...)I am running PostgreSQL 9.6. I should probably test it against PostgreSQL 11 as many improvements has been made.I even tried to remove all non aggregated columns and keep only o.cle_obs (the primary key) to have a GROUP BY o.cle_obsAND the query plan does not show a HASH AGGREGATE, but only a GROUP AGGREGATE.Obviously I have already tried to VACUUM ANALYSEMy current PostgreSQL settingsmax_connections = 100shared_buffers = 2GBeffective_cache_size = 6GBwork_mem = 10485kBmaintenance_work_mem = 512MBmin_wal_size = 1GBmax_wal_size = 2GBcheckpoint_completion_target = 0.9wal_buffers = 16MBdefault_statistics_target = 100",
"msg_date": "Fri, 22 Feb 2019 17:33:11 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "Does the plan change significantly with this-\n\nset session work_mem='250MB';\nset session geqo_threshold = 20;\nset session join_collapse_limit = 20;\n\nWith that expensive sort spilling to disk and then aggregating after that,\nit would seem like the work_mem being significantly increased is going to\nmake the critical difference. Unless it could fetch the data sorted via an\nindex, but that doesn't seem likely.\n\nI would suggest increase default_statistics_target, but you have good\nestimates already for the most part. Hopefully someone else will chime in\nwith more.\n\n*Michael Lewis*\n\nDoes the plan change significantly with this-set session work_mem='250MB';set session geqo_threshold = 20;set session join_collapse_limit = 20;With that expensive sort spilling to disk and then aggregating after that, it would seem like the work_mem being significantly increased is going to make the critical difference. Unless it could fetch the data sorted via an index, but that doesn't seem likely.I would suggest increase default_statistics_target, but you have good estimates already for the most part. Hopefully someone else will chime in with more.Michael Lewis",
"msg_date": "Fri, 22 Feb 2019 10:18:02 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "Michael Lewis <[email protected]> writes:\n> Does the plan change significantly with this-\n> set session work_mem='250MB';\n> set session geqo_threshold = 20;\n> set session join_collapse_limit = 20;\n\nYeah ... by my count there are 16 tables in this query, so raising\njoin_collapse_limit to 15 is not enough to ensure that the planner\nconsiders all join orders. Whether use of GEQO is a big problem\nis harder to say, but it might be.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 13:05:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "Thanks for your answers. I tried with\n> set session work_mem='250MB';\n> set session geqo_threshold = 20;\n> set session join_collapse_limit = 20;\n\nIt seems to have no real impact :\nhttps://explain.depesz.com/s/CBVd\n\nIndeed an index cannot really be used for sorting here, based on the\ncomplexity of the returned fields.\nWich strikes me is that if I try to simplify it a lot, removing all data\nbut the main table (occtax.observation) primary key cd_nom and aggregate,\nthe query plan should be able tu use the cd_nom index for sorting and\nprovide better query plan (hash aggregate), but it does not seems so :\n\n* SQL ; http://paste.debian.net/hidden/c3ee7889/\n* EXPLAIN : https://explain.depesz.com/s/FR3h -> a group aggregate is used,\nwhich : GroupAggregate 1 10,639.313 ms 72.6 %\n\nIt is better, but I think 10s for such a query seems bad perf for me.\n\nRegards\nMichaël\n\nLe ven. 22 févr. 2019 à 19:06, Tom Lane <[email protected]> a écrit :\n\n> Michael Lewis <[email protected]> writes:\n> > Does the plan change significantly with this-\n> > set session work_mem='250MB';\n> > set session geqo_threshold = 20;\n> > set session join_collapse_limit = 20;\n>\n> Yeah ... by my count there are 16 tables in this query, so raising\n> join_collapse_limit to 15 is not enough to ensure that the planner\n> considers all join orders. Whether use of GEQO is a big problem\n> is harder to say, but it might be.\n>\n> regards, tom lane\n>\n\nThanks for your answers. I tried with> set session work_mem='250MB';> set session geqo_threshold = 20;> set session join_collapse_limit = 20;It seems to have no real impact :https://explain.depesz.com/s/CBVdIndeed an index cannot really be used for sorting here, based on the complexity of the returned fields.Wich strikes me is that if I try to simplify it a lot, removing all data but the main table (occtax.observation) primary key cd_nom and aggregate, the query plan should be able tu use the cd_nom index for sorting and provide better query plan (hash aggregate), but it does not seems so :* SQL ; http://paste.debian.net/hidden/c3ee7889/* EXPLAIN : https://explain.depesz.com/s/FR3h -> a group aggregate is used, which : GroupAggregate 1 10,639.313 ms 72.6 %It is better, but I think 10s for such a query seems bad perf for me.RegardsMichaëlLe ven. 22 févr. 2019 à 19:06, Tom Lane <[email protected]> a écrit :Michael Lewis <[email protected]> writes:\n> Does the plan change significantly with this-\n> set session work_mem='250MB';\n> set session geqo_threshold = 20;\n> set session join_collapse_limit = 20;\n\nYeah ... by my count there are 16 tables in this query, so raising\njoin_collapse_limit to 15 is not enough to ensure that the planner\nconsiders all join orders. Whether use of GEQO is a big problem\nis harder to say, but it might be.\n\n regards, tom lane",
"msg_date": "Mon, 25 Feb 2019 09:54:14 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "I have better results with this version. Basically, I run a first query\nonly made for aggregation, and then do a JOIN to get other needed data.\n\n* SQL : http://paste.debian.net/1070007/\n* EXPLAIN: https://explain.depesz.com/s/D0l\n\nNot really \"fast\", but I gained 30%\n\nLe lun. 25 févr. 2019 à 09:54, kimaidou <[email protected]> a écrit :\n\n> Thanks for your answers. I tried with\n> > set session work_mem='250MB';\n> > set session geqo_threshold = 20;\n> > set session join_collapse_limit = 20;\n>\n> It seems to have no real impact :\n> https://explain.depesz.com/s/CBVd\n>\n> Indeed an index cannot really be used for sorting here, based on the\n> complexity of the returned fields.\n> Wich strikes me is that if I try to simplify it a lot, removing all data\n> but the main table (occtax.observation) primary key cd_nom and aggregate,\n> the query plan should be able tu use the cd_nom index for sorting and\n> provide better query plan (hash aggregate), but it does not seems so :\n>\n> * SQL ; http://paste.debian.net/hidden/c3ee7889/\n> * EXPLAIN : https://explain.depesz.com/s/FR3h -> a group aggregate is\n> used, which : GroupAggregate 1 10,639.313 ms 72.6 %\n>\n> It is better, but I think 10s for such a query seems bad perf for me.\n>\n> Regards\n> Michaël\n>\n> Le ven. 22 févr. 2019 à 19:06, Tom Lane <[email protected]> a écrit :\n>\n>> Michael Lewis <[email protected]> writes:\n>> > Does the plan change significantly with this-\n>> > set session work_mem='250MB';\n>> > set session geqo_threshold = 20;\n>> > set session join_collapse_limit = 20;\n>>\n>> Yeah ... by my count there are 16 tables in this query, so raising\n>> join_collapse_limit to 15 is not enough to ensure that the planner\n>> considers all join orders. Whether use of GEQO is a big problem\n>> is harder to say, but it might be.\n>>\n>> regards, tom lane\n>>\n>\n\nI have better results with this version. Basically, I run a first query only made for aggregation, and then do a JOIN to get other needed data.* SQL : http://paste.debian.net/1070007/* EXPLAIN: https://explain.depesz.com/s/D0lNot really \"fast\", but I gained 30%Le lun. 25 févr. 2019 à 09:54, kimaidou <[email protected]> a écrit :Thanks for your answers. I tried with> set session work_mem='250MB';> set session geqo_threshold = 20;> set session join_collapse_limit = 20;It seems to have no real impact :https://explain.depesz.com/s/CBVdIndeed an index cannot really be used for sorting here, based on the complexity of the returned fields.Wich strikes me is that if I try to simplify it a lot, removing all data but the main table (occtax.observation) primary key cd_nom and aggregate, the query plan should be able tu use the cd_nom index for sorting and provide better query plan (hash aggregate), but it does not seems so :* SQL ; http://paste.debian.net/hidden/c3ee7889/* EXPLAIN : https://explain.depesz.com/s/FR3h -> a group aggregate is used, which : GroupAggregate 1 10,639.313 ms 72.6 %It is better, but I think 10s for such a query seems bad perf for me.RegardsMichaëlLe ven. 22 févr. 2019 à 19:06, Tom Lane <[email protected]> a écrit :Michael Lewis <[email protected]> writes:\n> Does the plan change significantly with this-\n> set session work_mem='250MB';\n> set session geqo_threshold = 20;\n> set session join_collapse_limit = 20;\n\nYeah ... by my count there are 16 tables in this query, so raising\njoin_collapse_limit to 15 is not enough to ensure that the planner\nconsiders all join orders. 
Whether use of GEQO is a big problem\nis harder to say, but it might be.\n\n regards, tom lane",
"msg_date": "Mon, 25 Feb 2019 10:44:45 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 2:44 AM kimaidou <[email protected]> wrote:\n\n> I have better results with this version. Basically, I run a first query\n> only made for aggregation, and then do a JOIN to get other needed data.\n>\n> * SQL : http://paste.debian.net/1070007/\n> * EXPLAIN: https://explain.depesz.com/s/D0l\n>\n> Not really \"fast\", but I gained 30%\n>\n\n\nIt still seems that disk sort and everything after that is where the query\nplan dies. It seems odd that it went to disk if work_mem was already 250MB.\nCan you allocate more as a test? As an alternative, if this is a frequently\nneeded data, can you aggregate this data and keep a summarized copy updated\nperiodically?\n\nOn Mon, Feb 25, 2019 at 2:44 AM kimaidou <[email protected]> wrote:I have better results with this version. Basically, I run a first query only made for aggregation, and then do a JOIN to get other needed data.* SQL : http://paste.debian.net/1070007/* EXPLAIN: https://explain.depesz.com/s/D0lNot really \"fast\", but I gained 30% It still seems that disk sort and everything after that is where the query plan dies. It seems odd that it went to disk if work_mem was already 250MB. Can you allocate more as a test? As an alternative, if this is a frequently needed data, can you aggregate this data and keep a summarized copy updated periodically?",
"msg_date": "Mon, 25 Feb 2019 11:29:40 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "I manage to avoid the disk sort after performing a VACUUM ANALYSE;\nAnd with a session work_mem = '250MB'\n\n* SQL http://paste.debian.net/1070207/\n* EXPLAIN https://explain.depesz.com/s/nJ2y\n\nIt stills spent 16s\nIt seems this kind of query will need better hardware to scale...\n\nThanks for your help\n\n\nLe lun. 25 févr. 2019 à 19:30, Michael Lewis <[email protected]> a écrit :\n\n>\n>\n> On Mon, Feb 25, 2019 at 2:44 AM kimaidou <[email protected]> wrote:\n>\n>> I have better results with this version. Basically, I run a first query\n>> only made for aggregation, and then do a JOIN to get other needed data.\n>>\n>> * SQL : http://paste.debian.net/1070007/\n>> * EXPLAIN: https://explain.depesz.com/s/D0l\n>>\n>> Not really \"fast\", but I gained 30%\n>>\n>\n>\n> It still seems that disk sort and everything after that is where the query\n> plan dies. It seems odd that it went to disk if work_mem was already 250MB.\n> Can you allocate more as a test? As an alternative, if this is a frequently\n> needed data, can you aggregate this data and keep a summarized copy updated\n> periodically?\n>\n\nI manage to avoid the disk sort after performing a VACUUM ANALYSE;And with a session work_mem = '250MB'* SQL http://paste.debian.net/1070207/* EXPLAIN https://explain.depesz.com/s/nJ2yIt stills spent 16sIt seems this kind of query will need better hardware to scale...Thanks for your helpLe lun. 25 févr. 2019 à 19:30, Michael Lewis <[email protected]> a écrit :On Mon, Feb 25, 2019 at 2:44 AM kimaidou <[email protected]> wrote:I have better results with this version. Basically, I run a first query only made for aggregation, and then do a JOIN to get other needed data.* SQL : http://paste.debian.net/1070007/* EXPLAIN: https://explain.depesz.com/s/D0lNot really \"fast\", but I gained 30% It still seems that disk sort and everything after that is where the query plan dies. It seems odd that it went to disk if work_mem was already 250MB. Can you allocate more as a test? As an alternative, if this is a frequently needed data, can you aggregate this data and keep a summarized copy updated periodically?",
"msg_date": "Tue, 26 Feb 2019 13:54:00 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 01:54:00PM +0100, kimaidou wrote:\n> I manage to avoid the disk sort after performing a VACUUM ANALYSE;\n> And with a session work_mem = '250MB'\n> \n> * SQL http://paste.debian.net/1070207/\n> * EXPLAIN https://explain.depesz.com/s/nJ2y\n> \n> It stills spent 16s\n> It seems this kind of query will need better hardware to scale...\n\nOnce you've exhausted other ideas, you could consider making that a TEMPORARY\nTABLE, and creating an index on it (and analyzing it) and then aggregating.\nIt'd be several separate queries.\n\nJustin\n\n",
"msg_date": "Tue, 26 Feb 2019 07:51:34 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
},
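Roughly what that temp-table approach could look like; occtax.observation and cd_nom are from the thread, while obs_pre and date_obs are placeholder names, since the real query is only available behind the paste link:

BEGIN;

CREATE TEMPORARY TABLE obs_pre ON COMMIT DROP AS
SELECT o.cd_nom, o.date_obs      -- only the columns the aggregates actually need (illustrative)
FROM occtax.observation o;       -- plus the LEFT JOINs from the real query

CREATE INDEX ON obs_pre (cd_nom);
ANALYZE obs_pre;

SELECT cd_nom, count(*) AS nb_obs, max(date_obs) AS last_obs
FROM obs_pre
GROUP BY cd_nom;

COMMIT;

Splitting the work this way lets the planner see accurate statistics and a usable index on the flattened data before the expensive aggregation step.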
{
"msg_contents": "On Mon, Feb 25, 2019 at 3:54 AM kimaidou <[email protected]> wrote:\n\n\n> Wich strikes me is that if I try to simplify it a lot, removing all data\n> but the main table (occtax.observation) primary key cd_nom and aggregate,\n> the query plan should be able tu use the cd_nom index for sorting and\n> provide better query plan (hash aggregate), but it does not seems so :\n>\n\nHashAggregate doesn't support aggregates with DISTINCT. I don't think\nthere is any reason it can't, it is just that no one has gotten around to\nit.\n\nAggregates with DISTINCT also kill your ability to get parallel queries.\n\nCheers,\n\nJeff\n\nOn Mon, Feb 25, 2019 at 3:54 AM kimaidou <[email protected]> wrote: Wich strikes me is that if I try to simplify it a lot, removing all data but the main table (occtax.observation) primary key cd_nom and aggregate, the query plan should be able tu use the cd_nom index for sorting and provide better query plan (hash aggregate), but it does not seems so :HashAggregate doesn't support aggregates with DISTINCT. I don't think there is any reason it can't, it is just that no one has gotten around to it.Aggregates with DISTINCT also kill your ability to get parallel queries.Cheers,Jeff",
"msg_date": "Tue, 26 Feb 2019 08:52:29 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate and many LEFT JOIN"
}
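One standard workaround (not suggested in the thread itself) is to deduplicate in a subquery and then aggregate with a plain count(), which can hash-aggregate and take part in parallel plans; observer_id here is a hypothetical column standing in for whatever is being counted DISTINCTly:

-- instead of: SELECT cd_nom, count(DISTINCT observer_id) FROM occtax.observation GROUP BY cd_nom;
SELECT cd_nom, count(*) AS nb_observers
FROM (SELECT DISTINCT cd_nom, observer_id FROM occtax.observation) s
GROUP BY cd_nom;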
] |
[
{
"msg_contents": "Hi all,\n\nI need to optimize the following query\nhttp://paste.debian.net/hidden/ef08f864/\nI use it to create a materialized view, but I think there is room for\noptimization.\nI tried to\nSET join_collapse_limit TO 15;\nwith to real difference.\n\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\nDo you have any hint how to optimize this ?\nhttps://explain.depesz.com/s/6nf\n\nRegards\nMichaël\n\nHi all,I need to optimize the following queryhttp://paste.debian.net/hidden/ef08f864/I use it to create a materialized view, but I think there is room for optimization.I tried toSET join_collapse_limit TO 15;with to real difference.Explain shows that the GROUP AGGREGATE and needed sort kill the performance.Do you have any hint how to optimize this ?https://explain.depesz.com/s/6nfRegardsMichaël",
"msg_date": "Fri, 22 Feb 2019 16:36:33 +0100",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query with aggregate and many LEFT JOINS"
},
{
"msg_contents": "From: kimaidou [mailto:[email protected]]\r\nSent: Friday, February 22, 2019 10:37 AM\r\nTo: [email protected]\r\nSubject: Slow query with aggregate and many LEFT JOINS\r\n\r\nHi all,\r\n\r\nI need to optimize the following query\r\nhttp://paste.debian.net/hidden/ef08f864/\r\nI use it to create a materialized view, but I think there is room for optimization.\r\nI tried to\r\nSET join_collapse_limit TO 15;\r\nwith to real difference.\r\n\r\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\r\nDo you have any hint how to optimize this ?\r\nhttps://explain.depesz.com/s/6nf\r\n\r\nRegards\r\nMichaël\r\n\r\nTry increasing both: join_collapse_limit and from_collapse_limit to 16 (or even 17).\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n\n\n\n\n\n\n\n\nFrom: kimaidou [mailto:[email protected]]\r\n\nSent: Friday, February 22, 2019 10:37 AM\nTo: [email protected]\nSubject: Slow query with aggregate and many LEFT JOINS\n\n\n\n \n\nHi all,\n\r\nI need to optimize the following query\nhttp://paste.debian.net/hidden/ef08f864/\r\nI use it to create a materialized view, but I think there is room for optimization.\r\nI tried to\r\nSET join_collapse_limit TO 15;\r\nwith to real difference.\n\r\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\r\nDo you have any hint how to optimize this ?\nhttps://explain.depesz.com/s/6nf\n\r\nRegards\r\nMichaël\n\n \nTry increasing both: join_collapse_limit and from_collapse_limit to 16 (or even 17).\n \nRegards,\nIgor Neyman",
"msg_date": "Fri, 22 Feb 2019 19:47:45 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow query with aggregate and many LEFT JOINS"
},
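With roughly 15 relations in the join list, raising both limits above the join count lets the planner consider the whole query in a single join-search pass, at the cost of longer planning time. A session-level test could look like:

SET join_collapse_limit = 16;
SET from_collapse_limit = 16;

-- confirm what is in effect before re-running EXPLAIN ANALYZE
SHOW join_collapse_limit;
SHOW from_collapse_limit;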
{
"msg_contents": "From: kimaidou [mailto:[email protected]]\r\nSent: Friday, February 22, 2019 10:37 AM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Slow query with aggregate and many LEFT JOINS\r\n\r\nHi all,\r\n\r\nI need to optimize the following query\r\nhttp://paste.debian.net/hidden/ef08f864/\r\nI use it to create a materialized view, but I think there is room for optimization.\r\nI tried to\r\nSET join_collapse_limit TO 15;\r\nwith to real difference.\r\n\r\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\r\nDo you have any hint how to optimize this ?\r\nhttps://explain.depesz.com/s/6nf\r\n\r\nRegards\r\nMichaël\r\n\r\nDon’t know your hardware config, or Postgres settings,\r\nbut I see external disk sort. So, try setting work_mem to ~48MB.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n\n\n\n\n\n\n\n\nFrom: kimaidou [mailto:[email protected]]\r\n\nSent: Friday, February 22, 2019 10:37 AM\nTo: [email protected]\nSubject: Slow query with aggregate and many LEFT JOINS\n\n\n\n \n\nHi all,\n\r\nI need to optimize the following query\nhttp://paste.debian.net/hidden/ef08f864/\r\nI use it to create a materialized view, but I think there is room for optimization.\r\nI tried to\r\nSET join_collapse_limit TO 15;\r\nwith to real difference.\n\r\nExplain shows that the GROUP AGGREGATE and needed sort kill the performance.\r\nDo you have any hint how to optimize this ?\nhttps://explain.depesz.com/s/6nf\n\r\nRegards\r\nMichaël\n\n \nDon’t know your hardware config, or Postgres settings,\nbut I see external disk sort. So, try setting work_mem to ~48MB.\n \nRegards,\nIgor Neyman",
"msg_date": "Fri, 22 Feb 2019 19:53:51 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow query with aggregate and many LEFT JOINS"
}
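A quick way to test the work_mem suggestion, first per session and, if it helps, cluster-wide (the 48MB value is Igor's suggestion, not a measured requirement):

-- current session only
SET work_mem = '48MB';

-- or persistently; takes effect after a reload, no restart needed
ALTER SYSTEM SET work_mem = '48MB';
SELECT pg_reload_conf();

After changing it, re-running EXPLAIN (ANALYZE, BUFFERS) should show the sort switch from "Sort Method: external merge Disk: ..." to an in-memory quicksort if the new setting is large enough.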
] |
[
{
"msg_contents": "Hi,\n\nI am using an SQL queue for distributing work to massively parallel \nworkers. Workers come in servers with 12 parallel threads. One of those \nworker sets handles 7 transactions per second. If I add a second one, \nfor 24 parallel workers, it scales to 14 /s. Even a third, for 36 \nparallel workers, I can add to reach 21 /s. If I try a fourth set, 48 \nworkers, I end up in trouble. But that isn't even so much my problem \nrather than the fact that in short time, the performance will \ndeteriorate, and it looks like that is because the queue index \ndeteriorates and needs a REINDEX.\n\nThe queue table is essentially this:\n\nCREATE TABLE Queue (\n jobId bigint,\n action text,\n pending boolean,\n result text\n);\n\nthe dequeue operation is essentially this:\n\nBEGIN\n\nSELECT jobId, action\n FROM Queue\n WHERE pending\n FOR UPDATE SKIP LOCKED\n\nwhich is a wonderful concept with the SKIP LOCKED.\n\nThen I perform the action and finally:\n\nUPDATE Queue\n SET pending = false,\n result = ?\n WHERE jobId = ?\n\nCOMMIT\n\nI thought to keep my index tight, I would define it like this:\n\nCREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;\n\nso that only pending jobs are in that index.\n\nWhen a job is done, follow up work is often inserted into the Queue as \npending, thus adding to that index.\n\nBelow is the performance chart.\n\nThe blue line at the bottom is the db server.\n\n\nYou can see the orange line is the first worker server with 12 threads. \nIt settled into a steady state of 7/s ran with 90% CPU for some 30 min, \nand then the database CPU% started climbing and I tried to rebuild the \nindexes on the queue, got stuck there, exclusive lock, no jobs were \nprocessing, but the exclusive lock was never obtained for too long. So I \nshut down the worker server. Database quiet I could resolve the messed \nup indexes and restarted again. Soon I added a second worker server \n(green line) starting around 19:15. Once settled in they were pulling \n14/s together. but you can see in just 15 min, the db server CPU % \nclimbed again to over 40% and the performance of the workers dropped, \ntheir load falling to 30%. Now at around 19:30 I stopped them all, \nREINDEXed the queue table and then started 3 workers servers \nsimultaneously. They settled in to 21/s but in just 10 min again the \ndeterioration happened. Again I stopped them all, REINDEXed, and now \nstarted 4 worker servers (48 threads). This time 5 min was not enough to \nsee them ever settling into a decent 28/s transaction rate, but I guess \nthey might have reached that for a minute or two, only for the index \ndeteriorating again. I did another stop now started only 2 servers and \nagain, soon the index deteriorated again.\n\nClearly that index is deteriorating quickly, in about 10,000 transactions.\n\nBTW: when I said 7/s, it is in reality about 4 times as many \ntransactions, because of the follow up jobs that also get added on this \nqueue. 
So 10,000 transactions may be 30 or 40 k transactions before the \nindex deteriorates.\n\nDo you have any suggestion how I can avoid that index deterioration \nproblem smoothly?\n\nI figured I might just pause all workers briefly to schedule the REINDEX \nQueue command, but the problem with this is that while the transaction \nvolume is large, some jobs may take minutes to process, and in that case \nwe need to wait minutes to quiet the database with then 47 workers \nsitting as idle capacity waiting for the 48th to finish so that the \nindex can be rebuilt!\n\nOf course I tried to resolve the issue with vacuumdb --analyze (just in \ncase the autovacuum doesn't act in time) and that doesn't help. \nVacuumdb --full --analyze would probably help but can't work because it \nrequires an exclusive table lock.\n\nI tried to just create a new index of the same definition\n\nCREATE UNIQUE INDEX Queue_idx2_pending ON Queue(jobId) WHERE pending;\nDROP INDEX Queue_idx_pending;\nANALYZE Queue;\n\nbut with that I got completely stuck with two indexes where I could not \nremove either of them for those locking issues. And REINDEX will give me \na deadlock error right out.\n\nI am looking for a way to manage that index so that it does not deteriorate.\n\nMaybe if I was not defining it with\n\n... WHERE pending;\n\nthen it would only grow, but never shrink. Maybe that helps somehow? I \ndoubt it though. Adding to an index also causes deterioration, and most \nof the rows would be irrelevant because they would be past work. It \nwould be nicer if there was another smooth way.\n\nregards,\n-Gunther",
"msg_date": "Sat, 23 Feb 2019 16:05:51 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive parallel queue table causes index deterioration, but REINDEX\n fails with deadlocks."
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 1:06 PM Gunther <[email protected]> wrote:\n> I thought to keep my index tight, I would define it like this:\n>\n> CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;\n>\n> so that only pending jobs are in that index.\n>\n> When a job is done, follow up work is often inserted into the Queue as pending, thus adding to that index.\n\nHow many distinct jobIds are there in play, roughly? Would you say\nthat there are many fewer distinct Jobs than distinct entries in the\nindex/table? Is the number of jobs fixed at a fairly low number, that\ndoesn't really grow as the workload needs to scale up?\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sat, 23 Feb 2019 13:13:53 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "On 2/23/2019 16:13, Peter Geoghegan wrote:\n> On Sat, Feb 23, 2019 at 1:06 PM Gunther <[email protected]> wrote:\n>> I thought to keep my index tight, I would define it like this:\n>>\n>> CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;\n>>\n>> so that only pending jobs are in that index.\n>>\n>> When a job is done, follow up work is often inserted into the Queue as pending, thus adding to that index.\n> How many distinct jobIds are there in play, roughly? Would you say\n> that there are many fewer distinct Jobs than distinct entries in the\n> index/table? Is the number of jobs fixed at a fairly low number, that\n> doesn't really grow as the workload needs to scale up?\n\nJobs start on another, external queue, there were about 200,000 of them \nwaiting when I started the run.\n\nWhen the SQL Queue is empty, the workers pick one job from the external \nqueue and add it to the SQL queue.\n\nWhen that happens immediately 2 more jobs are created on that queue. \nLet's cal it phase 1 a and b\n\nWhen phase 1 a has been worked off, another follow-up job is created. \nLet' s call it phase 2.\n\nWhen phase 2 has been worked off, a final phase 3 job is created.\n\nWhen that is worked off, nothing new is created, and the next item is \npulled from the external queue and added to the SQL queue.\n\nSo this means, each of the 200,000 items add (up to) 4 jobs onto the \nqueue during their processing.\n\nBut since these 200,000 items are on an external queue, the SQL queue \nitself is not stuffed full at all. It only slowly grows, and on the main \nindex where we have only the pending jobs, there are only probably than \n20 at any given point in time. When I said 7 jobs per second, it meant \n7/s simultaneously for all these 3+1 phases, i.e., 28 jobs per second. \nAnd at that rate it takes little less than 30 min for the index to \ndeteriorate. I.e. once about 50,000 queue entries have been processed \nthrough that index it has deteriorated to become nearly unusable until \nit is rebuilt.\n\nthanks,\n-Gunther\n\n\n\n",
"msg_date": "Sat, 23 Feb 2019 21:55:00 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "On Sun, 24 Feb 2019 at 10:06, Gunther <[email protected]> wrote:\n> I am using an SQL queue for distributing work to massively parallel workers. Workers come in servers with 12 parallel threads. One of those worker sets handles 7 transactions per second. If I add a second one, for 24 parallel workers, it scales to 14 /s. Even a third, for 36 parallel workers, I can add to reach 21 /s. If I try a fourth set, 48 workers, I end up in trouble. But that isn't even so much my problem rather than the fact that in short time, the performance will deteriorate, and it looks like that is because the queue index deteriorates and needs a REINDEX.\n\nIt sounds very much like auto-vacuum is simply unable to keep up with\nthe rate at which the table is being updated. Please be aware, that\nby default, auto-vacuum is configured to run fairly slowly so as not\nto saturate low-end machines.\n\nvacuum_cost_limit / autovacuum_vacuum_cost limit control how many\n\"points\" the vacuum process can accumulate before it will perform an\nautovacuum_vacuum_cost_delay / vacuum_cost_delay.\n\nAdditionally, after an auto-vacuum run completes it will wait for\nautovacuum_naptime before checking again if any tables require some\nattention.\n\nI think you should be monitoring how many auto-vacuums workers are\nbusy during your runs. If you find that the \"queue\" table is being\nvacuumed almost constantly, then you'll likely want to increase\nvacuum_cost_limit / autovacuum_vacuum_cost_limit. You could get an\nidea of how often this table is being auto-vacuumed by setting\nlog_autovacuum_min_duration to 0 and checking the logs. Another way\nto check would be to sample what: SELECT query FROM pg_stat_activity\nWHERE query LIKE 'autovacuum%'; returns. You may find that all of the\nworkers are busy most of the time. If so, that indicates that the\ncost limits need to be raised.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sun, 24 Feb 2019 19:41:08 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
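A sketch of the monitoring David describes; the cost-limit value is only an example to show the mechanism, not a recommendation:

-- are autovacuum workers busy right now, and on which tables?
SELECT pid, now() - query_start AS running_for, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';

-- log every autovacuum run so frequency and duration show up in the server log
ALTER SYSTEM SET log_autovacuum_min_duration = 0;

-- if the workers turn out to be saturated, let them do more work per cycle
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- default is -1, i.e. vacuum_cost_limit (200)

SELECT pg_reload_conf();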
{
"msg_contents": "Some ideas:\n\nYou could ALTER TABLE SET (fillfactor=50) to try to maximize use of HOT indices\nduring UPDATEs (check pg_stat_user_indexes).\n\nYou could also ALTER TABLE SET autovacuum parameters for more aggressive vacuuming.\n\nYou could recreate indices using the CONCURRENTLY trick\n(CREATE INDEX CONCURRENTLY new; DROP old; ALTER .. RENAME;)\n\nOn Sat, Feb 23, 2019 at 04:05:51PM -0500, Gunther wrote:\n> I am using an SQL queue for distributing work to massively parallel workers.\n> Workers come in servers with 12 parallel threads. One of those worker sets\n> handles 7 transactions per second. If I add a second one, for 24 parallel\n> workers, it scales to 14 /s. Even a third, for 36 parallel workers, I can\n> add to reach 21 /s. If I try a fourth set, 48 workers, I end up in trouble.\n> But that isn't even so much my problem rather than the fact that in short\n> time, the performance will deteriorate, and it looks like that is because\n> the queue index deteriorates and needs a REINDEX.\n> \n> The queue table is essentially this:\n> \n> CREATE TABLE Queue (\n> jobId bigint,\n> action text,\n> pending boolean,\n> result text\n> );\n> \n> the dequeue operation is essentially this:\n> \n> BEGIN\n> \n> SELECT jobId, action\n> � FROM Queue\n> � WHERE pending\n> FOR UPDATE SKIP LOCKED\n> \n> which is a wonderful concept with the SKIP LOCKED.\n> \n> Then I perform the action and finally:\n> \n> UPDATE Queue\n> SET pending = false,\n> result = ?\n> WHERE jobId = ?\n> \n> COMMIT\n> \n> I thought to keep my index tight, I would define it like this:\n> \n> CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;\n> \n> so that only pending jobs are in that index.\n> \n> When a job is done, follow up work is often inserted into the Queue as\n> pending, thus adding to that index.\n> \n> Below is the performance chart.\n> \n> The blue line at the bottom is the db server.\n> \n> \n> You can see the orange line is the first worker server with 12 threads. It\n> settled into a steady state of 7/s ran with 90% CPU for some 30 min, and\n> then the database CPU% started climbing and I tried to rebuild the indexes\n> on the queue, got stuck there, exclusive lock, no jobs were processing, but\n> the exclusive lock was never obtained for too long. So I shut down the\n> worker server. Database quiet I could resolve the messed up indexes and\n> restarted again. Soon I added a second worker server (green line) starting\n> around 19:15. Once settled in they were pulling 14/s together. but you can\n> see in just 15 min, the db server CPU % climbed again to over 40% and the\n> performance of the workers dropped, their load falling to 30%. Now at around\n> 19:30 I stopped them all, REINDEXed the queue table and then started 3\n> workers servers simultaneously. They settled in to 21/s but in just 10 min\n> again the deterioration happened. Again I stopped them all, REINDEXed, and\n> now started 4 worker servers (48 threads). This time 5 min was not enough to\n> see them ever settling into a decent 28/s transaction rate, but I guess they\n> might have reached that for a minute or two, only for the index\n> deteriorating again. I did another stop now started only 2 servers and\n> again, soon the index deteriorated again.\n> \n> Clearly that index is deteriorating quickly, in about 10,000 transactions.\n> \n> BTW: when I said 7/s, it is in reality about 4 times as many transactions,\n> because of the follow up jobs that also get added on this queue. 
So 10,0000\n> transactions may be 30 or 40 k transactions before the index deteriorates.\n> \n> Do you have any suggestion how I can avoid that index deterioration problem\n> smoothly?\n> \n> I figured I might just pause all workers briefly to schedule the REINDEX\n> Queue command, but the problem with this is that while the transaction\n> volume is large, some jobs may take minutes to process, and in that case we\n> need to wait minutes to quiet the database with then 47 workers sitting as\n> idle capacity waiting for the 48th to finish so that the index can be\n> rebuilt!\n> \n> Of course I tried to resolve the issue with vacuumdb --analyze (just in case\n> if the autovacuum doesn't act in time) and that doesn't help. Vacuumdb\n> --full --analyze would probably help but can't work because it required an\n> exclusive table lock.\n> \n> I tried to just create a new index of the same\n> \n> CREATE UNIQUE INDEX Queue_idx2_pending ON Queue(jobId) WHERE pending;\n> DROP INDEX Queue_idx_pending;\n> ANALYZE Queue;\n> \n> but with that I got completely stuck with two indexes where I could not\n> remove either of them for those locking issues. And REINDEX will give me a\n> deadlock error rightout.\n> \n> I am looking for a way to manage that index so that it does not deteriorate.\n> \n> May be if I was not defining it with\n> \n> ... WHERE pending;\n> \n> then it would only grow, but never shrink. May be that helps somehow? I\n> doubt it though. Adding to an index also causes deterioration, and most of\n> the rows would be irrelevant because they would be past work. It would be\n> nicer if there was another smooth way.\n\n",
"msg_date": "Sun, 24 Feb 2019 01:03:58 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
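A hedged sketch of those three ideas applied to the queue table from this thread; the exact fillfactor and autovacuum numbers are illustrative, and a fillfactor change only affects pages written after the change:

-- leave room for HOT updates and make autovacuum much more eager on this one table
ALTER TABLE queue SET (fillfactor = 50,
                       autovacuum_vacuum_scale_factor = 0.0,
                       autovacuum_vacuum_threshold = 1000,
                       autovacuum_vacuum_cost_delay = 0);

-- rebuild the hot partial index without taking an exclusive lock
CREATE UNIQUE INDEX CONCURRENTLY queue_idx_pending_new
    ON queue (jobid) WHERE pending;
DROP INDEX CONCURRENTLY queue_idx_pending;
ALTER INDEX queue_idx_pending_new RENAME TO queue_idx_pending;

Whether HOT updates are actually happening can be checked in pg_stat_user_tables (n_tup_hot_upd), and index usage in pg_stat_user_indexes, as Justin suggests.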
{
"msg_contents": "On Sun, Feb 24, 2019 at 2:04 AM Justin Pryzby <[email protected]> wrote:\n\n> Some ideas:\n>\n> You could ALTER TABLE SET (fillfactor=50) to try to maximize use of HOT\n> indices\n> during UPDATEs (check pg_stat_user_indexes).\n>\n> You could also ALTER TABLE SET autovacuum parameters for more aggressive\n> vacuuming.\n>\n> You could recreate indices using the CONCURRENTLY trick\n> (CREATE INDEX CONCURRENTLY new; DROP old; ALTER .. RENAME;)\n>\n\nI have basically the same issue with a table. Each new row enters the table\nwith a active=true kind of flag. The row is updated a lot, until a business\ncondition expires, it is updated to active=false and then the row is almost\nnever updated after that.\n\nWe also used a partial index, to good effect, but also had/have an issue\nwhere the index bloats and performs worse rather quickly, only to recover a\nbit after an autovacuum pass completes.\n\nLowering the fillfactor isn't a good solution because 99%+ of the table is\n\"cold\".\n\nOne manual VACUUM FREEZE coupled with lowering the vacuum sensitivity on\nthat one table helps quite a bit by increasing the frequency shortening the\nruntimes of autovacuums, but it's not a total solution.\n\nMy next step is to partition the table on the \"active\" boolean flag, which\neliminates the need for the partial indexes, and allows for different\nfillfactor for each partition (50 for true, 100 for false). This should\nalso aid vacuum speed and make re-indexing the hot partition much faster.\nHowever, we have to upgrade to v11 first to enable row migration, so I\ncan't yet report on how much of a solution that is.\n\nOn Sun, Feb 24, 2019 at 2:04 AM Justin Pryzby <[email protected]> wrote:Some ideas:\n\nYou could ALTER TABLE SET (fillfactor=50) to try to maximize use of HOT indices\nduring UPDATEs (check pg_stat_user_indexes).\n\nYou could also ALTER TABLE SET autovacuum parameters for more aggressive vacuuming.\n\nYou could recreate indices using the CONCURRENTLY trick\n(CREATE INDEX CONCURRENTLY new; DROP old; ALTER .. RENAME;)I have basically the same issue with a table. Each new row enters the table with a active=true kind of flag. The row is updated a lot, until a business condition expires, it is updated to active=false and then the row is almost never updated after that.We also used a partial index, to good effect, but also had/have an issue where the index bloats and performs worse rather quickly, only to recover a bit after an autovacuum pass completes.Lowering the fillfactor isn't a good solution because 99%+ of the table is \"cold\".One manual VACUUM FREEZE coupled with lowering the vacuum sensitivity on that one table helps quite a bit by increasing the frequency shortening the runtimes of autovacuums, but it's not a total solution.My next step is to partition the table on the \"active\" boolean flag, which eliminates the need for the partial indexes, and allows for different fillfactor for each partition (50 for true, 100 for false). This should also aid vacuum speed and make re-indexing the hot partition much faster. However, we have to upgrade to v11 first to enable row migration, so I can't yet report on how much of a solution that is.",
"msg_date": "Sun, 24 Feb 2019 02:25:09 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "Thank you all for responding so far.\n\nDavid Rowley and Justin Pryzby suggested things about autovacuum. But I \ndon't think autovacuum has any helpful role here. I am explicitly doing \na vacuum on that table. And it doesn't help at all. Almost not at all.\n\nI want to believe that\n\nVACUUM FREEZE Queue;\n\nwill push the database CPU% down again once it is climbing up, and I can \ndo this may be 3 to 4 times, but ultimately I will always have to \nrebuild the index. But also, none of these vaccuum operations I do takes \nvery long at all. It is just not efficacious at all.\n\nRebuilding the index by building a new index and removing the old, then \nrename, and vacuum again, is prone to get stuck.\n\nI tried to do it in a transaction. But it says CREATE INDEX can't be \ndone in a transaction.\n\nNeed to CREATE INDEX CONCURRENTLY ... and I can't even do that in a \nprocedure.\n\nIf I do it manually by issuing first CREATE INDEX CONCURRENTLY new and \nthen DROP INDEX CONCURRENTLY old, it might work once, but usually it \njust gets stuck with two indexes. Although I noticed that it would \nactually put CPU back down and improve transaction throughput.\n\nI also noticed that after I quit from DROP INDEX CONCURRENTLY old, that \nindex is shown as INVALID\n\n\\d Queue\n ...\nIndexes:\n \"queue_idx_pending\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending2\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending3\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending4\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending5\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending6\" UNIQUE, btree (jobId, action) WHERE pending\n...\n\nand so I keep doing that same routine hands-on, every time that the CPU% \ncreeps above 50% I do\n\nCREATE UNIQUE INDEX CONCURRENTLY Queue_idx_pending6 ON Queue(jobId, action) WHERE currentlyOpen;\nDROP INDEX CONCURRENTLY Queue_idx_pending5;\n\nat which place it hangs, I interrupt the DROP command, which leaves the \nold index behind as \"INVALID\".\n\nVACUUM FREEZE ANALYZE Queue;\n\nAt this point the db's CPU% dropping below 20% after the new index has \nbeen built.\n\nUnfortunately this is totally hands on approach I have to do this every \n5 minutes or so. And possibly the time between these necessities \ndecreases. It also leads to inefficiency over time, even despite the CPU \nseemingly recovering.\n\nSo this isn't sustainable like that (worse because my Internet \nconstantly drops).\n\nWhat I am most puzzled by is that no matter how long I wait, the DROP \nINDEX CONCURRENTLY never completes. Why is that?\n\nAlso, the REINDEX command always fails with a deadlock because there is \na row lock and a complete table lock involved.\n\nI consider this ultimately a bug, or at the very least there is room for \nimprovement. And I am on version 11.1.\n\nregards,\n-Gunther\n\n\n\n\n\n\nThank you all for responding so far.\nDavid Rowley and Justin Pryzby suggested things about\n autovacuum. But I don't think autovacuum has any helpful role\n here. I am explicitly doing a vacuum on that table. And it doesn't\n help at all. Almost not at all.\n\nI want to believe that \n\nVACUUM FREEZE Queue;\nwill push the database CPU% down again once it is climbing up,\n and I can do this may be 3 to 4 times, but ultimately I will\n always have to rebuild the index. But also, none of these vaccuum\n operations I do takes very long at all. 
It is just not efficacious\n at all.\n\nRebuilding the index by building a new index and removing the\n old, then rename, and vacuum again, is prone to get stuck. \n\nI tried to do it in a transaction. But it says CREATE INDEX can't\n be done in a transaction. \n\nNeed to CREATE INDEX CONCURRENTLY ... and I can't even do that in\n a procedure.\nIf I do it manually by issuing first CREATE INDEX CONCURRENTLY\n new and then DROP INDEX CONCURRENTLY old, it might work once, but\n usually it just gets stuck with two indexes. Although I noticed\n that it would actually put CPU back down and improve transaction\n throughput.\nI also noticed that after I quit from DROP INDEX CONCURRENTLY\n old, that index is shown as INVALID\n\\d Queue\n ...\nIndexes:\n \"queue_idx_pending\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending2\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending3\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending4\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending5\" UNIQUE, btree (jobId, action) WHERE pending INVALID\n \"queue_idx_pending6\" UNIQUE, btree (jobId, action) WHERE pending\n...\n\nand so I keep doing that same routine hands-on, every time that\n the CPU% creeps above 50% I do\nCREATE UNIQUE INDEX CONCURRENTLY Queue_idx_pending6 ON Queue(jobId, action) WHERE currentlyOpen;\nDROP INDEX CONCURRENTLY Queue_idx_pending5;\n\nat which place it hangs, I interrupt the DROP command, which\n leaves the old index behind as \"INVALID\".\nVACUUM FREEZE ANALYZE Queue;\nAt this point the db's CPU% dropping below 20% after the new\n index has been built.\nUnfortunately this is totally hands on approach I have to do this\n every 5 minutes or so. And possibly the time between these\n necessities decreases. It also leads to inefficiency over time,\n even despite the CPU seemingly recovering. \n\nSo this isn't sustainable like that (worse because my Internet\n constantly drops).\nWhat I am most puzzled by is that no matter how long I wait, the\n DROP INDEX CONCURRENTLY never completes. Why is that?\nAlso, the REINDEX command always fails with a deadlock because\n there is a row lock and a complete table lock involved. \n\nI consider this ultimately a bug, or at the very least there is\n room for improvement. And I am on version 11.1.\n\n regards,\n -Gunther",
"msg_date": "Sun, 24 Feb 2019 12:45:34 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
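One way to see which of those leftover INVALID indexes are still attached to the table (assuming the unquoted table name folds to queue), and then to clean them up one by one once nothing is blocking the drop:

SELECT c.relname AS invalid_index
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indrelid = 'queue'::regclass
  AND NOT i.indisvalid;

-- each leftover can then be dropped individually
DROP INDEX CONCURRENTLY queue_idx_pending2;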
{
"msg_contents": "On Sun, Feb 24, 2019 at 12:45:34PM -0500, Gunther wrote:\n> What I am most puzzled by is that no matter how long I wait, the DROP INDEX\n> CONCURRENTLY never completes. Why is that?\n\nhttps://www.postgresql.org/docs/11/sql-dropindex.html\nCONCURRENTLY\n[...] With this option, the command instead waits until conflicting transactions have completed.\n\nDo you have a longrunning txn ? That's possibly the cause of the original\nissue; vacuum cannot mark tuples as dead until txns have finished.\n\n> I consider this ultimately a bug, or at the very least there is room for\n> improvement. And I am on version 11.1.\n\nDid you try upgrading to 11.2 ? I suspect this doesn't affect the query plan..but may be relevant?\n\nhttps://www.postgresql.org/docs/11/release-11-2.html\n|Improve ANALYZE's handling of concurrently-updated rows (Jeff Janes, Tom Lane)\n|\n|Previously, rows deleted by an in-progress transaction were omitted from ANALYZE's sample, but this has been found to lead to more inconsistency than including them would do. In effect, the sample now corresponds to an MVCC snapshot as of ANALYZE's start time.\n\nJustin\n\n",
"msg_date": "Sun, 24 Feb 2019 14:20:37 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
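A quick check for the long-running transactions Justin is asking about, which are also what keeps DROP INDEX CONCURRENTLY waiting:

SELECT pid, state, xact_start, now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;

-- forgotten two-phase transactions hold back cleanup just as effectively
SELECT * FROM pg_prepared_xacts;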
{
"msg_contents": "On Sun, Feb 24, 2019 at 10:02 AM Gunther <[email protected]> wrote:\n> David Rowley and Justin Pryzby suggested things about autovacuum. But I don't think autovacuum has any helpful role here.\n\nMy suspicion is that this has something to do with the behavior of\nB-Tree indexes with lots of duplicates. See also:\n\nhttps://www.postgresql.org/message-id/flat/CAH2-Wznf1uVBguutwrvR%2B6NcXTKYhagvNOY3-dg9dzcYiu_vKw%40mail.gmail.com#993f152a41a1e2c257d12d118aa7ebfc\n\nI am working on a patch to address the problem which is slated for\nPostgres 12. I give an illustrative example of one of the problems\nthat my patch addresses here (it actually addresses a number of\ndistinct issues all at once):\n\nhttps://postgr.es/m/CAH2-Wzmf0fvVhU+SSZpGW4Qe9t--j_DmXdX3it5JcdB8FF2EsA@mail.gmail.com\n\nDo you think that that could be a significant factor here? I found\nyour response to my initial questions unclear.\n\nUsing a Postgres table as a queue is known to create particular\nproblems with bloat, especially index bloat. Fixing the underlying\nbehavior in the nbtree code would likely sharply limit the growth in\nindex bloat over time, though it still may not make queue-like tables\ncompletely painless to operate. The problem is described in high level\nterms from a user's perspective here:\n\nhttps://brandur.org/postgres-queues\n\n--\nPeter Geoghegan\n\n",
"msg_date": "Sun, 24 Feb 2019 12:41:39 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": ">\n> Also, the REINDEX command always fails with a deadlock because there is a\n> row lock and a complete table lock involved.\n>\n> I consider this ultimately a bug, or at the very least there is room for\n> improvement. And I am on version 11.1.\n> regards,\n> -Gunther\n>\n\nREINDEX doesn't work concurrently yet (slated for v12).\n\nI think your solution may be something like this:\n1. Create a new table, same columns, partitioned on the pending column.\n2. Rename your existing queue table old_queue to the partitioned table as a\ndefault partition.\n3. Rename new table to queue\n4. add old_queue as the default partition of queue\n5. add a new partition for pending = true rows, set the fillfactor kind of\nlow, maybe 50, you can always change it. Now your pending = true rows can\nbe one of two places, but your pending = false rows are all in\n6. add all existing old_queue indexes (except those that are partial\nindexes on pending) to queue, these will be created on the new (empty)\npartition, and just matched to the existing indexes on old_queue\n7. If pending = true records all ultimately become pending = false, wait\nfor normal attrition to reach a state where all rows in the default\npartition are pending = false. If that won't happen, you may need to\nmanually migrate some with a DELETE-INSERT\n8. At this point, you can transactionally remove old_queue as a partition\nof queue, and then immediately re-add it to queue as the pending = false\npartition. There won't need to be a default partition.\n9. drop all remaining partial indexes on pending, they're no longer useful.\n\nThat's roughly my plan for my own hotspot table when we can upgrade to 11.\n\nAlso, the REINDEX command always fails with a deadlock because\n there is a row lock and a complete table lock involved. \n\nI consider this ultimately a bug, or at the very least there is\n room for improvement. And I am on version 11.1.\n\n regards,\n -GuntherREINDEX doesn't work concurrently yet (slated for v12).I think your solution may be something like this:1. Create a new table, same columns, partitioned on the pending column.2. Rename your existing queue table old_queue to the partitioned table as a default partition.3. Rename new table to queue4. add old_queue as the default partition of queue5. add a new partition for pending = true rows, set the fillfactor kind of low, maybe 50, you can always change it. Now your pending = true rows can be one of two places, but your pending = false rows are all in 6. add all existing old_queue indexes (except those that are partial indexes on pending) to queue, these will be created on the new (empty) partition, and just matched to the existing indexes on old_queue7. If pending = true records all ultimately become pending = false, wait for normal attrition to reach a state where all rows in the default partition are pending = false. If that won't happen, you may need to manually migrate some with a DELETE-INSERT8. At this point, you can transactionally remove old_queue as a partition of queue, and then immediately re-add it to queue as the pending = false partition. There won't need to be a default partition.9. drop all remaining partial indexes on pending, they're no longer useful.That's roughly my plan for my own hotspot table when we can upgrade to 11.",
"msg_date": "Sun, 24 Feb 2019 16:34:34 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
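The end state of that plan could look roughly like this (table and partition names are illustrative, and the parent is created fresh rather than migrated, as discussed later in the thread):

CREATE TABLE queue_part (
    jobid   bigint,
    action  text,
    pending boolean NOT NULL,
    result  text
) PARTITION BY LIST (pending);

CREATE TABLE queue_pending PARTITION OF queue_part
    FOR VALUES IN (true)
    WITH (fillfactor = 50);    -- small and hot

CREATE TABLE queue_done PARTITION OF queue_part
    FOR VALUES IN (false);     -- large and mostly cold

With this layout an UPDATE that sets pending = false moves the row from queue_pending to queue_done automatically, which is the cross-partition row movement that requires v11.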
{
"msg_contents": "On Sun, Feb 24, 2019 at 04:34:34PM -0500, Corey Huinker wrote:\n> I think your solution may be something like this:\n> 1. Create a new table, same columns, partitioned on the pending column.\n> 2. Rename your existing queue table old_queue to the partitioned table as a\n> default partition.\n> 3. Rename new table to queue\n> 4. add old_queue as the default partition of queue\n> 5. add a new partition for pending = true rows, set the fillfactor kind of\n\nFYI, the \"default partition\" isn't just for various and sundry uncategorized\ntuples (like a relkind='r' inheritence without any constraint). It's for\n\"tuples which are excluded by every other partition\". And \"row migration\"\ndoesn't happen during \"ALTER..ATTACH\", only UPDATE. So you'll be unable to\nattach a partition for pending=true if the default partition includes any such\nrows:\n\n|ERROR: updated partition constraint for default partition \"t0\" would be violated by some row\n\nI think you'll need to schedule a maintenance window, create a new partitioned\nheirarchy, and INSERT INTO queue SELECT * FROM old_queue, or similar.\n\nJustin\n\n",
"msg_date": "Sun, 24 Feb 2019 16:43:17 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "On Sun, Feb 24, 2019 at 5:43 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Feb 24, 2019 at 04:34:34PM -0500, Corey Huinker wrote:\n> > I think your solution may be something like this:\n> > 1. Create a new table, same columns, partitioned on the pending column.\n> > 2. Rename your existing queue table old_queue to the partitioned table\n> as a\n> > default partition.\n> > 3. Rename new table to queue\n> > 4. add old_queue as the default partition of queue\n> > 5. add a new partition for pending = true rows, set the fillfactor kind\n> of\n>\n> FYI, the \"default partition\" isn't just for various and sundry\n> uncategorized\n> tuples (like a relkind='r' inheritence without any constraint). It's for\n> \"tuples which are excluded by every other partition\". And \"row migration\"\n> doesn't happen during \"ALTER..ATTACH\", only UPDATE. So you'll be unable to\n> attach a partition for pending=true if the default partition includes any\n> such\n> rows:\n>\n> |ERROR: updated partition constraint for default partition \"t0\" would be\n> violated by some row\n>\n> I think you'll need to schedule a maintenance window, create a new\n> partitioned\n> heirarchy, and INSERT INTO queue SELECT * FROM old_queue, or similar.\n>\n> Justin\n>\n\nGood point, I forgot about that. I had also considered making a partitioned\ntable, adding a \"true\" partition to that, and then making the partitioned\ntable an *inheritance* partition of the existing table, then siphoning off\nrows from the original table until such time as it has no more pending\nrows, then doing a transaction where you de-inherit the partitioned table,\nand then attach the original table as the false partition. It's all a lot\nof acrobatics to try to minimize downtime and it could be done better by\nhaving a longer maintenance window, but I got the impression from the OP\nthat big windows were not to be had.\n\nOn Sun, Feb 24, 2019 at 5:43 PM Justin Pryzby <[email protected]> wrote:On Sun, Feb 24, 2019 at 04:34:34PM -0500, Corey Huinker wrote:\n> I think your solution may be something like this:\n> 1. Create a new table, same columns, partitioned on the pending column.\n> 2. Rename your existing queue table old_queue to the partitioned table as a\n> default partition.\n> 3. Rename new table to queue\n> 4. add old_queue as the default partition of queue\n> 5. add a new partition for pending = true rows, set the fillfactor kind of\n\nFYI, the \"default partition\" isn't just for various and sundry uncategorized\ntuples (like a relkind='r' inheritence without any constraint). It's for\n\"tuples which are excluded by every other partition\". And \"row migration\"\ndoesn't happen during \"ALTER..ATTACH\", only UPDATE. So you'll be unable to\nattach a partition for pending=true if the default partition includes any such\nrows:\n\n|ERROR: updated partition constraint for default partition \"t0\" would be violated by some row\n\nI think you'll need to schedule a maintenance window, create a new partitioned\nheirarchy, and INSERT INTO queue SELECT * FROM old_queue, or similar.\n\nJustinGood point, I forgot about that. I had also considered making a partitioned table, adding a \"true\" partition to that, and then making the partitioned table an inheritance partition of the existing table, then siphoning off rows from the original table until such time as it has no more pending rows, then doing a transaction where you de-inherit the partitioned table, and then attach the original table as the false partition. 
It's all a lot of acrobatics to try to minimize downtime and it could be done better by having a longer maintenance window, but I got the impression from the OP that big windows were not to be had.",
"msg_date": "Sun, 24 Feb 2019 17:52:00 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 4:06 PM Gunther <[email protected]> wrote:\n\n> the dequeue operation is essentially this:\n>\n> BEGIN\n>\n> SELECT jobId, action\n> FROM Queue\n> WHERE pending\n> FOR UPDATE SKIP LOCKED\n>\n>\nThere is no LIMIT shown. Wouldn't the first thread to start up just lock\nall the rows and everyone else would starve?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 24 Feb 2019 18:33:19 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "On Sun, Feb 24, 2019 at 1:02 PM Gunther <[email protected]> wrote:\n\n> Thank you all for responding so far.\n>\n> David Rowley and Justin Pryzby suggested things about autovacuum. But I\n> don't think autovacuum has any helpful role here. I am explicitly doing a\n> vacuum on that table. And it doesn't help at all. Almost not at all.\n>\nIf you do a vacuum verbose, what stats do you get back? What is the size\nof the index when the degradation starts to show, and immediately after a\nsuccessful reindex?\n\nAlso, how is JobID assigned? Are they from a sequence, or some other\nmethod where they are always added to the end of the index's keyspace?\n\nWhen it starts to degrade, what is the EXPLAIN plan for the query?\n\nCheers,\n\nJeff\n\nOn Sun, Feb 24, 2019 at 1:02 PM Gunther <[email protected]> wrote:\n\nThank you all for responding so far.\nDavid Rowley and Justin Pryzby suggested things about\n autovacuum. But I don't think autovacuum has any helpful role\n here. I am explicitly doing a vacuum on that table. And it doesn't\n help at all. Almost not at all.If you do a vacuum verbose, what stats do you get back? What is the size of the index when the degradation starts to show, and immediately after a successful reindex?Also, how is JobID assigned? Are they from a sequence, or some other method where they are always added to the end of the index's keyspace?When it starts to degrade, what is the EXPLAIN plan for the query?Cheers,Jeff",
"msg_date": "Sun, 24 Feb 2019 18:39:57 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
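Ways to gather what Jeff is asking for, using the table and index names from earlier in the thread (note that EXPLAIN ANALYZE really executes the statement, so the selected row stays locked until the session commits or rolls back):

VACUUM (VERBOSE) queue;

-- index size now vs. right after a successful rebuild
SELECT pg_size_pretty(pg_relation_size('queue_idx_pending'));

-- plan of the dequeue query once it has become slow
EXPLAIN (ANALYZE, BUFFERS)
SELECT jobid, action
FROM queue
WHERE pending
LIMIT 1
FOR UPDATE SKIP LOCKED;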
{
"msg_contents": "Wow, yes, partition instead of index, that is interesting. Thanks Corey \nand Justin.\n\nThe index isn't required at all if all my pending jobs are in a \npartition of only pending jobs. In that case the plan can just be a \nsequential scan.\n\nAnd Jeff James, sorry, I failed to show the LIMIT 1 clause on my dequeue \nquery. That was an omission. My query is actually somewhat more complex \nand I just translated it down to the essentials but forgot the LIMIT 1 \nclause.\n\nSELECT seqNo, action\n FROM Queue\n WHERE pending\n AND/... other criteria .../\n LIMIT 1\n FOR UPDATE SKIP LOCKED;\n\nAnd sorry I didn't capture the stats for vacuum verbose. And they would \nbe confusing because there are other things involved.\n\nAnyway, I think the partitioned table is the right and brilliant \nsolution, because an index really isn't required. The actual pending \npartition will always remain quite small, and being a queue, it doesn't \neven matter how big it might grow, as long as new rows are inserted at \nthe end and not in the middle of the data file and still there be some \nway of fast skip over the part of the dead rows at the beginning that \nhave already been processed and moved away.\n\nGood thing is, I don't worry about maintenance window. I have the \nleisure to simply tear down my design now and make a better design. \nWhat's 2 million transactions if I can re-process them at a rate of \n80/s? 7 hours max. I am still in development. So, no need to worry about \nmigration / transition acrobatics. So if I take Corey's steps and \nenvision the final result, not worrying about the transition steps, then \nI understand this:\n\n1. Create the Queue table partitioned on the pending column, this \ncreates the partition with the pending jobs (on which I set the \nfillfactor kind of low, maybe 50) and the default partition with all the \nrest. Of course that allows people with a constant transaction volume to \nalso partition on jobId or completionTime and move chunks out to cold \narchive storage. But that's beside the current point.\n\n2. Add all needed indexes on the partitioned table, except the main \npartial index that I used before and that required all that reindexing \nmaintenance. Actually I won't need any other indexes really, why invite \nanother similar problem again.\n\nThat's really simple.\n\nOne question I have though: I imagine our pending partition heap file to \nnow be essentially sequentially organized as a queue. New jobs are \nappended at the end, old jobs are at the beginning. As pending jobs \nbecome completed (pending = false) these initial rows will be marked as \ndead. So, while the number of live rows will remain small in that \npending partition, sequential scans will have to skip over the dead rows \nin the beginning.\n\nDoes PostgreSQL structure its files such that skipping over dead rows is \nfast? Or do the dead rows have to be read and discarded during a table \nscan?\n\nOf course vacuum eliminates dead rows, but unless I do vacuum full, it \nwill not re-pack the live rows, and that requires an exclusive table \nlock. So, what is the benefit of vacuuming that pending partition? What \nI _/don't/_ want is insertion of new jobs to go into open slots at the \nbeginning of the file. I want them to be appended (in Oracle there is an \nINSERT /*+APPEND*/ hint for that. How does that work in PostgreSQL?\n\nUltimately that partition will amass too many dead rows, then what do I \ndo? 
I don't think that the OS has a way to truncate files physically \nfrom the head, does it? I guess it could set the file pointer from the \nfirst block to a later block. But I don't know of an IOCTL/FCNTL command \nfor that. On some OS there is a way of making blocks sparse again, is \nthat how PostgreSQL might do it? Just knock out blocks as sparse from \nthe front of the file?\n\nIf not, the next thing I can think of is to partition the table further \nby time, may be alternating even and odd days, such that on any given \nday one of the two pending partitions are quiet? Is that how it's done?\n\nregards,\n-Gunther\n\n\n\n\n\n\n\n\n\n\n\n\nWow, yes, partition instead of index, that is interesting. Thanks\n Corey and Justin.\n\nThe index isn't required at all if all my pending jobs are in a\n partition of only pending jobs. In that case the plan can just be\n a sequential scan.\nAnd Jeff James, sorry, I failed to show the LIMIT 1 clause on my\n dequeue query. That was an omission. My query is actually somewhat\n more complex and I just translated it down to the essentials but\n forgot the LIMIT 1 clause.\nSELECT seqNo, action\n FROM Queue\n WHERE pending\n AND ... other criteria ...\n LIMIT 1\n FOR UPDATE SKIP LOCKED; \n\nAnd sorry I didn't capture the stats for vacuum verbose. And they\n would be confusing because there are other things involved. \n\nAnyway, I think the partitioned table is the right and brilliant\n solution, because an index really isn't required. The actual\n pending partition will always remain quite small, and being a\n queue, it doesn't even matter how big it might grow, as long as\n new rows are inserted at the end and not in the middle of the data\n file and still there be some way of fast skip over the part of the\n dead rows at the beginning that have already been processed and\n moved away. \n\nGood thing is, I don't worry about maintenance window. I have\n the leisure to simply tear down my design now and make a better\n design. What's 2 million transactions if I can re-process them at\n a rate of 80/s? 7 hours max. I am still in development. So, no\n need to worry about migration / transition acrobatics. So if I\n take Corey's steps and envision the final result, not worrying\n about the transition steps, then I understand this:\n1. Create the Queue table partitioned on the pending column, this\n creates the partition with the pending jobs (on which I set the\n fillfactor kind of low, maybe 50) and the default partition with\n all the rest. Of course that allows people with a constant\n transaction volume to also partition on jobId or completionTime\n and move chunks out to cold archive storage. But that's beside the\n current point.\n\n2. Add all needed indexes on the partitioned table, except the\n main partial index that I used before and that required all that\n reindexing maintenance. Actually I won't need any other indexes\n really, why invite another similar problem again.\n\n\nThat's really simple.\n\n\nOne question I have though: I imagine our pending partition\n heap file to now be essentially sequentially organized as a queue.\n New jobs are appended at the end, old jobs are at the beginning.\n As pending jobs become completed (pending = false) these initial\n rows will be marked as dead. So, while the number of live rows\n will remain small in that pending partition, sequential scans will\n have to skip over the dead rows in the beginning.\n\n\nDoes PostgreSQL structure its files such that skipping over\n dead rows is fast? 
Or do the dead rows have to be read and\n discarded during a table scan? \n\n\n\nOf course vacuum eliminates dead rows, but unless I do vacuum\n full, it will not re-pack the live rows, and that requires an\n exclusive table lock. So, what is the benefit of vacuuming that\n pending partition? What I don't want is insertion\n of new jobs to go into open slots at the beginning of the file. I\n want them to be appended (in Oracle there is an INSERT /*+APPEND*/\n hint for that. How does that work in PostgreSQL? \n\n\nUltimately that partition will amass too many dead rows, then\n what do I do? I don't think that the OS has a way to truncate\n files physically from the head, does it? I guess it could set the\n file pointer from the first block to a later block. But I don't\n know of an IOCTL/FCNTL command for that. On some OS there is a way\n of making blocks sparse again, is that how PostgreSQL might do it?\n Just knock out blocks as sparse from the front of the file?\n\n\nIf not, the next thing I can think of is to partition the table\n further by time, may be alternating even and odd days, such that\n on any given day one of the two pending partitions are quiet? Is\n that how it's done?\n\n\nregards,\n -Gunther",
"msg_date": "Sun, 24 Feb 2019 22:06:02 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
{
"msg_contents": "Wow, yes, partition instead of index, that is interesting. Thanks Corey \nand Justin.\n\nThe index isn't required at all if all my pending jobs are in a \npartition of only pending jobs. In that case the plan can just be a \nsequential scan.\n\nAnd Jeff James, sorry, I failed to show the LIMIT 1 clause on my dequeue \nquery. That was an omission. My query is actually somewhat more complex \nand I just translated it down to the essentials but forgot the LIMIT 1 \nclause.\n\nSELECT seqNo, action\n FROM Queue\n WHERE pending\n AND/... other criteria .../\n LIMIT 1\n FOR UPDATE SKIP LOCKED;\n\nAnd sorry I didn't capture the stats for vacuum verbose. And they would \nbe confusing because there are other things involved.\n\nAnyway, I think the partitioned table is the right and brilliant \nsolution, because an index really isn't required. The actual pending \npartition will always remain quite small, and being a queue, it doesn't \neven matter how big it might grow, as long as new rows are inserted at \nthe end and not in the middle of the data file and still there be some \nway of fast skip over the part of the dead rows at the beginning that \nhave already been processed and moved away.\n\nGood thing is, I don't worry about maintenance window. I have the \nleisure to simply tear down my design now and make a better design. \nWhat's 2 million transactions if I can re-process them at a rate of \n80/s? 7 hours max. I am still in development. So, no need to worry about \nmigration / transition acrobatics. So if I take Corey's steps and \nenvision the final result, not worrying about the transition steps, then \nI understand this:\n\n1. Create the Queue table partitioned on the pending column, this \ncreates the partition with the pending jobs (on which I set the \nfillfactor kind of low, maybe 50) and the default partition with all the \nrest. Of course that allows people with a constant transaction volume to \nalso partition on jobId or completionTime and move chunks out to cold \narchive storage. But that's beside the current point.\n\n2. Add all needed indexes on the partitioned table, except the main \npartial index that I used before and that required all that reindexing \nmaintenance. Actually I won't need any other indexes really, why invite \nanother similar problem again.\n\nThat's really simple.\n\nOne question I have though: I imagine our pending partition heap file to \nnow be essentially sequentially organized as a queue. New jobs are \nappended at the end, old jobs are at the beginning. As pending jobs \nbecome completed (pending = false) these initial rows will be marked as \ndead. So, while the number of live rows will remain small in that \npending partition, sequential scans will have to skip over the dead rows \nin the beginning.\n\nDoes PostgreSQL structure its files such that skipping over dead rows is \nfast? Or do the dead rows have to be read and discarded during a table \nscan?\n\nOf course vacuum eliminates dead rows, but unless I do vacuum full, it \nwill not re-pack the live rows, and that requires an exclusive table \nlock. So, what is the benefit of vacuuming that pending partition? What \nI _/don't/_ want is insertion of new jobs to go into open slots at the \nbeginning of the file. I want them to be appended (in Oracle there is an \nINSERT /*+APPEND*/ hint for that. How does that work in PostgreSQL?\n\nUltimately that partition will amass too many dead rows, then what do I \ndo? 
I don't think that the OS has a way to truncate files physically \nfrom the head, does it? I guess it could set the file pointer from the \nfirst block to a later block. But I don't know of an IOCTL/FCNTL command \nfor that. On some OS there is a way of making blocks sparse again, is \nthat how PostgreSQL might do it? Just knock out blocks as sparse from \nthe front of the file?\n\nIf not, the next thing I can think of is to partition the table further \nby time, may be alternating even and odd days, such that on any given \nday one of the two pending partitions are quiet? Is that how it's done?\n\nregards,\n-Gunther",
"msg_date": "Sun, 24 Feb 2019 22:06:10 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
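A minimal sketch of the partition-on-pending layout Gunther describes, for readers following the thread. It assumes PostgreSQL 11 or later (DEFAULT partitions and UPDATEs that move rows across partitions are not available in 10), and the table and column names are only illustrative, not Gunther's actual schema.

CREATE SEQUENCE queue_seqno;

CREATE TABLE queue (
    seqno   bigint  NOT NULL DEFAULT nextval('queue_seqno'),
    pending boolean NOT NULL,
    action  text
) PARTITION BY LIST (pending);

-- Small, hot partition holding only unprocessed jobs; low fillfactor as discussed above.
CREATE TABLE queue_pending PARTITION OF queue
    FOR VALUES IN (true) WITH (fillfactor = 50);

-- Everything else (pending = false) lands in the default partition.
CREATE TABLE queue_done PARTITION OF queue DEFAULT;

-- Marking a job done moves the row out of queue_pending, so the pending
-- partition stays small enough that a plain sequential scan is cheap.
UPDATE queue SET pending = false WHERE seqno = 42;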
{
"msg_contents": "On Sun, Feb 24, 2019 at 10:06:10PM -0500, Gunther wrote:\n> The index isn't required at all if all my pending jobs are in a partition of\n> only pending jobs. In that case the plan can just be a sequential scan.\n..\n> because an index really isn't required. The actual pending partition will\n> always remain quite small, and being a queue, it doesn't even matter how big\n> it might grow, as long as new rows are inserted at the end and not in the\n> middle of the data file and still there be some way of fast skip over the\n> part of the dead rows at the beginning that have already been processed and\n> moved away.\n..\n> One question I have though: I imagine our pending partition heap file to now\n> be essentially sequentially organized as a queue. New jobs are appended at\n> the end, old jobs are at the beginning. As pending jobs become completed\n> (pending = false) these initial rows will be marked as dead. So, while the\n> number of live rows will remain small in that pending partition, sequential\n> scans will have to skip over the dead rows in the beginning.\n> \n> Does PostgreSQL structure its files such that skipping over dead rows is\n> fast? Or do the dead rows have to be read and discarded during a table scan?\n..\n> Of course vacuum eliminates dead rows, but unless I do vacuum full, it will\n> not re-pack the live rows, and that requires an exclusive table lock. So,\n> what is the benefit of vacuuming that pending partition? What I _/don't/_\n> want is insertion of new jobs to go into open slots at the beginning of the\n> file. I want them to be appended (in Oracle there is an INSERT /*+APPEND*/\n> hint for that. How does that work in PostgreSQL?\n> \n> Ultimately that partition will amass too many dead rows, then what do I do?\n\nWhy don't you want to reuse free space in the table ?\n\nWhen you UPDATE the tuples and set pending='f', the row will be moved to\nanother partition, and the \"dead\" tuple in the pending partition can be reused\nfor a future INSERT. The table will remain small, as you said, only as large\nas the number of items in the pending queue, plus tuples not yet vacuumed and\nnot yet available for reuse.\n\n(BTW, index scans do intelligently skip over dead rows if the table/index are\nvacuumed sufficiently often).\n\n> 1. Create the Queue table partitioned on the pending column, this creates\n> the partition with the pending jobs (on which I set the fillfactor kind of\n> low, maybe 50) and the default partition with all the rest. Of course that\n> allows people with a constant transaction volume to also partition on jobId\n> or completionTime and move chunks out to cold archive storage. But that's\n> beside the current point.\n\nI suggest you might want to partition on something other than (or in addition\nto) the boolean column. For example, if you have a timestamp column for\n\"date_processed\", then the active queue would be for \"processed IS NULL\" (which\nI think would actually have to be the DEFAULT partition). Or you could use\nsub-partitioning, or partition on multiple columns (pending, date_processed) or\nsimilar.\n\nJustin\n\n",
"msg_date": "Sun, 24 Feb 2019 21:54:38 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
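To make Justin's alternative concrete: with range partitioning, rows whose partition key is NULL can only be routed to the DEFAULT partition, which is why the active queue would live there. A rough sketch, assuming PostgreSQL 11+ and illustrative names:

CREATE TABLE queue (
    seqno          bigint,
    action         text,
    date_processed timestamp
) PARTITION BY RANGE (date_processed);

-- Unprocessed rows (date_processed IS NULL) land in the default partition.
CREATE TABLE queue_active PARTITION OF queue DEFAULT;

-- Stamping date_processed moves the row into a dated partition, which can
-- later be detached and archived.
CREATE TABLE queue_2019_02 PARTITION OF queue
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');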
{
"msg_contents": ">\n>\n> Anyway, I think the partitioned table is the right and brilliant solution,\n> because an index really isn't required. The actual pending partition will\n> always remain quite small, and being a queue, it doesn't even matter how\n> big it might grow, as long as new rows are inserted at the end and not in\n> the middle of the data file and still there be some way of fast skip over\n> the part of the dead rows at the beginning that have already been processed\n> and moved away.\n>\n> Good thing is, I don't worry about maintenance window. I have the leisure\n> to simply tear down my design now and make a better design. What's 2\n> million transactions if I can re-process them at a rate of 80/s? 7 hours\n> max. I am still in development. So, no need to worry about migration /\n> transition acrobatics. So if I take Corey's steps and envision the final\n> result, not worrying about the transition steps, then I understand this:\n>\n> 1. Create the Queue table partitioned on the pending column, this creates\n> the partition with the pending jobs (on which I set the fillfactor kind of\n> low, maybe 50) and the default partition with all the rest. Of course that\n> allows people with a constant transaction volume to also partition on jobId\n> or completionTime and move chunks out to cold archive storage. But that's\n> beside the current point.\n>\nI'm guessing there's a fairly insignificant difference in performance\nbetween one true partition and one false partition vs one true partition\nand a default partition, but I don't have insight into which one is\nbetter.\n\n>\n> One question I have though: I imagine our pending partition heap file to\n> now be essentially sequentially organized as a queue. New jobs are appended\n> at the end, old jobs are at the beginning. As pending jobs become completed\n> (pending = false) these initial rows will be marked as dead. So, while the\n> number of live rows will remain small in that pending partition, sequential\n> scans will have to skip over the dead rows in the beginning.\n>\n\nThat's basically true, but vacuums are erasing deleted rows, and that space\ngets re-used. So the table builds up to a working-set size, and I envision\nit looking like a clock sweep, where your existing rows are at 11pm to 7pm,\nyour new rows are inserting into space at 8pm that was vacuumed clean a\nwhile ago, and 9pm and 10pm have deleted rows that haven't been vacuumed\nyet. Where the empty spot is just keeps cycling through the table.\n\nOf course vacuum eliminates dead rows, but unless I do vacuum full, it will\n> not re-pack the live rows, and that requires an exclusive table lock. So,\n> what is the benefit of vacuuming that pending partition? What I *don't*\n> want is insertion of new jobs to go into open slots at the beginning of the\n> file. I want them to be appended (in Oracle there is an INSERT /*+APPEND*/\n> hint for that. How does that work in PostgreSQL?\n>\n\nSee above, the db (tries to) reuse the space space before new space is\nallocated.\n\nI don't know of an append equivalent for pgsql. If memory servers, the big\nwin of /*+ APPEND */ was that raw data blocks were assembled out-of-band\nand then just written to disk.\n\n\n> Ultimately that partition will amass too many dead rows, then what do I\n> do? I don't think that the OS has a way to truncate files physically from\n> the head, does it? I guess it could set the file pointer from the first\n> block to a later block. But I don't know of an IOCTL/FCNTL command for\n> that. 
On some OS there is a way of making blocks sparse again, is that how\n> PostgreSQL might do it? Just knock out blocks as sparse from the front of\n> the file?\n>\n\nSee clock sweep analogy above.",
"msg_date": "Mon, 25 Feb 2019 11:30:33 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
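For anyone who wants to watch the free-space reuse Corey describes, the contrib module pg_freespacemap exposes the per-page free space map. A sketch; the relation name is illustrative:

CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- After a vacuum, pages whose rows were all updated away report large
-- 'avail' values; subsequent INSERTs fill those pages back in.
SELECT blkno, avail
  FROM pg_freespace('queue_pending')
 ORDER BY blkno
 LIMIT 20;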
{
"msg_contents": "On Sat, Feb 23, 2019 at 4:06 PM Gunther <[email protected]> wrote:\n\n> Hi,\n>\n> I am using an SQL queue for distributing work to massively parallel\n> workers.\n>\nYou should look into specialized queueing software.\n\n...\n\n> I figured I might just pause all workers briefly to schedule the REINDEX\n> Queue command, but the problem with this is that while the transaction\n> volume is large, some jobs may take minutes to process, and in that case we\n> need to wait minutes to quiet the database with then 47 workers sitting as\n> idle capacity waiting for the 48th to finish so that the index can be\n> rebuilt!\n>\nThe jobs that take minutes are themselves the problem. They prevent tuples\nfrom being cleaned up, meaning all the other jobs needs to grovel through\nthe detritus every time they need to claim a new row. If you got those\nlong running jobs to end, you probably wouldn't even need to reindex--the\nproblem would go away on its own as the dead-to-all tuples get cleaned up.\n\nLocking a tuple and leaving the transaction open for minutes is going to\ncause no end of trouble on a highly active system. You should look at a\nthree-state method where the tuple can be pending/claimed/finished, rather\nthan pending/locked/finished. That way the process commits immediately\nafter claiming the tuple, and then records the outcome in another\ntransaction once it is done processing. You will need a way to detect\nprocesses that failed after claiming a row but before finishing, but\nimplementing that is going to be easier than all of this re-indexing stuff\nyou are trying to do now. You would claim the row by updating a field in\nit to have something distinctive about the process, like its hostname and\npid, so you can figure out if it is still running when it comes time to\nclean up apparently forgotten entries.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 25 Feb 2019 15:30:40 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
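A rough sketch of the pending/claimed/finished pattern Jeff outlines — not his exact design. The state column, the worker tag format, and the timeout-based cleanup are illustrative assumptions; Jeff suggests checking whether the recorded host/pid is still alive rather than relying on a timeout.

-- Claim a job in its own short transaction and commit immediately.
UPDATE queue q
   SET state = 'claimed', claimed_by = 'worker-host:12345', claimed_at = now()
 WHERE q.seqno = (SELECT seqno FROM queue
                   WHERE state = 'pending'
                   LIMIT 1
                   FOR UPDATE SKIP LOCKED)
RETURNING q.seqno, q.action;

-- When the work is done, record the outcome in a second transaction.
UPDATE queue SET state = 'finished' WHERE seqno = 42;

-- Reaper for workers that died after claiming (crude timeout variant).
UPDATE queue
   SET state = 'pending', claimed_by = NULL
 WHERE state = 'claimed'
   AND claimed_at < now() - interval '15 minutes';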
{
"msg_contents": "On Mon, Feb 25, 2019 at 11:13 AM Gunther Schadow <[email protected]>\nwrote:\n\n> Anyway, I think the partitioned table is the right and brilliant solution,\n> because an index really isn't required. The actual pending partition will\n> always remain quite small, and being a queue, it doesn't even matter how\n> big it might grow, as long as new rows are inserted at the end and not in\n> the middle of the data file and still there be some way of fast skip over\n> the part of the dead rows at the beginning that have already been processed\n> and moved away.\n>\nWhy do you want to do that? If you are trying to force the queue to be\nhandled in a \"fair\" order, then this isn't the way to do it, you would want\nto add an \"ORDER BY\" to your dequeueing query (in which case you are\nprobably back to adding an index).\n\nOnce the space in the beginning of the table has been reclaimed as free,\nthen it will be reused for newly inserted tuples. After the space is freed\nup but before it is reused, the seq scan can't skip those blocks entirely,\nbut it can deal with the blocks quickly because they are empty. If the\nblocks are full of dead but not freed tuples (because the long-running\ntransactions are preventing them from being cleaned up) then it will have\nto go through each dead tuple to satisfy itself that it actually is dead.\nThis might not be as bad as it is for indexes, but certainly won't be good\nfor performance.\n\n Cheers,\n\nJeff\n\nOn Mon, Feb 25, 2019 at 11:13 AM Gunther Schadow <[email protected]> wrote:\n\nAnyway, I think the partitioned table is the right and brilliant\n solution, because an index really isn't required. The actual\n pending partition will always remain quite small, and being a\n queue, it doesn't even matter how big it might grow, as long as\n new rows are inserted at the end and not in the middle of the data\n file and still there be some way of fast skip over the part of the\n dead rows at the beginning that have already been processed and\n moved away. Why do you want to do that? If you are trying to force the queue to be handled in a \"fair\" order, then this isn't the way to do it, you would want to add an \"ORDER BY\" to your dequeueing query (in which case you are probably back to adding an index). Once the space in the beginning of the table has been reclaimed as free, then it will be reused for newly inserted tuples. After the space is freed up but before it is reused, the seq scan can't skip those blocks entirely, but it can deal with the blocks quickly because they are empty. If the blocks are full of dead but not freed tuples (because the long-running transactions are preventing them from being cleaned up) then it will have to go through each dead tuple to satisfy itself that it actually is dead. This might not be as bad as it is for indexes, but certainly won't be good for performance. Cheers,Jeff",
"msg_date": "Mon, 25 Feb 2019 16:03:55 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
},
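For completeness, the ordered ("fair") variant Jeff mentions could look like the sketch below: an ORDER BY on the dequeue query plus a small partial index so the oldest pending row is found without scanning. Names are illustrative.

CREATE INDEX queue_pending_seqno_idx ON queue (seqno) WHERE pending;

SELECT seqno, action
  FROM queue
 WHERE pending
 ORDER BY seqno
 LIMIT 1
 FOR UPDATE SKIP LOCKED;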
{
"msg_contents": "Was wondering when that would come up, taking queuing logic outside the \ndatabase. Can be overly painful architecting queuing logic in \nrelational databases. imho.\n\nRegards,\nMichael Vitale\n\n> Jeff Janes <mailto:[email protected]>\n> Monday, February 25, 2019 3:30 PM\n> On Sat, Feb 23, 2019 at 4:06 PM Gunther <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> I am using an SQL queue for distributing work to massively\n> parallel workers.\n>\n> You should look into specialized queueing software.\n>\n> ...\n>\n> I figured I might just pause all workers briefly to schedule the\n> REINDEX Queue command, but the problem with this is that while the\n> transaction volume is large, some jobs may take minutes to\n> process, and in that case we need to wait minutes to quiet the\n> database with then 47 workers sitting as idle capacity waiting for\n> the 48th to finish so that the index can be rebuilt!\n>\n> The jobs that take minutes are themselves the problem. They prevent \n> tuples from being cleaned up, meaning all the other jobs needs to \n> grovel through the detritus every time they need to claim a new row. \n> If you got those long running jobs to end, you probably wouldn't even \n> need to reindex--the problem would go away on its own as the \n> dead-to-all tuples get cleaned up.\n>\n> Locking a tuple and leaving the transaction open for minutes is going \n> to cause no end of trouble on a highly active system. You should look \n> at a three-state method where the tuple can be \n> pending/claimed/finished, rather than pending/locked/finished. That \n> way the process commits immediately after claiming the tuple, and then \n> records the outcome in another transaction once it is done \n> processing. You will need a way to detect processes that failed after \n> claiming a row but before finishing, but implementing that is going to \n> be easier than all of this re-indexing stuff you are trying to do \n> now. You would claim the row by updating a field in it to have \n> something distinctive about the process, like its hostname and pid, so \n> you can figure out if it is still running when it comes time to clean \n> up apparently forgotten entries.\n>\n> Cheers,\n>\n> Jeff\n\n\n\n\nWas wondering when that \nwould come up, taking queuing logic outside the database. Can be overly\n painful architecting queuing logic in relational databases. imho.\n\nRegards,\nMichael Vitale\n\n\n\n \nJeff Janes Monday,\n February 25, 2019 3:30 PM \nOn\n Sat, Feb 23, 2019 at 4:06 PM Gunther <[email protected]> wrote:\nHi, \n\nI am using an SQL queue for distributing work to massively\n parallel workers.You should look into \nspecialized queueing software....\nI figured I might just pause all workers briefly to schedule the\n REINDEX Queue command, but the problem with this is that while the\n transaction volume is large, some jobs may take minutes to\n process, and in that case we need to wait minutes to quiet the\n database with then 47 workers sitting as idle capacity waiting for\n the 48th to finish so that the index can be rebuilt!The\n jobs that take minutes are themselves the problem. They prevent tuples\n from being cleaned up, meaning all the other jobs needs to grovel \nthrough the detritus every time they need to claim a new row. 
If you \ngot those long running jobs to end, you probably wouldn't even need to \nreindex--the problem would go away on its own as the dead-to-all tuples \nget cleaned up.Locking a tuple and leaving the\n transaction open for minutes is going to cause no end of trouble on a \nhighly active system. You should look at a three-state method where the\n tuple can be pending/claimed/finished, rather than \npending/locked/finished. That way the process commits immediately after\n claiming the tuple, and then records the outcome in another transaction\n once it is done processing. You will need a way to detect processes \nthat failed after claiming a row but before finishing, but implementing \nthat is going to be easier than all of this re-indexing stuff you are \ntrying to do now. You would claim the row by updating a field in it to \nhave something distinctive about the process, like its hostname and pid,\n so you can figure out if it is still running when it comes time to \nclean up apparently forgotten entries.Cheers,Jeff",
"msg_date": "Mon, 25 Feb 2019 16:07:30 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive parallel queue table causes index deterioration, but\n REINDEX fails with deadlocks."
}
] |
[
{
"msg_contents": "Hi,\n\nRecently we started seeing the Linux OOM killer kicking in and killing \nPostgreSQL processes on one of our development machines.\n\nThe PostgreSQL version we're using was compiled by us, is running on \nCentOS 7 and is\n\nPostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 \n20150623 (Red Hat 4.8.5-28), 64-bit\n\nWhile looking at the machine I saw the following peculiar thing: Swap is \nalmost completely full while buff/cache still has ~3GB available.\n\nroot@demo:/etc/systemd/system # free -m\n total used free shared buff/cache \navailable\nMem: 7820 3932 770 1917 3116 1548\nSwap: 4095 3627 468\n\nRunning the following one-liner shows that two PostgreSQL processes are \nusing most of the swap:\n\nfor proc in /proc/*; do echo $proc ; cat $proc/smaps 2>/dev/null | awk \n'/Swap/{swap+=$2}END{print swap \"\\tKB\\t'`echo $proc|awk '{print $1}' `'\" \n}'; done | sort -n | awk '{total+=$1}/[0-9]/;END{print total \"\\tKB\\tTotal\"}'\n\n1387496 KB /proc/22788\n1837872 KB /proc/22789\n\nI attached the memory mappings of these processes to the mail. Both \nprocesses inside PostgreSQL show up as idle outside of any transaction \nand belong to a JDBC (Java) connection pool.\n\nvoip=# select * from pg_stat_activity where pid in (22788,22789);\n-[ RECORD 1 ]----+------------------------------\ndatid | 16404\npid | 22789\nusesysid | 10\nusename | postgres\nclient_addr | 127.0.0.1\nclient_hostname |\nclient_port | 45649\nbackend_start | 2019-02-25 00:17:15.246625+01\nxact_start |\nquery_start | 2019-02-25 10:52:07.729096+01\nstate_change | 2019-02-25 10:52:07.748077+01\nwait_event_type | Client\nwait_event | ClientRead\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | COMMIT\nbackend_type | client backend\n-[ RECORD 2 ]----+------------------------------\ndatid | 16404\npid | 22788\nusesysid | 10\nusename | postgres\nclient_addr | 127.0.0.1\nclient_hostname |\nclient_port | 45648\nbackend_start | 2019-02-25 00:17:15.24631+01\nxact_start |\nquery_start | 2019-02-25 10:55:42.577158+01\nstate_change | 2019-02-25 10:55:42.577218+01\nwait_event_type | Client\nwait_event | ClientRead\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | ROLLBACK\nbackend_type | client backend\n--------->8------------------>8------------------>8------------------>8---------\n\nI attached the postgresql.conf we're using to this mail as well.\n\nIs this expected behaviour ? Did we over-commission the machine in our \npostgresql.conf ?\n\nThanks,\nTobias",
"msg_date": "Mon, 25 Feb 2019 11:36:40 +0100",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Idle backends outside a transaction holding onto large amounts of\n memory / swap space?"
},
{
"msg_contents": "Hi\n\npo 25. 2. 2019 v 11:37 odesílatel Tobias Gierke <\[email protected]> napsal:\n\n> Hi,\n>\n> Recently we started seeing the Linux OOM killer kicking in and killing\n> PostgreSQL processes on one of our development machines.\n>\n> The PostgreSQL version we're using was compiled by us, is running on\n> CentOS 7 and is\n>\n> PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n> 20150623 (Red Hat 4.8.5-28), 64-bit\n>\n> While looking at the machine I saw the following peculiar thing: Swap is\n> almost completely full while buff/cache still has ~3GB available.\n>\n> root@demo:/etc/systemd/system # free -m\n> total used free shared buff/cache\n> available\n> Mem: 7820 3932 770 1917 3116 1548\n> Swap: 4095 3627 468\n>\n> Running the following one-liner shows that two PostgreSQL processes are\n> using most of the swap:\n>\n> for proc in /proc/*; do echo $proc ; cat $proc/smaps 2>/dev/null | awk\n> '/Swap/{swap+=$2}END{print swap \"\\tKB\\t'`echo $proc|awk '{print $1}' `'\"\n> }'; done | sort -n | awk '{total+=$1}/[0-9]/;END{print total\n> \"\\tKB\\tTotal\"}'\n>\n> 1387496 KB /proc/22788\n> 1837872 KB /proc/22789\n>\n> I attached the memory mappings of these processes to the mail. Both\n> processes inside PostgreSQL show up as idle outside of any transaction\n> and belong to a JDBC (Java) connection pool.\n>\n\nIs good to close sessions after some times (once per hour) because\nallocated memory is released to operation system when process is closed.\nWithout it, the operation memory can be fragmented.\n\nif run some big queries then some memory can be assigned to process, and is\nnot released.\n\nRegards\n\nPavel\n\n\n> voip=# select * from pg_stat_activity where pid in (22788,22789);\n> -[ RECORD 1 ]----+------------------------------\n> datid | 16404\n> pid | 22789\n> usesysid | 10\n> usename | postgres\n> client_addr | 127.0.0.1\n> client_hostname |\n> client_port | 45649\n> backend_start | 2019-02-25 00:17:15.246625+01\n> xact_start |\n> query_start | 2019-02-25 10:52:07.729096+01\n> state_change | 2019-02-25 10:52:07.748077+01\n> wait_event_type | Client\n> wait_event | ClientRead\n> state | idle\n> backend_xid |\n> backend_xmin |\n> query | COMMIT\n> backend_type | client backend\n> -[ RECORD 2 ]----+------------------------------\n> datid | 16404\n> pid | 22788\n> usesysid | 10\n> usename | postgres\n> client_addr | 127.0.0.1\n> client_hostname |\n> client_port | 45648\n> backend_start | 2019-02-25 00:17:15.24631+01\n> xact_start |\n> query_start | 2019-02-25 10:55:42.577158+01\n> state_change | 2019-02-25 10:55:42.577218+01\n> wait_event_type | Client\n> wait_event | ClientRead\n> state | idle\n> backend_xid |\n> backend_xmin |\n> query | ROLLBACK\n> backend_type | client backend\n>\n> --------->8------------------>8------------------>8------------------>8---------\n>\n> I attached the postgresql.conf we're using to this mail as well.\n>\n> Is this expected behaviour ? Did we over-commission the machine in our\n> postgresql.conf ?\n>\n> Thanks,\n> Tobias\n>\n>\n>\n>\n>\n>\n\nHipo 25. 2. 
2019 v 11:37 odesílatel Tobias Gierke <[email protected]> napsal:Hi,\n\nRecently we started seeing the Linux OOM killer kicking in and killing \nPostgreSQL processes on one of our development machines.\n\nThe PostgreSQL version we're using was compiled by us, is running on \nCentOS 7 and is\n\nPostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 \n20150623 (Red Hat 4.8.5-28), 64-bit\n\nWhile looking at the machine I saw the following peculiar thing: Swap is \nalmost completely full while buff/cache still has ~3GB available.\n\nroot@demo:/etc/systemd/system # free -m\n total used free shared buff/cache \navailable\nMem: 7820 3932 770 1917 3116 1548\nSwap: 4095 3627 468\n\nRunning the following one-liner shows that two PostgreSQL processes are \nusing most of the swap:\n\nfor proc in /proc/*; do echo $proc ; cat $proc/smaps 2>/dev/null | awk \n'/Swap/{swap+=$2}END{print swap \"\\tKB\\t'`echo $proc|awk '{print $1}' `'\" \n}'; done | sort -n | awk '{total+=$1}/[0-9]/;END{print total \"\\tKB\\tTotal\"}'\n\n1387496 KB /proc/22788\n1837872 KB /proc/22789\n\nI attached the memory mappings of these processes to the mail. Both \nprocesses inside PostgreSQL show up as idle outside of any transaction \nand belong to a JDBC (Java) connection pool.Is good to close sessions after some times (once per hour) because allocated memory is released to operation system when process is closed. Without it, the operation memory can be fragmented.if run some big queries then some memory can be assigned to process, and is not released.RegardsPavel\n\nvoip=# select * from pg_stat_activity where pid in (22788,22789);\n-[ RECORD 1 ]----+------------------------------\ndatid | 16404\npid | 22789\nusesysid | 10\nusename | postgres\nclient_addr | 127.0.0.1\nclient_hostname |\nclient_port | 45649\nbackend_start | 2019-02-25 00:17:15.246625+01\nxact_start |\nquery_start | 2019-02-25 10:52:07.729096+01\nstate_change | 2019-02-25 10:52:07.748077+01\nwait_event_type | Client\nwait_event | ClientRead\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | COMMIT\nbackend_type | client backend\n-[ RECORD 2 ]----+------------------------------\ndatid | 16404\npid | 22788\nusesysid | 10\nusename | postgres\nclient_addr | 127.0.0.1\nclient_hostname |\nclient_port | 45648\nbackend_start | 2019-02-25 00:17:15.24631+01\nxact_start |\nquery_start | 2019-02-25 10:55:42.577158+01\nstate_change | 2019-02-25 10:55:42.577218+01\nwait_event_type | Client\nwait_event | ClientRead\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | ROLLBACK\nbackend_type | client backend\n--------->8------------------>8------------------>8------------------>8---------\n\nI attached the postgresql.conf we're using to this mail as well.\n\nIs this expected behaviour ? Did we over-commission the machine in our \npostgresql.conf ?\n\nThanks,\nTobias",
"msg_date": "Mon, 25 Feb 2019 11:52:10 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idle backends outside a transaction holding onto large amounts of\n memory / swap space?"
}
] |
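Pavel's suggestion is normally implemented on the pool side (most JDBC pools have a maximum connection lifetime setting). On the database side, a query along these lines — a sketch, PostgreSQL 10+ — shows how old each backend is, which helps judge whether recycling connections is worthwhile:

SELECT pid, usename, backend_start, state,
       now() - backend_start AS connection_age
  FROM pg_stat_activity
 WHERE backend_type = 'client backend'
 ORDER BY backend_start
 LIMIT 10;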
[
{
"msg_contents": "Hello,\n\nI have an article query which returns articles enabled for a participant.\nArticle table – Participant table – Table in between which stores the links\nbetween the Article and particitpant including characteristics such as\nenabled.\nIt is possible to search on the articles by number, description,…\nFor all of my participants, the articles are return in up to 3 seconds.\nHowever, when I add a new participant, which has in fact very few articles\nenabled, the query takes up to 30 seconds.\nWhen running analyse explain, I can see that the execution plan for all\nparticipants uses indexes and joins the table in the same order.\nFor the new participant, also indexes are used, but the tables are joined in\na different order which makes the query very slow.\nIs there any way how I can make the queries fast for new participants? This\nis a big problem, because for new participants, speed is even more\nimportant.\n\nThank you for your help.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Mon, 25 Feb 2019 03:41:18 -0700 (MST)",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slow for new participants"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote:\n> Is there any way how I can make the queries fast for new participants? This\n> is a big problem, because for new participants, speed is even more\n> important.\n> \n> Thank you for your help.\n\nCould you include information requested here ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nJustin\n\n",
"msg_date": "Mon, 25 Feb 2019 10:16:07 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
{
"msg_contents": "Hi,\n\nthank you for your reply.\nYes, I will go through this page.\n\nRegards,\nKim\n\nOp ma 25 feb. 2019 om 17:16 schreef Justin Pryzby <[email protected]>:\n\n> On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote:\n> > Is there any way how I can make the queries fast for new participants?\n> This\n> > is a big problem, because for new participants, speed is even more\n> > important.\n> >\n> > Thank you for your help.\n>\n> Could you include information requested here ?\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Justin\n>\n\n\n-- \nMet vriendelijke groeten,\n\nHi,thank you for your reply.Yes, I will go through this page.Regards,KimOp ma 25 feb. 2019 om 17:16 schreef Justin Pryzby <[email protected]>:On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote:\n> Is there any way how I can make the queries fast for new participants? This\n> is a big problem, because for new participants, speed is even more\n> important.\n> \n> Thank you for your help.\n\nCould you include information requested here ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nJustin\n-- Met vriendelijke groeten,",
"msg_date": "Mon, 25 Feb 2019 20:23:35 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
{
"msg_contents": "Hello,\n\nThings to Try Before You Post\n-> I went through these steps and they did not bring any difference.\n\n\nInformation You Need To Include\nPostgres version\n\"PostgreSQL 10.6 on x86_64-pc-linux-gnu, compiled by gcc (Debian\n6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\"\n\nFull Table and Index Schema\nThe difference is very bad for the new company, even on the simplest query\n\n SELECT * FROM CompanyArticleDB\n WHERE CompanyId = '77'\n AND ArticleId= '7869071'\n\n Table \"public.companyarticledb\"\n Column | Type | Collation |\nNullable | Default\n----------------------------+-----------------------------+-----------+----------+---------\n companyid | integer | | not\nnull |\n articleid | integer | | not\nnull |\n price | numeric(19,4) | |\n |\n contractstartdate | timestamp without time zone | |\n |\n contractenddate | timestamp without time zone | |\n |\n enabled | boolean | |\n |\n visible | boolean | |\n |\n sheid | integer | |\n |\n inmassbalance | boolean | |\n |\n internalwastetype | character varying(50) | |\n |\n buom | character varying(50) | |\n |\n stockunit | numeric(18,2) | |\n |\n priceperbuom | numeric(19,4) | |\n |\n purchaseunit | numeric(18,2) | |\n |\n preventioncounselorid | integer | |\n |\n licenseprovided | boolean | |\n |\n licensevaliduntil | timestamp without time zone | |\n |\n authorisationlocationid | integer | |\n |\n priceagreementreference | character varying(50) | |\n |\n interfaceaccountid | integer | |\n |\n createdon | timestamp without time zone | |\n |\n modifiedby | integer | |\n |\n createdby | integer | |\n |\n modifiedon | timestamp without time zone | |\n |\n createdonsupplier | timestamp without time zone | |\n |\n modifiedbysupplier | integer | |\n |\n createdbysupplier | integer | |\n |\n modifiedonsupplier | timestamp without time zone | |\n |\n newprice | numeric(19,4) | |\n |\n newcontractstartdate | timestamp without time zone | |\n |\n newcontractenddate | timestamp without time zone | |\n |\n newpriceagreementreference | character varying(50) | |\n |\n licensereference | character varying(50) | |\n |\n purchasercomment | character varying(500) | |\n |\n reportingunit | character varying(5) | |\n |\n articlecode | character varying(50) | |\n |\n participantdescription | character varying(500) | |\n |\n motivationneeded | boolean | |\n |\n photourl | character varying(500) | |\n |\n reviewedshe | boolean | |\n |\nnoinspectionuntil | timestamp without time zone | |\n |\n priority | boolean | |\n |\n needschecking | boolean | |\n |\n role | character varying(20) | |\n |\nIndexes:\n \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid)\n \"EnabledIndex\" btree (enabled)\n \"ix_companyarticledb_article\" btree (articleid)\n \"ix_companyarticledb_company\" btree (companyid)\n \"participantarticlecodeindex\" btree (articlecode)\n \"participantdescriptionindex\" gin (participantdescription gin_trgm_ops)\nForeign-key constraints:\n \"fk_companyarticledb_accountsdb\" FOREIGN KEY (modifiedby) REFERENCES\naccountsdb(id)\n \"fk_companyarticledb_accountsdb1\" FOREIGN KEY (createdby) REFERENCES\naccountsdb(id)\n \"fk_companyarticledb_accountsdb2\" FOREIGN KEY (preventioncounselorid)\nREFERENCES accountsdb(id)\n \"fk_companyarticledb_articledb\" FOREIGN KEY (articleid) REFERENCES\narticledb(id)\n \"fk_companyarticledb_companydb\" FOREIGN KEY (companyid) REFERENCES\ncompanydb(id)\n \"fk_companyarticledb_interfaceaccountdb\" FOREIGN KEY\n(interfaceaccountid) REFERENCES interfaceaccountdb(id)\n 
\"fk_companyarticledb_supplieraccountdb\" FOREIGN KEY (createdbysupplier)\nREFERENCES supplieraccountdb(id)\n \"fk_companyarticledb_supplieraccountdb1\" FOREIGN KEY\n(modifiedbysupplier) REFERENCES supplieraccountdb(id)\n\nTable Metadata\nrelname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid)\n\"companyarticledb\" 6191886 \"5.40276e+08\" 6188459 \"r\" 44 false \"50737979392\"\n\n\nEXPLAIN (ANALYZE, BUFFERS), not just EXPLAIN\n\"Index Scan using ix_companyarticledb_company on companyarticledb\n(cost=0.57..2.80 rows=1 width=193) (actual time=1011.335..1011.454 rows=1\nloops=1)\"\n\" Index Cond: (companyid = 77)\"\n\" Filter: (articleid = 7869071)\"\n\" Rows Removed by Filter: 2674361\"\n\" Buffers: shared hit=30287\"\n\"Planning time: 0.220 ms\"\n\"Execution time: 1011.502 ms\"\n\nHistory\n\nFor all other participants this returns a lot faster, for this new\nparticipant this goes very slow.\n\nExample for another participant, there another index is used.\n\n\"Index Scan using pk_pricedb on companyarticledb (cost=0.57..2.79 rows=1\nwidth=193) (actual time=0.038..0.039 rows=0 loops=1)\"\n\" Index Cond: ((companyid = 39) AND (articleid = 7869071))\"\n\" Buffers: shared hit=4\"\n\"Planning time: 0.233 ms\"\n\"Execution time: 0.087 ms\"\n\n\n\nThis is applicable for all queries joining companyarticledb for\ncompanyid='77' for this participant.\nI do not know why this participant is different than the others except that\nit was recently added.\n\n\nHardware\nStandard DS15 v2 (20 vcpus, 140 GB memory)\n\n\n\nMaintenance Setup\nI did ran VACUUM on the db just before executing the queries\nI did reindex the indexes on companyarticledb\n\nGUC Settings\n\n\"application_name\" \"pgAdmin 4 - CONN:6235249\" \"client\"\n\"bytea_output\" \"escape\" \"session\"\n\"checkpoint_completion_target\" \"0.7\" \"configuration file\"\n\"client_encoding\" \"UNICODE\" \"session\"\n\"client_min_messages\" \"notice\" \"session\"\n\"DateStyle\" \"ISO, MDY\" \"session\"\n\"default_statistics_target\" \"100\" \"configuration file\"\n\"effective_cache_size\" \"105GB\" \"configuration file\"\n\"effective_io_concurrency\" \"200\" \"configuration file\"\n\"external_pid_file\" \"/opt/bitnami/postgresql/tmp/postgresql.pid\" \"command\nline\"\n\"hot_standby\" \"on\" \"configuration file\"\n\"listen_addresses\" \"*\" \"configuration file\"\n\"maintenance_work_mem\" \"2GB\" \"configuration file\"\n\"max_connections\" \"200\" \"configuration file\"\n\"max_parallel_workers\" \"20\" \"configuration file\"\n\"max_parallel_workers_per_gather\" \"10\" \"configuration file\"\n\"max_stack_depth\" \"2MB\" \"environment variable\"\n\"max_wal_senders\" \"16\" \"configuration file\"\n\"max_wal_size\" \"2GB\" \"configuration file\"\n\"max_worker_processes\" \"20\" \"configuration file\"\n\"min_wal_size\" \"1GB\" \"configuration file\"\n\"random_page_cost\" \"1.1\" \"configuration file\"\n\"shared_buffers\" \"35GB\" \"configuration file\"\n\"wal_buffers\" \"16MB\" \"configuration file\"\n\"wal_keep_segments\" \"32\" \"configuration file\"\n\"wal_level\" \"replica\" \"configuration file\"\n\"work_mem\" \"18350kB\" \"configuration file\"\n\n\nThank you for your help\n\nRegards,\nKim\n\nOp ma 25 feb. 
2019 om 17:16 schreef Justin Pryzby <[email protected]>:\n\n> On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote:\n> > Is there any way how I can make the queries fast for new participants?\n> This\n> > is a big problem, because for new participants, speed is even more\n> > important.\n> >\n> > Thank you for your help.\n>\n> Could you include information requested here ?\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Justin\n>\n\n\n-- \nMet vriendelijke groeten,",
"msg_date": "Tue, 26 Feb 2019 00:22:39 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 12:22:39AM +0100, [email protected] wrote:\n\n> Hardware\n> Standard DS15 v2 (20 vcpus, 140 GB memory)\n\n> \"effective_cache_size\" \"105GB\" \"configuration file\"\n> \"effective_io_concurrency\" \"200\" \"configuration file\"\n> \"maintenance_work_mem\" \"2GB\" \"configuration file\"\n> \"max_parallel_workers\" \"20\" \"configuration file\"\n> \"max_parallel_workers_per_gather\" \"10\" \"configuration file\"\n> \"max_worker_processes\" \"20\" \"configuration file\"\n> \"random_page_cost\" \"1.1\" \"configuration file\"\n> \"shared_buffers\" \"35GB\" \"configuration file\"\n> \"work_mem\" \"18350kB\" \"configuration file\"\n\nI don't know for sure, but 35GB is very possibly too large shared_buffers. The\nrule of thumb is \"start at 25% of RAM\" but I think anything over 10-15GB is\nfrequently too large, unless you can keep the whole DB in RAM (can you?)\n\n> Table Metadata\n> relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid)\n> \"companyarticledb\" 6191886 \"5.40276e+08\" 6188459 \"r\" 44 false \"50737979392\"\n\nwork_mem could probably benefit from being larger (just be careful that you\ndon't end up with 20x parallel workers running complex plans each node of which\nusing 100MB work_mem).\n\n> Full Table and Index Schema\n> The difference is very bad for the new company, even on the simplest query\n> \n> SELECT * FROM CompanyArticleDB\n> WHERE CompanyId = '77'\n> AND ArticleId= '7869071'\n\nIt sounds to me like the planner thinks that the distribution of companyID and\narticleID are independent, when they're not. For example it think that\ncompanyID=33 filters out 99% of the rows.\n\n> companyid | integer | | not null |\n> articleid | integer | | not null |\n\n> EXPLAIN (ANALYZE, BUFFERS), not just EXPLAIN\n> SELECT * FROM CompanyArticleDB\n> WHERE CompanyId = '77'\n> AND ArticleId= '7869071'\n> \"Index Scan using ix_companyarticledb_company on companyarticledb (cost=0.57..2.80 rows=1 width=193) (actual time=1011.335..1011.454 rows=1 loops=1)\"\n> \" Index Cond: (companyid = 77)\"\n> \" Filter: (articleid = 7869071)\"\n> \" Rows Removed by Filter: 2674361\"\n> \" Buffers: shared hit=30287\"\n\n> Example for another participant, there another index is used.\n> \"Index Scan using pk_pricedb on companyarticledb (cost=0.57..2.79 rows=1 width=193) (actual time=0.038..0.039 rows=0 loops=1)\"\n> \" Index Cond: ((companyid = 39) AND (articleid = 7869071))\"\n> \" Buffers: shared hit=4\"\n\n> I do not know why this participant is different than the others except that\n> it was recently added.\n\nWere the tables ANALYZEd since then ? 
You could check:\nSELECT * FROM pg_stat_user_tables WHERE relname='companyarticledb';\n\nIf you have small number of companyIDs (~100), then the table statistics may\nincldue a most-common-values list, and companies not in the MCV list may end up\nwith different query plans, even without correlation issues.\n\nIt looks like the NEW company has ~3e6 articles, out of a total ~5e8 articles.\nThe planner may think that companyID doesn't exist at all, so scanning the idx\non companyID will be slightly faster than using the larger, composite index on\ncompanyID,articleID.\n\nJustin\n\n> Indexes:\n> \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid)\n> \"EnabledIndex\" btree (enabled)\n> \"ix_companyarticledb_article\" btree (articleid)\n> \"ix_companyarticledb_company\" btree (companyid)\n> \"participantarticlecodeindex\" btree (articlecode)\n> \"participantdescriptionindex\" gin (participantdescription gin_trgm_ops)\n> Foreign-key constraints:\n> \"fk_companyarticledb_accountsdb\" FOREIGN KEY (modifiedby) REFERENCES accountsdb(id)\n> \"fk_companyarticledb_accountsdb1\" FOREIGN KEY (createdby) REFERENCES accountsdb(id)\n> \"fk_companyarticledb_accountsdb2\" FOREIGN KEY (preventioncounselorid) REFERENCES accountsdb(id)\n> \"fk_companyarticledb_articledb\" FOREIGN KEY (articleid) REFERENCES articledb(id)\n> \"fk_companyarticledb_companydb\" FOREIGN KEY (companyid) REFERENCES companydb(id)\n> \"fk_companyarticledb_interfaceaccountdb\" FOREIGN KEY (interfaceaccountid) REFERENCES interfaceaccountdb(id)\n> \"fk_companyarticledb_supplieraccountdb\" FOREIGN KEY (createdbysupplier) REFERENCES supplieraccountdb(id)\n> \"fk_companyarticledb_supplieraccountdb1\" FOREIGN KEY (modifiedbysupplier) REFERENCES supplieraccountdb(id)\n\n",
"msg_date": "Mon, 25 Feb 2019 17:59:20 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
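One hedged way to act on Justin's analysis: confirm when the table was last analyzed, then raise the statistics target on companyid so that smaller companies have a chance of making it into the most-common-values list, and re-analyze. The target of 1000 is only an illustration; test before applying it in production.

SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
  FROM pg_stat_user_tables
 WHERE relname = 'companyarticledb';

ALTER TABLE companyarticledb ALTER COLUMN companyid SET STATISTICS 1000;
ANALYZE companyarticledb;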
{
"msg_contents": "Regarding shared_buffers, please install the pg_buffercache extension \nand run the recommended queries with that extension during high load \ntimes to really get an idea about the right value for shared_buffers. \nLet's take the guess work out of it.\n\nRegards,\nMichael Vitale\n\n> Justin Pryzby <mailto:[email protected]>\n> Monday, February 25, 2019 6:59 PM\n> On Tue, Feb 26, 2019 at 12:22:39AM +0100, [email protected] wrote:\n>\n>> Hardware\n>> Standard DS15 v2 (20 vcpus, 140 GB memory)\n>\n>> \"effective_cache_size\" \"105GB\" \"configuration file\"\n>> \"effective_io_concurrency\" \"200\" \"configuration file\"\n>> \"maintenance_work_mem\" \"2GB\" \"configuration file\"\n>> \"max_parallel_workers\" \"20\" \"configuration file\"\n>> \"max_parallel_workers_per_gather\" \"10\" \"configuration file\"\n>> \"max_worker_processes\" \"20\" \"configuration file\"\n>> \"random_page_cost\" \"1.1\" \"configuration file\"\n>> \"shared_buffers\" \"35GB\" \"configuration file\"\n>> \"work_mem\" \"18350kB\" \"configuration file\"\n>\n> I don't know for sure, but 35GB is very possibly too large shared_buffers. The\n> rule of thumb is \"start at 25% of RAM\" but I think anything over 10-15GB is\n> frequently too large, unless you can keep the whole DB in RAM (can you?)\n>\n>> Table Metadata\n>> relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid)\n>> \"companyarticledb\" 6191886 \"5.40276e+08\" 6188459 \"r\" 44 false \"50737979392\"\n>\n> work_mem could probably benefit from being larger (just be careful that you\n> don't end up with 20x parallel workers running complex plans each node of which\n> using 100MB work_mem).\n>\n>> Full Table and Index Schema\n>> The difference is very bad for the new company, even on the simplest query\n>>\n>> SELECT * FROM CompanyArticleDB\n>> WHERE CompanyId = '77'\n>> AND ArticleId= '7869071'\n>\n> It sounds to me like the planner thinks that the distribution of companyID and\n> articleID are independent, when they're not. For example it think that\n> companyID=33 filters out 99% of the rows.\n>\n>> companyid | integer | | not null |\n>> articleid | integer | | not null |\n>\n>> EXPLAIN (ANALYZE, BUFFERS), not just EXPLAIN\n>> SELECT * FROM CompanyArticleDB\n>> WHERE CompanyId = '77'\n>> AND ArticleId= '7869071'\n>> \"Index Scan using ix_companyarticledb_company on companyarticledb (cost=0.57..2.80 rows=1 width=193) (actual time=1011.335..1011.454 rows=1 loops=1)\"\n>> \" Index Cond: (companyid = 77)\"\n>> \" Filter: (articleid = 7869071)\"\n>> \" Rows Removed by Filter: 2674361\"\n>> \" Buffers: shared hit=30287\"\n>\n>> Example for another participant, there another index is used.\n>> \"Index Scan using pk_pricedb on companyarticledb (cost=0.57..2.79 rows=1 width=193) (actual time=0.038..0.039 rows=0 loops=1)\"\n>> \" Index Cond: ((companyid = 39) AND (articleid = 7869071))\"\n>> \" Buffers: shared hit=4\"\n>\n>> I do not know why this participant is different than the others except that\n>> it was recently added.\n>\n> Were the tables ANALYZEd since then ? 
You could check:\n> SELECT * FROM pg_stat_user_tables WHERE relname='companyarticledb';\n>\n> If you have small number of companyIDs (~100), then the table statistics may\n> incldue a most-common-values list, and companies not in the MCV list may end up\n> with different query plans, even without correlation issues.\n>\n> It looks like the NEW company has ~3e6 articles, out of a total ~5e8 articles.\n> The planner may think that companyID doesn't exist at all, so scanning the idx\n> on companyID will be slightly faster than using the larger, composite index on\n> companyID,articleID.\n>\n> Justin\n>\n>> Indexes:\n>> \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid)\n>> \"EnabledIndex\" btree (enabled)\n>> \"ix_companyarticledb_article\" btree (articleid)\n>> \"ix_companyarticledb_company\" btree (companyid)\n>> \"participantarticlecodeindex\" btree (articlecode)\n>> \"participantdescriptionindex\" gin (participantdescription gin_trgm_ops)\n>> Foreign-key constraints:\n>> \"fk_companyarticledb_accountsdb\" FOREIGN KEY (modifiedby) REFERENCES accountsdb(id)\n>> \"fk_companyarticledb_accountsdb1\" FOREIGN KEY (createdby) REFERENCES accountsdb(id)\n>> \"fk_companyarticledb_accountsdb2\" FOREIGN KEY (preventioncounselorid) REFERENCES accountsdb(id)\n>> \"fk_companyarticledb_articledb\" FOREIGN KEY (articleid) REFERENCES articledb(id)\n>> \"fk_companyarticledb_companydb\" FOREIGN KEY (companyid) REFERENCES companydb(id)\n>> \"fk_companyarticledb_interfaceaccountdb\" FOREIGN KEY (interfaceaccountid) REFERENCES interfaceaccountdb(id)\n>> \"fk_companyarticledb_supplieraccountdb\" FOREIGN KEY (createdbysupplier) REFERENCES supplieraccountdb(id)\n>> \"fk_companyarticledb_supplieraccountdb1\" FOREIGN KEY (modifiedbysupplier) REFERENCES supplieraccountdb(id)\n>\n",
"msg_date": "Mon, 25 Feb 2019 19:30:18 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
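A sketch of the kind of pg_buffercache check being suggested here (assuming the default 8kB block size; the columns used are those of the pg_buffercache view):

    -- distribution of buffer usage counts; many buffers sitting at a high
    -- usagecount during peak load means the cache is actively re-used
    SELECT usagecount,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS size
      FROM pg_buffercache
     GROUP BY usagecount
     ORDER BY usagecount;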
{
"msg_contents": "> Indexes:\n> \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid)\n> \"EnabledIndex\" btree (enabled)\n> \"ix_companyarticledb_article\" btree (articleid)\n> \"ix_companyarticledb_company\" btree (companyid)\n>\n\nI'd say drop ix_companyarticledb_company since pk_pricedb can be used\ninstead even if other queries are only on companyid field, and it will be\nfaster for this case certainly since it targets the row you want directly\nfrom the index without the *\"Rows Removed by Filter: 2674361\"*\n\nI doubt the default_statistics_target = 100 default is doing you any\nfavors. You may want to try increasing that to 500 or 1000 if you can\nafford a small increase in planning cost and more storage for the bigger\nsampling of stats.\n\nIndexes: \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid) \"EnabledIndex\" btree (enabled) \"ix_companyarticledb_article\" btree (articleid) \"ix_companyarticledb_company\" btree (companyid)I'd say drop ix_companyarticledb_company since pk_pricedb can be used instead even if other queries are only on companyid field, and it will be faster for this case certainly since it targets the row you want directly from the index without the \"Rows Removed by Filter: 2674361\"I doubt the default_statistics_target = 100 default is doing you any favors. You may want to try increasing that to 500 or 1000 if you can afford a small increase in planning cost and more storage for the bigger sampling of stats.",
"msg_date": "Tue, 26 Feb 2019 12:08:51 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
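For reference, the two suggestions above expressed as statements (illustrative only; dropping an index should be checked against the other workloads first, and the higher statistics target only takes effect after ANALYZE):

    ALTER SYSTEM SET default_statistics_target = 1000;
    SELECT pg_reload_conf();
    ANALYZE companyarticledb;

    DROP INDEX ix_companyarticledb_company;  -- redundant with pk_pricedb (companyid, articleid)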
{
"msg_contents": "Hello All,\n\n\n\nThank you very much for your help. You have really helped me out!\n\nThe query is now as fast as the others.\n\n\n\nThe indexes ix_companyarticledb_article and ix_companyarticledb_company are\nremoved.\n\nThe parameter for default_statistics_target was set to 1000\n\nANALYZE was performed on the database\n\n\n\nI am so happy this worked out.\n\nThe pg_buffercache extension is now installed, and I will be working with\nit the coming days to improve my settings.\n\nFirst time I ran the query (evening, not high peak usage)\n\n\n\nSELECT c.relname, count(*) AS buffers\n\n FROM pg_buffercache b INNER JOIN pg_class c\n\n ON b.relfilenode = pg_relation_filenode(c.oid) AND\n\n b.reldatabase IN (0, (SELECT oid FROM pg_database\n\n WHERE datname = current_database()))\n\n GROUP BY c.relname\n\n ORDER BY 2 DESC\n\n LIMIT 10;\n\n\n\n\"pk_pricedb\" \"1479655\"\n\n\"companyarticledb\" \"1378549\"\n\n\"articledb\" \"780821\"\n\n\"pricedb\" \"280771\"\n\n\"descriptionindex\" \"138514\"\n\n\"ix_pricedb\" \"122833\"\n\n\"pk_articledb\" \"47290\"\n\n\"EnabledIndex\" \"29958\"\n\n\"strippedmanufacturernumberindex\" \"25604\"\n\n\"strippedcataloguenumberindex\" \"24360\"\n\n\n\n\n\nHow can I see if the whole DB is kept in RAM?\n\nHow to define the best setting for work_mem ?\n\n\n\nThanks for your help!\n\n\n\nRegards,\n\nKim\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOp di 26 feb. 2019 om 20:08 schreef Michael Lewis <[email protected]>:\n\n>\n> Indexes:\n>> \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid)\n>> \"EnabledIndex\" btree (enabled)\n>> \"ix_companyarticledb_article\" btree (articleid)\n>> \"ix_companyarticledb_company\" btree (companyid)\n>>\n>\n> I'd say drop ix_companyarticledb_company since pk_pricedb can be used\n> instead even if other queries are only on companyid field, and it will be\n> faster for this case certainly since it targets the row you want directly\n> from the index without the *\"Rows Removed by Filter: 2674361\"*\n>\n> I doubt the default_statistics_target = 100 default is doing you any\n> favors. You may want to try increasing that to 500 or 1000 if you can\n> afford a small increase in planning cost and more storage for the bigger\n> sampling of stats.\n>\n\n\n-- \nMet vriendelijke groeten,\n\nHello All,\n \nThank you\nvery much for your help. 
You have really helped me out!\nThe query\nis now as fast as the others.\n \nThe indexes\nix_companyarticledb_article and ix_companyarticledb_company are removed.\nThe parameter\nfor default_statistics_target was set to 1000\nANALYZE was\nperformed on the database\n \nI am so\nhappy this worked out.\nThe pg_buffercache\nextension is now installed, and I will be working with it the coming days to\nimprove my settings.\nFirst time\nI ran the query (evening, not high peak usage)\n \nSELECT\nc.relname, count(*) AS buffers\n FROM pg_buffercache b INNER JOIN pg_class c\n ON b.relfilenode =\npg_relation_filenode(c.oid) AND\n b.reldatabase IN (0, (SELECT\noid FROM pg_database\n WHERE\ndatname = current_database()))\n GROUP BY c.relname\n ORDER BY 2 DESC\n LIMIT 10;\n \n\"pk_pricedb\" \"1479655\"\n\"companyarticledb\" \"1378549\"\n\"articledb\" \"780821\"\n\"pricedb\" \"280771\"\n\"descriptionindex\" \"138514\"\n\"ix_pricedb\" \"122833\"\n\"pk_articledb\" \"47290\"\n\"EnabledIndex\" \"29958\"\n\"strippedmanufacturernumberindex\" \"25604\"\n\"strippedcataloguenumberindex\" \"24360\"\n \n \nHow can I\nsee if the whole DB is kept in RAM?\nHow to\ndefine the best setting for work_mem ?\n \nThanks for your help!\n \nRegards,\nKim\n \n \n \n \n \n \n Op di 26 feb. 2019 om 20:08 schreef Michael Lewis <[email protected]>:Indexes: \"pk_pricedb\" PRIMARY KEY, btree (companyid, articleid) \"EnabledIndex\" btree (enabled) \"ix_companyarticledb_article\" btree (articleid) \"ix_companyarticledb_company\" btree (companyid)I'd say drop ix_companyarticledb_company since pk_pricedb can be used instead even if other queries are only on companyid field, and it will be faster for this case certainly since it targets the row you want directly from the index without the \"Rows Removed by Filter: 2674361\"I doubt the default_statistics_target = 100 default is doing you any favors. You may want to try increasing that to 500 or 1000 if you can afford a small increase in planning cost and more storage for the bigger sampling of stats.\n-- Met vriendelijke groeten,",
"msg_date": "Tue, 26 Feb 2019 22:39:05 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
},
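On the two follow-up questions, a couple of starting-point queries (illustrative; "whole DB in RAM" can only be approximated, since PostgreSQL also relies on the kernel page cache outside shared_buffers):

    -- total database size, to compare with the 140 GB of RAM on the box
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- cache hit ratio and temp-file spill since the stats were last reset;
    -- frequent temp files suggest work_mem is too small for some queries
    SELECT datname,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct,
           temp_files,
           pg_size_pretty(temp_bytes) AS temp_size
      FROM pg_stat_database
     WHERE datname = current_database();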
{
"msg_contents": "[email protected] wrote:\n> EXPLAIN (ANALYZE, BUFFERS), not just EXPLAIN\n> \"Index Scan using ix_companyarticledb_company on companyarticledb (cost=0.57..2.80 rows=1 width=193) (actual time=1011.335..1011.454 rows=1 loops=1)\"\n> \" Index Cond: (companyid = 77)\"\n> \" Filter: (articleid = 7869071)\"\n> \" Rows Removed by Filter: 2674361\"\n> \" Buffers: shared hit=30287\"\n> \"Planning time: 0.220 ms\"\n> \"Execution time: 1011.502 ms\"\n\nYour problem are the \"Rows Removed by Filter: 2674361\".\n\nThe first thing I would try is:\n\n ALTER TABLE public.companyarticledb\n ALTER companyid SET STATISTICS 1000;\n\n ALTER TABLE public.companyarticledb\n ALTER articleid SET STATISTICS 1000;\n\n ANALYZE public.companyarticledb;\n\nThen PostgreSQL has a better idea which condition is selective.\n\nYou can set STATISTICS up to 10000, but don't forget that high values\nmake ANALYZE and planning slower.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 27 Feb 2019 08:58:17 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow for new participants"
}
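If useful, the per-column setting suggested above can be verified afterwards with a query along these lines (a sketch; attstattarget stays at -1 while a column still follows default_statistics_target):

    SELECT attname, attstattarget
      FROM pg_attribute
     WHERE attrelid = 'public.companyarticledb'::regclass
       AND attname IN ('companyid', 'articleid');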
] |
[
{
"msg_contents": "I have been able to locate four google search results with the same inquiry. What’ve been able to understand is …\n\n1. If auto-vaccum is working as expected, stats collector does not nullify these values as part of a startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain. \n2. A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify. \n3. Table never qualified for vacuuming based on auto-vacuum settings. \n\nI can rule out all three scenarios above, but I still see null values. What else could be at play here? \n\n\n\n----------------\nThank you\n\n\nI have been able to locate four google search results with the same inquiry. What’ve been able to understand is … If auto-vaccum is working as expected, stats collector does not nullify these values as part of a startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain. A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify. Table never qualified for vacuuming based on auto-vacuum settings. I can rule out all three scenarios above, but I still see null values. What else could be at play here? ----------------Thank you",
"msg_date": "Wed, 27 Feb 2019 09:47:13 -0500",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is pg_stat_user_tables Showing NULL for last_autoanalyze &\n last_autovacuum"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 09:47:13AM -0500, Fd Habash wrote:\n> I have been able to locate four google search results with the same inquiry. What’ve been able to understand is …\n> \n> 1. If auto-vaccum is working as expected, stats collector does not nullify these values as part of a startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain.\n> 2. A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify. \n> 3. Table never qualified for vacuuming based on auto-vacuum settings. \n\nCan you give an example ?\n\nIf it's an empty inheritence parent (relkind=r), then it won't trigger\nautovacuum/analyze thresholds (but you should analyze it manually).\n\nNote that relkind=p \"partitioned\" tables don't have entries at all.\nhttps://www.postgresql.org/message-id/flat/20180503141430.GA28019%40telsasoft.com\n\nIf it's never DELETEd from, then it won't trigger autovacuum (but may trigger\nautoanalyze).\n\nJustin\n\n",
"msg_date": "Wed, 27 Feb 2019 10:15:56 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is pg_stat_user_tables Showing NULL for last_autoanalyze &\n last_autovacuum"
},
{
"msg_contents": "On Wed, 2019-02-27 at 09:47 -0500, Fd Habash wrote:\n> I have been able to locate four google search results with the same inquiry. What’ve been able to understand is …\n> \n> If auto-vaccum is working as expected, stats collector does not nullify these values as part of a\n> startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain.\n> A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify.\n> Table never qualified for vacuuming based on auto-vacuum settings.\n> \n> I can rule out all three scenarios above, but I still see null values. What else could be at play here?\n\nThe obvious suspicion is that autovacuum starts, but cannot finish because it either\ncannot keep up with the change rate or gives up because it is blocking a concurrent\nsession.\n\nWhat is \"n_live_tup\" and \"n_dead_tup\" in \"pg_stat_user_tables\" for these tables?\nAre there any autovacuum workers running currently?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 28 Feb 2019 09:44:13 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is pg_stat_user_tables Showing NULL for last_autoanalyze &\n last_autovacuum"
},
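To go with these questions, a sketch of how the dead-tuple counts can be compared with the autovacuum trigger point (uses the global settings only; per-table reloptions, if any, override them):

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_autovacuum,
           last_autoanalyze,
           current_setting('autovacuum_vacuum_threshold')::bigint
             + current_setting('autovacuum_vacuum_scale_factor')::float8 * n_live_tup
             AS approx_vacuum_threshold
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;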
{
"msg_contents": "Thank you …\n\nAre the calculations for triggering autovacuum dependent upon statistics generated by auto-anaylyze. In other words, if autoanalyze does not run at all, will autovac be able to run its math for threshold (updates & deletes) & scale factor (table rows) to do its thing?\n\nMy understanding from the documentation is that it does not need autoanalyze stats.\n\nThanks \n\n----------------\nThank you\n\nFrom: Justin Pryzby\nSent: Wednesday, February 27, 2019 11:15 AM\nTo: Fd Habash\nCc: [email protected]\nSubject: Re: What is pg_stat_user_tables Showing NULL for last_autoanalyze &last_autovacuum\n\nOn Wed, Feb 27, 2019 at 09:47:13AM -0500, Fd Habash wrote:\n> I have been able to locate four google search results with the same inquiry. What’ve been able to understand is …\n> \n> 1. If auto-vaccum is working as expected, stats collector does not nullify these values as part of a startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain.\n> 2. A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify. \n> 3. Table never qualified for vacuuming based on auto-vacuum settings. \n\nCan you give an example ?\n\nIf it's an empty inheritence parent (relkind=r), then it won't trigger\nautovacuum/analyze thresholds (but you should analyze it manually).\n\nNote that relkind=p \"partitioned\" tables don't have entries at all.\nhttps://www.postgresql.org/message-id/flat/20180503141430.GA28019%40telsasoft.com\n\nIf it's never DELETEd from, then it won't trigger autovacuum (but may trigger\nautoanalyze).\n\nJustin\n\n\nThank you … Are the calculations for triggering autovacuum dependent upon statistics generated by auto-anaylyze. In other words, if autoanalyze does not run at all, will autovac be able to run its math for threshold (updates & deletes) & scale factor (table rows) to do its thing? My understanding from the documentation is that it does not need autoanalyze stats. Thanks ----------------Thank you From: Justin PryzbySent: Wednesday, February 27, 2019 11:15 AMTo: Fd HabashCc: [email protected]: Re: What is pg_stat_user_tables Showing NULL for last_autoanalyze &last_autovacuum On Wed, Feb 27, 2019 at 09:47:13AM -0500, Fd Habash wrote:> I have been able to locate four google search results with the same inquiry. What’ve been able to understand is …> > 1. If auto-vaccum is working as expected, stats collector does not nullify these values as part of a startup sequence or regular Maitenance. If a relation gets auto[vacuumed|analyzed], the timestamps should remain.> 2. A database engine crash or restart with ‘immediate’ option will cause the timestamps to nullify. > 3. Table never qualified for vacuuming based on auto-vacuum settings. Can you give an example ? If it's an empty inheritence parent (relkind=r), then it won't triggerautovacuum/analyze thresholds (but you should analyze it manually). Note that relkind=p \"partitioned\" tables don't have entries at all.https://www.postgresql.org/message-id/flat/20180503141430.GA28019%40telsasoft.com If it's never DELETEd from, then it won't trigger autovacuum (but may triggerautoanalyze). Justin",
"msg_date": "Thu, 28 Feb 2019 08:34:54 -0500",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: What is pg_stat_user_tables Showing NULL for last_autoanalyze\n &last_autovacuum"
}
] |
[
{
"msg_contents": "Hi\n In the log file of my PostgreSQL cluster, I find :\n>>\nStatement: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n<<\n\n\nè how to get the content of the bind variables ?\n\nThanks in advance\n\nBest Regards\n[cid:[email protected]]\n\n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n\[email protected]<mailto:[email protected]>\nTél. : +33 6 49 51 11 88\n[cid:[email protected]]<mailto:[email protected]>[cid:[email protected]]<sip:[email protected]>\n\n\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Thu, 28 Feb 2019 12:21:56 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to get the content of Bind variables"
},
{
"msg_contents": "If you set log_min_duration_statement low enough for your particular \nquery, you will see another line below it showing what values are \nassociated with each bind variable like this:\n\n2019-02-28 00:07:55CST 2019-02-2800:02:09CST ihr2 10.86.42.184(43460) \nSELECT LOG: duration: 26078.308 ms execute <unnamed>: select \npg_advisory_lock($1)\n\n2019-02-28 00:07:55CST 2019-02-2800:02:09CST ihr2 10.86.42.184(43460) \nSELECT DETAIL: parameters: $1 = '3428922050323511872'\n\nRegards,\nMichael Vitale\n> ROS Didier <mailto:[email protected]>\n> Thursday, February 28, 2019 7:21 AM\n>\n> Hi\n>\n> In the log file of my PostgreSQL cluster, I find :\n>\n> >>\n>\n> *Statement:*update t_shared_liste_valeurs set deletion_date=*$1*, \n> deletion_login=*$2*, modification_date=*$3*, modification_login=*$4*, \n> administrable=*$5*, libelle=*$6*, niveau=*$7* where code=*$8*\n>\n> <<\n>\n> �how to get the content of the bind variables ?\n>\n> Thanks in advance\n>\n> Best Regards\n>\n> cid:[email protected]\n>\n> \t\n>\n> *\n> **Didier ROS*\n>\n> *Expertise SGBD*\n>\n> EDF - DTEO - DSIT - IT DMA\n>\n> D�partement Solutions Groupe\n>\n> Groupe Performance Applicative\n>\n> 32 avenue Pablo Picasso\n>\n> 92000 NANTERRE\n>\n> _didier*[email protected]* <mailto:[email protected]>_\n>\n> T�l. : +33 6 49 51 11 88\n>\n> cid:[email protected] \n> <mailto:[email protected]>cid:[email protected] \n> <sip:[email protected]>\n>\n>\n> Ce message et toutes les pi�ces jointes (ci-apr�s le 'Message') sont \n> �tablis � l'intention exclusive des destinataires et les informations \n> qui y figurent sont strictement confidentielles. Toute utilisation de \n> ce Message non conforme � sa destination, toute diffusion ou toute \n> publication totale ou partielle, est interdite sauf autorisation expresse.\n>\n> Si vous n'�tes pas le destinataire de ce Message, il vous est interdit \n> de le copier, de le faire suivre, de le divulguer ou d'en utiliser \n> tout ou partie. Si vous avez re�u ce Message par erreur, merci de le \n> supprimer de votre syst�me, ainsi que toutes ses copies, et de n'en \n> garder aucune trace sur quelque support que ce soit. Nous vous \n> remercions �galement d'en avertir imm�diatement l'exp�diteur par \n> retour du message.\n>\n> Il est impossible de garantir que les communications par messagerie \n> �lectronique arrivent en temps utile, sont s�curis�es ou d�nu�es de \n> toute erreur ou virus.\n> ____________________________________________________\n>\n> This message and any attachments (the 'Message') are intended solely \n> for the addressees. The information contained in this Message is \n> confidential. Any use of information contained in this Message not in \n> accord with its purpose, any dissemination or disclosure, either whole \n> or partial, is prohibited except formal approval.\n>\n> If you are not the addressee, you may not copy, forward, disclose or \n> use any part of it. If you have received this message in error, please \n> delete it and all copies from your system and notify the sender \n> immediately by return message.\n>\n> E-mail communication cannot be guaranteed to be timely secure, error \n> or virus-free.\n>",
"msg_date": "Thu, 28 Feb 2019 07:36:56 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
{
"msg_contents": "Hi\n Thanks for the answer.\n\nI have in my postgresql.conf :\nlog_min_duration_statement=0\nand the content of bind variables is not showed in the log file.\nWhat can I do to get the content of the bind variables ?\n\nBest Regard\n[cid:[email protected]]\n\n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n\[email protected]<mailto:[email protected]>\nTél. : +33 6 49 51 11 88\n[cid:[email protected]]<mailto:[email protected]>[cid:[email protected]]<sip:[email protected]>\n\n\n\nDe : [email protected] [mailto:[email protected]]\nEnvoyé : jeudi 28 février 2019 13:37\nÀ : ROS Didier <[email protected]>\nCc : [email protected]\nObjet : Re: How to get the content of Bind variables\n\nIf you set log_min_duration_statement low enough for your particular query, you will see another line below it showing what values are associated with each bind variable like this:\n2019-02-28 00:07:55 CST 2019-02-28 00:02:09 CST ihr2 10.86.42.184(43460) SELECT LOG: duration: 26078.308 ms execute <unnamed>: select pg_advisory_lock($1)\n2019-02-28 00:07:55 CST 2019-02-28 00:02:09 CST ihr2 10.86.42.184(43460) SELECT DETAIL: parameters: $1 = '3428922050323511872'\n\nRegards,\nMichael Vitale\n\nROS Didier<mailto:[email protected]>\nThursday, February 28, 2019 7:21 AM\nHi\n In the log file of my PostgreSQL cluster, I find :\n>>\nStatement: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n<<\n\n\nè how to get the content of the bind variables ?\n\nThanks in advance\n\nBest Regards\n[cid:[email protected]]\n\n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n\[email protected]<mailto:[email protected]>\nTél. : +33 6 49 51 11 88\n[cid:[email protected]]<mailto:[email protected]>[cid:[email protected]]<sip:[email protected]>\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Thu, 28 Feb 2019 12:47:02 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "ROS Didier wrote:\n> In the log file of my PostgreSQL cluster, I find :\n> >> \n> Statement: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> << \n> \n> how to get the content of the bind variables ?\n\nCan we see the whole log entry and the following one?\n\nPerhaps there was a syntax error or similar, and the statement was never executed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 28 Feb 2019 17:01:28 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
{
"msg_contents": "Hi Laurent\r\n\r\nHere is a biggest part of my log file :\r\n\r\n>>\r\n 2019-02-27 14:41:28 CET [16239]: [5696-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 1.604 ms\r\n2019-02-27 14:41:28 CET [16239]: [5697-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 0.084 ms parse <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\r\n2019-02-27 14:41:28 CET [16239]: [5698-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: plan:\r\n2019-02-27 14:41:28 CET [16239]: [5699-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainSTATEMENT: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\r\n2019-02-27 14:41:28 CET [16239]: [5700-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 0.288 ms bind <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\r\n2019-02-27 14:41:28 CET [16239]: [5701-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: execute <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\r\n<<\r\nThe statement has been executed\r\nIt is the same problem for all the statements.\r\nI can not get the content of the bind variables.\r\n\r\n\r\nDidier ROS\r\nExpertise SGBD\r\nEDF - DTEO - DSIT - IT DMA\r\nDépartement Solutions Groupe\r\nGroupe Performance Applicative\r\n32 avenue Pablo Picasso\r\n92000 NANTERRE\r\n \r\[email protected]\r\nTél. : +33 6 49 51 11 88\r\n\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] \r\nEnvoyé : jeudi 28 février 2019 17:01\r\nÀ : ROS Didier <[email protected]>; [email protected]\r\nObjet : Re: How to get the content of Bind variables\r\n\r\nROS Didier wrote:\r\n> In the log file of my PostgreSQL cluster, I find :\r\n> >> \r\n> Statement: update t_shared_liste_valeurs set deletion_date=$1, \r\n> deletion_login=$2, modification_date=$3, modification_login=$4, \r\n> administrable=$5, libelle=$6, niveau=$7 where code=$8 <<\r\n> \r\n> how to get the content of the bind variables ?\r\n\r\nCan we see the whole log entry and the following one?\r\n\r\nPerhaps there was a syntax error or similar, and the statement was never executed.\r\n\r\nYours,\r\nLaurenz Albe\r\n--\r\nCybertec | https://www.cybertec-postgresql.com\r\n\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n",
"msg_date": "Thu, 28 Feb 2019 16:12:49 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 12:21:56PM +0000, ROS Didier wrote:\n> Statement: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> \n> � how to get the content of the bind variables ?\n\nWhat is your setting of log_error_verbosity ?\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY\n\nAlso, I recommend using CSV logs, since they're easier to import into the DB\nand then easier to parse.\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY\n\nAlso, note that you can either set log_min_duration_statement=0, which logs all\nstatement durations, and associated statements (if they haven't been previously\nlogged).\n\nOr, you can set log_statement=all, which logs all statements (but duration is\nonly logged according to log_min_duration_statement).\n\nJustin\n\n",
"msg_date": "Thu, 28 Feb 2019 10:19:00 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
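A sketch of the logging setup discussed above, written as ALTER SYSTEM statements (illustrative only; logging_collector needs a restart rather than a reload, and log_min_duration_statement = 0 can generate a lot of log volume on a busy server):

    ALTER SYSTEM SET log_destination = 'csvlog,stderr';
    ALTER SYSTEM SET logging_collector = on;            -- takes effect only after a restart
    ALTER SYSTEM SET log_min_duration_statement = 0;    -- log every statement with its duration
    ALTER SYSTEM SET log_error_verbosity = 'default';   -- keep the DETAIL lines that carry parameters
    SELECT pg_reload_conf();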
{
"msg_contents": "Hi\n\tHere is the information :\n\npostgres=# show log_error_verbosity ;\n log_error_verbosity\n---------------------\n default\n(1 row)\n\n\npostgres=# show log_statement ;\n log_statement\n---------------\n none\n(1 row) \n\nI am trying now to set up log_statement :\nlog_statement=all ;\nlog_min_duration_statement=250;\n\n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n \[email protected]\nTél. : +33 6 49 51 11 88\n\n\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] \nEnvoyé : jeudi 28 février 2019 17:19\nÀ : ROS Didier <[email protected]>\nCc : [email protected]\nObjet : Re: How to get the content of Bind variables\n\nOn Thu, Feb 28, 2019 at 12:21:56PM +0000, ROS Didier wrote:\n> Statement: update t_shared_liste_valeurs set deletion_date=$1, \n> deletion_login=$2, modification_date=$3, modification_login=$4, \n> administrable=$5, libelle=$6, niveau=$7 where code=$8\n> \n> è how to get the content of the bind variables ?\n\nWhat is your setting of log_error_verbosity ?\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY\n\nAlso, I recommend using CSV logs, since they're easier to import into the DB and then easier to parse.\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY\n\nAlso, note that you can either set log_min_duration_statement=0, which logs all statement durations, and associated statements (if they haven't been previously logged).\n\nOr, you can set log_statement=all, which logs all statements (but duration is only logged according to log_min_duration_statement).\n\nJustin\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n",
"msg_date": "Fri, 1 Mar 2019 07:54:12 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "ROS Didier wrote:\n> Here is a biggest part of my log file :\n> \n> 2019-02-27 14:41:28 CET [16239]: [5696-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 1.604 ms\n> 2019-02-27 14:41:28 CET [16239]: [5697-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 0.084 ms parse <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> 2019-02-27 14:41:28 CET [16239]: [5698-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: plan:\n> 2019-02-27 14:41:28 CET [16239]: [5699-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainSTATEMENT: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> 2019-02-27 14:41:28 CET [16239]: [5700-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: duration: 0.288 ms bind <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> 2019-02-27 14:41:28 CET [16239]: [5701-1] [10086] user=pgbd_preint_sg2,db=pgbd_preint_sg2,client=localhost.localdomainLOG: execute <unnamed>: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n> <<\n> The statement has been executed\n> It is the same problem for all the statements.\n> I can not get the content of the bind variables.\n\nYou should set \"log_error_verbosity\" back from \"terse\" to \"default\".\nThen you will see the DETAIL messages.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 01 Mar 2019 09:31:38 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
{
"msg_contents": "ROS Didier <[email protected]> writes:\n> postgres=# show log_error_verbosity ;\n> log_error_verbosity\n> ---------------------\n> default\n> (1 row)\n\nSo ... how old is this server? AFAIK the above should be enough to ensure\nyou get the DETAIL lines with parameter values. But the ability to log\nthose hasn't been there forever.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 11:29:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
{
"msg_contents": "Hi Tom\n\n\tThanks a lot for your answer.\n\n*) Here is information about my server :\n[postgres@noeyypvd pg_log]$ cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 7.3 (Maipo)\n\npostgres=# select version() ;\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 10.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit\n(1 row)\n\n *) it's very problematic that we can not get the content of bind variables. We can not determine the root query which makes UPDATE statements to crash our production database.\nWhat can explain the lack of information about bind variables?\n\n*) Here is the parameters setting I use :\n# postgresql.conf : include_if_exists = '/appli/postgres/pgbd_prod_pda/10/conf/audit.conf'\n\nlog_rotation_size = 0\nlog_destination=stderr\nlogging_collector=true\nclient_min_messages=notice\nlog_min_messages=ERROR\nlog_min_error_statement=ERROR\nlog_min_duration_statement=250\ndebug_print_parse=off\ndebug_print_rewritten=off\ndebug_print_plan=on\ndebug_pretty_print=on\nlog_checkpoints=on\nlog_connections=on\nlog_disconnections=on\nlog_duration=on\nlog_error_verbosity=VERBOSE\nlog_hostname=on\nlog_lock_waits=on\ndeadlock_timeout=1s\nlog_statement=all\nlog_temp_files=0\nlog_autovacuum_min_duration = 0\ntrack_activities=on\ntrack_io_timing=on\ntrack_functions=all\nlog_line_prefix = '%t [%p]: [%l-1] [%x] user=%u,db=%d,client=%h'\nlc_messages ='C'\nshared_preload_libraries = 'passwordcheck,pg_stat_statements,pgstattuple'\nlisten_addresses = '*'\npg_stat_statements.track=all\npg_stat_statements.max = 1000\npg_stat_statements.track_utility=on\npg_stat_statements.save=on\n \n*) -> suggestion : It would be nice to have the content of bind variable of a query in a table of pg_catalog. (cf ORACLE) \n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n \[email protected]\nTél. : +33 6 49 51 11 88\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] \nEnvoyé : vendredi 1 mars 2019 17:30\nÀ : ROS Didier <[email protected]>\nCc : [email protected]; [email protected]\nObjet : Re: How to get the content of Bind variables\n\nROS Didier <[email protected]> writes:\n> postgres=# show log_error_verbosity ;\n> log_error_verbosity\n> ---------------------\n> default\n> (1 row)\n\nSo ... how old is this server? AFAIK the above should be enough to ensure you get the DETAIL lines with parameter values. But the ability to log those hasn't been there forever.\n\n\t\t\tregards, tom lane\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n",
"msg_date": "Fri, 1 Mar 2019 18:47:06 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "Hi Didier,\n\nI imagine that this is the sql executed from a trigger.\nCould you provide the trigger pl/pgsql code ? \nas the source and target tables (anonymized) definition ?\n\nAfter a fresh db restart, are thoses logs the same for the 6 first\nexecutions and the following ones ?\n\nRegards\nPAscal \n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Fri, 1 Mar 2019 13:41:38 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "Hi\n\n\n\n The SQL is not executed from a trigger.\n\n Here is an extract of my log file :\n\n>>\n\n2019-03-01 14:53:37 CET [24803]: [129-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still waiting for ShareLock on transaction 3711 after 1000.476 ms\n\n2019-03-01 14:53:37 CET [24803]: [130-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 DETAIL: Process holding the lock: 24786. Wait queue: 24803.\n\n2019-03-01 14:53:37 CET [24803]: [131-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 CONTEXT: while rechecking updated tuple (3,33) in relation \"t_shared_liste_valeurs\"\n\n2019-03-01 14:53:37 CET [24803]: [132-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 STATEMENT: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n\n<<\n\n\n\nAfter a fresh db restart, the result is the same : no content of Bind variables in the log file.\n\n\nBest Regards[cid:[email protected]]\n\n\nDidier ROS\nExpertise SGBD\nEDF - DTEO - DSIT - IT DMA\nDépartement Solutions Groupe\nGroupe Performance Applicative\n32 avenue Pablo Picasso\n92000 NANTERRE\n\[email protected]<mailto:[email protected]>\nTél. : +33 6 49 51 11 88\n[cid:[email protected]]<mailto:[email protected]>[cid:[email protected]]<sip:[email protected]>\n\n\n\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]]\nEnvoyé : vendredi 1 mars 2019 21:42\nÀ : [email protected]\nObjet : RE: How to get the content of Bind variables\n\n\n\nHi Didier,\n\n\n\nI imagine that this is the sql executed from a trigger.\n\nCould you provide the trigger pl/pgsql code ?\n\nas the source and target tables (anonymized) definition ?\n\n\n\nAfter a fresh db restart, are thoses logs the same for the 6 first executions and the following ones ?\n\n\n\nRegards\n\nPAscal\n\n\n\n\n\n\n\n--\n\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Sat, 2 Mar 2019 13:14:44 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "Did16 wrote\n> Hi\n> The SQL is not executed from a trigger.\n> Here is an extract of my log file :\n>>>\n> \n> 2019-03-01 14:53:37 CET [24803]: [129-1] [3686]\n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still waiting\n> for ShareLock on transaction 3711 after 1000.476 ms\n> \n> 2019-03-01 14:53:37 CET [24803]: [130-1] [3686]\n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 DETAIL: Process holding the lock:\n> 24786. Wait queue: 24803.\n> \n> 2019-03-01 14:53:37 CET [24803]: [131-1] [3686]\n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 CONTEXT: while rechecking updated\n> tuple (3,33) in relation \"t_shared_liste_valeurs\"\n> \n> 2019-03-01 14:53:37 CET [24803]: [132-1] [3686]\n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 STATEMENT: update\n> t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2,\n> modification_date=$3, modification_login=$4, administrable=$5, libelle=$6,\n> niveau=$7 where code=$8\n> \n> <<\n> \n> After a fresh db restart, the result is the same : no content of Bind\n> variables in the log file.\n> \n> Best Regards\n> \n> Didier ROS\n> Expertise SGBD\n> EDF - DTEO - DSIT - IT DMA\n> Département Solutions Groupe\n> Groupe Performance Applicative\n> 32 avenue Pablo Picasso\n> 92000 NANTERRE\n\nOK, In case of a trigger or any pl/pgsql program I would have tryed to write\nthe content of bind variables using RAISE command or someting similar before\nexecuting the UPDATE command.\n\nBut your log is now much more explicit: you where waiting on a LOCK ...\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 2 Mar 2019 07:10:11 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "On Fri, Mar 01, 2019 at 06:47:06PM +0000, ROS Didier wrote:\n> log_line_prefix = '%t [%p]: [%l-1] [%x] user=%u,db=%d,client=%h'\n\nOn Sat, Mar 02, 2019 at 01:14:44PM +0000, ROS Didier wrote:\n> 2019-03-01 14:53:37 CET [24803]: [129-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still waiting for ShareLock on transaction 3711 after 1000.476 ms\n> 2019-03-01 14:53:37 CET [24803]: [130-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 DETAIL: Process holding the lock: 24786. Wait queue: 24803.\n> 2019-03-01 14:53:37 CET [24803]: [131-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 CONTEXT: while rechecking updated tuple (3,33) in relation \"t_shared_liste_valeurs\"\n> 2019-03-01 14:53:37 CET [24803]: [132-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 STATEMENT: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n\nI just realized that your log is showing \"STATEMENT: [...]\" which I think\nmeans that's using libpq PQexec (simple query protocol), which means it doesn't\nuse or support bind parameters at all. If it were using PQexecParams (protocol\n2.0 \"extended\" query), it would show \"execute <unnamed>: [...]\", with any bind\nparams in DETAIL. And if you were using PQexecPrepared, it'd show \"execute\nFOO: [...]\" where FOO is the name of the statement \"prepared\" by PQprepare\n(plus bind params).\n\nhttps://www.postgresql.org/docs/current/libpq-exec.html\nhttps://www.postgresql.org/docs/current/protocol.html\n\nWhat client application is this ? It looks like it's going to set\ndeletion_date to the literal string \"$1\" .. except that it's not quoted, so the\nstatement will just cause an error. Am I wrong ?\n\nCould you grep the entire logfile for pid 24803 and post the output on dropbox\nor pastebin or show 10 lines of context by email ?\n\nI've just used my messages and test cases on this patch as a reference to check\nwhat I wrote above is accurate.\nhttps://www.postgresql.org/message-id/flat/20190210015707.GQ31721%40telsasoft.com#037d17567f4c84a5f436960ef1ed8c49\n\nOn Fri, Mar 01, 2019 at 06:47:06PM +0000, ROS Didier wrote:\n> *) -> suggestion : It would be nice to have the content of bind variable of a query in a table of pg_catalog. (cf ORACLE) \n\nAs I mentioned, you can set log_destination=csvlog,stderr and import them with\nCOPY (and add indices and analysis and monitoring..). It look like DETAILs are\nbeing logged, so that's not the issue, but CSV also has the nice benefit of\nbeing easily imported to SQL where escaping and linebreaks and similar are not\nconfusing the issue, which I think can be the case for text logs.\n\nJustin\n\n",
"msg_date": "Sat, 2 Mar 2019 09:56:38 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
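A rough sketch of the csvlog/COPY workflow described above (the column list is the one documented for the PostgreSQL 10/11 csvlog format; the log file path and name are hypothetical):

# postgresql.conf
logging_collector = on
log_destination = 'csvlog,stderr'

-- load the CSV log into a table for analysis:
CREATE TABLE postgres_log (
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text,
  PRIMARY KEY (session_id, session_line_num)
);

COPY postgres_log FROM '/var/lib/pgsql/10/data/log/postgresql-2019-03-02_000000.csv' WITH csv;

-- bind parameter values, when they are logged at all, land in the detail column:
SELECT log_time, message, detail FROM postgres_log WHERE detail LIKE 'parameters:%';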
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Fri, Mar 01, 2019 at 06:47:06PM +0000, ROS Didier wrote:\n>> log_line_prefix = '%t [%p]: [%l-1] [%x] user=%u,db=%d,client=%h'\n\n> On Sat, Mar 02, 2019 at 01:14:44PM +0000, ROS Didier wrote:\n>> 2019-03-01 14:53:37 CET [24803]: [129-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still waiting for ShareLock on transaction 3711 after 1000.476 ms\n>> 2019-03-01 14:53:37 CET [24803]: [130-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 DETAIL: Process holding the lock: 24786. Wait queue: 24803.\n>> 2019-03-01 14:53:37 CET [24803]: [131-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 CONTEXT: while rechecking updated tuple (3,33) in relation \"t_shared_liste_valeurs\"\n>> 2019-03-01 14:53:37 CET [24803]: [132-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 STATEMENT: update t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, modification_date=$3, modification_login=$4, administrable=$5, libelle=$6, niveau=$7 where code=$8\n\n> I just realized that your log is showing \"STATEMENT: [...]\" which I think\n> means that's using libpq PQexec (simple query protocol), which means it doesn't\n> use or support bind parameters at all.\n\nNo, what's shown above is a case of the current statement being printed\nas detail for some log message (a log_lock_waits message in this case).\nThis has nothing to do with whether statement logging is on overall.\n\nI now realize that what the OP is probably wishing for is that bind\nparameter values would be included as a detail line in messages other\nthan log_all_statements or statement-duration messages. Sorry, that\nfeature doesn't exist, and there'd be pretty serious technical\nimpediments to making it happen.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 02 Mar 2019 11:33:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
{
"msg_contents": "Hello\n\nPostgresql does not log statement parameters on log_lock_wait. Because this is not implemented: https://github.com/postgres/postgres/blob/REL_10_STABLE/src/backend/storage/lmgr/proc.c#L1461\nCompare with errdetail_params routine in this file: https://github.com/postgres/postgres/blob/REL_10_STABLE/src/backend/tcop/postgres.c#L1847 \n\nCurrently query parameters can be logged only at the end of successful query execution.\n\nregards, Sergei\n\n",
"msg_date": "Sat, 02 Mar 2019 19:33:40 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get the content of Bind variables"
},
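To illustrate Sergei's point above: parameter values only show up when the statement itself is logged at completion (extended query protocol), where they appear as a DETAIL line. A sketch, with the log lines being illustrative rather than taken from the thread:

-- log every completed statement with its duration (postgresql.conf or superuser session):
SET log_min_duration_statement = 0;

-- a parameterised UPDATE sent via PQexecParams/PQexecPrepared is then logged roughly as:
--   LOG:  duration: 1.234 ms  execute <unnamed>: update t_shared_liste_valeurs
--         set deletion_date=$1, ... where code=$8
--   DETAIL:  parameters: $1 = '2019-03-01', ..., $8 = 'XYZ'

The lock-wait LOG/DETAIL/CONTEXT/STATEMENT block quoted earlier comes from a different code path (log_lock_waits), which is why it never carries such a parameters line.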
{
"msg_contents": "Hi\n\t\nI have executed grep command on the entire logfile for pid 24803. See the attached file\nNB : I have no DETAIL section in my entire log file. Is it normal ?\n\nBest Reagrds\n\nDidier ROS\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] \nEnvoyé : samedi 2 mars 2019 16:57\nÀ : ROS Didier <[email protected]>\nCc : [email protected]; [email protected]; [email protected]\nObjet : Re: How to get the content of Bind variables\n\nOn Fri, Mar 01, 2019 at 06:47:06PM +0000, ROS Didier wrote:\n> log_line_prefix = '%t [%p]: [%l-1] [%x] user=%u,db=%d,client=%h'\n\nOn Sat, Mar 02, 2019 at 01:14:44PM +0000, ROS Didier wrote:\n> 2019-03-01 14:53:37 CET [24803]: [129-1] [3686] \n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still \n> waiting for ShareLock on transaction 3711 after 1000.476 ms\n> 2019-03-01 14:53:37 CET [24803]: [130-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 DETAIL: Process holding the lock: 24786. Wait queue: 24803.\n> 2019-03-01 14:53:37 CET [24803]: [131-1] [3686] user=pgbd_preint_sg2,db=pgbd_preint_sg2 CONTEXT: while rechecking updated tuple (3,33) in relation \"t_shared_liste_valeurs\"\n> 2019-03-01 14:53:37 CET [24803]: [132-1] [3686] \n> user=pgbd_preint_sg2,db=pgbd_preint_sg2 STATEMENT: update \n> t_shared_liste_valeurs set deletion_date=$1, deletion_login=$2, \n> modification_date=$3, modification_login=$4, administrable=$5, \n> libelle=$6, niveau=$7 where code=$8\n\nI just realized that your log is showing \"STATEMENT: [...]\" which I think means that's using libpq PQexec (simple query protocol), which means it doesn't use or support bind parameters at all. If it were using PQexecParams (protocol\n2.0 \"extended\" query), it would show \"execute <unnamed>: [...]\", with any bind params in DETAIL. And if you were using PQexecPrepared, it'd show \"execute\nFOO: [...]\" where FOO is the name of the statement \"prepared\" by PQprepare (plus bind params).\n\nhttps://www.postgresql.org/docs/current/libpq-exec.html\nhttps://www.postgresql.org/docs/current/protocol.html\n\nWhat client application is this ? It looks like it's going to set deletion_date to the literal string \"$1\" .. except that it's not quoted, so the statement will just cause an error. Am I wrong ?\n\nCould you grep the entire logfile for pid 24803 and post the output on dropbox or pastebin or show 10 lines of context by email ?\n\nI've just used my messages and test cases on this patch as a reference to check what I wrote above is accurate.\nhttps://www.postgresql.org/message-id/flat/20190210015707.GQ31721%40telsasoft.com#037d17567f4c84a5f436960ef1ed8c49\n\nOn Fri, Mar 01, 2019 at 06:47:06PM +0000, ROS Didier wrote:\n> *) -> suggestion : It would be nice to have the content of bind \n> variable of a query in a table of pg_catalog. (cf ORACLE)\n\nAs I mentioned, you can set log_destination=csvlog,stderr and import them with COPY (and add indices and analysis and monitoring..). It look like DETAILs are being logged, so that's not the issue, but CSV also has the nice benefit of being easily imported to SQL where escaping and linebreaks and similar are not confusing the issue, which I think can be the case for text logs.\n\nJustin\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. 
Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Sat, 2 Mar 2019 19:14:41 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
},
{
"msg_contents": "Hi Sergei\n\n\tThank you for your explanation. I can understand for the lock wait message, but I have no DETAIL section in my entire log file. Why ?\n I have plenty of STATEMENT sections ...\n\n\tThanks in advance\n\nBest Regards\nDidier ROS\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] \nEnvoyé : samedi 2 mars 2019 17:34\nÀ : ROS Didier <[email protected]>; [email protected]; [email protected]\nObjet : Re: How to get the content of Bind variables\n\nHello\n\nPostgresql does not log statement parameters on log_lock_wait. Because this is not implemented: https://github.com/postgres/postgres/blob/REL_10_STABLE/src/backend/storage/lmgr/proc.c#L1461\nCompare with errdetail_params routine in this file: https://github.com/postgres/postgres/blob/REL_10_STABLE/src/backend/tcop/postgres.c#L1847 \n\nCurrently query parameters can be logged only at the end of successful query execution.\n\nregards, Sergei\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n",
"msg_date": "Sat, 2 Mar 2019 19:19:15 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to get the content of Bind variables"
}
] |
[
{
"msg_contents": "Hi,\nI was testing pgstattuple and I realized that pgstattuple is working on\ntoasted table but pgstattuple_approx is raising the next error msg :\n\nERROR: \"pg_toast_18292\" is not a table or materialized view\n\nahm, is that because the pgstattuple_approx uses visibility map ? Can\nsomeone explain ? tnx.\n\nHi,I was testing pgstattuple and I realized that pgstattuple is working on toasted table but pgstattuple_approx is raising the next error msg : ERROR: \"pg_toast_18292\" is not a table or materialized viewahm, is that because the pgstattuple_approx uses visibility map ? Can someone explain ? tnx.",
"msg_date": "Sun, 3 Mar 2019 17:59:56 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgstattuple_approx for toasted table"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> I was testing pgstattuple and I realized that pgstattuple is working on toasted table but pgstattuple_approx is raising the next error msg : \n> \n> ERROR: \"pg_toast_18292\" is not a table or materialized view\n> \n> ahm, is that because the pgstattuple_approx uses visibility map ? Can someone explain ? tnx.\n\nYou are right; here is the code:\n\n /*\n * We support only ordinary relations and materialised views, because we\n * depend on the visibility map and free space map for our estimates about\n * unscanned pages.\n */\n if (!(rel->rd_rel->relkind == RELKIND_RELATION ||\n rel->rd_rel->relkind == RELKIND_MATVIEW))\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"\\\"%s\\\" is not a table or materialized view\",\n RelationGetRelationName(rel))));\n\nYours,\nLaurenz Albe\n\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 04 Mar 2019 06:21:39 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgstattuple_approx for toasted table"
}
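A small illustration of the difference (the TOAST relation name is the one from the error message above; the parent table name is hypothetical):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- the exact, full-scan variant accepts a TOAST table directly:
SELECT * FROM pgstattuple('pg_toast.pg_toast_18292');

-- the approximate variant depends on the visibility map / free space map,
-- so it has to be pointed at the owning ordinary table (or materialized view):
SELECT * FROM pgstattuple_approx('public.my_table');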
] |
[
{
"msg_contents": "Apologies for the cross-post to the general list. I realised I should\nhave (possibly?) posted here instead. Advice gratefully received.\n\nWe've been happy running a database server and replica for some years\nwith the following details and specs:\n\n postgres 9.5 (currently)\n supermicro X9DRD-7LN4F\n LSI Megaraid MR9261-8i with BBU\n 250gb raid 1 /\n 224gb raid 10 /db\n 126GB RAM (1066Mhz DDR3)\n 2 x Xeon E5-2609 v2 @ 2.5GHz\n\nServices on the server are scaling up quite quickly, so we are running\nout of disk space for the several hundred databases in the cluster.\nWhile the disk space is fairly easy to solve, our main issue is CPU\nhitting daily 5 minute peaks of 65% plus under load for complex plpgsql\nqueries, causing query backups. While we don't often spill queries to\ndisk, running out of RAM is an incipient problem too. \n\nWhile we could split the cluster there are some management issues to do\nwith that, together with our having a policy of local and remote\nreplicas. \n\nConsequently we're thinking of the following replacement servers:\n\n postgres 11 (planned)\n supermicro 113TQ-R700W \n LSI MegaRAID 9271-8i SAS/SATA RAID Controller, 1Gb DDR3 Cache (PCIE- Gen 3)\n 500gb raid 1 /\n 2tb raid 10 /db\n with \"zero maintenance flash cache protection\"\n 256GB RAM (2666MHz DDR4)\n 2x E5-2680 v4 Intel Xeon, 14 Cores, 2.40GHz, 35M Cache,\n\nThis configuration gives us lots more storage, double the RAM (with 8\nslots free) and just under 4x CPU (according to passmark) with lots more\ncores.\n\nWe're hoping to get two to three years of service out of this upgrade,\nbut then will split the cluster between servers if demand grows more\nthan we anticipate.\n\nAny comments on this upgrade, strategy or the \"zero maintenance\" thingy\n(instead of a BBU) would be much appreciated.\n\nRory\n\n",
"msg_date": "Tue, 5 Mar 2019 22:03:26 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server upgrade advice [xpost]"
},
{
"msg_contents": "The two major version upgrade will bring many happy returns. That’s all I can contribute. \n\n> On Mar 5, 2019, at 2:03 PM, Rory Campbell-Lange <[email protected]> wrote:\n> \n> Apologies for the cross-post to the general list. I realised I should\n> have (possibly?) posted here instead. Advice gratefully received.\n> \n> We've been happy running a database server and replica for some years\n> with the following details and specs:\n> \n> postgres 9.5 (currently)\n> supermicro X9DRD-7LN4F\n> LSI Megaraid MR9261-8i with BBU\n> 250gb raid 1 /\n> 224gb raid 10 /db\n> 126GB RAM (1066Mhz DDR3)\n> 2 x Xeon E5-2609 v2 @ 2.5GHz\n> \n> Services on the server are scaling up quite quickly, so we are running\n> out of disk space for the several hundred databases in the cluster.\n> While the disk space is fairly easy to solve, our main issue is CPU\n> hitting daily 5 minute peaks of 65% plus under load for complex plpgsql\n> queries, causing query backups. While we don't often spill queries to\n> disk, running out of RAM is an incipient problem too. \n> \n> While we could split the cluster there are some management issues to do\n> with that, together with our having a policy of local and remote\n> replicas. \n> \n> Consequently we're thinking of the following replacement servers:\n> \n> postgres 11 (planned)\n> supermicro 113TQ-R700W \n> LSI MegaRAID 9271-8i SAS/SATA RAID Controller, 1Gb DDR3 Cache (PCIE- Gen 3)\n> 500gb raid 1 /\n> 2tb raid 10 /db\n> with \"zero maintenance flash cache protection\"\n> 256GB RAM (2666MHz DDR4)\n> 2x E5-2680 v4 Intel Xeon, 14 Cores, 2.40GHz, 35M Cache,\n> \n> This configuration gives us lots more storage, double the RAM (with 8\n> slots free) and just under 4x CPU (according to passmark) with lots more\n> cores.\n> \n> We're hoping to get two to three years of service out of this upgrade,\n> but then will split the cluster between servers if demand grows more\n> than we anticipate.\n> \n> Any comments on this upgrade, strategy or the \"zero maintenance\" thingy\n> (instead of a BBU) would be much appreciated.\n> \n> Rory\n> \n\n",
"msg_date": "Tue, 5 Mar 2019 21:04:01 -0800",
"msg_from": "Ian Harding <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server upgrade advice [xpost]"
}
] |
[
{
"msg_contents": "Hi,\nI have the next relation in my db : A(id int, info bytea,date timestamp).\nEvery cell in the info column is very big and because of that there is a\ntoasted table with the data of the info column (pg_toast.pg_toast_123456).\n\nThe relation contains the login info for every user that logs into the\nsession and cleaned whenever the user disconnected. In case the use doesnt\nlogoff, during the night we clean login info that is older then 3 days.\n\nThe toasted table grew very fast (more then 50M+ rows per week) and thats\nwhy I set the next autovacuum settings for the table :\nOptions:\ntoast.autovacuum_vacuum_scale_factor=0\ntoast.autovacuum_vacuum_threshold=10000\ntoast.autovacuum_vacuum_cost_limit=10000\ntoast.autovacuum_vacuum_cost_delay=5\n\nThose settings helped but the table still grey very much. I wrote a script\nthat monitored some metadata about the table (pg_stat_all_tables,count(*)\nfrom orig and toasted table). I let the system monitor the table for a week\nand I found out the next info :\n\nAutovacuum was running great during the whole week and whenever it reached\n10k records in the toasted table it started vacuuming the table. *However,\nThe db grew dramatically during a period of 7 hours in a specific day. In\nthose 7 hours the table contained more then 10k (and kept increasing) but\nthe autovacuum didnt vacuum the table*. I saw that during those 7 hours\nautovacuum didnt run and as a result of that the table grew to its max\nsize(the current size).\n\nan example of an autovacuum run on the toasted table :\nautomatic vacuum of table \"db.pg_toast.pg_toast_123456\": index scans: 1\n pages: 0 removed, 1607656 remain\n tuples: 6396770 removed, 33778 remain\n buffer usage: 1743021 hits, 3281298 misses, 3217528 dirtied\n avg read rate: 2.555 MiB/s, avg write rate: 2.505 MiB/s\n system usage: CPU 98.44s/54.02u sec elapsed 10034.34 sec\n\nthe vacuum hits/misses/dirtied are set to default (1,10,20)\n\nautovacuum workers - 16\nmaintenance_work_mem - 200MB\n130GB RAM , 23 CPU\nCan anyone explain why suddenly the autovacuum should stop working for that\nlong period ?\n\nHi,I have the next relation in my db : A(id int, info bytea,date timestamp). Every cell in the info column is very big and because of that there is a toasted table with the data of the info column (pg_toast.pg_toast_123456).The relation contains the login info for every user that logs into the session and cleaned whenever the user disconnected. In case the use doesnt logoff, during the night we clean login info that is older then 3 days.The toasted table grew very fast (more then 50M+ rows per week) and thats why I set the next autovacuum settings for the table :Options: toast.autovacuum_vacuum_scale_factor=0toast.autovacuum_vacuum_threshold=10000toast.autovacuum_vacuum_cost_limit=10000toast.autovacuum_vacuum_cost_delay=5Those settings helped but the table still grey very much. I wrote a script that monitored some metadata about the table (pg_stat_all_tables,count(*) from orig and toasted table). I let the system monitor the table for a week and I found out the next info : Autovacuum was running great during the whole week and whenever it reached 10k records in the toasted table it started vacuuming the table. However, The db grew dramatically during a period of 7 hours in a specific day. In those 7 hours the table contained more then 10k (and kept increasing) but the autovacuum didnt vacuum the table. 
I saw that during those 7 hours autovacuum didnt run and as a result of that the table grew to its max size(the current size).an example of an autovacuum run on the toasted table : automatic vacuum of table \"db.pg_toast.pg_toast_123456\": index scans: 1 pages: 0 removed, 1607656 remain tuples: 6396770 removed, 33778 remain buffer usage: 1743021 hits, 3281298 misses, 3217528 dirtied avg read rate: 2.555 MiB/s, avg write rate: 2.505 MiB/s system usage: CPU 98.44s/54.02u sec elapsed 10034.34 secthe vacuum hits/misses/dirtied are set to default (1,10,20) autovacuum workers - 16maintenance_work_mem - 200MB130GB RAM , 23 CPUCan anyone explain why suddenly the autovacuum should stop working for that long period ?",
"msg_date": "Wed, 6 Mar 2019 18:47:21 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum just stop vacuuming specific table for 7 hours"
},
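For reference, per-table TOAST autovacuum options like the ones listed above are applied with ALTER TABLE on the parent table; a sketch (the table name is hypothetical):

ALTER TABLE public.login_info SET (
    toast.autovacuum_vacuum_scale_factor = 0,
    toast.autovacuum_vacuum_threshold    = 10000,
    toast.autovacuum_vacuum_cost_limit   = 10000,
    toast.autovacuum_vacuum_cost_delay   = 5
);

-- the settings end up in the TOAST relation's reloptions:
SELECT relname, reloptions FROM pg_class WHERE relname = 'pg_toast_123456';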
{
"msg_contents": "On Wed, Mar 06, 2019 at 06:47:21PM +0200, Mariel Cherkassky wrote:\n> Those settings helped but the table still grey very much. I wrote a script\n> that monitored some metadata about the table (pg_stat_all_tables,count(*)\n> from orig and toasted table). I let the system monitor the table for a week\n> and I found out the next info :\n\n> Autovacuum was running great during the whole week and whenever it reached\n> 10k records in the toasted table it started vacuuming the table. *However,\n> The db grew dramatically during a period of 7 hours in a specific day. In\n> those 7 hours the table contained more then 10k (and kept increasing) but\n> the autovacuum didnt vacuum the table*. I saw that during those 7 hours\n> autovacuum didnt run and as a result of that the table grew to its max\n> size(the current size).\n\nDoes pg_stat_all_tables show that the table ought to have been vacuumed ?\n\nSELECT * FROM pg_stat_sys_tables WHERE relid='pg_toast.pg_toast_123456'::regclass;\n\nCompare with relpages, reltuple FROM pg_class\n\nWhat postgres version ?\n\nJustin\n\n",
"msg_date": "Wed, 6 Mar 2019 11:05:14 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum just stop vacuuming specific table for 7 hours"
},
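A sketch of the comparison suggested above, pulling the autovacuum counters and the planner's size estimates for the TOAST table in one query (relation name as used earlier in the thread):

SELECT s.schemaname, s.relname,
       s.n_live_tup, s.n_dead_tup,
       s.last_autovacuum, s.autovacuum_count,
       c.relpages, c.reltuples,
       pg_size_pretty(pg_total_relation_size(s.relid)) AS total_size
FROM pg_stat_all_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.relid = 'pg_toast.pg_toast_123456'::regclass;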
{
"msg_contents": "The PostgreSQL version is 9.6.\nI dont have access to the machine right now so I will check tomorrow.\nBasically those values should be the same because they are updated by the\nautovacuum process right ?\nAny idea what else to check ? During the week last_autovacuum (in\npg_stat_all_tables) were updated every hour. Only during those problematic\n7 hours it wasnt updated.\n\nבתאריך יום ד׳, 6 במרץ 2019 ב-19:05 מאת Justin Pryzby <\[email protected]>:\n\n> On Wed, Mar 06, 2019 at 06:47:21PM +0200, Mariel Cherkassky wrote:\n> > Those settings helped but the table still grey very much. I wrote a\n> script\n> > that monitored some metadata about the table (pg_stat_all_tables,count(*)\n> > from orig and toasted table). I let the system monitor the table for a\n> week\n> > and I found out the next info :\n>\n> > Autovacuum was running great during the whole week and whenever it\n> reached\n> > 10k records in the toasted table it started vacuuming the table.\n> *However,\n> > The db grew dramatically during a period of 7 hours in a specific day. In\n> > those 7 hours the table contained more then 10k (and kept increasing) but\n> > the autovacuum didnt vacuum the table*. I saw that during those 7 hours\n> > autovacuum didnt run and as a result of that the table grew to its max\n> > size(the current size).\n>\n> Does pg_stat_all_tables show that the table ought to have been vacuumed ?\n>\n> SELECT * FROM pg_stat_sys_tables WHERE\n> relid='pg_toast.pg_toast_123456'::regclass;\n>\n> Compare with relpages, reltuple FROM pg_class\n>\n> What postgres version ?\n>\n> Justin\n>\n\nThe PostgreSQL version is 9.6.I dont have access to the machine right now so I will check tomorrow. Basically those values should be the same because they are updated by the autovacuum process right ?Any idea what else to check ? During the week last_autovacuum (in pg_stat_all_tables) were updated every hour. Only during those problematic 7 hours it wasnt updated.בתאריך יום ד׳, 6 במרץ 2019 ב-19:05 מאת Justin Pryzby <[email protected]>:On Wed, Mar 06, 2019 at 06:47:21PM +0200, Mariel Cherkassky wrote:\n> Those settings helped but the table still grey very much. I wrote a script\n> that monitored some metadata about the table (pg_stat_all_tables,count(*)\n> from orig and toasted table). I let the system monitor the table for a week\n> and I found out the next info :\n\n> Autovacuum was running great during the whole week and whenever it reached\n> 10k records in the toasted table it started vacuuming the table. *However,\n> The db grew dramatically during a period of 7 hours in a specific day. In\n> those 7 hours the table contained more then 10k (and kept increasing) but\n> the autovacuum didnt vacuum the table*. I saw that during those 7 hours\n> autovacuum didnt run and as a result of that the table grew to its max\n> size(the current size).\n\nDoes pg_stat_all_tables show that the table ought to have been vacuumed ?\n\nSELECT * FROM pg_stat_sys_tables WHERE relid='pg_toast.pg_toast_123456'::regclass;\n\nCompare with relpages, reltuple FROM pg_class\n\nWhat postgres version ?\n\nJustin",
"msg_date": "Wed, 6 Mar 2019 19:16:30 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum just stop vacuuming specific table for 7 hours"
}
] |
[
{
"msg_contents": "Hi,\n\nI was playing around with PG11.2 (i6700k with 16GB RAM, on Ubuntu 18.04, \ncompiled from sources) and LLVM, trying a CPU-bound query that in my \nsimple mind should benefit from JIT'ting but (almost) doesn't.\n\n1.) Test table with 195 columns of type 'numeric':\n\nCREATE TABLE test (data0 numeric,data1 numeric,data2 numeric,data3 \nnumeric,...,data192 numeric,data193 numeric,data194 numeric);\n\n2.) bulk-loaded (via COPY) 2 mio. rows of randomly generated data into \nthis table (and ran vacuum & analyze afterwards)\n\n3.) Disable parallel workers to just measure JIT performance via 'set \nmax_parallel_workers = 0'\n\n4.) Execute query without JIT a couple of times to make sure table is in \nmemory (I had iostat running in the background to verify that actually \nno disk access was taking place):\n\ntest=# explain (analyze,buffers) SELECT SUM(data0) AS data0,SUM(data1) \nAS data1,SUM(data2) AS data2,...,SUM(data193) AS data193,SUM(data194) AS \ndata194 FROM test;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=815586.31..815586.32 rows=1 width=6240) \n(actual time=14304.058..14304.058 rows=1 loops=1)\n Buffers: shared hit=64 read=399936\n -> Gather (cost=815583.66..815583.87 rows=2 width=6240) (actual \ntime=14303.925..14303.975 rows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 0\n Buffers: shared hit=64 read=399936\n -> Partial Aggregate (cost=814583.66..814583.67 rows=1 \nwidth=6240) (actual time=14302.966..14302.966 rows=1 loops=1)\n Buffers: shared hit=64 read=399936\n -> Parallel Seq Scan on test (cost=0.00..408333.33 \nrows=833333 width=1170) (actual time=0.017..810.513 rows=2000000 loops=1)\n Buffers: shared hit=64 read=399936\n Planning Time: 4.707 ms\n Execution Time: 14305.380 ms\n\n5.) Now I turned on the JIT and repeated the same query a couple of \ntimes. This is what I got\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=815586.31..815586.32 rows=1 width=6240) \n(actual time=15558.558..15558.558 rows=1 loops=1)\n Buffers: shared hit=128 read=399872\n -> Gather (cost=815583.66..815583.87 rows=2 width=6240) (actual \ntime=15558.450..15558.499 rows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 0\n Buffers: shared hit=128 read=399872\n -> Partial Aggregate (cost=814583.66..814583.67 rows=1 \nwidth=6240) (actual time=15557.541..15557.541 rows=1 loops=1)\n Buffers: shared hit=128 read=399872\n -> Parallel Seq Scan on test (cost=0.00..408333.33 \nrows=833333 width=1170) (actual time=0.020..941.925 rows=2000000 loops=1)\n Buffers: shared hit=128 read=399872\n Planning Time: 11.230 ms\n JIT:\n Functions: 6\n Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n Timing: Generation 15.707 ms, Inlining 4.688 ms, Optimization \n652.021 ms, Emission 939.556 ms, Total 1611.973 ms\n Execution Time: 15576.516 ms\n(16 rows)\n\nSo (ignoring the time for JIT'ting itself) this yields only ~2-3% \nperformance increase... is this because my query is just too simple to \nactually benefit a lot, meaning the code path for the 'un-JIT' case is \nalready fairly optimal ? Or does JIT'ting actually only have a large \nimpact on the filter/WHERE part of the query but not so much on \naggregation / tuple deforming ?\n\nThanks,\nTobias\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 6 Mar 2019 18:16:08 +0100",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "JIT performance question"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-06 18:16:08 +0100, Tobias Gierke wrote:\n> I was playing around with PG11.2 (i6700k with 16GB RAM, on Ubuntu 18.04,\n> compiled from sources) and LLVM, trying a CPU-bound query that in my simple\n> mind should benefit from JIT'ting but (almost) doesn't.\n> \n> 1.) Test table with 195 columns of type 'numeric':\n> \n> CREATE TABLE test (data0 numeric,data1 numeric,data2 numeric,data3\n> numeric,...,data192 numeric,data193 numeric,data194 numeric);\n> \n> 2.) bulk-loaded (via COPY) 2 mio. rows of randomly generated data into this\n> table (and ran vacuum & analyze afterwards)\n> \n> 3.) Disable parallel workers to just measure JIT performance via 'set\n> max_parallel_workers = 0'\n\nFWIW, it's better to do that via max_parallel_workers_per_gather in most\ncases, because creating a parallel plan and then not using that will\nhave its own consequences.\n\n\n> 4.) Execute query without JIT a couple of times to make sure table is in\n> memory (I had iostat running in the background to verify that actually no\n> disk access was taking place):\n\nThere's definitely accesses outside of PG happening here :(. Probably\ncached at the IO level, but without track_io_timings that's hard to\nconfirm. Presumably that's caused by the sequential scan ringbuffers.\nI found that forcing the pages to be read in using pg_prewarm gives more\nmeasurable results.\n\n\n> So (ignoring the time for JIT'ting itself) this yields only ~2-3%\n> performance increase... is this because my query is just too simple to\n> actually benefit a lot, meaning the code path for the 'un-JIT' case is\n> already fairly optimal ? Or does JIT'ting actually only have a large impact\n> on the filter/WHERE part of the query but not so much on aggregation / tuple\n> deforming ?\n\nIt's hard to know precisely without running a profile of the\nworkload. My suspicion is that the bottleneck in this query is the use\nof numeric, which has fairly slow operations, including aggregation. And\nthey're too complicated to be inlined.\n\nGenerally there's definitely advantage in JITing aggregation.\n\nThere's a lot of further improvements on the table with better JIT code\ngeneration, I just haven't gotten around implementing those :(\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 6 Mar 2019 09:42:58 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JIT performance question"
},
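A sketch of the measurement setup Andres suggests (pg_prewarm is a contrib extension; the table and column names are the ones from the test case above):

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('test');                -- pull the heap into shared_buffers first
SET max_parallel_workers_per_gather = 0;  -- plan a non-parallel plan, not just refuse workers
SET track_io_timing = on;                 -- EXPLAIN (ANALYZE, BUFFERS) then reports I/O time
SET jit = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT sum(data0) FROM test;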
{
"msg_contents": "On 06.03.19 18:42, Andres Freund wrote:\n>\n> It's hard to know precisely without running a profile of the\n> workload. My suspicion is that the bottleneck in this query is the use\n> of numeric, which has fairly slow operations, including aggregation. And\n> they're too complicated to be inlined.\n>\n> Generally there's definitely advantage in JITing aggregation.\n>\n> There's a lot of further improvements on the table with better JIT code\n> generation, I just haven't gotten around implementing those :(\n\nThanks for the quick response ! I think you're onto something with the \nnumeric type. I replaced it with bigint and repeated my test and now I \nget a nice 40% speedup (I'm again intentionally ignoring the costs for \nJIT'ting here as I assume a future PostgreSQL version will have some \nkind of caching for the generated code):\n\nWithout JIT:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1395000.49..1395000.50 rows=1 width=6240) (actual \ntime=6023.436..6023.436 rows=1 loops=1)\n Buffers: shared hit=256 read=399744\n I/O Timings: read=475.135\n -> Seq Scan on test (cost=0.00..420000.00 rows=2000000 width=1560) \n(actual time=0.035..862.424 rows=2000000 loops=1)\n Buffers: shared hit=256 read=399744\n I/O Timings: read=475.135\n Planning Time: 0.574 ms\n Execution Time: 6024.298 ms\n(8 rows)\n\n\nWith JIT:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1395000.49..1395000.50 rows=1 width=6240) (actual \ntime=4840.064..4840.064 rows=1 loops=1)\n Buffers: shared hit=320 read=399680\n I/O Timings: read=493.679\n -> Seq Scan on test (cost=0.00..420000.00 rows=2000000 width=1560) \n(actual time=0.090..847.458 rows=2000000 loops=1)\n Buffers: shared hit=320 read=399680\n I/O Timings: read=493.679\n Planning Time: 1.414 ms\n JIT:\n Functions: 3\n Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n Timing: Generation 19.747 ms, Inlining 10.281 ms, Optimization \n222.619 ms, Emission 362.862 ms, Total 615.509 ms\n Execution Time: 4862.113 ms\n(12 rows)\n\nCheers,\nTobias\n\n\n",
"msg_date": "Wed, 6 Mar 2019 19:21:33 +0100",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JIT performance question"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-06 19:21:33 +0100, Tobias Gierke wrote:\n> On 06.03.19 18:42, Andres Freund wrote:\n> > \n> > It's hard to know precisely without running a profile of the\n> > workload. My suspicion is that the bottleneck in this query is the use\n> > of numeric, which has fairly slow operations, including aggregation. And\n> > they're too complicated to be inlined.\n> > \n> > Generally there's definitely advantage in JITing aggregation.\n> > \n> > There's a lot of further improvements on the table with better JIT code\n> > generation, I just haven't gotten around implementing those :(\n> \n> Thanks for the quick response ! I think you're onto something with the\n> numeric type. I replaced it with bigint and repeated my test and now I get a\n> nice 40% speedup\n\nCool. It'd really be worthwhile for somebody to work on adding fastpaths\nto the numeric code...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 6 Mar 2019 10:32:57 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JIT performance question"
}
] |
[
{
"msg_contents": "Good afternoon.\nI need to know the commands to display the execution time, CPU usage and\nmemory usage, but through the Postgres console.Thank you.\n\nGood afternoon.I need to know the commands to display the execution time, CPU usage and memory usage, but through the Postgres console.Thank you.",
"msg_date": "Wed, 6 Mar 2019 16:09:46 -0400",
"msg_from": "Kenia Vergara <[email protected]>",
"msg_from_op": true,
"msg_subject": "Good afternoon."
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 06, 2019 at 04:09:46PM -0400, Kenia Vergara wrote:\n> Good afternoon.\n> I need to know the commands to display the execution time, CPU usage and\n> memory usage, but through the Postgres console.Thank you.\n\nDoes this do what you want ? Note that the \"QUERY STATISTICS\" are log output,\nand not a part of the sql result.\n\n$ psql postgres -Atc \"SET client_min_messages=log; SET log_statement_stats=on\" -c 'explain analyze SELECT max(i) FROM generate_series(1,999999)i'\nSET\nLOG: QUERY STATISTICS\nDETAIL: ! system usage stats:\n! 0.625402 s user, 0.059799 s system, 0.687496 s elapsed\n! [0.629882 s user, 0.059799 s system total]\n! 10672 kB max resident size\n! 0/27344 [0/27344] filesystem blocks in/out\n! 0/1378 [0/2365] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/167 [2/167] voluntary/involuntary context switches\nAggregate (cost=12.50..12.51 rows=1 width=4) (actual time=680.155..680.161 rows=1 loops=1)\n -> Function Scan on generate_series i (cost=0.00..10.00 rows=1000 width=4) (actual time=253.512..497.462 rows=999999 loops=1)\nPlanning Time: 0.227 ms\nExecution Time: 686.200 ms\n\n\nJustin\n\n",
"msg_date": "Wed, 6 Mar 2019 16:48:40 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "rusage (Re: Good afternoon.)"
},
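The same getrusage-based output can be broken down by phase with the per-module GUCs (these exist alongside log_statement_stats, but the server refuses to combine them with it); a sketch of a psql session:

\timing on
SET client_min_messages = log;
SET log_parser_stats = on;
SET log_planner_stats = on;
SET log_executor_stats = on;
SELECT max(i) FROM generate_series(1, 999999) i;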
{
"msg_contents": "On Wed, Mar 6, 2019 at 11:49 PM Justin Pryzby <[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Mar 06, 2019 at 04:09:46PM -0400, Kenia Vergara wrote:\n> > Good afternoon.\n> > I need to know the commands to display the execution time, CPU usage and\n> > memory usage, but through the Postgres console.Thank you.\n>\n> Does this do what you want ? Note that the \"QUERY STATISTICS\" are log output,\n> and not a part of the sql result.\n>\n> $ psql postgres -Atc \"SET client_min_messages=log; SET log_statement_stats=on\" -c 'explain analyze SELECT max(i) FROM generate_series(1,999999)i'\n> SET\n> LOG: QUERY STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.625402 s user, 0.059799 s system, 0.687496 s elapsed\n> ! [0.629882 s user, 0.059799 s system total]\n> ! 10672 kB max resident size\n> ! 0/27344 [0/27344] filesystem blocks in/out\n> ! 0/1378 [0/2365] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/167 [2/167] voluntary/involuntary context switches\n> Aggregate (cost=12.50..12.51 rows=1 width=4) (actual time=680.155..680.161 rows=1 loops=1)\n> -> Function Scan on generate_series i (cost=0.00..10.00 rows=1000 width=4) (actual time=253.512..497.462 rows=999999 loops=1)\n> Planning Time: 0.227 ms\n> Execution Time: 686.200 ms\n\nYou could also consider using pg_stat_kcache\n(https://github.com/powa-team/pg_stat_kcache/) extension, which gather\nmost of those metrics and accumulate them per (queryid, dbid, userid).\nIt requires more configuration, depends on pg_stat_statements\nextension, and need a postgres restart to activate it though.\n\n",
"msg_date": "Wed, 6 Mar 2019 23:59:22 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rusage (Re: Good afternoon.)"
},
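A minimal sketch of the pg_stat_kcache setup mentioned above (extension and function names as in its README; the exact columns returned vary between versions, so they are not spelled out here):

# postgresql.conf -- needs a restart
shared_preload_libraries = 'pg_stat_statements, pg_stat_kcache'

-- then, per database:
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_stat_kcache;
SELECT * FROM pg_stat_kcache();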
{
"msg_contents": "On Wed, 6 Mar 2019 at 23:45, Kenia Vergara <[email protected]> wrote:\n\n> Good afternoon.\n> I need to know the commands to display the execution time, CPU usage and\n> memory usage, but through the Postgres console.Thank you.\n>\n\nYou might want to have a look at,\nhttps://aaronparecki.com/2015/02/19/8/monitoring-cpu-memory-usage-from-postgres\n\n-- \nRegards,\nRafia Sabih\n\nOn Wed, 6 Mar 2019 at 23:45, Kenia Vergara <[email protected]> wrote:Good afternoon.I need to know the commands to display the execution time, CPU usage and memory usage, but through the Postgres console.Thank you.\nYou might want to have a look at, https://aaronparecki.com/2015/02/19/8/monitoring-cpu-memory-usage-from-postgres-- Regards,Rafia Sabih",
"msg_date": "Mon, 1 Apr 2019 15:03:43 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Good afternoon."
},
{
"msg_contents": "Hi Kenia,\n\nTake a look of this link\n\nhttps://wiki.postgresql.org/wiki/Monitoring#pg_activity\n\nI think you are looking for something line pg_activity\n\nBut in that official pgsql documentation you'll find all you need to\nmonitor queries and pgsql cluster behavior\n\nOn Wed, Mar 6, 2019, 7:45 PM Kenia Vergara <[email protected]> wrote:\n\n> Good afternoon.\n> I need to know the commands to display the execution time, CPU usage and\n> memory usage, but through the Postgres console.Thank you.\n>\n\nHi Kenia,Take a look of this linkhttps://wiki.postgresql.org/wiki/Monitoring#pg_activityI think you are looking for something line pg_activityBut in that official pgsql documentation you'll find all you need to monitor queries and pgsql cluster behaviorOn Wed, Mar 6, 2019, 7:45 PM Kenia Vergara <[email protected]> wrote:Good afternoon.I need to know the commands to display the execution time, CPU usage and memory usage, but through the Postgres console.Thank you.",
"msg_date": "Tue, 2 Apr 2019 02:03:05 -0300",
"msg_from": "=?UTF-8?Q?Ram=C3=B3n_Bastidas?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Good afternoon."
}
] |
[
{
"msg_contents": "Hello,\n\nThanks for the feedback so far! Continue with the previous report, we\nsharing another four interesting cases that SQLFuzz discovered.\n(previous discussion:\nhttps://www.postgresql.org/message-id/flat/BN6PR07MB3409EE6CAAF8CCF43820AFB9EE670%40BN6PR07MB3409.namprd07.prod.outlook.com#acc68f0fbd8f0b207e162d2dd401d3e8\n)\n\nHere’s the time taken to execute four SQL queries on old (v9.5.16) and\nnewer version (v11.2) of PostgreSQL (in milliseconds):\n\n+----------------------+--------+---------+---------+---------+\n| | scale1 | scale10 | scale50 | scale300|\n+----------------------+--------+---------+---------+---------+\n| Case-5 (v9.5.16) | 39 | 183 | 1459 | 11125 |\n| Case-5 (v11.2) | 73 | 620 | 4818 | 16956 |\n+----------------------+--------+---------+---------+---------+\n| Case-6 (v9.5.16) | 46 | 329 | 15096 | 10721 |\n| Case-6 (v11.2) | 81 | 671 | 64808 | 26919 |\n+----------------------+--------+---------+---------+---------+\n| Case-7 (v9.5.16) | 19 | X | X | X |\n| Case-7 (v11.2) | 46 | X | X | X |\n+----------------------+--------+---------+---------+---------+\n| Case-8 (v9.5.16) | 215 | 2108 | 10460 | 64959 |\n| Case-8 (v11.2) | 449 | 3997 | 20246 | 130595 |\n+----------------------+--------+---------+---------+---------+\n\nFor each regression, we share:\n1) the associated query,\n2) the commit that activated it,\n3) our high-level analysis, and\n4) query execution plans in old and new versions of PostgreSQL.\n\nAll these regressions are observed on the latest version. (v11.2 and\nv9.5.16)\n\n* You can download the queries at:\nhttps://gts3.org/~/jjung/tpcc/case2.tar.gz\n\n* You can reproduce the result by using the same setup that we described\nbefore:\nhttps://www.postgresql.org/message-id/BN6PR07MB3409922471073F2B619A8CA4EE640%40BN6PR07MB3409.namprd07.prod.outlook.com\n\n - As Andrew mentioned before, we increased default work_mem to 128MB\n\nBest regards,\nJinho Jung\n\n#### QUERY 5\n\nEXPLAIN ANALYZE\nselect\n ref_0.c_zip as c0\nfrom\n public.customer as ref_0\nwhere EXISTS (\n select\n ref_1.ol_d_id as c10\n from\n public.order_line as ref_1\n where (ref_1.ol_o_id <> ref_0.c_d_id)\n)\n\n- Commit : 7ca25b7\n\n- Our analysis: We believe newer version is slow when the number of rows in\nthe filter is small.\n\n- Query execution plans:\n\n[Old version]\nNested Loop Semi Join (cost=0.00..15317063263550.52 rows=1 width=10)\n(actual time=0.019..10888.266 rows=9000000 loops=1)\n Join Filter: (ref_1.ol_o_id <> ref_0.c_d_id)\n Rows Removed by Join Filter: 11700000\n -> Seq Scan on customer ref_0 (cost=0.00..770327.00 rows=9000000\nwidth=14) (actual time=0.008..7541.944 rows=9000000 loops=1)\n -> Materialize (cost=0.00..2813223.52 rows=90017568 width=4) (actual\ntime=0.000..0.000 rows=2 loops=9000000)\n -> Seq Scan on order_line ref_1 (cost=0.00..2011503.68\nrows=90017568 width=4) (actual time=0.005..0.007 rows=14 loops=1)\nPlanning time: 0.401 ms\nExecution time: 11125.538 ms\n\n\n[New version]\nNested Loop Semi Join (cost=0.00..3409260.89 rows=9000000 width=10)\n(actual time=0.033..16732.988 rows=9000000 loops=1)\n Join Filter: (ref_1.ol_o_id <> ref_0.c_d_id)\n Rows Removed by Join Filter: 11700000\n -> Seq Scan on customer ref_0 (cost=0.00..770327.00 rows=9000000\nwidth=14) (actual time=0.017..2113.336 rows=9000000 loops=1)\n -> Seq Scan on order_line ref_1 (cost=0.00..2011503.68 rows=90017568\nwidth=4) (actual time=0.001..0.001 rows=2 loops=9000000)\nPlanning Time: 0.615 ms\nExecution Time: 16956.115 ms\n\n\n##### QUERY 6\n\nselect distinct\n 
ref_0.h_data as c0,\n ref_0.h_c_id as c1\nfrom\n public.history as ref_0\n left join public.item as ref_1\n on (ref_1.i_im_id < -1)\nwhere ref_1.i_price is NULL\n\n- Our analysis: We think that the 'merge sort' makes slow execution. We are\nwondering why newer version applies external merge sort in this case.\n\n- Commit: 3fc6e2d (big patch)\n\n- Query execution plans:\n\n[Old version]\nHashAggregate (cost=1312274.26..1312274.27 rows=1 width=21) (actual\ntime=7288.727..10443.586 rows=9000000 loops=1)\n Group Key: ref_0.h_data, ref_0.h_c_id\n -> Nested Loop Left Join (cost=0.00..1312274.26 rows=1 width=21)\n(actual time=26.965..2463.231 rows=9000000 loops=1)\n Filter: (ref_1.i_price IS NULL)\n -> Seq Scan on history ref_0 (cost=0.00..184795.61 rows=8999661\nwidth=21) (actual time=0.347..795.936 rows=9000000 loops=1)\n -> Materialize (cost=0.00..2521.05 rows=10 width=6) (actual\ntime=0.000..0.000 rows=0 loops=9000000)\n -> Seq Scan on item ref_1 (cost=0.00..2521.00 rows=10\nwidth=6) (actual time=26.610..26.610 rows=0 loops=1)\n Filter: (i_im_id < '-1'::integer)\n Rows Removed by Filter: 100000\nPlanning time: 1.538 ms\nExecution time: 10721.259 ms\n\n\n[New version]\nUnique (cost=1312334.34..1312334.35 rows=1 width=21) (actual\ntime=21444.459..26651.868 rows=9000000 loops=1)\n -> Sort (cost=1312334.34..1312334.35 rows=1 width=21) (actual\ntime=21444.457..25629.389 rows=9000000 loops=1)\n Sort Key: ref_0.h_data, ref_0.h_c_id\n Sort Method: external merge Disk: 285384kB\n -> Nested Loop Left Join (cost=0.00..1312334.33 rows=1 width=21)\n(actual time=21.328..2409.302 rows=9000000 loops=1)\n Filter: (ref_1.i_price IS NULL)\n -> Seq Scan on history ref_0 (cost=0.00..184800.06\nrows=9000106 width=21) (actual time=0.320..734.376 rows=9000000 loops=1)\n -> Materialize (cost=0.00..2521.05 rows=10 width=6) (actual\ntime=0.000..0.000 rows=0 loops=9000000)\n -> Seq Scan on item ref_1 (cost=0.00..2521.00 rows=10\nwidth=6) (actual time=21.000..21.001 rows=0 loops=1)\n Filter: (i_im_id < '-1'::integer)\n Rows Removed by Filter: 100000\nPlanning Time: 1.426 ms\nExecution Time: 26919.635 ms\n\n\n##### QUERY 7\n\nselect\n ref_0.c_id as c0\nfrom\n public.customer as ref_0\nwhere EXISTS (\n select\n ref_0.c_city as c0\n from\n public.order_line as ref_1\n left join public.new_order as ref_2\n on (ref_1.ol_supply_w_id = ref_2.no_w_id)\n where (ref_1.ol_delivery_d > ref_0.c_since)\n)\n\n- Our analysis : Parallel execution seems a problem. 
We also want to ask\nwhether only one worker is intended behavior of Postgres because we think\nparallel execution with less than two workers is not parallel.\n\n- Another interesting finding: this query cannot be finished within one day\nif we incrase the size of DB (e.g., from scale factor 1 to scale factor\n10/50/300).\n\n- Commit: 16be2fd\n\n- Query execution plans:\n\n[Old version]\nNested Loop Semi Join (cost=224.43..910152608046.08 rows=10000 width=4)\n(actual time=2.619..18.042 rows=30000 loops=1)\n Join Filter: (ref_1.ol_delivery_d > ref_0.c_since)\n -> Seq Scan on customer ref_0 (cost=0.00..2569.00 rows=30000 width=12)\n(actual time=0.003..4.530 rows=30000 loops=1)\n -> Materialize (cost=224.43..48521498.97 rows=2406886812 width=8)\n(actual time=0.000..0.000 rows=1 loops=30000)\n -> Hash Left Join (cost=224.43..27085162.91 rows=2406886812\nwidth=8) (actual time=2.612..2.612 rows=1 loops=1)\n Hash Cond: (ref_1.ol_supply_w_id = ref_2.no_w_id)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48\nrows=300148 width=12) (actual time=0.002..0.002 rows=1 loops=1)\n -> Hash (cost=124.19..124.19 rows=8019 width=4) (actual\ntime=2.571..2.571 rows=8019 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 346kB\n -> Seq Scan on new_order ref_2 (cost=0.00..124.19\nrows=8019 width=4) (actual time=0.174..1.539 rows=8019 loops=1)\nPlanning time: 0.605 ms\nExecution time: 19.045 ms\n\n[New version]\nGather (cost=1224.43..672617098015.10 rows=10000 width=4) (actual\ntime=3.099..45.077 rows=30000 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Nested Loop Semi Join (cost=224.43..672617096015.10 rows=5882\nwidth=4) (actual time=2.307..38.258 rows=15000 loops=2)\n Join Filter: (ref_1.ol_delivery_d > ref_0.c_since)\n -> Parallel Seq Scan on customer ref_0 (cost=0.00..2445.47\nrows=17647 width=12) (actual time=0.003..2.668 rows=15000 loops=2)\n -> Hash Left Join (cost=224.43..27085162.91 rows=2406886812\nwidth=8) (actual time=0.002..0.002 rows=1 loops=30000)\n Hash Cond: (ref_1.ol_supply_w_id = ref_2.no_w_id)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48\nrows=300148 width=12) (actual time=0.001..0.001 rows=1 loops=30000)\n -> Hash (cost=124.19..124.19 rows=8019 width=4) (actual\ntime=2.211..2.211 rows=8019 loops=2)\n Buckets: 8192 Batches: 1 Memory Usage: 346kB\n -> Seq Scan on new_order ref_2 (cost=0.00..124.19\nrows=8019 width=4) (actual time=0.086..1.179 rows=8019 loops=2)\nPlanning Time: 0.611 ms\nExecution Time: 46.195 ms\n\n\n##### QUERY 8\n\nselect\n ref_0.ol_d_id as c0\nfrom\n public.order_line as ref_0\n left join (\n select\n ref_1.ol_supply_w_id as c0,\n ref_1.ol_d_id as c2\n from\n public.order_line as ref_1\n where ref_1.ol_o_id < 1\n ) as subq_0\n on (subq_0.c2 = ref_0.ol_o_id)\nwhere EXISTS (\n select\n ref_2.o_ol_cnt as c0\n from\n public.oorder as ref_2\n where\n nullif(ref_2.o_d_id, subq_0.c0) is not NULL\n);\n\n- Commit: 0c2070c\n\n- Our analysis : We are not sure about the root cause of this regression.\nThis might have to do with parallel execution.\n\n- Query execution plans:\n\n[Old version]\nNested Loop Semi Join (cost=3488355.85..210034755085.67 rows=245615461\nwidth=4) (actual time=32830.372..62745.966 rows=90017507 loops=1)\n Join Filter: (NULLIF(ref_2.o_d_id, ref_1.ol_supply_w_id) IS NOT NULL)\n -> Hash Right Join (cost=3488355.85..9115461.16 rows=246849710 width=8)\n(actual time=32829.980..48232.002 rows=90017507 loops=1)\n Hash Cond: (ref_1.ol_d_id = ref_0.ol_o_id)\n -> Index Scan using order_line_pkey on order_line ref_1\n(cost=0.57..2076988.89 rows=8461 
width=8) (actual time=5346.547..5346.547\nrows=0 loops=1)\n Index Cond: (ol_o_id < 1)\n -> Hash (cost=2011503.68..2011503.68 rows=90017568 width=8)\n(actual time=27466.438..27466.438 rows=90017507 loops=1)\n Buckets: 4194304 Batches: 64 Memory Usage: 80798kB\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.68\nrows=90017568 width=8) (actual time=0.006..16821.719 rows=90017507 loops=1)\n -> Materialize (cost=0.00..245157.00 rows=9000000 width=4) (actual\ntime=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on oorder ref_2 (cost=0.00..165000.00 rows=9000000\nwidth=4) (actual time=0.377..0.377 rows=1 loops=1)\nPlanning time: 3.933 ms\nExecution time: 64959.231 ms\n\n[New version]\nGather (cost=8797140.37..153823591859.11 rows=264667229 width=4) (actual\ntime=27917.270..128050.059 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=8796140.37..153797124136.21\nrows=110278012 width=4) (actual time=27911.016..109588.362 rows=30005836\nloops=3)\n Join Filter: (NULLIF(ref_2.o_d_id, ref_1.ol_supply_w_id) IS NOT\nNULL)\n -> Merge Left Join (cost=8796140.37..10646159.56 rows=110832173\nwidth=8) (actual time=27910.620..33777.444 rows=30005836 loops=3)\n Merge Cond: (ref_0.ol_o_id = ref_1.ol_d_id)\n -> Sort (cost=6717742.99..6811511.29 rows=37507320 width=8)\n(actual time=22427.383..25862.373 rows=30005836 loops=3)\n Sort Key: ref_0.ol_o_id\n Sort Method: external merge Disk: 534352kB\n Worker 0: Sort Method: external merge Disk: 509888kB\n Worker 1: Sort Method: external merge Disk: 541512kB\n -> Parallel Seq Scan on order_line ref_0\n(cost=0.00..1486401.20 rows=37507320 width=8) (actual time=0.025..14349.178\nrows=30005836 loops=3)\n -> Sort (cost=2078397.38..2078419.66 rows=8912 width=8)\n(actual time=5483.221..5483.221 rows=0 loops=3)\n Sort Key: ref_1.ol_d_id\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n -> Index Scan using order_line_pkey on order_line\nref_1 (cost=0.57..2077812.68 rows=8912 width=8) (actual\ntime=5483.203..5483.203 rows=0 loops=3)\n Index Cond: (ol_o_id < 1)\n -> Seq Scan on oorder ref_2 (cost=0.00..165000.00 rows=9000000\nwidth=4) (actual time=0.002..0.002 rows=1 loops=90017507)\nPlanning Time: 3.952 ms\nExecution Time: 130595.467 ms\n\n\n===================================\nFollowing up with previous question\n===================================\n\non-going discussion:\nhttps://www.postgresql.org/message-id/flat/BN6PR07MB3409EE6CAAF8CCF43820AFB9EE670%40BN6PR07MB3409.namprd07.prod.outlook.com#acc68f0fbd8f0b207e162d2dd401d3e8\n\n\nHello Andres,\n\nCould you please share your thoughts on QUERY 3?\n\nThe performance impact of this regression increases *linearly* on larger\ndatabases. We concur with Andrew in that this is related to the lack of a\nMaterialize node and mis-costing of the Nested Loop Anti-Join.\n\nWe found more than 20 regressions related to this commit. 
We have shared\ntwo illustrative examples (QUERIES 3A and 3B) below.\n\n- Commit: 77cd477 (Enable parallel query by default.)\n\n- Summary: Execution Time (milliseconds)\n\nWhen we increased the scale-factor of TPC-C to 300 (~30 GB), this query ran\nthree times slower on v11 (24 seconds) in comparison to v9.5 (7 seconds).\nWe also found more than 15 regressions related to the same commit and share\na couple of them below.\n\n+-----------------------+--------+---------+---------+-----------+\n| | scale1 | scale10 | scale50 | scale 300 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3 (v9.5) | 28 | 248 | 1231 | 7265 |\n| Query 3 (v11) | 74 | 677 | 3345 | 24581 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3A (v9.5) | 88 | 937 | 4721 | 27241 |\n| Query 3A (v11) | 288 | 2822 | 13838 | 85081 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 3B (v9.5) | 101 | 934 | 4824 | 29363 |\n| Query 3B (v11) | 200 | 2331 | 12327 | 74110 |\n+-----------------------+--------+---------+---------+-----------+\n\n\n###### QUERY 3:\n\nselect\n cast(ref_1.ol_i_id as int4) as c0\nfrom\n public.stock as ref_0\n left join public.order_line as ref_1\n on (ref_1.ol_number is not null)\nwhere ref_1.ol_number is null\n\n\n###### QUERY 3A:\n\nselect\n ref_0.ol_delivery_d as c1\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_1.i_im_id as c0\n from\n public.item as ref_1\n where ref_0.ol_d_id <= ref_1.i_im_id\n )\n\n Execution plan:\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8)\n(actual time=0.034..24981.895 rows=90017507 loops=1)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504\nwidth=12) (actual time=0.022..7145.811 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual\ntime=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000\nwidth=4) (actual time=0.006..0.006 rows=1 loops=1)\n\nPlanning time: 0.290 ms\nExecution time: 27241.239 ms\n\n[NEW version]\nGather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual\ntime=0.265..82355.289 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431\nwidth=8) (actual time=0.033..68529.259 rows=30005836 loops=3)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93\nrows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000\nwidth=4) (actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.319 ms\nExecution Time: 85081.158 ms\n\n\n###### QUERY 3B:\n\n\nselect\n ref_0.ol_i_id as c0\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_0.ol_delivery_d as c0\n from\n public.order_line as ref_1\n where ref_1.ol_d_id <= cast(nullif(ref_1.ol_o_id, ref_0.ol_i_id) as\nint4))\n\nExecution plan:\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..115638730740936.53 rows=30005835\nwidth=4) (actual time=0.017..27009.302 rows=90017507 loops=1)\n Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n Rows Removed by Join Filter: 11557\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504\nwidth=4) (actual time=0.009..7199.540 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2813221.56 rows=90017504 width=8) 
(actual\ntime=0.000..0.000 rows=1 loops=90017507)\n         ->  Seq Scan on order_line ref_1  (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.002 rows=14 loops=1)\n\nPlanning time: 0.252 ms\nExecution time: 29363.737 ms\n\n[NEW version]\nGather  (cost=1000.00..84060490326155.39 rows=30005835 width=4) (actual time=0.272..71712.491 rows=90017507 loops=1)\n   Workers Planned: 2\n   Workers Launched: 2\n   ->  Nested Loop Semi Join  (cost=0.00..84060487324571.89 rows=12502431 width=4) (actual time=0.046..60153.472 rows=30005836 loops=3)\n         Join Filter: (ref_1.ol_d_id <= NULLIF(ref_1.ol_o_id, ref_0.ol_i_id))\n         Rows Removed by Join Filter: 1717\n         ->  Parallel Seq Scan on order_line ref_0  (cost=0.00..1486400.93 rows=37507293 width=4) (actual time=0.023..2819.361 rows=30005836 loops=3)\n         ->  Seq Scan on order_line ref_1  (cost=0.00..2011503.04 rows=90017504 width=8) (actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.334 ms\nExecution Time: 74110.942 ms",
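A quick way to separate the effect of commit 77cd477 (parallel query on by default) from the join mis-costing described above is to repeat the run with parallelism disabled. A minimal diagnostic sketch, assuming only the standard max_parallel_workers_per_gather GUC and QUERY 3 exactly as given in the report; the resulting plan still needs to be checked with EXPLAIN on the installation in question:

    SET max_parallel_workers_per_gather = 0;   -- suppress Gather/parallel plans on v11
    EXPLAIN (ANALYZE, BUFFERS)
    select cast(ref_1.ol_i_id as int4) as c0
    from public.stock as ref_0
      left join public.order_line as ref_1 on (ref_1.ol_number is not null)
    where ref_1.ol_number is null;
    RESET max_parallel_workers_per_gather;      -- back to the default afterwards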
"msg_date": "Wed, 6 Mar 2019 16:41:45 -0500",
"msg_from": "Jinho Jung <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance regressions found using sqlfuzz"
}
] |
[
{
"msg_contents": "I have some query:\n\n \n\nEXPLAIN ANALYZE select id from sometable where fkey IS NOT DISTINCT FROM\n21580;\n\n \n\n \n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------------------------------------------\n\nGather (cost=10.00..39465.11 rows=1 width=4) (actual time=0.512..129.625\nrows=1 loops=1)\n\n Workers Planned: 4\n\n Workers Launched: 4\n\n -> Parallel Seq Scan on sometable (cost=0.00..39455.01 rows=1 width=4)\n(actual time=77.995..103.806 rows=0 loops=5)\n\n Filter: (NOT (fkey IS DISTINCT FROM 21580))\n\n Rows Removed by Filter: 675238\n\nPlanning time: 0.101 ms\n\nExecution time: 148.517 ms\n\n \n\n \n\nOther Query:\n\n \n\nEXPLAIN ANALYZE select id from table where fkey=21580;\n\n \n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------------------------------------------------\n\nIndex Scan using sometable_index1 on sometable (cost=0.43..8.45 rows=1\nwidth=4) (actual time=0.075..0.076 rows=1 loops=1)\n\n Index Cond: (fkey = 21580)\n\nPlanning time: 0.117 ms\n\nExecution time: 0.101 ms\n\n(4 rows)\n\n \n\nThere is unique index on sometable(fkey);\n\n \n\nIs there any reason that \"NOT DISTINCT FROM\" can't be autotransformed to \"=\"\nwhen value on right side of expression is not NULL or is this any way to use\nindex with \"IS NOT DISTINCT FROM\" statement?\n\n \n\n \n\nArtur Zajac\n\n \n\n \n\n\nI have some query: EXPLAIN ANALYZE select id from sometable where fkey IS NOT DISTINCT FROM 21580; QUERY PLAN---------------------------------------------------------------------------------------------------------------------------- Gather (cost=10.00..39465.11 rows=1 width=4) (actual time=0.512..129.625 rows=1 loops=1) Workers Planned: 4 Workers Launched: 4 -> Parallel Seq Scan on sometable (cost=0.00..39455.01 rows=1 width=4) (actual time=77.995..103.806 rows=0 loops=5) Filter: (NOT (fkey IS DISTINCT FROM 21580)) Rows Removed by Filter: 675238 Planning time: 0.101 ms Execution time: 148.517 ms Other Query: EXPLAIN ANALYZE select id from table where fkey=21580; QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------- Index Scan using sometable_index1 on sometable (cost=0.43..8.45 rows=1 width=4) (actual time=0.075..0.076 rows=1 loops=1) Index Cond: (fkey = 21580) Planning time: 0.117 ms Execution time: 0.101 ms(4 rows) There is unique index on sometable(fkey); Is there any reason that „NOT DISTINCT FROM” can’t be autotransformed to „=” when value on right side of expression is not NULL or is this any way to use index with „IS NOT DISTINCT FROM” statement? Artur Zajac",
"msg_date": "Fri, 8 Mar 2019 12:30:41 +0100",
"msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "IS NOT DISTINCT FROM statement"
},
{
"msg_contents": "Artur Zając wrote:\n> Is there any reason that „NOT DISTINCT FROM” can’t be autotransformed to „=” when value\n> on right side of expression is not NULL or is this any way to use index with „IS NOT DISTINCT FROM” statement?\n\nThat would subtly change the semantics of the expression:\n\ntest=> SELECT NULL IS NOT DISTINCT FROM 21580;\n ?column? \n----------\n f\n(1 row)\n\ntest=> SELECT NULL = 21580;\n ?column? \n----------\n \n(1 row)\n\nOne expression is FALSE, the other NULL.\n\nIt doesn't matter in the context of your specific query, but it could matter.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 08 Mar 2019 12:45:39 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT DISTINCT FROM statement"
},
{
"msg_contents": "On Sat, 9 Mar 2019 at 00:30, Artur Zając <[email protected]> wrote:\n> Is there any reason that „NOT DISTINCT FROM” can’t be autotransformed to „=” when value on right side of expression is not NULL or is this any way to use index with „IS NOT DISTINCT FROM” statement?\n\nProbably nothing other than nobody has done it yet. It might be\nreasonable to have some transformation stage called from\ndistribute_restrictinfo_to_rels() when adding single rel RestrictInfos\nto RTE_RELATION base rels. It's only these you can check for NOT NULL\nconstraints, i.e. not so possible with rtekinds such as RTE_FUNCTION\nand the like.\n\nIt becomes more complex if you consider that someone might have added\na partial index on the relation that matches the IS NOT DISTINCT FROM\nclause. In this case, they might not be happy that their index can no\nlonger be used. Fixing that would require some careful surgery on\npredicate_implied_by() to teach it about IS NOT DISTINCT FROM clauses.\nHowever, that seems to go a step beyond what predicate_implied_by()\ndoes for now. Currently, it only gets to know about quals. Not the\nrelations they belong to, so there'd be no way to know that the NOT\nNULL constraint exists from there. I'm not sure if there's a good\nreason for this or not, it might be because it's not been required\nbefore. It gets more complex still if you want to consider other\nquals in the list to prove not nullness.\n\nIn short, probably possible, but why not just write an equality\nclause, if you know NULLs are not possible?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 9 Mar 2019 01:13:49 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT DISTINCT FROM statement"
},
{
"msg_contents": "\n> In short, probably possible, but why not just write an equality clause, if you know NULLs are not possible?\n\nIn fact I construct query like this (usually in pl/pgsql).\n\nSELECT column FROM table WHERE column1 IS NOT DISTINCT FROM $1 AND column2 IS NOT DISTINCT FROM $2;\n\n\"IS NOT DISTINCT FROM\" statement simplifies the query ($1 OR $2 may be null, col1 and col2 has indexes).\n\nI made some workaround. I made function:\n\nCREATE OR REPLACE FUNCTION smarteq(v1 int,v2 INT) RETURNS BOOL AS\n$BODY$\n\tSELECT (CASE WHEN v2 IS NULL THEN (v1 IS NULL) ELSE v1=v2 END);\n$BODY$ LANGUAGE 'sql' IMMUTABLE PARALLEL SAFE;\n\n\nAnd then\n\nexplain analyze select id from sometable where smarteq(id1,21580);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sometable_index1 on sometable (cost=0.43..8.45 rows=1 width=4) (actual time=0.085..0.086 rows=1 loops=1)\n Index Cond: (id1 = 21580)\n Planning time: 0.223 ms\n Execution time: 0.117 ms\n(4 rows)\n\nexplain analyze select id from sometable where smarteq(id1,NULL);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on sometable (cost=19338.59..57045.02 rows=882343 width=4) (actual time=116.236..306.304 rows=881657 loops=1)\n Recheck Cond: (id1 IS NULL)\n Heap Blocks: exact=9581\n -> Bitmap Index Scan on sometable_index1 (cost=0.00..19118.00 rows=882343 width=0) (actual time=114.209..114.209 rows=892552 loops=1)\n Index Cond: (id1 IS NULL)\n Planning time: 0.135 ms\n Execution time: 339.229 ms\n\nIt looks like it works, but I must check if it will still works in plpgsql (I expect some problems if query is prepared).\n\nArtur Zajac\n\n\n\n",
"msg_date": "Fri, 8 Mar 2019 13:25:35 +0100",
"msg_from": "=?utf-8?Q?Artur_Zaj=C4=85c?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: IS NOT DISTINCT FROM statement"
},
{
"msg_contents": "On Sat, 9 Mar 2019 at 01:25, Artur Zając <[email protected]> wrote:\n> I made some workaround. I made function:\n>\n> CREATE OR REPLACE FUNCTION smarteq(v1 int,v2 INT) RETURNS BOOL AS\n> $BODY$\n> SELECT (CASE WHEN v2 IS NULL THEN (v1 IS NULL) ELSE v1=v2 END);\n> $BODY$ LANGUAGE 'sql' IMMUTABLE PARALLEL SAFE;\n\n> explain analyze select id from sometable where smarteq(id1,NULL);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on sometable (cost=19338.59..57045.02 rows=882343 width=4) (actual time=116.236..306.304 rows=881657 loops=1)\n> Recheck Cond: (id1 IS NULL)\n> Heap Blocks: exact=9581\n> -> Bitmap Index Scan on sometable_index1 (cost=0.00..19118.00 rows=882343 width=0) (actual time=114.209..114.209 rows=892552 loops=1)\n> Index Cond: (id1 IS NULL)\n> Planning time: 0.135 ms\n> Execution time: 339.229 ms\n>\n> It looks like it works, but I must check if it will still works in plpgsql (I expect some problems if query is prepared).\n\nI think with either that you'll just be at the mercy of whether a\ngeneric or custom plan is chosen. If you get a custom plan then\nlikely your case statement will be inlined and constant folded away,\nbut for a generic plan, that can't happen since those constants are\nnot consts, they're parameters. Most likely, if you've got an index\non the column you'll perhaps always get a custom plan as the generic\nplan would result in a seqscan and it would have to evaluate your case\nstatement for each row. By default, generic plans are only considered\non the 6th query execution and are only chosen if the generic cost is\ncheaper than the average custom plan cost + fuzz cost for planning.\nPG12 gives you a bit more control over that with the plan_cache_mode\nGUC, but... that's the not out yet.\n\nHowever, possibly the cost of planning each execution is cheaper than\ndoing the seq scan, so you might be better off with this. There is a\nrisk that the planner does for some reason choose a generic plan and\nends up doing the seq scan, but for that to happen likely the table\nwould have to be small, in which case it wouldn't matter or the costs\nwould have to be off, which might cause you some pain.\n\nThe transformation mentioned earlier could only work if the arguments\nof the IS NOT DISTINCT FROM were Vars or Consts. It couldn't work with\nParams since the values are unknown to the planner.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 9 Mar 2019 02:12:34 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT DISTINCT FROM statement"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 9 Mar 2019 at 01:25, Artur Zając <[email protected]> wrote:\n>> CREATE OR REPLACE FUNCTION smarteq(v1 int,v2 INT) RETURNS BOOL AS\n>> $BODY$\n>> SELECT (CASE WHEN v2 IS NULL THEN (v1 IS NULL) ELSE v1=v2 END);\n>> $BODY$ LANGUAGE 'sql' IMMUTABLE PARALLEL SAFE;\n\n> The transformation mentioned earlier could only work if the arguments\n> of the IS NOT DISTINCT FROM were Vars or Consts. It couldn't work with\n> Params since the values are unknown to the planner.\n\nJust looking at this example, I'm wondering if there'd be any value in\nadding a rule to eval_const_expressions that converts IS DISTINCT FROM\nwith one constant-NULL argument into an IS NOT NULL test on the other\nargument. Doing anything with the general case would be hard, as you\nmentioned, but this \"workaround\" suggests that the OP isn't actually\nconcerned with the general case.\n\n[ experiments... ] Oh, look at this:\n\nregression=# explain verbose select f1 is distinct from null from int4_tbl;\n QUERY PLAN \n---------------------------------------------------------------\n Seq Scan on public.int4_tbl (cost=0.00..1.05 rows=5 width=1)\n Output: (f1 IS NOT NULL)\n(2 rows)\n\nregression=# explain verbose select f1 is not distinct from null from int4_tbl;\n QUERY PLAN \n---------------------------------------------------------------\n Seq Scan on public.int4_tbl (cost=0.00..1.05 rows=5 width=1)\n Output: (f1 IS NULL)\n(2 rows)\n\nSo somebody already inserted this optimization, but I don't see it\nhappening in eval_const_expressions ... oh, it's way earlier,\nin transformAExprDistinct:\n\n /*\n * If either input is an undecorated NULL literal, transform to a NullTest\n * on the other input. That's simpler to process than a full DistinctExpr,\n * and it avoids needing to require that the datatype have an = operator.\n */\n if (exprIsNullConstant(rexpr))\n return make_nulltest_from_distinct(pstate, a, lexpr);\n if (exprIsNullConstant(lexpr))\n return make_nulltest_from_distinct(pstate, a, rexpr);\n\nI'm hesitant to call that wrong; the ability to avoid a dependency on an\n\"=\" operator is kind of nice. But it doesn't help for cases requiring a\nParam substitution.\n\nSo maybe if we *also* had a check for this in eval_const_expressions,\nthat would address the OP's problem. But the use-case would be a bit\nnarrow given that the parser is catching the simplest case.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Mar 2019 09:53:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT DISTINCT FROM statement"
}
] |
[
{
"msg_contents": "I am working on product managing and monitoring Network (NMS-like products).\n\nProduct manages configuration of network devices, for now each device has\nstored its configuration in simple table - this was the original design.\n\nCREATE TABLE public.configuration(\n id integer NOT NULL,\n config json NOT NULL,\n CONSTRAINT configuration_pkey PRIMARY KEY (id),)\n\nA config looks like:\n\n{\n \"_id\": 20818132,\n \"type\": \"Modem\",\n \"data\": [{\n \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\",\n \"instance\": \"24\",\n \"value\": \"null\"\n },\n {\n \"oid\": \"1.3.6.1.4.1.9999.3.5.10.1.86\",\n \"instance\": \"0\",\n \"value\": \"562\"\n },\n {\n \"oid\": \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\",\n \"instance\": \"0\",\n \"value\": \"0\"\n },\n {\n \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n \"instance\": \"24\",\n \"value\": \"vlan24\"\n },\n {\n \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n \"instance\": \"25\",\n \"value\": \"vlan25\"\n }\n ]}\n\nAnd there are many plv8 (java script procedural language extension for\nPostgreSQL) stored procedures working on bulks of such config, reading some\nOIDs, changing them conditionally, removing some of them and adding others,\nespecially in use cases like: There are some upper-level META-configuration\nof different level, which during change have to update all their updated\nparameters to all affected leaves configs. An simple test-example (but\nwithout touching 'data' node)\n\nCREATE OR REPLACE FUNCTION public.process_jsonb_plv8()\n RETURNS void AS$BODY$\nvar CFG_TABLE_NAME = \"configurations\";\nvar selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME +\n\" c where c.id = $1\", ['int'] );\nvar updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config\n= $1 where id = $2', ['jsonb','int'] );\n\ntry {\n\n var ids = plv8.execute('select id from devices');\n\n for (var i = 0; i < ids.length; i++) {\n var db_cfg = selPlan.execute(ids[i].id); //Get current json\nconfig from DB\n var cfg = db_cfg[0].config;\n cfg[\"key0\"] = 'plv8_json'; //-add some dummy key\n updPlan.execute(cfg, ids[i].id); //put uopdated JSON config in DB\n plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n\n\n }} finally {\n selPlan.free();\n updPlan.free();}\nreturn;$BODY$\n LANGUAGE plv8 VOLATILE\n COST 100;\n\nFor real use-cases plv8 SPs are more complicated, doing FOR-LOOP through\nALL OIDs object of 'data' array, checking if it is looking for and update\nvalue an/or remove it and/or add newer if necessary.\n\nSince number of devices in DB increased from several hundreds to 40K or\neven 70K, and number of OID+Instance combinations also increased from\nseveral hundred to ~1K and sometimes up to 10K within a config, we start\nfacing slowness in bulk (especially global -> update to ALL Devices)\nupdates/searches.\n\nIn order to get rid off FOR LOOP step for each configuration I've converted\ndata-node from array to object (key-value model), something like :\n\n{\n \"_id\": 20818132,\n \"type\": \"Modem\",\n \"data\": {\n \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\": {\n \"24\": \"null\"\n },\n \"1.3.6.1.4.1.9999.3.5.10.1.86\": {\n \"0\": \"562\"\n },\n \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\": {\n \"0\": \"0\"\n },\n \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\": {\n \"24\": \"vlan24\",\n \"25\": \"vlan25\"\n }\n }}\n\nNow in order to get a concrete OID (e.g.\n\"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\") and/or its instance I do 1-2 *O(1)*\noperations instead *O(n)*. And it become a bit faster. 
After I've changed\ncolumn type from json to jsonb - I've got a lot of memory issues with plv8\nstored procedures, so now ideas is:\n\n*What are the best practices to store such data and use cases in DB?*\ntaking in considerations following: - Bulk and global updates are often\nenough (user-done operation) - several times per week and it takes long\ntime - several minutes, annoying user experience. - Consulting some OIDs\nonly from concrete config is medium frequency use case - Consulting ALL\ndevices have some specific OID (SNMP Parameter) settled to a specific value\n- medium frequency cases. - Consult (read) a configuration for a specific\ndevice as a whole document - often use case (it is send to device as json\nor as converted CSV, it is send in modified json format to other utilities,\netc)\n\nOne of suggestion from other oppinions is to move ALL configurations to\nsimple plain relational table\n\nCREATE TABLE public.configuration_plain(\n device_id integer,\n oid text,\n instance text,\n value text)\n\nLooking like\n\n*id*\n\n*oid*\n\n*instance*\n\n*value*\n\n20818132\n\n1.3.6.1.4.1.9999.2.13\n\n0\n\nVSAT\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.15\n\n0\n\n0\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.17\n\n0\n\n0\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.18\n\n0\n\n1\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.19\n\n0\n\n2\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n\n24\n\n24\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n\n25\n\n25\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n\n24\n\nvlan24\n\n20818132\n\n1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n\n25\n\nVLAN_25\n\nAnd now I end with a table of ~33 M rows for 40K devices * (700-900\nOID+Instance combinations). Some simple selects and updates (especially if\nI add simple indexes on id, oid columns) works faster than JSON (less than\n1 sec updating one OID for ALL devices), but on some stored procedures\nwhere I need to do some checks and business logic before manipulating\nconcrete parameters in configuration - performance decrease again from 10\nto 25 seconds in below example with each nee added operation:\n\nCREATE OR REPLACE FUNCTION public.test_update_bulk_configuration_plain_plpg(\n sql_condition text, -- something like 'select id from devices'\n new_elements text, --collection of OIDs to be Added or Update,\ncould be JSON Array or comma separated list, containing 1 or more\n(100) OIDs\n oids_to_delete text --collection of OIDs to Delete\n )\n RETURNS void AS$BODY$DECLARE\n r integer;\n cnt integer;\n ids int[];\n lid int;BEGIN\n RAISE NOTICE 'start';\n EXECUTE 'SELECT ARRAY(' || sql_condition || ')' into ids;\n FOREACH lid IN ARRAY ids\n LOOP\n -- DELETE\n -- Some business logic\n -- FOR .. IF .. BEGIN\n delete from configuration_plain c where c.oid =\n'1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '10' and\nc.device_id = lid;\n delete from configuration_plain c where c.oid = 'Other\nOID' and instance = 'Index' and c.device_id = lid;\n -- other eventual deletes\n --END\n\n -- UPDATE\n -- Some business logic\n -- FOR .. IF .. 
BEGIN\n            update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.87' and c.device_id = lid;\n            update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '1' and c.device_id = lid;\n            -- other eventual updates\n            -- END\n\n            --INSERT\n            insert into configuration_plain (id, oid, instance, value) values (lid,'1.3.6.1.4.1.9999.3.5.10.3.201.1.1', '11', '11');\n            -- OTHER eventually....\n            insert into configuration_plain (id, oid, instance, value) values (lid,'OTHER_OID', 'Idx', 'Value of OID');\n    END LOOP;\n    RAISE NOTICE 'end';\n    RETURN;\nEND$BODY$\n  LANGUAGE plpgsql VOLATILE\n  COST 100;\n\nSo any best practices and advice on such data and use cases modeling in DB?\n\nRegards,\n\nAlexL",
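A minimal sketch of how the relational layout could serve the access patterns listed above; the index definitions and the JSON re-assembly query are illustrative assumptions, not something taken from elsewhere in this thread:

    -- per-device lookups and targeted updates
    create index on configuration_plain (device_id, oid, instance);
    -- which devices have OID X set to value Y (the medium-frequency case)
    create index on configuration_plain (oid, value);

    select device_id
      from configuration_plain
     where oid = '1.3.6.1.4.1.9999.3.5.10.3.87' and instance = '0' and value = '2';

    -- rebuild one device's configuration as a JSON document for export
    select json_object_agg(oid, per_oid) as data
      from (select oid, json_object_agg(instance, value) as per_oid
              from configuration_plain
             where device_id = 20818132
             group by oid) s;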
"msg_date": "Fri, 8 Mar 2019 16:39:57 +0200",
"msg_from": "Alexandru Lazarev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hot to model data in DB (PostgreSQL) for SNMP-like multiple\n configurations"
},
{
"msg_contents": "Is there a reason not to use a relational model instead of json(b) here? I\nthink that is in fact considered best practice.\n\nOn Fri, 8 Mar 2019 at 15:40, Alexandru Lazarev <[email protected]>\nwrote:\n\n> I am working on product managing and monitoring Network (NMS-like\n> products).\n>\n> Product manages configuration of network devices, for now each device has\n> stored its configuration in simple table - this was the original design.\n>\n> CREATE TABLE public.configuration(\n> id integer NOT NULL,\n> config json NOT NULL,\n> CONSTRAINT configuration_pkey PRIMARY KEY (id),)\n>\n> A config looks like:\n>\n> {\n> \"_id\": 20818132,\n> \"type\": \"Modem\",\n> \"data\": [{\n> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\",\n> \"instance\": \"24\",\n> \"value\": \"null\"\n> },\n> {\n> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.1.86\",\n> \"instance\": \"0\",\n> \"value\": \"562\"\n> },\n> {\n> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\",\n> \"instance\": \"0\",\n> \"value\": \"0\"\n> },\n> {\n> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n> \"instance\": \"24\",\n> \"value\": \"vlan24\"\n> },\n> {\n> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n> \"instance\": \"25\",\n> \"value\": \"vlan25\"\n> }\n> ]}\n>\n> And there are many plv8 (java script procedural language extension for\n> PostgreSQL) stored procedures working on bulks of such config, reading some\n> OIDs, changing them conditionally, removing some of them and adding others,\n> especially in use cases like: There are some upper-level META-configuration\n> of different level, which during change have to update all their updated\n> parameters to all affected leaves configs. An simple test-example (but\n> without touching 'data' node)\n>\n> CREATE OR REPLACE FUNCTION public.process_jsonb_plv8()\n> RETURNS void AS$BODY$\n> var CFG_TABLE_NAME = \"configurations\";\n> var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c where c.id = $1\", ['int'] );\n> var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1 where id = $2', ['jsonb','int'] );\n>\n> try {\n>\n> var ids = plv8.execute('select id from devices');\n>\n> for (var i = 0; i < ids.length; i++) {\n> var db_cfg = selPlan.execute(ids[i].id); //Get current json config from DB\n> var cfg = db_cfg[0].config;\n> cfg[\"key0\"] = 'plv8_json'; //-add some dummy key\n> updPlan.execute(cfg, ids[i].id); //put uopdated JSON config in DB\n> plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n>\n>\n> }} finally {\n> selPlan.free();\n> updPlan.free();}\n> return;$BODY$\n> LANGUAGE plv8 VOLATILE\n> COST 100;\n>\n> For real use-cases plv8 SPs are more complicated, doing FOR-LOOP through\n> ALL OIDs object of 'data' array, checking if it is looking for and update\n> value an/or remove it and/or add newer if necessary.\n>\n> Since number of devices in DB increased from several hundreds to 40K or\n> even 70K, and number of OID+Instance combinations also increased from\n> several hundred to ~1K and sometimes up to 10K within a config, we start\n> facing slowness in bulk (especially global -> update to ALL Devices)\n> updates/searches.\n>\n> In order to get rid off FOR LOOP step for each configuration I've\n> converted data-node from array to object (key-value model), something like\n> :\n>\n> {\n> \"_id\": 20818132,\n> \"type\": \"Modem\",\n> \"data\": {\n> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\": {\n> \"24\": \"null\"\n> },\n> \"1.3.6.1.4.1.9999.3.5.10.1.86\": {\n> \"0\": \"562\"\n> },\n> \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\": {\n> \"0\": 
\"0\"\n> },\n> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\": {\n> \"24\": \"vlan24\",\n> \"25\": \"vlan25\"\n> }\n> }}\n>\n> Now in order to get a concrete OID (e.g.\n> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\") and/or its instance I do 1-2 *O(1)*\n> operations instead *O(n)*. And it become a bit faster. After I've changed\n> column type from json to jsonb - I've got a lot of memory issues with\n> plv8 stored procedures, so now ideas is:\n>\n> *What are the best practices to store such data and use cases in DB?*\n> taking in considerations following: - Bulk and global updates are often\n> enough (user-done operation) - several times per week and it takes long\n> time - several minutes, annoying user experience. - Consulting some OIDs\n> only from concrete config is medium frequency use case - Consulting ALL\n> devices have some specific OID (SNMP Parameter) settled to a specific value\n> - medium frequency cases. - Consult (read) a configuration for a specific\n> device as a whole document - often use case (it is send to device as json\n> or as converted CSV, it is send in modified json format to other utilities,\n> etc)\n>\n> One of suggestion from other oppinions is to move ALL configurations to\n> simple plain relational table\n>\n> CREATE TABLE public.configuration_plain(\n> device_id integer,\n> oid text,\n> instance text,\n> value text)\n>\n> Looking like\n>\n> *id*\n>\n> *oid*\n>\n> *instance*\n>\n> *value*\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.2.13\n>\n> 0\n>\n> VSAT\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.15\n>\n> 0\n>\n> 0\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.17\n>\n> 0\n>\n> 0\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.18\n>\n> 0\n>\n> 1\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.19\n>\n> 0\n>\n> 2\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>\n> 24\n>\n> 24\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>\n> 25\n>\n> 25\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>\n> 24\n>\n> vlan24\n>\n> 20818132\n>\n> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>\n> 25\n>\n> VLAN_25\n>\n> And now I end with a table of ~33 M rows for 40K devices * (700-900\n> OID+Instance combinations). Some simple selects and updates (especially if\n> I add simple indexes on id, oid columns) works faster than JSON (less than\n> 1 sec updating one OID for ALL devices), but on some stored procedures\n> where I need to do some checks and business logic before manipulating\n> concrete parameters in configuration - performance decrease again from 10\n> to 25 seconds in below example with each nee added operation:\n>\n> CREATE OR REPLACE FUNCTION public.test_update_bulk_configuration_plain_plpg(\n> sql_condition text, -- something like 'select id from devices'\n> new_elements text, --collection of OIDs to be Added or Update, could be JSON Array or comma separated list, containing 1 or more (100) OIDs\n> oids_to_delete text --collection of OIDs to Delete\n> )\n> RETURNS void AS$BODY$DECLARE\n> r integer;\n> cnt integer;\n> ids int[];\n> lid int;BEGIN\n> RAISE NOTICE 'start';\n> EXECUTE 'SELECT ARRAY(' || sql_condition || ')' into ids;\n> FOREACH lid IN ARRAY ids\n> LOOP\n> -- DELETE\n> -- Some business logic\n> -- FOR .. IF .. BEGIN\n> delete from configuration_plain c where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '10' and c.device_id = lid;\n> delete from configuration_plain c where c.oid = 'Other OID' and instance = 'Index' and c.device_id = lid;\n> -- other eventual deletes\n> --END\n>\n> -- UPDATE\n> -- Some business logic\n> -- FOR .. 
IF .. BEGIN\n>             update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.87' and c.device_id = lid;\n>             update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '1' and c.device_id = lid;\n>             -- other eventual updates\n>             -- END\n>\n>             --INSERT\n>             insert into configuration_plain (id, oid, instance, value) values (lid,'1.3.6.1.4.1.9999.3.5.10.3.201.1.1', '11', '11');\n>             -- OTHER eventually....\n>             insert into configuration_plain (id, oid, instance, value) values (lid,'OTHER_OID', 'Idx', 'Value of OID');\n>     END LOOP;\n>     RAISE NOTICE 'end';\n>     RETURN;\n> END$BODY$\n>   LANGUAGE plpgsql VOLATILE\n>   COST 100;\n>\n> So any best practices and advice on such data and use cases modeling in DB?\n>\n> Regards,\n>\n> AlexL\n>\n\n\n-- \nIf you can't see the forest for the trees,\nCut the trees and you'll see there is no forest.",
"msg_date": "Fri, 8 Mar 2019 16:15:35 +0100",
"msg_from": "Alban Hertroys <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hot to model data in DB (PostgreSQL) for SNMP-like multiple\n configurations"
},
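As a hedged illustration of the relational-model suggestion above, the following sketch shows one possible normalized layout for the SNMP-style configuration data. Only configuration_plain and its device_id/oid/instance/value columns come from the thread; the device table, the key layout and the secondary index are illustrative assumptions, not the poster's actual schema.

-- Minimal normalized sketch (assumed names except configuration_plain).
CREATE TABLE device (
    device_id   integer PRIMARY KEY,
    device_type text NOT NULL
);

CREATE TABLE configuration_plain (
    device_id integer NOT NULL REFERENCES device (device_id),
    oid       text    NOT NULL,
    instance  text    NOT NULL,
    value     text,
    PRIMARY KEY (device_id, oid, instance)  -- one row per (device, OID, instance)
);

-- Supports "which devices have OID X set to value Y" style searches.
CREATE INDEX configuration_plain_oid_value_idx
    ON configuration_plain (oid, value);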
{
"msg_contents": "For now I do not see the strong reason, but i inherited this project from\nother developers,\nOriginally there was MongoDB and structure was more complex, having SNMP\nlike nested tables with OID.Instance1.Instance2.instance3 and in JSON it\nlooked like:\n{\n \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43.1\": {\n \"1\": {\n \"24\": \"vlan24\",\n \"25\": \"vlan25\"\n },\n \"2\": {\n \"24\": \"127.0.0.1\",\n \"25\": \"8.8.8.8\"\n }\n }\n}\n\nHere we have table in table - How to model this in relational - with\nseparate tables and JOINs only?\nI am not excluding in future I'll have such requirement\n\nthe other reason is that devices request their config and some other tools\nrequests devices configs as a single document/file - this a bit create\noverhead for composing document in JSON or XML or CSV format from\nrelational table (I understand it is doable, but...)\n\nBTW in PG documentation:\n\"\n*8.14.2. Designing JSON documents effectively*\n\n\n\n\n*Representing data as JSON can be considerably more flexible than the\ntraditional relational data model, which is compelling in environments\nwhere requirements are fluid. It is quite possible for both approaches to\nco-exist and complement each other within the same application. However,\neven for applications where maximal flexibility is desired, it is still\nrecommended that JSON documents have a somewhat fixed structure. The\nstructure is typically unenforced (though enforcing some business rules\ndeclaratively is possible), but having a predictable structure makes it\neasier to write queries that usefully summarize a set of \"documents\"\n(datums) in a table.JSON data is subject to the same concurrency-control\nconsiderations as any other data type when stored in a table. Although\nstoring large documents is practicable, keep in mind that any update\nacquires a row-level lock on the whole row. Consider limiting JSON\ndocuments to a manageable size in order to decrease lock contention among\nupdating transactions. Ideally, JSON documents should each represent an\natomic datum that business rules dictate cannot reasonably be further\nsubdivided into smaller datums that could be modified independently.\"\nhttps://www.postgresql.org/docs/9.6/datatype-json.html\n<https://www.postgresql.org/docs/9.6/datatype-json.html>*\n\n\n\nOn Fri, Mar 8, 2019 at 5:15 PM Alban Hertroys <[email protected]> wrote:\n\n> Is there a reason not to use a relational model instead of json(b) here? 
I\n> think that is in fact considered best practice.\n>\n> On Fri, 8 Mar 2019 at 15:40, Alexandru Lazarev <\n> [email protected]> wrote:\n>\n>> I am working on product managing and monitoring Network (NMS-like\n>> products).\n>>\n>> Product manages configuration of network devices, for now each device has\n>> stored its configuration in simple table - this was the original design.\n>>\n>> CREATE TABLE public.configuration(\n>> id integer NOT NULL,\n>> config json NOT NULL,\n>> CONSTRAINT configuration_pkey PRIMARY KEY (id),)\n>>\n>> A config looks like:\n>>\n>> {\n>> \"_id\": 20818132,\n>> \"type\": \"Modem\",\n>> \"data\": [{\n>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\",\n>> \"instance\": \"24\",\n>> \"value\": \"null\"\n>> },\n>> {\n>> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.1.86\",\n>> \"instance\": \"0\",\n>> \"value\": \"562\"\n>> },\n>> {\n>> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\",\n>> \"instance\": \"0\",\n>> \"value\": \"0\"\n>> },\n>> {\n>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n>> \"instance\": \"24\",\n>> \"value\": \"vlan24\"\n>> },\n>> {\n>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n>> \"instance\": \"25\",\n>> \"value\": \"vlan25\"\n>> }\n>> ]}\n>>\n>> And there are many plv8 (java script procedural language extension for\n>> PostgreSQL) stored procedures working on bulks of such config, reading some\n>> OIDs, changing them conditionally, removing some of them and adding others,\n>> especially in use cases like: There are some upper-level META-configuration\n>> of different level, which during change have to update all their updated\n>> parameters to all affected leaves configs. An simple test-example (but\n>> without touching 'data' node)\n>>\n>> CREATE OR REPLACE FUNCTION public.process_jsonb_plv8()\n>> RETURNS void AS$BODY$\n>> var CFG_TABLE_NAME = \"configurations\";\n>> var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c where c.id = $1\", ['int'] );\n>> var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1 where id = $2', ['jsonb','int'] );\n>>\n>> try {\n>>\n>> var ids = plv8.execute('select id from devices');\n>>\n>> for (var i = 0; i < ids.length; i++) {\n>> var db_cfg = selPlan.execute(ids[i].id); //Get current json config from DB\n>> var cfg = db_cfg[0].config;\n>> cfg[\"key0\"] = 'plv8_json'; //-add some dummy key\n>> updPlan.execute(cfg, ids[i].id); //put uopdated JSON config in DB\n>> plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n>>\n>>\n>> }} finally {\n>> selPlan.free();\n>> updPlan.free();}\n>> return;$BODY$\n>> LANGUAGE plv8 VOLATILE\n>> COST 100;\n>>\n>> For real use-cases plv8 SPs are more complicated, doing FOR-LOOP through\n>> ALL OIDs object of 'data' array, checking if it is looking for and update\n>> value an/or remove it and/or add newer if necessary.\n>>\n>> Since number of devices in DB increased from several hundreds to 40K or\n>> even 70K, and number of OID+Instance combinations also increased from\n>> several hundred to ~1K and sometimes up to 10K within a config, we start\n>> facing slowness in bulk (especially global -> update to ALL Devices)\n>> updates/searches.\n>>\n>> In order to get rid off FOR LOOP step for each configuration I've\n>> converted data-node from array to object (key-value model), something like\n>> :\n>>\n>> {\n>> \"_id\": 20818132,\n>> \"type\": \"Modem\",\n>> \"data\": {\n>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\": {\n>> \"24\": \"null\"\n>> },\n>> \"1.3.6.1.4.1.9999.3.5.10.1.86\": {\n>> \"0\": \"562\"\n>> },\n>> 
\"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\": {\n>> \"0\": \"0\"\n>> },\n>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\": {\n>> \"24\": \"vlan24\",\n>> \"25\": \"vlan25\"\n>> }\n>> }}\n>>\n>> Now in order to get a concrete OID (e.g.\n>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\") and/or its instance I do 1-2\n>> *O(1)* operations instead *O(n)*. And it become a bit faster. After I've\n>> changed column type from json to jsonb - I've got a lot of memory issues\n>> with plv8 stored procedures, so now ideas is:\n>>\n>> *What are the best practices to store such data and use cases in DB?*\n>> taking in considerations following: - Bulk and global updates are often\n>> enough (user-done operation) - several times per week and it takes long\n>> time - several minutes, annoying user experience. - Consulting some OIDs\n>> only from concrete config is medium frequency use case - Consulting ALL\n>> devices have some specific OID (SNMP Parameter) settled to a specific value\n>> - medium frequency cases. - Consult (read) a configuration for a specific\n>> device as a whole document - often use case (it is send to device as json\n>> or as converted CSV, it is send in modified json format to other utilities,\n>> etc)\n>>\n>> One of suggestion from other oppinions is to move ALL configurations to\n>> simple plain relational table\n>>\n>> CREATE TABLE public.configuration_plain(\n>> device_id integer,\n>> oid text,\n>> instance text,\n>> value text)\n>>\n>> Looking like\n>>\n>> *id*\n>>\n>> *oid*\n>>\n>> *instance*\n>>\n>> *value*\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.2.13\n>>\n>> 0\n>>\n>> VSAT\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.15\n>>\n>> 0\n>>\n>> 0\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.17\n>>\n>> 0\n>>\n>> 0\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.18\n>>\n>> 0\n>>\n>> 1\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.19\n>>\n>> 0\n>>\n>> 2\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>>\n>> 24\n>>\n>> 24\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>>\n>> 25\n>>\n>> 25\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>>\n>> 24\n>>\n>> vlan24\n>>\n>> 20818132\n>>\n>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>>\n>> 25\n>>\n>> VLAN_25\n>>\n>> And now I end with a table of ~33 M rows for 40K devices * (700-900\n>> OID+Instance combinations). Some simple selects and updates (especially if\n>> I add simple indexes on id, oid columns) works faster than JSON (less than\n>> 1 sec updating one OID for ALL devices), but on some stored procedures\n>> where I need to do some checks and business logic before manipulating\n>> concrete parameters in configuration - performance decrease again from 10\n>> to 25 seconds in below example with each nee added operation:\n>>\n>> CREATE OR REPLACE FUNCTION public.test_update_bulk_configuration_plain_plpg(\n>> sql_condition text, -- something like 'select id from devices'\n>> new_elements text, --collection of OIDs to be Added or Update, could be JSON Array or comma separated list, containing 1 or more (100) OIDs\n>> oids_to_delete text --collection of OIDs to Delete\n>> )\n>> RETURNS void AS$BODY$DECLARE\n>> r integer;\n>> cnt integer;\n>> ids int[];\n>> lid int;BEGIN\n>> RAISE NOTICE 'start';\n>> EXECUTE 'SELECT ARRAY(' || sql_condition || ')' into ids;\n>> FOREACH lid IN ARRAY ids\n>> LOOP\n>> -- DELETE\n>> -- Some business logic\n>> -- FOR .. IF .. 
BEGIN\n>> delete from configuration_plain c where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '10' and c.device_id = lid;\n>> delete from configuration_plain c where c.oid = 'Other OID' and instance = 'Index' and c.device_id = lid;\n>> -- other eventual deletes\n>> --END\n>>\n>> -- UPDATE\n>> -- Some business logic\n>> -- FOR .. IF .. BEGIN\n>> update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.87' and c.device_id = lid;\n>> update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '1' and c.device_id = lid;\n>> -- other eventual updates\n>> -- END\n>>\n>> --INSERT\n>> insert into configuration_plain (id, oid, instance, value) values (lid,'1.3.6.1.4.1.9999.3.5.10.3.201.1.1', '11', '11');\n>> -- OTHER eventually....\n>> insert into configuration_plain (id, oid, instance, value) values (lid,'OTHER_OID', 'Idx', 'Value of OID');\n>> END LOOP;\n>> RAISE NOTICE 'end';\n>> RETURN;END$BODY$\n>> LANGUAGE plpgsql VOLATILE\n>> COST 100;\n>>\n>> So any best practices and advice on such data and use cases modeling in\n>> DB?\n>>\n>> Regards,\n>>\n>> AlexL\n>>\n>\n>\n> --\n> If you can't see the forest for the trees,\n> Cut the trees and you'll see there is no forest.\n>",
"msg_date": "Fri, 8 Mar 2019 18:40:27 +0200",
"msg_from": "Alexandru Lazarev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hot to model data in DB (PostgreSQL) for SNMP-like multiple\n configurations"
},
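The concern above about composing a whole per-device JSON document out of a relational table can be handled in a single query. The sketch below assumes the flat configuration_plain(device_id, oid, instance, value) layout discussed in this thread and uses the example device id 20818132; the subquery aliases are illustrative. jsonb_object_agg and jsonb_build_object are available from PostgreSQL 9.5 on.

-- Rebuild a nested JSON config document from the flat rows.
SELECT jsonb_build_object('_id', d.device_id, 'data', d.data) AS config
FROM (
    SELECT device_id,
           jsonb_object_agg(oid, instances) AS data
    FROM (
        SELECT device_id,
               oid,
               jsonb_object_agg(instance, value) AS instances  -- e.g. {"24": "vlan24", "25": "vlan25"}
        FROM configuration_plain
        WHERE device_id = 20818132
        GROUP BY device_id, oid
    ) per_oid
    GROUP BY device_id
) d;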
{
"msg_contents": "Try partitioning your table based on your device_id, that will give you a\nconsiderable boost for queries which the where clause includes it. for 9.6\n(that's the one your using right?) there's pg_partman for that kind of\nthing, in this case you would partition by ranges, if the id's are\nsequential it's pretty straightforward. Any chance of upgrading to a newer\nPG version? partitioning becomes native from PG10 onwards, so you don't\nhave to rely on particular plugins, and there are always significant\nperformance improvements for several use cases in newer versions (like\nimproved parallelism).\n\n\n\nOn Fri, Mar 8, 2019 at 10:40 AM Alexandru Lazarev <\[email protected]> wrote:\n\n> For now I do not see the strong reason, but i inherited this project from\n> other developers,\n> Originally there was MongoDB and structure was more complex, having SNMP\n> like nested tables with OID.Instance1.Instance2.instance3 and in JSON it\n> looked like:\n> {\n> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43.1\": {\n> \"1\": {\n> \"24\": \"vlan24\",\n> \"25\": \"vlan25\"\n> },\n> \"2\": {\n> \"24\": \"127.0.0.1\",\n> \"25\": \"8.8.8.8\"\n> }\n> }\n> }\n>\n> Here we have table in table - How to model this in relational - with\n> separate tables and JOINs only?\n> I am not excluding in future I'll have such requirement\n>\n> the other reason is that devices request their config and some other tools\n> requests devices configs as a single document/file - this a bit create\n> overhead for composing document in JSON or XML or CSV format from\n> relational table (I understand it is doable, but...)\n>\n> BTW in PG documentation:\n> \"\n> *8.14.2. Designing JSON documents effectively*\n>\n>\n>\n>\n> *Representing data as JSON can be considerably more flexible than the\n> traditional relational data model, which is compelling in environments\n> where requirements are fluid. It is quite possible for both approaches to\n> co-exist and complement each other within the same application. However,\n> even for applications where maximal flexibility is desired, it is still\n> recommended that JSON documents have a somewhat fixed structure. The\n> structure is typically unenforced (though enforcing some business rules\n> declaratively is possible), but having a predictable structure makes it\n> easier to write queries that usefully summarize a set of \"documents\"\n> (datums) in a table.JSON data is subject to the same concurrency-control\n> considerations as any other data type when stored in a table. Although\n> storing large documents is practicable, keep in mind that any update\n> acquires a row-level lock on the whole row. Consider limiting JSON\n> documents to a manageable size in order to decrease lock contention among\n> updating transactions. 
Ideally, JSON documents should each represent an\n> atomic datum that business rules dictate cannot reasonably be further\n> subdivided into smaller datums that could be modified independently.\"\n> https://www.postgresql.org/docs/9.6/datatype-json.html\n> <https://www.postgresql.org/docs/9.6/datatype-json.html>*\n>\n>\n>\n> On Fri, Mar 8, 2019 at 5:15 PM Alban Hertroys <[email protected]> wrote:\n>\n>> Is there a reason not to use a relational model instead of json(b) here?\n>> I think that is in fact considered best practice.\n>>\n>> On Fri, 8 Mar 2019 at 15:40, Alexandru Lazarev <\n>> [email protected]> wrote:\n>>\n>>> I am working on product managing and monitoring Network (NMS-like\n>>> products).\n>>>\n>>> Product manages configuration of network devices, for now each device\n>>> has stored its configuration in simple table - this was the original design.\n>>>\n>>> CREATE TABLE public.configuration(\n>>> id integer NOT NULL,\n>>> config json NOT NULL,\n>>> CONSTRAINT configuration_pkey PRIMARY KEY (id),)\n>>>\n>>> A config looks like:\n>>>\n>>> {\n>>> \"_id\": 20818132,\n>>> \"type\": \"Modem\",\n>>> \"data\": [{\n>>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\",\n>>> \"instance\": \"24\",\n>>> \"value\": \"null\"\n>>> },\n>>> {\n>>> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.1.86\",\n>>> \"instance\": \"0\",\n>>> \"value\": \"562\"\n>>> },\n>>> {\n>>> \"oid\": \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\",\n>>> \"instance\": \"0\",\n>>> \"value\": \"0\"\n>>> },\n>>> {\n>>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n>>> \"instance\": \"24\",\n>>> \"value\": \"vlan24\"\n>>> },\n>>> {\n>>> \"oid\": \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\",\n>>> \"instance\": \"25\",\n>>> \"value\": \"vlan25\"\n>>> }\n>>> ]}\n>>>\n>>> And there are many plv8 (java script procedural language extension for\n>>> PostgreSQL) stored procedures working on bulks of such config, reading some\n>>> OIDs, changing them conditionally, removing some of them and adding others,\n>>> especially in use cases like: There are some upper-level META-configuration\n>>> of different level, which during change have to update all their updated\n>>> parameters to all affected leaves configs. 
An simple test-example (but\n>>> without touching 'data' node)\n>>>\n>>> CREATE OR REPLACE FUNCTION public.process_jsonb_plv8()\n>>> RETURNS void AS$BODY$\n>>> var CFG_TABLE_NAME = \"configurations\";\n>>> var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c where c.id = $1\", ['int'] );\n>>> var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1 where id = $2', ['jsonb','int'] );\n>>>\n>>> try {\n>>>\n>>> var ids = plv8.execute('select id from devices');\n>>>\n>>> for (var i = 0; i < ids.length; i++) {\n>>> var db_cfg = selPlan.execute(ids[i].id); //Get current json config from DB\n>>> var cfg = db_cfg[0].config;\n>>> cfg[\"key0\"] = 'plv8_json'; //-add some dummy key\n>>> updPlan.execute(cfg, ids[i].id); //put uopdated JSON config in DB\n>>> plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n>>>\n>>>\n>>> }} finally {\n>>> selPlan.free();\n>>> updPlan.free();}\n>>> return;$BODY$\n>>> LANGUAGE plv8 VOLATILE\n>>> COST 100;\n>>>\n>>> For real use-cases plv8 SPs are more complicated, doing FOR-LOOP through\n>>> ALL OIDs object of 'data' array, checking if it is looking for and update\n>>> value an/or remove it and/or add newer if necessary.\n>>>\n>>> Since number of devices in DB increased from several hundreds to 40K or\n>>> even 70K, and number of OID+Instance combinations also increased from\n>>> several hundred to ~1K and sometimes up to 10K within a config, we start\n>>> facing slowness in bulk (especially global -> update to ALL Devices)\n>>> updates/searches.\n>>>\n>>> In order to get rid off FOR LOOP step for each configuration I've\n>>> converted data-node from array to object (key-value model), something like\n>>> :\n>>>\n>>> {\n>>> \"_id\": 20818132,\n>>> \"type\": \"Modem\",\n>>> \"data\": {\n>>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.40\": {\n>>> \"24\": \"null\"\n>>> },\n>>> \"1.3.6.1.4.1.9999.3.5.10.1.86\": {\n>>> \"0\": \"562\"\n>>> },\n>>> \"1.3.6.1.4.1.9999.3.5.10.3.92.4.1\": {\n>>> \"0\": \"0\"\n>>> },\n>>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\": {\n>>> \"24\": \"vlan24\",\n>>> \"25\": \"vlan25\"\n>>> }\n>>> }}\n>>>\n>>> Now in order to get a concrete OID (e.g.\n>>> \"1.3.6.1.4.1.9999.3.10.2.2.25.4.1.43\") and/or its instance I do 1-2\n>>> *O(1)* operations instead *O(n)*. And it become a bit faster. After\n>>> I've changed column type from json to jsonb - I've got a lot of memory\n>>> issues with plv8 stored procedures, so now ideas is:\n>>>\n>>> *What are the best practices to store such data and use cases in DB?*\n>>> taking in considerations following: - Bulk and global updates are often\n>>> enough (user-done operation) - several times per week and it takes long\n>>> time - several minutes, annoying user experience. - Consulting some OIDs\n>>> only from concrete config is medium frequency use case - Consulting ALL\n>>> devices have some specific OID (SNMP Parameter) settled to a specific value\n>>> - medium frequency cases. 
- Consult (read) a configuration for a specific\n>>> device as a whole document - often use case (it is send to device as json\n>>> or as converted CSV, it is send in modified json format to other utilities,\n>>> etc)\n>>>\n>>> One of suggestion from other oppinions is to move ALL configurations to\n>>> simple plain relational table\n>>>\n>>> CREATE TABLE public.configuration_plain(\n>>> device_id integer,\n>>> oid text,\n>>> instance text,\n>>> value text)\n>>>\n>>> Looking like\n>>>\n>>> *id*\n>>>\n>>> *oid*\n>>>\n>>> *instance*\n>>>\n>>> *value*\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.2.13\n>>>\n>>> 0\n>>>\n>>> VSAT\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.15\n>>>\n>>> 0\n>>>\n>>> 0\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.17\n>>>\n>>> 0\n>>>\n>>> 0\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.18\n>>>\n>>> 0\n>>>\n>>> 1\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.19\n>>>\n>>> 0\n>>>\n>>> 2\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>>>\n>>> 24\n>>>\n>>> 24\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.1\n>>>\n>>> 25\n>>>\n>>> 25\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>>>\n>>> 24\n>>>\n>>> vlan24\n>>>\n>>> 20818132\n>>>\n>>> 1.3.6.1.4.1.9999.3.10.2.2.10.8.1.2\n>>>\n>>> 25\n>>>\n>>> VLAN_25\n>>>\n>>> And now I end with a table of ~33 M rows for 40K devices * (700-900\n>>> OID+Instance combinations). Some simple selects and updates (especially if\n>>> I add simple indexes on id, oid columns) works faster than JSON (less than\n>>> 1 sec updating one OID for ALL devices), but on some stored procedures\n>>> where I need to do some checks and business logic before manipulating\n>>> concrete parameters in configuration - performance decrease again from 10\n>>> to 25 seconds in below example with each nee added operation:\n>>>\n>>> CREATE OR REPLACE FUNCTION public.test_update_bulk_configuration_plain_plpg(\n>>> sql_condition text, -- something like 'select id from devices'\n>>> new_elements text, --collection of OIDs to be Added or Update, could be JSON Array or comma separated list, containing 1 or more (100) OIDs\n>>> oids_to_delete text --collection of OIDs to Delete\n>>> )\n>>> RETURNS void AS$BODY$DECLARE\n>>> r integer;\n>>> cnt integer;\n>>> ids int[];\n>>> lid int;BEGIN\n>>> RAISE NOTICE 'start';\n>>> EXECUTE 'SELECT ARRAY(' || sql_condition || ')' into ids;\n>>> FOREACH lid IN ARRAY ids\n>>> LOOP\n>>> -- DELETE\n>>> -- Some business logic\n>>> -- FOR .. IF .. BEGIN\n>>> delete from configuration_plain c where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '10' and c.device_id = lid;\n>>> delete from configuration_plain c where c.oid = 'Other OID' and instance = 'Index' and c.device_id = lid;\n>>> -- other eventual deletes\n>>> --END\n>>>\n>>> -- UPDATE\n>>> -- Some business logic\n>>> -- FOR .. IF .. 
BEGIN\n>>> update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.87' and c.device_id = lid;\n>>> update configuration_plain c set value = '2' where c.oid = '1.3.6.1.4.1.9999.3.5.10.3.201.1.1' and instance = '1' and c.device_id = lid;\n>>> -- other eventual updates\n>>> -- END\n>>>\n>>> --INSERT\n>>> insert into configuration_plain (id, oid, instance, value) values (lid,'1.3.6.1.4.1.9999.3.5.10.3.201.1.1', '11', '11');\n>>> -- OTHER eventually....\n>>> insert into configuration_plain (id, oid, instance, value) values (lid,'OTHER_OID', 'Idx', 'Value of OID');\n>>> END LOOP;\n>>> RAISE NOTICE 'end';\n>>> RETURN;END$BODY$\n>>> LANGUAGE plpgsql VOLATILE\n>>> COST 100;\n>>>\n>>> So any best practices and advice on such data and use cases modeling in\n>>> DB?\n>>>\n>>> Regards,\n>>>\n>>> AlexL\n>>>\n>>\n>>\n>> --\n>> If you can't see the forest for the trees,\n>> Cut the trees and you'll see there is no forest.\n>>\n>\n\n-- \nEl genio es 1% inspiración y 99% transpiración.\nThomas Alva Edison\nhttp://pglearn.blogspot.mx/",
"msg_date": "Sat, 9 Mar 2019 14:08:08 -0600",
"msg_from": "Rene Romero Benavides <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hot to model data in DB (PostgreSQL) for SNMP-like multiple\n configurations"
}
] |
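As a follow-up to the partitioning advice in the last reply, here is a minimal sketch of native declarative partitioning (available from PostgreSQL 10) on ranges of device_id. The partition names and boundary values are illustrative assumptions, not taken from the thread.

-- Range-partitioned variant of the flat configuration table (PG 10+).
CREATE TABLE configuration_plain (
    device_id integer NOT NULL,
    oid       text    NOT NULL,
    instance  text    NOT NULL,
    value     text
) PARTITION BY RANGE (device_id);

CREATE TABLE configuration_plain_p0 PARTITION OF configuration_plain
    FOR VALUES FROM (0)        TO (10000000);
CREATE TABLE configuration_plain_p1 PARTITION OF configuration_plain
    FOR VALUES FROM (10000000) TO (20000000);
CREATE TABLE configuration_plain_p2 PARTITION OF configuration_plain
    FOR VALUES FROM (20000000) TO (30000000);

-- On PG 10 indexes must be created per partition; from PG 11 onwards an
-- index created on the parent is cascaded to every partition.
CREATE INDEX ON configuration_plain_p0 (device_id, oid);
CREATE INDEX ON configuration_plain_p1 (device_id, oid);
CREATE INDEX ON configuration_plain_p2 (device_id, oid);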
[
{
"msg_contents": "Hi team,\n\nI want to know about the working and importance of shared_buffers in Postgresql? is it similar to the oracle database buffer cache?\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\nHi team,\n \nI want to know about the working and importance of shared_buffers in Postgresql? is it similar to the oracle database buffer cache?\n\n \nRegards,\nDaulat",
"msg_date": "Mon, 11 Mar 2019 08:12:36 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shared_buffers"
},
{
"msg_contents": "Daulat Ram wrote:\n> I want to know about the working and importance of shared_buffers in Postgresql?\n> is it similar to the oracle database buffer cache?\n\nYes, exactly.\n\nThe main difference is that PostgreSQL uses buffered I/O, while Oracle usually\nuses direct I/O.\n\nUsually you start with shared_buffers being the minimum of a quarter of the\navailable RAM and 8 GB.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 12 Mar 2019 09:29:13 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared_buffers"
},
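A small sketch of applying that starting point, assuming a machine with 64 GB of RAM; the value and the use of ALTER SYSTEM are illustrative, and shared_buffers only takes effect after a server restart, not a reload.

-- min(25% of RAM, 8GB) on a 64 GB box works out to 8GB.
ALTER SYSTEM SET shared_buffers = '8GB';
-- ... restart PostgreSQL, then verify:
SHOW shared_buffers;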
{
"msg_contents": "On Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected]>\nwrote:\n\n> Daulat Ram wrote:\n> > I want to know about the working and importance of shared_buffers in\n> Postgresql?\n> > is it similar to the oracle database buffer cache?\n>\n> Yes, exactly.\n>\n> The main difference is that PostgreSQL uses buffered I/O, while Oracle\n> usually\n> uses direct I/O.\n>\n> Usually you start with shared_buffers being the minimum of a quarter of the\n> available RAM and 8 GB.\n>\n\nAny good rule of thumb or write up about when shared buffers in excess of\n8GBs makes sense (assuming system ram 64+ GBs perhaps)?\n\nOn Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected]> wrote:Daulat Ram wrote:\n> I want to know about the working and importance of shared_buffers in Postgresql?\n> is it similar to the oracle database buffer cache?\n\nYes, exactly.\n\nThe main difference is that PostgreSQL uses buffered I/O, while Oracle usually\nuses direct I/O.\n\nUsually you start with shared_buffers being the minimum of a quarter of the\navailable RAM and 8 GB.Any good rule of thumb or write up about when shared buffers in excess of 8GBs makes sense (assuming system ram 64+ GBs perhaps)?",
"msg_date": "Tue, 12 Mar 2019 13:23:52 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared_buffers"
},
{
"msg_contents": "Set shared_buffers more accurately by using pg_buffercache extension and \nthe related queries during high load times.\n\nRegards,\nMichael Vitale\n\n> Michael Lewis <mailto:[email protected]>\n> Tuesday, March 12, 2019 3:23 PM\n> On Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Daulat Ram wrote:\n> > I want to know about the working and importance of\n> shared_buffers in Postgresql?\n> > is it similar to the oracle database buffer cache?\n>\n> Yes, exactly.\n>\n> The main difference is that PostgreSQL uses buffered I/O, while\n> Oracle usually\n> uses direct I/O.\n>\n> Usually you start with shared_buffers being the minimum of a\n> quarter of the\n> available RAM and 8 GB.\n>\n>\n> Any good rule of thumb or write up about when shared buffers in excess \n> of 8GBs makes sense (assuming system ram 64+ GBs perhaps)?\n\n\n\n\nSet shared_buffers more \naccurately by using pg_buffercache extension and the related queries \nduring high load times.\n\nRegards,\nMichael Vitale\n\n\n\n \nMichael Lewis Tuesday,\n March 12, 2019 3:23 PM \nOn\n Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected]>\n wrote:Daulat Ram wrote:\n> I want to know about the working and importance of shared_buffers \nin Postgresql?\n> is it similar to the oracle database buffer cache?\n\nYes, exactly.\n\nThe main difference is that PostgreSQL uses buffered I/O, while Oracle \nusually\nuses direct I/O.\n\nUsually you start with shared_buffers being the minimum of a quarter of \nthe\navailable RAM and 8 GB.Any good \nrule of thumb or write up about when shared buffers in excess of 8GBs \nmakes sense (assuming system ram 64+ GBs perhaps)?",
"msg_date": "Tue, 12 Mar 2019 16:03:11 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared_buffers"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 04:03:11PM -0400, MichaelDBA wrote:\n> Set shared_buffers more accurately by using pg_buffercache extension and the\n> related queries during high load times.\n\nI've tuned ~40 postgres instances, primarily using log_checkpoints and\npg_stat_bgwriter, with custom RRD graphs. pg_buffercache does provide some\nvaluable insights, and I know it's commonly suggested to check histogram of\nusagecounts, but I've never had any idea how to apply that to tune\nshared_buffers.\n\nCould you elaborate on what procedure you suggest ?\n\nJustin\n\n",
"msg_date": "Tue, 12 Mar 2019 15:11:02 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared_buffers"
},
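For reference, a hedged sketch of the kind of pg_stat_bgwriter check mentioned above; the column names follow the pre-v15 layout of the view, and the interpretation of the percentage is a generic rule of thumb rather than something stated in this thread.

-- How buffer writes are split between checkpoints, the background writer
-- and ordinary backends; a large backend share is often read as a hint to
-- revisit shared_buffers / bgwriter settings.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       round(100.0 * buffers_backend /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
         AS pct_written_by_backends
FROM pg_stat_bgwriter;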
{
"msg_contents": "Here's one cook article on using pg_buffercache...\n\nhttps://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/\n\nRegards,\nMichael Vitale\n\n> Justin Pryzby <mailto:[email protected]>\n> Tuesday, March 12, 2019 4:11 PM\n>\n> I've tuned ~40 postgres instances, primarily using log_checkpoints and\n> pg_stat_bgwriter, with custom RRD graphs. pg_buffercache does provide some\n> valuable insights, and I know it's commonly suggested to check \n> histogram of\n> usagecounts, but I've never had any idea how to apply that to tune\n> shared_buffers.\n>\n> Could you elaborate on what procedure you suggest ?\n>\n> Justin\n> MichaelDBA <mailto:[email protected]>\n> Tuesday, March 12, 2019 4:03 PM\n> Set shared_buffers more accurately by using pg_buffercache extension \n> and the related queries during high load times.\n>\n> Regards,\n> Michael Vitale\n>\n>\n> Michael Lewis <mailto:[email protected]>\n> Tuesday, March 12, 2019 3:23 PM\n> On Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Daulat Ram wrote:\n> > I want to know about the working and importance of\n> shared_buffers in Postgresql?\n> > is it similar to the oracle database buffer cache?\n>\n> Yes, exactly.\n>\n> The main difference is that PostgreSQL uses buffered I/O, while\n> Oracle usually\n> uses direct I/O.\n>\n> Usually you start with shared_buffers being the minimum of a\n> quarter of the\n> available RAM and 8 GB.\n>\n>\n> Any good rule of thumb or write up about when shared buffers in excess \n> of 8GBs makes sense (assuming system ram 64+ GBs perhaps)?\n\n\n\n\nHere's one cook article on\n using pg_buffercache...\n\nhttps://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/\n\nRegards,\nMichael Vitale\n\n\n\n \nJustin Pryzby Tuesday,\n March 12, 2019 4:11 PM \nI've tuned ~40 \npostgres instances, primarily using log_checkpoints andpg_stat_bgwriter,\n with custom RRD graphs. pg_buffercache does provide somevaluable \ninsights, and I know it's commonly suggested to check histogram ofusagecounts,\n but I've never had any idea how to apply that to tuneshared_buffers.Could\n you elaborate on what procedure you suggest ?Justin\n \nMichaelDBA Tuesday,\n March 12, 2019 4:03 PM \n\n\nSet shared_buffers more \naccurately by using pg_buffercache extension and the related queries \nduring high load times.\n\nRegards,\nMichael Vitale\n\n\n\n\n \nMichael Lewis Tuesday,\n March 12, 2019 3:23 PM \nOn\n Tue, Mar 12, 2019 at 2:29 AM Laurenz Albe <[email protected]>\n wrote:Daulat Ram wrote:\n> I want to know about the working and importance of shared_buffers� \nin Postgresql?\n> is it similar to the oracle database buffer cache?\n\nYes, exactly.\n\nThe main difference is that PostgreSQL uses buffered I/O, while Oracle \nusually\nuses direct I/O.\n\nUsually you start with shared_buffers being the minimum of a quarter of \nthe\navailable RAM and 8 GB.Any good \nrule of thumb or write up about when shared buffers in excess of 8GBs \nmakes sense (assuming system ram 64+ GBs perhaps)?",
"msg_date": "Tue, 12 Mar 2019 16:11:49 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared_buffers"
}
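As a concrete starting point in the spirit of the linked article, two pg_buffercache queries; this is a sketch only, pg_buffercache is a contrib extension, and the 8192-byte block size below assumes the default build.

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Usage-count histogram: many buffers stuck at high usage counts during
-- load suggests the hot working set is (or wants to be) in shared_buffers.
SELECT usagecount, count(*) AS buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;

-- Which relations of the current database occupy the most shared_buffers.
SELECT c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) / (SELECT setting::int
                                 FROM pg_settings
                                 WHERE name = 'shared_buffers'), 1) AS pct_of_buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 10;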
] |
[
{
"msg_contents": "Hi all,\n\nWhen I am executing multiple queries (for a benchmark) I obtained this \ngraph, the graph show me that one execution took more time and it was \nnot the firs(ont-shot), can you help me, please?\n\nThe figure shows the execution for the same query, bu one query took \nmore time and it was not the first?\n\n\nThanks for all",
"msg_date": "Mon, 11 Mar 2019 15:14:50 +0100",
"msg_from": "tayeb merabti <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wired execution in my benchmark using postgresql 9.6"
}
] |
[
{
"msg_contents": "Hello,\n\nWe noticed that the following SQL queries are running 3 times slower on the\nlatest version of PostgreSQL. Here’s the time taken to execute them on\nolder (v9.5.16) and newer versions (v11.2) of PostgreSQL (in milliseconds):\n\n+-----------------------+--------+---------+---------+-----------+\n| | scale1 | scale10 | scale50 | scale 300 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 1 (v9.5.16) | 88 | 937 | 4721 | 27241 |\n| Query 1 (v11.2) | 288 | 2822 | 13838 | 85081 |\n+-----------------------+--------+---------+---------+-----------+\n| Query 2 (v9.5.16) | 39 | X | X | X |\n| Query 2 (v11.2) | 80 | X | X | X |\n+-----------------------+--------+---------+---------+-----------+\n\nFor each regression, we share:\n1) the associated query,\n2) the commit that activated it,\n3) our high-level analysis, and\n4) query execution plans in old and new versions of PostgreSQL.\n\nAll these regressions are observed on the latest version. (v11.2 and\nv9.5.16)\n\nWe found several other regression related to this commit:\n77cd477 (Enable parallel query by default.)\n\n* You can download the queries at:\nhttps://gts3.org/~/jjung/tpcc/case3.tar.gz\n\n* You can reproduce the result by using the same setup that we described\nbefore (As Andrew mentioned before, we increased default work_mem to 128MB):\nhttps://www.postgresql.org/message-id/BN6PR07MB3409922471073F2B619A8CA4EE640%40BN6PR07MB3409.namprd07.prod.outlook.com\n\n\n###### Query 1:\n\nselect\n ref_0.ol_delivery_d as c1\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_1.i_im_id as c0\n from\n public.item as ref_1\n where ref_0.ol_d_id <= ref_1.i_im_id\n)\n\n- Commit: 77cd477 (Enable parallel query by default.)\n\n- Our analysis: We believe that this regression is due to parallel queries\nbeing enabled by default. 
Surprisingly, we found that even on a larger\nTPC-C database (scale factor of 50, roughly 4GB), parallel scan is still\nslower than the non-parallel one in the old version, when the query is not\nreturning any tuples.\n\n- Query Execution Plans\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8)\n(actual time=0.034..24981.895 rows=90017507 loops=1)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504\nwidth=12) (actual time=0.022..7145.811 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual\ntime=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4)\n(actual time=0.006..0.006 rows=1 loops=1)\n\nPlanning time: 0.290 ms\nExecution time: 27241.239 ms\n\n[NEW version]\nGather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual\ntime=0.265..82355.289 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431\nwidth=8) (actual time=0.033..68529.259 rows=30005836 loops=3)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93\nrows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4)\n(actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.319 ms\nExecution Time: 85081.158 ms\n\n\n\n##### QUERY 2\n\nselect\n ref_0.c_id as c0\nfrom\n public.customer as ref_0\nwhere EXISTS (\n select\n ref_0.c_city as c0\n from\n public.order_line as ref_1\n left join public.new_order as ref_2\n on (ref_1.ol_supply_w_id = ref_2.no_w_id)\n where (ref_1.ol_delivery_d > ref_0.c_since)\n)\n\n- Our analysis : There seems to a problem with parallel execution logic. It\nis unclear why only one worker is launched, since the default behavior of\nPostgreSQL is to launch at least two workers. 
We found that this query does\nnot run to completion within a day when we increase the size of the DB.\n(e.g., from scale factor 1 to scale factor 10/50/300).\n\n- Commit: 16be2fd (Make dsa_allocate interface more like MemoryContextAlloc)\n\n- Query execution plans:\n\n[Old version]\nNested Loop Semi Join (cost=224.43..910152608046.08 rows=10000 width=4)\n(actual time=3.681..37.842 rows=30000 loops=1)\n Join Filter: (ref_1.ol_delivery_d > ref_0.c_since)\n -> Seq Scan on customer ref_0 (cost=0.00..2569.00 rows=30000 width=12)\n(actual time=0.020..22.582 rows=30000 loops=1)\n -> Materialize (cost=224.43..48521498.97 rows=2406886812 width=8)\n(actual time=0.000..0.000 rows=1 loops=30000)\n -> Hash Left Join (cost=224.43..27085162.91 rows=2406886812\nwidth=8) (actual time=3.652..3.652 rows=1 loops=1)\n Hash Cond: (ref_1.ol_supply_w_id = ref_2.no_w_id)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48\nrows=300148 width=12) (actual time=0.016..0.016 rows=1 loops=1)\n -> Hash (cost=124.19..124.19 rows=8019 width=4) (actual\ntime=3.569..3.569 rows=8019 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 346kB\n -> Seq Scan on new_order ref_2 (cost=0.00..124.19\nrows=8019 width=4) (actual time=0.020..1.770 rows=8019 loops=1)\nPlanning time: 0.927 ms\nExecution time: 39.166 ms\n\n[New version]\nGather (cost=1224.43..672617098015.10 rows=10000 width=4) (actual\ntime=3.792..78.702 rows=30000 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Nested Loop Semi Join (cost=224.43..672617096015.10 rows=5882\nwidth=4) (actual time=4.383..65.544 rows=15000 loops=2)\n Join Filter: (ref_1.ol_delivery_d > ref_0.c_since)\n -> Parallel Seq Scan on customer ref_0 (cost=0.00..2445.47\nrows=17647 width=12) (actual time=0.029..13.054 rows=15000 loops=2)\n -> Hash Left Join (cost=224.43..27085162.91 rows=2406886812\nwidth=8) (actual time=0.002..0.002 rows=1 loops=30000)\n Hash Cond: (ref_1.ol_supply_w_id = ref_2.no_w_id)\n -> Seq Scan on order_line ref_1 (cost=0.00..6711.48\nrows=300148 width=12) (actual time=0.002..0.002 rows=1 loops=30000)\n -> Hash (cost=124.19..124.19 rows=8019 width=4) (actual\ntime=4.094..4.094 rows=8019 loops=2)\n Buckets: 8192 Batches: 1 Memory Usage: 346kB\n -> Seq Scan on new_order ref_2 (cost=0.00..124.19\nrows=8019 width=4) (actual time=0.025..1.995 rows=8019 loops=2)\nPlanning Time: 1.154 ms\nExecution Time: 80.217 ms\n\n",
"msg_date": "Mon, 11 Mar 2019 15:19:37 -0400",
"msg_from": "Jinho Jung <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance regression related to parallel execution"
}
] |
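A quick way to confirm that the parallel plan alone accounts for a regression like the one reported above is to disable parallelism for the session and re-run the query under EXPLAIN. The sketch below reuses the reported Query 1 and only standard GUCs, so nothing here is specific to that installation:

-- Disable parallel plans for this session only and compare timings
SET max_parallel_workers_per_gather = 0;

EXPLAIN (ANALYZE, BUFFERS)
select ref_0.ol_delivery_d as c1
from public.order_line as ref_0
where EXISTS (
  select ref_1.i_im_id as c0
  from public.item as ref_1
  where ref_0.ol_d_id <= ref_1.i_im_id
);

RESET max_parallel_workers_per_gather;

If the non-parallel plan is consistently faster, raising parallel_setup_cost / parallel_tuple_cost (or lowering max_parallel_workers_per_gather globally) is the usual way to steer the planner away from such Gather plans without turning parallel query off entirely.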
[
{
"msg_contents": "A client had an issue with a where that had a where clause something like\nthis:\n\nWHERE 123456 = ANY(integer_array_column)\n\n\nI was surprised that this didn't use the pre-existing GIN index on\ninteger_array_column, whereas recoding as\n\nWHERE ARRAY[123456] <@ integer_array_column\n\n\ndid cause the GIN index to be used. Is this a known/expected behavior? If\nso, is there any logical reason why we couldn't have the planner pick up on\nthat?\n\nA client had an issue with a where that had a where clause something like this:WHERE 123456 = ANY(integer_array_column)I was surprised that this didn't use the pre-existing GIN index on integer_array_column, whereas recoding asWHERE ARRAY[123456] <@ integer_array_columndid cause the GIN index to be used. Is this a known/expected behavior? If so, is there any logical reason why we couldn't have the planner pick up on that?",
"msg_date": "Tue, 12 Mar 2019 21:44:13 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner not choosing GIN index"
},
{
"msg_contents": "It is an expected behavior. You can see the list of array operators with\nwhich a GIN index can be used in the doc:\n\nhttps://www.postgresql.org/docs/current/indexes-types.html\n\nAnd a very good and detailed explanation about any operator here:\n\nhttps://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753\n\nRegards,\nFlo\n\nOn Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]>\nwrote:\n\n> A client had an issue with a where that had a where clause something like\n> this:\n>\n> WHERE 123456 = ANY(integer_array_column)\n>\n>\n> I was surprised that this didn't use the pre-existing GIN index on\n> integer_array_column, whereas recoding as\n>\n> WHERE ARRAY[123456] <@ integer_array_column\n>\n>\n> did cause the GIN index to be used. Is this a known/expected behavior? If\n> so, is there any logical reason why we couldn't have the planner pick up on\n> that?\n>\n\nIt is an expected behavior. You can see the list of array operators with which a GIN index can be used in the doc:https://www.postgresql.org/docs/current/indexes-types.htmlAnd a very good and detailed explanation about any operator here:https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753Regards,FloOn Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]> wrote:A client had an issue with a where that had a where clause something like this:WHERE 123456 = ANY(integer_array_column)I was surprised that this didn't use the pre-existing GIN index on integer_array_column, whereas recoding asWHERE ARRAY[123456] <@ integer_array_columndid cause the GIN index to be used. Is this a known/expected behavior? If so, is there any logical reason why we couldn't have the planner pick up on that?",
"msg_date": "Wed, 13 Mar 2019 10:10:48 +0100",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner not choosing GIN index"
},
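A small self-contained illustration of the behaviour discussed above; the table and index names are invented for the example, but the operators come from the linked documentation (the GIN array opclass supports <@, @>, && and =, while the scalar = ANY(...) construct is not indexable):

CREATE TABLE example_docs (id bigint PRIMARY KEY, tag_ids integer[]);
CREATE INDEX example_docs_tags_gin ON example_docs USING gin (tag_ids);

-- Not indexable: = ANY() over the array's elements
EXPLAIN SELECT id FROM example_docs WHERE 123456 = ANY (tag_ids);

-- Indexable: containment operator the GIN opclass understands
EXPLAIN SELECT id FROM example_docs WHERE ARRAY[123456] <@ tag_ids;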
{
"msg_contents": "On Wed, Mar 13, 2019 at 5:11 AM Flo Rance <[email protected]> wrote:\n\n> It is an expected behavior. You can see the list of array operators with\n> which a GIN index can be used in the doc:\n>\n> https://www.postgresql.org/docs/current/indexes-types.html\n>\n> And a very good and detailed explanation about any operator here:\n>\n>\n> https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753\n>\n> Regards,\n> Flo\n>\n> On Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]>\n> wrote:\n>\n>> A client had an issue with a where that had a where clause something like\n>> this:\n>>\n>> WHERE 123456 = ANY(integer_array_column)\n>>\n>>\n>> I was surprised that this didn't use the pre-existing GIN index on\n>> integer_array_column, whereas recoding as\n>>\n>> WHERE ARRAY[123456] <@ integer_array_column\n>>\n>>\n>> did cause the GIN index to be used. Is this a known/expected behavior? If\n>> so, is there any logical reason why we couldn't have the planner pick up on\n>> that?\n>>\n>\nThanks. I'll bring the question of why-cant-we over to the hackers list.\n\nOn Wed, Mar 13, 2019 at 5:11 AM Flo Rance <[email protected]> wrote:It is an expected behavior. You can see the list of array operators with which a GIN index can be used in the doc:https://www.postgresql.org/docs/current/indexes-types.htmlAnd a very good and detailed explanation about any operator here:https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753Regards,FloOn Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]> wrote:A client had an issue with a where that had a where clause something like this:WHERE 123456 = ANY(integer_array_column)I was surprised that this didn't use the pre-existing GIN index on integer_array_column, whereas recoding asWHERE ARRAY[123456] <@ integer_array_columndid cause the GIN index to be used. Is this a known/expected behavior? If so, is there any logical reason why we couldn't have the planner pick up on that?Thanks. I'll bring the question of why-cant-we over to the hackers list.",
"msg_date": "Wed, 13 Mar 2019 09:55:49 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner not choosing GIN index"
},
{
"msg_contents": "Yep, honestly this is far beyond my knowledge.\n\nOn Wed, Mar 13, 2019 at 2:56 PM Corey Huinker <[email protected]>\nwrote:\n\n> On Wed, Mar 13, 2019 at 5:11 AM Flo Rance <[email protected]> wrote:\n>\n>> It is an expected behavior. You can see the list of array operators with\n>> which a GIN index can be used in the doc:\n>>\n>> https://www.postgresql.org/docs/current/indexes-types.html\n>>\n>> And a very good and detailed explanation about any operator here:\n>>\n>>\n>> https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753\n>>\n>> Regards,\n>> Flo\n>>\n>> On Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]>\n>> wrote:\n>>\n>>> A client had an issue with a where that had a where clause something\n>>> like this:\n>>>\n>>> WHERE 123456 = ANY(integer_array_column)\n>>>\n>>>\n>>> I was surprised that this didn't use the pre-existing GIN index on\n>>> integer_array_column, whereas recoding as\n>>>\n>>> WHERE ARRAY[123456] <@ integer_array_column\n>>>\n>>>\n>>> did cause the GIN index to be used. Is this a known/expected behavior?\n>>> If so, is there any logical reason why we couldn't have the planner pick up\n>>> on that?\n>>>\n>>\n> Thanks. I'll bring the question of why-cant-we over to the hackers list.\n>\n>\n>\n\nYep, honestly this is far beyond my knowledge.On Wed, Mar 13, 2019 at 2:56 PM Corey Huinker <[email protected]> wrote:On Wed, Mar 13, 2019 at 5:11 AM Flo Rance <[email protected]> wrote:It is an expected behavior. You can see the list of array operators with which a GIN index can be used in the doc:https://www.postgresql.org/docs/current/indexes-types.htmlAnd a very good and detailed explanation about any operator here:https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753Regards,FloOn Wed, Mar 13, 2019 at 2:44 AM Corey Huinker <[email protected]> wrote:A client had an issue with a where that had a where clause something like this:WHERE 123456 = ANY(integer_array_column)I was surprised that this didn't use the pre-existing GIN index on integer_array_column, whereas recoding asWHERE ARRAY[123456] <@ integer_array_columndid cause the GIN index to be used. Is this a known/expected behavior? If so, is there any logical reason why we couldn't have the planner pick up on that?Thanks. I'll bring the question of why-cant-we over to the hackers list.",
"msg_date": "Wed, 13 Mar 2019 15:20:02 +0100",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner not choosing GIN index"
}
] |
[
{
"msg_contents": "Hello again. You may remember my queue issue for which some of you have \nproposed to use a partitioned table approach. I have done that, and I \nmight report more on that once I have this beast all tamed, which may be \nnow. Let's say in short, it definitely helped immensely. My test case \nnow is different from what I had previously done. I am now hammering my \ndatabase with 52 worker threads uploading like crazy into some 100 \ntables and indexes.\n\nRight now I want to remind everybody of the surprising fact that the old \nwisdom of distributing load over \"spindles\" appears to be still true \neven in the virtualized world of cloud computing. For background, this \nis running on Amazon AWS, the db server is a c5.xlarge and you see I \nhave 0.0 st, because my virtual CPUs are dedicated.\n\nI had run into a situation which was totally crazy. Here I show you a \nsnapshot of top and iostat output as it ran all night with totally low tps.\n\ntop - 12:43:42 up 1 day, 9:29, 3 users, load average: 41.03, 39.58, 38.91\nTasks: 385 total, 1 running, 169 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 2.9 us, 0.9 sy, 0.0 ni, 5.9 id, 90.3 wa, 0.0 hi, 0.1 si, 0.0 st\nKiB Mem : 7809760 total, 130528 free, 948504 used, 6730728 buff/cache\nKiB Swap: 0 total, 0 free, 0 used. 4357496 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n20839 postgres 20 0 2309448 86892 83132 D 1.0 1.1 0:00.04 postgres: auser integrator 172.31.49.159(44862) SELECT\n17230 postgres 20 0 2318736 1.7g 1.7g D 0.7 23.0 0:10.00 postgres: auser integrator 172.31.49.159(44458) SELECT\n19209 postgres 20 0 2318760 1.7g 1.7g D 0.7 22.7 0:04.89 postgres: auser integrator 172.31.54.158(44421) SELECT\n19467 postgres 20 0 2318160 1.8g 1.8g D 0.7 23.6 0:04.20 postgres: auser integrator 172.31.61.242(56981) INSERT\n19990 postgres 20 0 2318084 1.2g 1.2g D 0.7 16.1 0:02.12 postgres: auser integrator 172.31.63.71(50413) SELECT\n20004 postgres 20 0 2317924 863460 853052 D 0.7 11.1 0:02.10 postgres: auser integrator 172.31.63.71(21895) INSERT\n20555 postgres 20 0 2316952 899376 890260 D 0.7 11.5 0:00.65 postgres: auser integrator 172.31.61.242(60209) INSERT\n20786 postgres 20 0 2312208 736224 729528 D 0.7 9.4 0:00.22 postgres: auser integrator 172.31.63.71(48175) INSERT\n18709 postgres 20 0 2318780 1.9g 1.8g D 0.3 24.9 0:06.18 postgres: auser integrator 172.31.54.158(17281) SELECT\n19228 postgres 20 0 2318940 1.7g 1.7g D 0.3 22.4 0:04.63 postgres: auser integrator 172.31.63.71(63850) INSERT\n19457 postgres 20 0 2318028 1.1g 1.1g D 0.3 15.0 0:03.69 postgres: auser integrator 172.31.54.158(33298) INSERT\n19656 postgres 20 0 2318080 1.3g 1.3g D 0.3 18.1 0:02.90 postgres: auser integrator 172.31.61.242(23307) INSERT\n19723 postgres 20 0 2317948 1.3g 1.2g D 0.3 16.8 0:02.17 postgres: auser integrator 172.31.49.159(44744) SELECT\n20034 postgres 20 0 2318044 927200 916924 D 0.3 11.9 0:02.19 postgres: auser integrator 172.31.63.71(64385) SELECT\n20080 postgres 20 0 2318124 1.2g 1.2g D 0.3 15.6 0:01.90 postgres: auser integrator 172.31.63.71(23430) INSERT\n20264 postgres 20 0 2317824 1.0g 1.0g D 0.3 13.9 0:01.28 postgres: auser integrator 172.31.54.158(64347) INSERT\n20285 postgres 20 0 2318096 582712 572456 D 0.3 7.5 0:01.08 postgres: auser integrator 172.31.63.71(34511) INSERT\n20392 root 20 0 0 0 0 I 0.3 0.0 0:00.05 [kworker/u8:1]\n19954 postgres 20 0 2317848 1.2g 1.2g D 0.3 15.8 0:01.95 postgres: auser integrator 172.31.61.242(65080) SELECT\n20004 postgres 20 0 2317924 863460 853052 D 0.3 11.1 0:02.08 postgres: auser 
integrator 172.31.63.71(21895) INSERT\n20034 postgres 20 0 2318044 923876 913600 D 0.3 11.8 0:02.18 postgres: auser integrator 172.31.63.71(64385) SELECT\n20080 postgres 20 0 2318124 1.2g 1.1g D 0.3 15.6 0:01.89 postgres: auser integrator 172.31.63.71(23430) SELECT\n20248 postgres 20 0 2318312 598416 587972 D 0.3 7.7 0:01.14 postgres: auser integrator 172.31.63.71(44375) SELECT\n20264 postgres 20 0 2317824 1.0g 1.0g D 0.3 13.9 0:01.27 postgres: auser integrator 172.31.54.158(64347) INSERT\n20350 postgres 20 0 2318228 546652 536396 D 0.3 7.0 0:00.87 postgres: auser integrator 172.31.54.158(60787) INSERT\n20590 postgres 20 0 2317208 893232 883840 D 0.3 11.4 0:00.61 postgres: auser integrator 172.31.61.242(14003) INSERT\n20595 postgres 20 0 2317172 884792 875428 D 0.3 11.3 0:00.59 postgres: auser integrator 172.31.54.158(59843) INSERT\n20603 postgres 20 0 2316596 838408 829668 D 0.3 10.7 0:00.50 postgres: auser integrator 172.31.61.242(16697) INSERT\n20770 postgres 20 0 171388 4456 3628 R 0.3 0.1 0:00.13 top -c\n\nyou can see here that all these postgress processes are in \n\"non-interruptible sleep\" (D) state. CPU% is ridiculously low (and \nthat's not because of steal, c5 instances do not run on \"CPU credits\"). \nAre they all in IO blocked state? Let's see iostat:\n\navg-cpu: %user %nice %system %iowait %steal %idle2.51 0.00 0.75 94.99 \n0.00 1.75Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz \nawait r_await w_await svctm %utilnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme2n1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme3n1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme4n1 0.00 0.00 1.00 5.00 8.00 27.50 \n11.83 0.00 0.00 0.00 0.00 0.00 0.00nvme8n1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme9n1 0.00 0.00 91.00 2.00 3040.00 \n3.00 65.44 0.00 0.65 0.66 0.00 0.04 0.40nvme11n1 0.00 2.00 0.00 24.00 \n0.00 1090.00 90.83 0.00 0.00 0.00 0.00 0.00 0.00nvme10n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme6n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme7n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme12n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme5n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme16n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme15n1 0.00 0.00 0.00 \n2.00 0.00 1.50 1.50 0.00 0.00 0.00 0.00 0.00 0.00nvme13n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme14n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme17n1 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme18n1 0.00 0.00 \n194.00 133.00 1896.00 3253.50 31.50 6.90 23.27 31.18 11.73 2.46 \n80.40nvme19n1 0.00 0.00 6.00 13.00 48.00 355.50 42.47 0.00 0.00 0.00 \n0.00 0.00 0.00nvme20n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00nvme21n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00nvme22n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00nvme23n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00nvme0n1 0.00 0.00 0.00 7.00 0.00 69.50 19.86 0.00 0.00 \n0.00 0.00 0.00 0.00\n\nYou see that I already did a lot to balance IO out to many different \ntablespaces that's why there are so many volumes. Yet my iowait % was at \n > 95%. 
I though all my user data was spread out over the tablespaces, \nso that I could control the IO contention. But there remained a crazy \nhotspot on this nvme18n1 volume. And it turns out that was the data/base \ndefault tablespace, and that I had failed to actually assign proper \ntablespaces to many of the tables.\n\nNow I brought the server to a maintenance halt killing and blocking all \nthe worker threads from connecting again during the move, and then did \nthe tablespace move.\n\nALTER DATABASE integrator CONNECTION LIMIT 0;\nSELECT pg_terminate_backend(pid)\n FROM pg_stat_activity\n WHERE datname = 'integrator'\n AND pid <> pg_backend_pid()\n AND backend_type = 'client backend';\nALTER TABLE integrator.... SET TABLESPACE ...;\n...\nALTER TABLE integrator.... SET TABLESPACE ...;\nALTER DATABASE integrator CONNECTION LIMIT -1;\n\nAnd then look at what this helped:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 38.90 0.00 10.47 13.97 0.00 36.66\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nnvme1n1 0.00 0.00 9.00 37.00 72.00 296.00 16.00 0.00 0.52 0.44 0.54 0.00 0.00\nnvme2n1 0.00 0.00 129.00 467.00 1152.00 4152.00 17.80 0.13 0.47 0.25 0.53 0.18 10.80\nnvme3n1 0.00 0.00 8.00 38.00 64.00 304.00 16.00 0.00 0.61 0.50 0.63 0.00 0.00\nnvme4n1 0.00 0.00 8.00 43.00 64.00 344.00 16.00 0.00 0.47 0.00 0.56 0.00 0.00\nnvme8n1 0.00 0.00 326.00 1104.00 3452.00 10248.00 19.16 0.56 0.58 0.39 0.64 0.15 20.80\nnvme9n1 0.00 0.00 29.00 71.00 232.00 568.00 16.00 0.00 0.64 0.41 0.73 0.00 0.00\nnvme11n1 0.00 0.00 0.00 193.00 0.00 37720.00 390.88 0.66 4.15 0.00 4.15 0.58 11.20\nnvme10n1 0.00 0.00 185.00 281.00 1560.00 2264.00 16.41 0.06 0.51 0.58 0.46 0.10 4.80\nnvme6n1 0.00 0.00 14.00 137.00 112.00 1096.00 16.00 0.00 0.42 0.00 0.47 0.03 0.40\nnvme7n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nnvme12n1 0.00 0.00 267.00 584.00 2656.00 4864.00 17.67 0.23 0.53 0.54 0.53 0.19 16.40\nnvme5n1 0.00 0.00 22.00 14.00 176.00 112.00 16.00 0.00 0.78 1.09 0.29 0.00 0.00\nnvme16n1 0.00 0.00 75.00 179.00 732.00 1432.00 17.04 0.01 0.55 0.32 0.65 0.05 1.20\nnvme15n1 0.00 0.00 0.00 16.00 0.00 128.00 16.00 0.00 1.25 0.00 1.25 0.00 0.00\nnvme13n1 0.00 0.00 185.00 631.00 1804.00 5904.00 18.89 0.21 0.47 0.28 0.53 0.16 13.20\nnvme14n1 0.00 0.00 141.00 227.00 1128.00 1816.00 16.00 0.02 0.48 0.57 0.42 0.05 2.00\nnvme17n1 0.00 0.00 69.00 250.00 704.00 2000.00 16.95 0.00 0.44 0.41 0.45 0.00 0.00\nnvme18n1 0.00 0.00 9.00 9.00 72.00 72.00 16.00 0.00 0.00 0.00 0.00 0.00 0.00\nnvme19n1 0.00 0.00 137.00 294.00 1576.00 3088.00 21.64 0.07 0.56 0.82 0.44 0.14 6.00\nnvme20n1 0.00 0.00 191.00 693.00 1796.00 6336.00 18.40 0.37 0.65 0.44 0.70 0.20 18.00\nnvme21n1 0.00 0.00 90.00 140.00 856.00 1120.00 17.18 0.01 0.56 0.36 0.69 0.05 1.20\nnvme22n1 0.00 0.00 426.00 859.00 4016.00 7272.00 17.57 0.40 0.54 0.60 0.52 0.14 18.40\nnvme23n1 0.00 0.00 512.00 916.00 5076.00 10288.00 21.52 0.50 0.53 0.36 0.63 0.12 17.20\nnvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nAnd top:\n\ntop - 18:08:13 up 1 day, 14:54, 10 users, load average: 4.89, 6.09, 4.93\nTasks: 395 total, 4 running, 161 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 55.6 us, 8.8 sy, 0.0 ni, 18.9 id, 14.2 wa, 0.0 hi, 2.4 si, 0.0 st\nKiB Mem : 7809760 total, 136320 free, 610204 used, 7063236 buff/cache\nKiB Swap: 0 total, 0 free, 0 used. 
4693632 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n13601 postgres 20 0 2319104 1.9g 1.9g S 40.2 25.9 0:18.76 postgres: auser integrator 172.31.54.158(15235) idle\n13606 postgres 20 0 2318832 1.6g 1.6g S 18.6 21.7 0:14.25 postgres: auser integrator 172.31.54.158(49226) idle i+\n13760 postgres 20 0 2318772 1.7g 1.7g S 17.6 23.4 0:11.09 postgres: auser integrator 172.31.57.147(45312) idle i+\n13600 postgres 20 0 2318892 1.9g 1.9g R 15.6 26.1 0:20.08 postgres: auser integrator 172.31.54.158(63958) BIND\n13603 postgres 20 0 2318480 1.8g 1.8g S 15.3 24.0 0:22.72 postgres: auser integrator 172.31.57.147(23817) idle i+\n13714 postgres 20 0 2318640 1.8g 1.8g S 15.3 24.0 0:10.99 postgres: auser integrator 172.31.63.71(58893) idle in+\n13607 postgres 20 0 2318748 1.9g 1.9g S 14.6 25.8 0:19.59 postgres: auser integrator 172.31.57.147(11889) idle\n13844 postgres 20 0 2318260 730388 719972 S 13.0 9.4 0:02.03 postgres: auser integrator 172.31.61.242(58949) idle i+\n13716 postgres 20 0 2318816 1.8g 1.8g S 12.3 24.2 0:11.94 postgres: auser integrator 172.31.63.71(53131) idle in+\n13717 postgres 20 0 2318752 1.6g 1.6g S 10.3 21.0 0:13.39 postgres: auser integrator 172.31.63.71(19934) idle in+\n13837 postgres 20 0 2318296 805832 795380 S 10.3 10.3 0:02.28 postgres: auser integrator 172.31.61.242(63185) idle i+\n13839 postgres 20 0 2317956 722788 712532 S 10.3 9.3 0:02.04 postgres: auser integrator 172.31.49.159(57414) idle i+\n13836 postgres 20 0 2318188 697716 687224 R 10.0 8.9 0:02.09 postgres: auser integrator 172.31.61.242(51576) INSERT\n13846 postgres 20 0 2317716 1.3g 1.3g S 10.0 17.0 0:02.19 postgres: auser integrator 172.31.61.242(16349) idle i+\n13854 postgres 20 0 2313504 224276 216592 S 7.3 2.9 0:00.42 postgres: auser integrator 172.31.61.242(18867) idle i+\n18055 postgres 20 0 2308060 2.1g 2.1g S 7.0 27.6 3:04.07 postgres: checkpointer\n13602 postgres 20 0 2319160 1.8g 1.8g S 6.6 23.9 0:21.21 postgres: auser integrator 172.31.54.158(45183) idle i+\n13833 postgres 20 0 2317848 879168 869312 S 6.0 11.3 0:02.90 postgres: auser integrator 172.31.61.242(47892) idle i+\n13710 postgres 20 0 2318856 1.4g 1.4g S 5.6 19.3 0:09.89 postgres: auser integrator 172.31.63.71(22184) idle in+\n13809 postgres 20 0 2318168 1.1g 1.1g D 4.7 14.4 0:04.94 postgres: auser integrator 172.31.63.71(44808) SELECT\n13843 postgres 20 0 2318276 595844 585432 S 4.0 7.6 0:01.36 postgres: auser integrator 172.31.61.242(39820) idle i+\n13860 postgres 20 0 2311872 139372 133356 R 3.7 1.8 0:00.11 postgres: auser integrator 172.31.49.159(57420) idle i+\n 462 root 20 0 0 0 0 S 1.7 0.0 1:41.44 [kswapd0]\n13859 postgres 20 0 2308104 96788 93884 S 1.7 1.2 0:00.05 postgres: auser integrator 172.31.61.242(43391) idle i+\n18057 postgres 20 0 2305108 19108 18624 S 1.7 0.2 1:26.62 postgres: walwriter\n 1559 root 0 -20 0 0 0 I 0.3 0.0 0:19.21 [kworker/1:1H]\n 1560 root 0 -20 0 0 0 I 0.3 0.0 0:25.19 [kworker/3:1H]\n 2619 root 20 0 13144 400 292 S 0.3 0.0 0:28.52 /sbin/rngd -f\n\nThis helped!\n\nThere is a nice saturation now of CPU at the high end of the \"linear\" \nrange, i.e., we aren't in the distortion range or >90% and yet all \nworkers are runn\n\nI can do 17 transactions per second with 52 parallel worker threads. Now \nwe have not run yet for over an hour but so far so good. With my old \nnon-partitioned work queue table I would have long run into the index \ndegradation.\n\nI am not sure my autovacuum setup is working right though. 
I wonder if \nthere aren't some autovacuum statistics which I can query that would give \nme confidence that it is actually running?\n\nFinally the last question for now: I would like to set the XFS (all file \nsystems are XFS) block size to the same size as the PostgreSQL page \nsize. I am surprised this isn't a recommended action to take. It would \nseem to make sense to reduce IO system calls and push entire pages in \none fell swoop every time. Right?\n\nregards,\n-Gunther\n\nPS: aaaaand we're going down. Ran vacuumdb again, but that didn't help \nmuch. It's going down again.",
"msg_date": "Wed, 13 Mar 2019 14:44:10 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Distributing data over \"spindles\" even on AWS EBS, (followup to the\n work queue saga)"
},
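On the autovacuum question raised at the end of the message above: the statistics views expose exactly this. The query below is plain catalog access, nothing specific to the poster's schema, and shows when autovacuum last touched each table and how much dead-tuple debt remains:

SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       autovacuum_count,
       last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 20;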
{
"msg_contents": "I am going to reply to my own message to add more information that I \nthink can be interesting for others.\n\nI find that IO contention continues being a number one issue for \nstreamlining database performance (it's a duh! but it's significant \nsticky point!)\n\nI told you my performance was again going down the drain rapidly. In my \ncase it's going so fast because the database is filling up fast with all \nthat heavy load activity. I still don't know if autovacuum is running \nwell. But when I tried to manually vacuumdb I noticed the number of dead \nrows relatively small, so I don't think that the lack of vacuuming was \nan issue.\n\nI have a bottleneck though.\n\nThere is one massive table, let's call it Foo, and in Foo there is also \nsignificant amount of toasted text, and there is a child table called \nFoo_id then two indexes on Foo_id. Here the essentials:\n\nCREATE TABLE Foo {\n internalId UUID PRIMARY KEY,\n ... -- tons of columns\n text_xml text, -- lot's of stuff to toast\n ... -- tons of more columns\n};\n\nCREATE TABLE Foo_id {\n fooInternalId UUID REFERENCES Foo(internalId),\n prefix text,\n suffix text\n}\n\nCREATE INDEX Foo_id_fkidx ON Foo_id(fooInternalId);\nCREATE INDEX Foo_id_idx ON Foo_id(prefix, suffix);\n\nNow, it so happens that the activity on that index is so large that this \nvolume has 100% io utilization per iostat and everyone is at 75% iowait:\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nnvme8n1 0.00 0.00 69.00 45.00 880.00 400.00 22.46 8.79 90.95 88.64 94.49 8.77 100.00\n\nHowever, the only really heavily used data file here really is just that \nFoo_id_idx. That one index!\n\nI think I have a few ways to alleviate the bottleneck:\n\n 1. partition that Foo_id table so that that index would be partitioned\n too and I can spread it out over multiple volumes.\n 2. build a low level \"spreading\" scheme which is to take the partial\n files 4653828 and 4653828.1, .2, _fsm, etc. and move each to another\n device and then symlink it back to that directory (I come back to this!)\n 3. maybe I can only partition the index by using the WHERE clause on\n the CREATE INDEX over some hash function of the Foo_id.prefix column.\n 4. maybe I can configure in AWS EBS to reserve more IOPS -- but why\n would I pay for more IOPS if my cost is by volume size? I can just\n make another volume? or does AWS play a similar trick on us with\n IOPS being limited on some \"credit\" system???\n\nTo 1. I find it surprisingly complicated to have to create a partitioned \ntable only in order to spread an index over multiple volumes.\n\nTo 2. 
I find that it would be a nice feature of PostgreSQL if we could \njust use symlinks and a symlink rule, for example, when PostgreSQL finds \nthat 4653828 is in fact a symlink to /otherdisk/PG/16284/4653828, then \nit would\n\n * by default also create 4653828.1 as a symlink and place the actual\n data file on /otherdisk/PG/16284/4653828.1\n * or even easier: it would allow the admin to pre-create the datafiles\n or even just the symlinks to not yet extisting datafiles, so that\n when it comes to create the 4653828.2 it will do it wherever that\n symlink points to, either a broken symlink whose target will then be\n created, or a symlink to a zero-size file that was already pre-created.\n * and also easier, allow us to pre-set the size of the data files with\n truncate --size $target_size so that PostgreSQL will use the next\n .1, .2, .3 file once the target size has been reached, rather than\n filling up all those 4 GB\n * I think that if, as I find, the wisdom to \"divide data over\n spindles\" is still true, then it would be best if PostgreSQL had a\n distribution scheme which is not at the logical data model level\n (partition ... tablespace), but rather just on the low level.\n\nThat last point I just made up. But it is extremely useful if PostgreSQL \nwould have this sort of very very simple intelligence.\n\nTo 3. if I create the index like this:\n\nCREATE INDEX Foo_id_idx ON Foo_id(prefix, suffix) WHERE hashmod(prefix,4) = 0 TABLESPACE /tbs/Foo_id_0;\nCREATE INDEX Foo_id_idx ON Foo_id(prefix, suffix) WHERE hashmod(prefix,4) = 1 TABLESPACE /tbs/Foo_id_1;\nCREATE INDEX Foo_id_idx ON Foo_id(prefix, suffix) WHERE hashmod(prefix,4) = 2 TABLESPACE /tbs/Foo_id_2;\nCREATE INDEX Foo_id_idx ON Foo_id(prefix, suffix) WHERE hashmod(prefix,4) = 3 TABLESPACE /tbs/Foo_id_3;\n\nwith some appropriately defined hashmod function that divides up 4 \napproximately equal partitions.\n\nIs there any downside to this approach? It looks to me that this does \neverything partitioning scheme would also do, i.e., (1) routing inserted \ntuples into the right file, and (2) resolving which file to refer to \nbased on the data of the query. What am I missing?\n\nregards,\n-Gunther\n\nOn 3/13/2019 14:44, Gunther wrote:\n>\n> Hello again. You may remember my queue issue for which some of you \n> have proposed to use a partitioned table approach. I have done that, \n> and I might report more on that once I have this beast all tamed, \n> which may be now. Let's say in short, it definitely helped immensely. \n> My test case now is different from what I had previously done. I am \n> now hammering my database with 52 worker threads uploading like crazy \n> into some 100 tables and indexes.\n>\n> Right now I want to remind everybody of the surprising fact that the \n> old wisdom of distributing load over \"spindles\" appears to be still \n> true even in the virtualized world of cloud computing. For background, \n> this is running on Amazon AWS, the db server is a c5.xlarge and you \n> see I have 0.0 st, because my virtual CPUs are dedicated.\n>\n> I had run into a situation which was totally crazy. 
Here I show you a \n> snapshot of top and iostat output as it ran all night with totally low \n> tps.\n>\n> top - 12:43:42 up 1 day, 9:29, 3 users, load average: 41.03, 39.58, 38.91\n> Tasks: 385 total, 1 running, 169 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 2.9 us, 0.9 sy, 0.0 ni, 5.9 id, 90.3 wa, 0.0 hi, 0.1 si, 0.0 st\n> KiB Mem : 7809760 total, 130528 free, 948504 used, 6730728 buff/cache\n> KiB Swap: 0 total, 0 free, 0 used. 4357496 avail Mem\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 20839 postgres 20 0 2309448 86892 83132 D 1.0 1.1 0:00.04 postgres: auser integrator 172.31.49.159(44862) SELECT\n> 17230 postgres 20 0 2318736 1.7g 1.7g D 0.7 23.0 0:10.00 postgres: auser integrator 172.31.49.159(44458) SELECT\n> 19209 postgres 20 0 2318760 1.7g 1.7g D 0.7 22.7 0:04.89 postgres: auser integrator 172.31.54.158(44421) SELECT\n> 19467 postgres 20 0 2318160 1.8g 1.8g D 0.7 23.6 0:04.20 postgres: auser integrator 172.31.61.242(56981) INSERT\n> 19990 postgres 20 0 2318084 1.2g 1.2g D 0.7 16.1 0:02.12 postgres: auser integrator 172.31.63.71(50413) SELECT\n> 20004 postgres 20 0 2317924 863460 853052 D 0.7 11.1 0:02.10 postgres: auser integrator 172.31.63.71(21895) INSERT\n> 20555 postgres 20 0 2316952 899376 890260 D 0.7 11.5 0:00.65 postgres: auser integrator 172.31.61.242(60209) INSERT\n> 20786 postgres 20 0 2312208 736224 729528 D 0.7 9.4 0:00.22 postgres: auser integrator 172.31.63.71(48175) INSERT\n> 18709 postgres 20 0 2318780 1.9g 1.8g D 0.3 24.9 0:06.18 postgres: auser integrator 172.31.54.158(17281) SELECT\n> 19228 postgres 20 0 2318940 1.7g 1.7g D 0.3 22.4 0:04.63 postgres: auser integrator 172.31.63.71(63850) INSERT\n> 19457 postgres 20 0 2318028 1.1g 1.1g D 0.3 15.0 0:03.69 postgres: auser integrator 172.31.54.158(33298) INSERT\n> 19656 postgres 20 0 2318080 1.3g 1.3g D 0.3 18.1 0:02.90 postgres: auser integrator 172.31.61.242(23307) INSERT\n> 19723 postgres 20 0 2317948 1.3g 1.2g D 0.3 16.8 0:02.17 postgres: auser integrator 172.31.49.159(44744) SELECT\n> 20034 postgres 20 0 2318044 927200 916924 D 0.3 11.9 0:02.19 postgres: auser integrator 172.31.63.71(64385) SELECT\n> 20080 postgres 20 0 2318124 1.2g 1.2g D 0.3 15.6 0:01.90 postgres: auser integrator 172.31.63.71(23430) INSERT\n> 20264 postgres 20 0 2317824 1.0g 1.0g D 0.3 13.9 0:01.28 postgres: auser integrator 172.31.54.158(64347) INSERT\n> 20285 postgres 20 0 2318096 582712 572456 D 0.3 7.5 0:01.08 postgres: auser integrator 172.31.63.71(34511) INSERT\n> 20392 root 20 0 0 0 0 I 0.3 0.0 0:00.05 [kworker/u8:1]\n> 19954 postgres 20 0 2317848 1.2g 1.2g D 0.3 15.8 0:01.95 postgres: auser integrator 172.31.61.242(65080) SELECT\n> 20004 postgres 20 0 2317924 863460 853052 D 0.3 11.1 0:02.08 postgres: auser integrator 172.31.63.71(21895) INSERT\n> 20034 postgres 20 0 2318044 923876 913600 D 0.3 11.8 0:02.18 postgres: auser integrator 172.31.63.71(64385) SELECT\n> 20080 postgres 20 0 2318124 1.2g 1.1g D 0.3 15.6 0:01.89 postgres: auser integrator 172.31.63.71(23430) SELECT\n> 20248 postgres 20 0 2318312 598416 587972 D 0.3 7.7 0:01.14 postgres: auser integrator 172.31.63.71(44375) SELECT\n> 20264 postgres 20 0 2317824 1.0g 1.0g D 0.3 13.9 0:01.27 postgres: auser integrator 172.31.54.158(64347) INSERT\n> 20350 postgres 20 0 2318228 546652 536396 D 0.3 7.0 0:00.87 postgres: auser integrator 172.31.54.158(60787) INSERT\n> 20590 postgres 20 0 2317208 893232 883840 D 0.3 11.4 0:00.61 postgres: auser integrator 172.31.61.242(14003) INSERT\n> 20595 postgres 20 0 2317172 884792 875428 D 0.3 11.3 0:00.59 postgres: auser 
integrator 172.31.54.158(59843) INSERT\n> 20603 postgres 20 0 2316596 838408 829668 D 0.3 10.7 0:00.50 postgres: auser integrator 172.31.61.242(16697) INSERT\n> 20770 postgres 20 0 171388 4456 3628 R 0.3 0.1 0:00.13 top -c\n>\n> you can see here that all these postgress processes are in \n> \"non-interruptible sleep\" (D) state. CPU% is ridiculously low (and \n> that's not because of steal, c5 instances do not run on \"CPU \n> credits\"). Are they all in IO blocked state? Let's see iostat:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle2.51 0.00 0.75 94.99 \n> 0.00 1.75Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz \n> await r_await w_await svctm %utilnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme2n1 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme3n1 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme4n1 0.00 0.00 1.00 \n> 5.00 8.00 27.50 11.83 0.00 0.00 0.00 0.00 0.00 0.00nvme8n1 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme9n1 0.00 \n> 0.00 91.00 2.00 3040.00 3.00 65.44 0.00 0.65 0.66 0.00 0.04 \n> 0.40nvme11n1 0.00 2.00 0.00 24.00 0.00 1090.00 90.83 0.00 0.00 0.00 \n> 0.00 0.00 0.00nvme10n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00nvme6n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00nvme7n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00nvme12n1 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme5n1 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme16n1 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme15n1 0.00 0.00 0.00 \n> 2.00 0.00 1.50 1.50 0.00 0.00 0.00 0.00 0.00 0.00nvme13n1 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme14n1 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme17n1 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00nvme18n1 0.00 0.00 194.00 133.00 1896.00 3253.50 31.50 6.90 23.27 \n> 31.18 11.73 2.46 80.40nvme19n1 0.00 0.00 6.00 13.00 48.00 355.50 42.47 \n> 0.00 0.00 0.00 0.00 0.00 0.00nvme20n1 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme21n1 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme22n1 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme23n1 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00nvme0n1 0.00 0.00 \n> 0.00 7.00 0.00 69.50 19.86 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> You see that I already did a lot to balance IO out to many different \n> tablespaces that's why there are so many volumes. Yet my iowait % was \n> at > 95%. I though all my user data was spread out over the \n> tablespaces, so that I could control the IO contention. But there \n> remained a crazy hotspot on this nvme18n1 volume. And it turns out \n> that was the data/base default tablespace, and that I had failed to \n> actually assign proper tablespaces to many of the tables.\n>\n> Now I brought the server to a maintenance halt killing and blocking \n> all the worker threads from connecting again during the move, and then \n> did the tablespace move.\n>\n> ALTER DATABASE integrator CONNECTION LIMIT 0;\n> SELECT pg_terminate_backend(pid)\n> FROM pg_stat_activity\n> WHERE datname = 'integrator'\n> AND pid <> pg_backend_pid()\n> AND backend_type = 'client backend';\n> ALTER TABLE integrator.... SET TABLESPACE ...;\n> ...\n> ALTER TABLE integrator.... 
SET TABLESPACE ...;\n> ALTER DATABASE integrator CONNECTION LIMIT -1;\n>\n> And then look at what this helped:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 38.90 0.00 10.47 13.97 0.00 36.66\n>\n> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\n> nvme1n1 0.00 0.00 9.00 37.00 72.00 296.00 16.00 0.00 0.52 0.44 0.54 0.00 0.00\n> nvme2n1 0.00 0.00 129.00 467.00 1152.00 4152.00 17.80 0.13 0.47 0.25 0.53 0.18 10.80\n> nvme3n1 0.00 0.00 8.00 38.00 64.00 304.00 16.00 0.00 0.61 0.50 0.63 0.00 0.00\n> nvme4n1 0.00 0.00 8.00 43.00 64.00 344.00 16.00 0.00 0.47 0.00 0.56 0.00 0.00\n> nvme8n1 0.00 0.00 326.00 1104.00 3452.00 10248.00 19.16 0.56 0.58 0.39 0.64 0.15 20.80\n> nvme9n1 0.00 0.00 29.00 71.00 232.00 568.00 16.00 0.00 0.64 0.41 0.73 0.00 0.00\n> nvme11n1 0.00 0.00 0.00 193.00 0.00 37720.00 390.88 0.66 4.15 0.00 4.15 0.58 11.20\n> nvme10n1 0.00 0.00 185.00 281.00 1560.00 2264.00 16.41 0.06 0.51 0.58 0.46 0.10 4.80\n> nvme6n1 0.00 0.00 14.00 137.00 112.00 1096.00 16.00 0.00 0.42 0.00 0.47 0.03 0.40\n> nvme7n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> nvme12n1 0.00 0.00 267.00 584.00 2656.00 4864.00 17.67 0.23 0.53 0.54 0.53 0.19 16.40\n> nvme5n1 0.00 0.00 22.00 14.00 176.00 112.00 16.00 0.00 0.78 1.09 0.29 0.00 0.00\n> nvme16n1 0.00 0.00 75.00 179.00 732.00 1432.00 17.04 0.01 0.55 0.32 0.65 0.05 1.20\n> nvme15n1 0.00 0.00 0.00 16.00 0.00 128.00 16.00 0.00 1.25 0.00 1.25 0.00 0.00\n> nvme13n1 0.00 0.00 185.00 631.00 1804.00 5904.00 18.89 0.21 0.47 0.28 0.53 0.16 13.20\n> nvme14n1 0.00 0.00 141.00 227.00 1128.00 1816.00 16.00 0.02 0.48 0.57 0.42 0.05 2.00\n> nvme17n1 0.00 0.00 69.00 250.00 704.00 2000.00 16.95 0.00 0.44 0.41 0.45 0.00 0.00\n> nvme18n1 0.00 0.00 9.00 9.00 72.00 72.00 16.00 0.00 0.00 0.00 0.00 0.00 0.00\n> nvme19n1 0.00 0.00 137.00 294.00 1576.00 3088.00 21.64 0.07 0.56 0.82 0.44 0.14 6.00\n> nvme20n1 0.00 0.00 191.00 693.00 1796.00 6336.00 18.40 0.37 0.65 0.44 0.70 0.20 18.00\n> nvme21n1 0.00 0.00 90.00 140.00 856.00 1120.00 17.18 0.01 0.56 0.36 0.69 0.05 1.20\n> nvme22n1 0.00 0.00 426.00 859.00 4016.00 7272.00 17.57 0.40 0.54 0.60 0.52 0.14 18.40\n> nvme23n1 0.00 0.00 512.00 916.00 5076.00 10288.00 21.52 0.50 0.53 0.36 0.63 0.12 17.20\n> nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> And top:\n>\n> top - 18:08:13 up 1 day, 14:54, 10 users, load average: 4.89, 6.09, 4.93\n> Tasks: 395 total, 4 running, 161 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 55.6 us, 8.8 sy, 0.0 ni, 18.9 id, 14.2 wa, 0.0 hi, 2.4 si, 0.0 st\n> KiB Mem : 7809760 total, 136320 free, 610204 used, 7063236 buff/cache\n> KiB Swap: 0 total, 0 free, 0 used. 
4693632 avail Mem\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 13601 postgres 20 0 2319104 1.9g 1.9g S 40.2 25.9 0:18.76 postgres: auser integrator 172.31.54.158(15235) idle\n> 13606 postgres 20 0 2318832 1.6g 1.6g S 18.6 21.7 0:14.25 postgres: auser integrator 172.31.54.158(49226) idle i+\n> 13760 postgres 20 0 2318772 1.7g 1.7g S 17.6 23.4 0:11.09 postgres: auser integrator 172.31.57.147(45312) idle i+\n> 13600 postgres 20 0 2318892 1.9g 1.9g R 15.6 26.1 0:20.08 postgres: auser integrator 172.31.54.158(63958) BIND\n> 13603 postgres 20 0 2318480 1.8g 1.8g S 15.3 24.0 0:22.72 postgres: auser integrator 172.31.57.147(23817) idle i+\n> 13714 postgres 20 0 2318640 1.8g 1.8g S 15.3 24.0 0:10.99 postgres: auser integrator 172.31.63.71(58893) idle in+\n> 13607 postgres 20 0 2318748 1.9g 1.9g S 14.6 25.8 0:19.59 postgres: auser integrator 172.31.57.147(11889) idle\n> 13844 postgres 20 0 2318260 730388 719972 S 13.0 9.4 0:02.03 postgres: auser integrator 172.31.61.242(58949) idle i+\n> 13716 postgres 20 0 2318816 1.8g 1.8g S 12.3 24.2 0:11.94 postgres: auser integrator 172.31.63.71(53131) idle in+\n> 13717 postgres 20 0 2318752 1.6g 1.6g S 10.3 21.0 0:13.39 postgres: auser integrator 172.31.63.71(19934) idle in+\n> 13837 postgres 20 0 2318296 805832 795380 S 10.3 10.3 0:02.28 postgres: auser integrator 172.31.61.242(63185) idle i+\n> 13839 postgres 20 0 2317956 722788 712532 S 10.3 9.3 0:02.04 postgres: auser integrator 172.31.49.159(57414) idle i+\n> 13836 postgres 20 0 2318188 697716 687224 R 10.0 8.9 0:02.09 postgres: auser integrator 172.31.61.242(51576) INSERT\n> 13846 postgres 20 0 2317716 1.3g 1.3g S 10.0 17.0 0:02.19 postgres: auser integrator 172.31.61.242(16349) idle i+\n> 13854 postgres 20 0 2313504 224276 216592 S 7.3 2.9 0:00.42 postgres: auser integrator 172.31.61.242(18867) idle i+\n> 18055 postgres 20 0 2308060 2.1g 2.1g S 7.0 27.6 3:04.07 postgres: checkpointer\n> 13602 postgres 20 0 2319160 1.8g 1.8g S 6.6 23.9 0:21.21 postgres: auser integrator 172.31.54.158(45183) idle i+\n> 13833 postgres 20 0 2317848 879168 869312 S 6.0 11.3 0:02.90 postgres: auser integrator 172.31.61.242(47892) idle i+\n> 13710 postgres 20 0 2318856 1.4g 1.4g S 5.6 19.3 0:09.89 postgres: auser integrator 172.31.63.71(22184) idle in+\n> 13809 postgres 20 0 2318168 1.1g 1.1g D 4.7 14.4 0:04.94 postgres: auser integrator 172.31.63.71(44808) SELECT\n> 13843 postgres 20 0 2318276 595844 585432 S 4.0 7.6 0:01.36 postgres: auser integrator 172.31.61.242(39820) idle i+\n> 13860 postgres 20 0 2311872 139372 133356 R 3.7 1.8 0:00.11 postgres: auser integrator 172.31.49.159(57420) idle i+\n> 462 root 20 0 0 0 0 S 1.7 0.0 1:41.44 [kswapd0]\n> 13859 postgres 20 0 2308104 96788 93884 S 1.7 1.2 0:00.05 postgres: auser integrator 172.31.61.242(43391) idle i+\n> 18057 postgres 20 0 2305108 19108 18624 S 1.7 0.2 1:26.62 postgres: walwriter\n> 1559 root 0 -20 0 0 0 I 0.3 0.0 0:19.21 [kworker/1:1H]\n> 1560 root 0 -20 0 0 0 I 0.3 0.0 0:25.19 [kworker/3:1H]\n> 2619 root 20 0 13144 400 292 S 0.3 0.0 0:28.52 /sbin/rngd -f\n>\n> This helped!\n>\n> There is a nice saturation now of CPU at the high end of the \"linear\" \n> range, i.e., we aren't in the distortion range or >90% and yet all \n> workers are runn\n>\n> I can do 17 transactions per second with 52 parallel worker threads. \n> Now we have not run yet for over an hour but so far so good. With my \n> old non-partitioned work queue table I would have long run into the \n> index degradation.\n>\n> I am not sure my autovacuum setup is working right though. 
I wonder if \n> there isn't some autovacuum statistics which I can query that would \n> give me confidence that it is actually running?\n>\n> Finally the last question for now: I would like to set the XFS (all \n> file systems are XFS) block size to the same size as the PostgreSQL \n> page size. I am surprized this isn't a recommended action to take? It \n> would seem to make sense to reduce IO system calls and push entire \n> pages in one fell swoop every time. Right?\n>\n> regards,\n> -Gunther\n>\n> PS: aaaaand we're going down. Ran vacuumdb again, but that didn't help \n> much. It's going down again.\n>\n>",
"msg_date": "Thu, 14 Mar 2019 10:53:11 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distributing data over \"spindles\" even on AWS EBS, (followup to\n the work queue saga)"
},
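On option 3 above (splitting one index into hash-bucketed partial indexes), a sketch follows. It assumes tablespaces tbs_foo_id_0 .. tbs_foo_id_3 already exist and uses the built-in hashtext() function in place of the hypothetical hashmod(); any immutable bucketing expression would do. Note that TABLESPACE has to come before WHERE in CREATE INDEX, and note the main downside the message asks about: the planner only picks a partial index when the query's WHERE clause provably implies the index predicate, so every lookup has to repeat the bucketing expression, whereas true partitioning does that routing automatically.

CREATE INDEX foo_id_idx_p0 ON Foo_id (prefix, suffix) TABLESPACE tbs_foo_id_0 WHERE (hashtext(prefix) & 3) = 0;
CREATE INDEX foo_id_idx_p1 ON Foo_id (prefix, suffix) TABLESPACE tbs_foo_id_1 WHERE (hashtext(prefix) & 3) = 1;
CREATE INDEX foo_id_idx_p2 ON Foo_id (prefix, suffix) TABLESPACE tbs_foo_id_2 WHERE (hashtext(prefix) & 3) = 2;
CREATE INDEX foo_id_idx_p3 ON Foo_id (prefix, suffix) TABLESPACE tbs_foo_id_3 WHERE (hashtext(prefix) & 3) = 3;

-- A lookup can use one of these indexes only if it repeats the predicate, e.g.:
-- SELECT fooInternalId FROM Foo_id
--  WHERE prefix = 'abc' AND suffix = 'def'
--    AND (hashtext(prefix) & 3) = (hashtext('abc') & 3);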
{
"msg_contents": "On 3/14/19 07:53, Gunther wrote:\n> 2. build a low level \"spreading\" scheme which is to take the partial\n> files 4653828 and 4653828.1, .2, _fsm, etc. and move each to another\n> device and then symlink it back to that directory (I come back to this!)\n...\n> To 2. I find that it would be a nice feature of PostgreSQL if we could\n> just use symlinks and a symlink rule, for example, when PostgreSQL finds\n> that 4653828 is in fact a symlink to /otherdisk/PG/16284/4653828, then\n> it would\n>\n> * by default also create 4653828.1 as a symlink and place the actual\n> data file on /otherdisk/PG/16284/4653828.1\n\nHow about if we could just specify multiple tablespaces for an object,\nand then PostgreSQL would round-robin new segments across the presently\nconfigured tablespaces? This seems like a simple and elegant solution\nto me.\n\n\n> 4. maybe I can configure in AWS EBS to reserve more IOPS -- but why\n> would I pay for more IOPS if my cost is by volume size? I can just\n> make another volume? or does AWS play a similar trick on us with\n> IOPS being limited on some \"credit\" system???\n\nNot credits, but if you're using gp2 volumes then pay close attention to\nhow burst balance works. A single large volume is the same price as two\nstriped volumes at half size -- but the striped volumes will have double\nthe burst speed and take twice as long to refill the burst balance.\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n",
"msg_date": "Thu, 14 Mar 2019 08:11:20 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributing data over \"spindles\" even on AWS EBS, (followup to\n the work queue saga)"
},
{
"msg_contents": "On Wed, Mar 13, 2019 at 02:44:10PM -0400, Gunther wrote:\n> You see that I already did a lot to balance IO out to many different\n> tablespaces that's why there are so many volumes.\n\nI wonder if it wouldn't be both better and much easier to have just 1 or 2\ntablespaces and combine drives into a single LVM VG and do something like\nlvcreate --stripes \n\n> I am not sure my autovacuum setup is working right though. I wonder if there\n> isn't some autovacuum statistics which I can query that would give me\n> confidence that it is actually running?\n\npg_stat_all_tables\n\nDid you do this ?\nALTER TABLE ... SET (autovacuum_vacuum_scale_factor=0.001, autovacuum_vacuum_threshold=1);\n\nJustin\n\n",
"msg_date": "Thu, 14 Mar 2019 10:54:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributing data over \"spindles\" even on AWS EBS, (followup to\n the work queue saga)"
},
{
"msg_contents": "On 3/14/2019 11:11, Jeremy Schneider wrote:\n> On 3/14/19 07:53, Gunther wrote:\n>> 2. build a low level \"spreading\" scheme which is to take the partial\n>> files 4653828 and 4653828.1, .2, _fsm, etc. and move each to another\n>> device and then symlink it back to that directory (I come back to this!)\n> ...\n>> To 2. I find that it would be a nice feature of PostgreSQL if we could\n>> just use symlinks and a symlink rule, for example, when PostgreSQL finds\n>> that 4653828 is in fact a symlink to /otherdisk/PG/16284/4653828, then\n>> it would\n>>\n>> * by default also create 4653828.1 as a symlink and place the actual\n>> data file on /otherdisk/PG/16284/4653828.1\n> How about if we could just specify multiple tablespaces for an object,\n> and then PostgreSQL would round-robin new segments across the presently\n> configured tablespaces? This seems like a simple and elegant solution\n> to me.\n\nVery good idea! I agree.\n\nVery important also would be to take out the existing patch someone had \ncontributed to allow toast tables to be assigned to different tablespaces.\n\n>> 4. maybe I can configure in AWS EBS to reserve more IOPS -- but why\n>> would I pay for more IOPS if my cost is by volume size? I can just\n>> make another volume? or does AWS play a similar trick on us with\n>> IOPS being limited on some \"credit\" system???\n> Not credits, but if you're using gp2 volumes then pay close attention to\n> how burst balance works. A single large volume is the same price as two\n> striped volumes at half size -- but the striped volumes will have double\n> the burst speed and take twice as long to refill the burst balance.\n\nYes, I learned that too. It seems a very interesting \"bug\" of the Amazon \nGP2 IOPS allocation scheme. They say it's like 3 IOPS per GiB, so if I \nhave 100 GiB I get 300 IOPS. But it also says minimum 100. So that means \nif I have 10 volumes of 10 GiB each, I get 1000 IOPS minimum between \nthem all. But if I have it all on one 100 GiB volume I only get 300 IOPS.\n\nI wonder if Amazon is aware of this. I hope they are and think that's \njust fine. Because I like it.\n\nIt also is a clear sign to me that I want to use page sizes > 4k for the \nfile system. I have tried on Amazon Linux to use 8k block sizes of the \nXFS volume, but I cannot mount those, since the Linux says it can \ncurrently only deal with 4k blocks. This is another reason I consider \nswitching the database server(s) to FreeBSD. OTOH, who knows may be \nthis 4k is a limit of the AWS EBS infrastructure. After all, if I am \nscraping the 300 or 1000 IOPS limit already and if I can suddenly \nupgrade my block sizes per IO, I double my IO throughput.\n\nregards,\n-Gunther\n\n\n",
"msg_date": "Sun, 17 Mar 2019 14:42:04 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distributing data over \"spindles\" even on AWS EBS, (followup to\n the work queue saga)"
},
{
"msg_contents": "You do have a finite amount of bandwidth per-instance. On c5.xlarge, it is\n3500 Mbit/sec, no matter how many iops you buy. Keep an eye on yur overall\nEBS bandwidth utilization.\n\nOn Sun, Mar 17, 2019 at 11:42 AM Gunther <[email protected]> wrote:\n\n> On 3/14/2019 11:11, Jeremy Schneider wrote:\n> > On 3/14/19 07:53, Gunther wrote:\n> >> 2. build a low level \"spreading\" scheme which is to take the partial\n> >> files 4653828 and 4653828.1, .2, _fsm, etc. and move each to\n> another\n> >> device and then symlink it back to that directory (I come back to\n> this!)\n> > ...\n> >> To 2. I find that it would be a nice feature of PostgreSQL if we could\n> >> just use symlinks and a symlink rule, for example, when PostgreSQL finds\n> >> that 4653828 is in fact a symlink to /otherdisk/PG/16284/4653828, then\n> >> it would\n> >>\n> >> * by default also create 4653828.1 as a symlink and place the actual\n> >> data file on /otherdisk/PG/16284/4653828.1\n> > How about if we could just specify multiple tablespaces for an object,\n> > and then PostgreSQL would round-robin new segments across the presently\n> > configured tablespaces? This seems like a simple and elegant solution\n> > to me.\n>\n> Very good idea! I agree.\n>\n> Very important also would be to take out the existing patch someone had\n> contributed to allow toast tables to be assigned to different tablespaces.\n>\n> >> 4. maybe I can configure in AWS EBS to reserve more IOPS -- but why\n> >> would I pay for more IOPS if my cost is by volume size? I can just\n> >> make another volume? or does AWS play a similar trick on us with\n> >> IOPS being limited on some \"credit\" system???\n> > Not credits, but if you're using gp2 volumes then pay close attention to\n> > how burst balance works. A single large volume is the same price as two\n> > striped volumes at half size -- but the striped volumes will have double\n> > the burst speed and take twice as long to refill the burst balance.\n>\n> Yes, I learned that too. It seems a very interesting \"bug\" of the Amazon\n> GP2 IOPS allocation scheme. They say it's like 3 IOPS per GiB, so if I\n> have 100 GiB I get 300 IOPS. But it also says minimum 100. So that means\n> if I have 10 volumes of 10 GiB each, I get 1000 IOPS minimum between\n> them all. But if I have it all on one 100 GiB volume I only get 300 IOPS.\n>\n> I wonder if Amazon is aware of this. I hope they are and think that's\n> just fine. Because I like it.\n>\n> It also is a clear sign to me that I want to use page sizes > 4k for the\n> file system. I have tried on Amazon Linux to use 8k block sizes of the\n> XFS volume, but I cannot mount those, since the Linux says it can\n> currently only deal with 4k blocks. This is another reason I consider\n> switching the database server(s) to FreeBSD. OTOH, who knows may be\n> this 4k is a limit of the AWS EBS infrastructure. After all, if I am\n> scraping the 300 or 1000 IOPS limit already and if I can suddenly\n> upgrade my block sizes per IO, I double my IO throughput.\n>\n> regards,\n> -Gunther\n>\n>\n>\n\nYou do have a finite amount of bandwidth per-instance. On c5.xlarge, it is 3500 Mbit/sec, no matter how many iops you buy. Keep an eye on yur overall EBS bandwidth utilization.On Sun, Mar 17, 2019 at 11:42 AM Gunther <[email protected]> wrote:On 3/14/2019 11:11, Jeremy Schneider wrote:\n> On 3/14/19 07:53, Gunther wrote:\n>> 2. build a low level \"spreading\" scheme which is to take the partial\n>> files 4653828 and 4653828.1, .2, _fsm, etc. 
and move each to another\n>> device and then symlink it back to that directory (I come back to this!)\n> ...\n>> To 2. I find that it would be a nice feature of PostgreSQL if we could\n>> just use symlinks and a symlink rule, for example, when PostgreSQL finds\n>> that 4653828 is in fact a symlink to /otherdisk/PG/16284/4653828, then\n>> it would\n>>\n>> * by default also create 4653828.1 as a symlink and place the actual\n>> data file on /otherdisk/PG/16284/4653828.1\n> How about if we could just specify multiple tablespaces for an object,\n> and then PostgreSQL would round-robin new segments across the presently\n> configured tablespaces? This seems like a simple and elegant solution\n> to me.\n\nVery good idea! I agree.\n\nVery important also would be to take out the existing patch someone had \ncontributed to allow toast tables to be assigned to different tablespaces.\n\n>> 4. maybe I can configure in AWS EBS to reserve more IOPS -- but why\n>> would I pay for more IOPS if my cost is by volume size? I can just\n>> make another volume? or does AWS play a similar trick on us with\n>> IOPS being limited on some \"credit\" system???\n> Not credits, but if you're using gp2 volumes then pay close attention to\n> how burst balance works. A single large volume is the same price as two\n> striped volumes at half size -- but the striped volumes will have double\n> the burst speed and take twice as long to refill the burst balance.\n\nYes, I learned that too. It seems a very interesting \"bug\" of the Amazon \nGP2 IOPS allocation scheme. They say it's like 3 IOPS per GiB, so if I \nhave 100 GiB I get 300 IOPS. But it also says minimum 100. So that means \nif I have 10 volumes of 10 GiB each, I get 1000 IOPS minimum between \nthem all. But if I have it all on one 100 GiB volume I only get 300 IOPS.\n\nI wonder if Amazon is aware of this. I hope they are and think that's \njust fine. Because I like it.\n\nIt also is a clear sign to me that I want to use page sizes > 4k for the \nfile system. I have tried on Amazon Linux to use 8k block sizes of the \nXFS volume, but I cannot mount those, since the Linux says it can \ncurrently only deal with 4k blocks. This is another reason I consider \nswitching the database server(s) to FreeBSD. OTOH, who knows may be \nthis 4k is a limit of the AWS EBS infrastructure. After all, if I am \nscraping the 300 or 1000 IOPS limit already and if I can suddenly \nupgrade my block sizes per IO, I double my IO throughput.\n\nregards,\n-Gunther",
"msg_date": "Tue, 19 Mar 2019 08:18:10 -0700",
"msg_from": "Sam Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributing data over \"spindles\" even on AWS EBS, (followup to\n the work queue saga)"
}
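A quick arithmetic check of the gp2 behaviour described above — a minimal sketch with illustrative volume sizes, using the 3 IOPS per GiB / 100-IOPS-per-volume floor quoted in the thread:

  -- gp2 baseline IOPS is max(100, 3 * size_in_GiB) per volume (sizes here are hypothetical)
  SELECT greatest(100, 3 * 10) * 10 AS ten_10gib_volumes,   -- 1000 IOPS combined
         greatest(100, 3 * 100)     AS one_100gib_volume;   -- 300 IOPS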
] |
[
{
"msg_contents": "Hello,\n\ni’m currently working on a high Performance Database and want to make sure that whenever there are slow queries during regular operations i’ve got all Information about the query in my logs. So auto_explain come to mind, but the documentation explicitly states that it Comes at a cost. My Question is, how big is the latency added by auto_explain in percentage or ms ?\n\nBest regards,\n\nstephan\n\n\n\n\n\n\n\n\n\nHello,\n \ni’m currently working on a high Performance Database and want to make sure that whenever there are slow queries during regular operations i’ve got all Information about the query in my logs. So auto_explain come to mind, but the documentation\n explicitly states that it Comes at a cost. My Question is, how big is the latency added by auto_explain in percentage or ms ?\n \nBest regards,\n \nstephan",
"msg_date": "Thu, 14 Mar 2019 07:29:17 +0000",
"msg_from": "Stephan Schmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "impact of auto explain on overall performance"
},
{
"msg_contents": "On Thu, Mar 14, 2019 at 07:29:17AM +0000, Stephan Schmidt wrote:\n> i’m currently working on a high Performance Database and want to make sure that whenever there are slow queries during regular operations i’ve got all Information about the query in my logs. So auto_explain come to mind, but the documentation explicitly states that it Comes at a cost. My Question is, how big is the latency added by auto_explain in percentage or ms ?\n\nhttps://www.postgresql.org/docs/current/auto-explain.html\n|log_analyze\n...\n|When this parameter is on, per-plan-node timing occurs for all statements executed, whether or not they run long enough to actually get logged. This can have an extremely negative impact on performance. Turning off auto_explain.log_timing ameliorates the performance cost, at the price of obtaining less information.\n\n|auto_explain.log_timing (boolean)\n|auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it's equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are needed. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is on by default. Only superusers can change this setting.\n\nI believe the cost actually varies significantly with the type of plan \"node\",\nwith \"nested loops\" incurring much higher overhead.\n\nI think you could compare using explain(analyze) vs explain(analyze,timing\noff). While you're at it, compare without explain at all.\n\nI suspect the overhead is inconsequential if you set log_timing=off and set\nlog_min_duration such that only the slowest queries are logged.\n\nThen, you can manually run \"explain (analyze,costs on)\" on any problematic\nqueries to avoid interfering with production clients.\n\nJustin\n\n",
"msg_date": "Thu, 14 Mar 2019 03:23:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: impact of auto explain on overall performance"
},
{
"msg_contents": "On 3/14/19 9:23 AM, Justin Pryzby wrote:\n> On Thu, Mar 14, 2019 at 07:29:17AM +0000, Stephan Schmidt wrote:\n>> i’m currently working on a high Performance Database and want to make sure that whenever there are slow queries during regular operations i’ve got all Information about the query in my logs. So auto_explain come to mind, but the documentation explicitly states that it Comes at a cost. My Question is, how big is the latency added by auto_explain in percentage or ms ?\n> \n> https://www.postgresql.org/docs/current/auto-explain.html\n> |log_analyze\n> ...\n> |When this parameter is on, per-plan-node timing occurs for all statements executed, whether or not they run long enough to actually get logged. This can have an extremely negative impact on performance. Turning off auto_explain.log_timing ameliorates the performance cost, at the price of obtaining less information.\n> \n> |auto_explain.log_timing (boolean)\n> |auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it's equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are needed. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is on by default. Only superusers can change this setting.\n> \n> I believe the cost actually varies significantly with the type of plan \"node\",\n> with \"nested loops\" incurring much higher overhead.\n> \n> I think you could compare using explain(analyze) vs explain(analyze,timing\n> off). While you're at it, compare without explain at all.\n> \n> I suspect the overhead is inconsequential if you set log_timing=off and set\n> log_min_duration such that only the slowest queries are logged.\n> \n> Then, you can manually run \"explain (analyze,costs on)\" on any problematic\n> queries to avoid interfering with production clients.\n> \n> Justin\n> \n\nYou should also consider auto_explain.sample_rate: \nauto_explain.sample_rate causes auto_explain to only explain a fraction \nof the statements in each session. The default is 1, meaning explain all \nthe queries. In case of nested statements, either all will be explained \nor none. Only superusers can change this setting.\n\nThis option is available since 9.6\n\nRegards\n\n",
"msg_date": "Thu, 14 Mar 2019 10:08:39 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: impact of auto explain on overall performance"
},
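For reference, a minimal postgresql.conf sketch combining the auto_explain settings discussed in this thread; the threshold and sample rate are illustrative values, not recommendations:

  shared_preload_libraries = 'auto_explain'
  auto_explain.log_min_duration = '250ms'   -- only log plans of statements slower than this
  auto_explain.log_analyze = on             -- collect per-node row counts
  auto_explain.log_timing = off             -- skip the per-node clock reads
  auto_explain.sample_rate = 0.1            -- explain only a fraction of statements (9.6+)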
{
"msg_contents": "On Thu, Mar 14, 2019 at 3:29 AM Stephan Schmidt <[email protected]> wrote:\n\n> Hello,\n>\n>\n>\n> i’m currently working on a high Performance Database and want to make sure\n> that whenever there are slow queries during regular operations i’ve got all\n> Information about the query in my logs. So auto_explain come to mind, but\n> the documentation explicitly states that it Comes at a cost. My Question\n> is, how big is the latency added by auto_explain in percentage or ms ?\n>\n\nYou will have to measure it yourself and see. It depends on your hardware,\nOS, and OS version, and PostgreSQL version. And the nature of your\nqueries. If you have auto_explain.log_timing=on, then I find that large\nsorts are the worst impacted. So if you have a lot of those, you should be\ncareful.\n\nOn older kernels, I would run with auto_explain.log_timing=off. On newer\nkernels where you can read the clock from user-space, I run with\nauto_explain.log_timing=on. I find the slowdown noticeable with careful\ninvestigation (around 3%, last time I carefully investigated it), but\nusually well worth paying to have actual data to work with when I find slow\nqueries in the log. I made a special role with auto_explain disabled for\nuse with a few reporting queries with large sorts, both to circumvent the\noverhead and to avoid spamming the log with slow queries I already know\nabout.\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, Mar 14, 2019 at 3:29 AM Stephan Schmidt <[email protected]> wrote:\n\n\nHello,\n \ni’m currently working on a high Performance Database and want to make sure that whenever there are slow queries during regular operations i’ve got all Information about the query in my logs. So auto_explain come to mind, but the documentation\n explicitly states that it Comes at a cost. My Question is, how big is the latency added by auto_explain in percentage or ms ?You will have to measure it yourself and see. It depends on your hardware, OS, and OS version, and PostgreSQL version. And the nature of your queries. If you have auto_explain.log_timing=on, then I find that large sorts are the worst impacted. So if you have a lot of those, you should be careful.On older kernels, I would run with auto_explain.log_timing=off. On newer kernels where you can read the clock from user-space, I run with auto_explain.log_timing=on. I find the slowdown noticeable with careful investigation (around 3%, last time I carefully investigated it), but usually well worth paying to have actual data to work with when I find slow queries in the log. I made a special role with auto_explain disabled for use with a few reporting queries with large sorts, both to circumvent the overhead and to avoid spamming the log with slow queries I already know about.Cheers,Jeff",
"msg_date": "Thu, 14 Mar 2019 13:50:07 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: impact of auto explain on overall performance"
},
{
"msg_contents": "On 3/14/19 00:29, Stephan Schmidt wrote:\n> i’m currently working on a high Performance Database and want to make\n> sure that whenever there are slow queries during regular operations i’ve\n> got all Information about the query in my logs. So auto_explain come to\n> mind, but the documentation explicitly states that it Comes at a cost.\n> My Question is, how big is the latency added by auto_explain in\n> percentage or ms ?\n\nOne thought - what if the problem query is a 4ms query that just went to\n6ms but it's executed millions of times per second? That would create a\n150% increase to the load on the system.\n\nThe approach I've had the most success with is to combine active session\nsampling (from pg_stat_activity) with pg_stat_statements (ideally with\nperiodic snapshots) to identify problematic SQL statements, then use\nexplain analyze after you've identified them.\n\nThere are a handful of extensions on the internet that can do active\nsession sampling for you, and I've seen a few scripts that can be put\ninto a scheduler to capture snapshots of stats tables.\n\nMaybe something to consider in addition to the auto_explain stuff.\n\n-Jeremy\n\n-- \nhttp://about.me/jeremy_schneider\n\n",
"msg_date": "Thu, 14 Mar 2019 12:58:18 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: impact of auto explain on overall performance"
}
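A rough sketch of the pg_stat_statements side of that approach, assuming the extension is installed (column names as of PostgreSQL 11; some were renamed in later releases):

  SELECT calls,
         round(total_time::numeric, 1) AS total_ms,
         round(mean_time::numeric, 1)  AS mean_ms,
         left(query, 60)               AS query
    FROM pg_stat_statements
   ORDER BY total_time DESC
   LIMIT 20;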
] |
[
{
"msg_contents": "Hi all,\n\nFacing issue in using special characters. We are trying to insert records to a remote Postgres Server and our application not able to perform this because of errors.\nIt seems that issue is because of the special characters that has been used in one of the field of a row.\n\nRegards\nTarkeshwar\n\n\n\n\n\n\n\n\n\nHi all,\n \nFacing issue in using special characters. We are trying to insert records to a remote Postgres Server and our application not able to perform this because of errors.\nIt seems that issue is because of the special characters that has been used in one of the field of a row.\n \nRegards\nTarkeshwar",
"msg_date": "Fri, 15 Mar 2019 05:19:48 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Facing issue in using special characters"
},
{
"msg_contents": "On Thursday, March 14, 2019, M Tarkeshwar Rao <[email protected]>\nwrote:\n>\n> Facing issue in using special characters. We are trying to insert records\n> to a remote Postgres Server and our application not able to perform this\n> because of errors.\n>\n> It seems that issue is because of the special characters that has been\n> used in one of the field of a row.\n>\n\nEmailing -general ONLY is both sufficient and polite. Providing more\ndetail, and ideally an example, is necessary.\n\nDavid J.\n\nOn Thursday, March 14, 2019, M Tarkeshwar Rao <[email protected]> wrote:\nFacing issue in using special characters. We are trying to insert records to a remote Postgres Server and our application not able to perform this because of errors.\nIt seems that issue is because of the special characters that has been used in one of the field of a row.Emailing -general ONLY is both sufficient and polite. Providing more detail, and ideally an example, is necessary.David J.",
"msg_date": "Thu, 14 Mar 2019 23:33:52 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "This is not an issue for \"hackers\" nor \"performance\" in fact even for \n\"general\" it isn't really an issue.\n\n\"Special characters\" is actually nonsense.\n\nWhen people complain about \"special characters\" they haven't thought \nthings through.\n\nIf you are unwilling to think things through and go step by step to make \nsure you know what you are doing, then you will not get it and really \nnobody can help you.\n\nIn my professional experience, people who complain about \"special \ncharacters\" need to be cut loose or be given a chance (if they are \nestablished employees who carry some weight). If a contractor complains \nabout \"special characters\" they need to be fired.\n\nUnderstand charsets -- character set, code point, and encoding. Then \nunderstand how encoding and string literals and \"escape sequences\" in \nstring literals might work.\n\nKnow that UNICODE today is the one standard, and there is no more need \nto do code table switch. There is nothing special about a Hebrew alef or \na greek lower case alpha or a latin A. Nor a hyphen and en-dash or an \nem-dash. All these characters are in the UNICODE. Yes, there are some \nJapanese who claim that they don't like that their Chinese character \nversions are put together with simplified reform Chinese font. But \nthat's a font issue, not a character code issue.\n\n7 bit ASCII is the first page of UNICODE, even in the UTF-8 encoding.\n\nISO Latin 1, or the Windoze 123 whatever special table of ISO Latin 1 \nhas the same code points as UNICODE pages 0 and 1, but not compatible \nwith UTF-8 coding because of the way UTF-8 uses the 8th bit.\n\nBut none of this is likely your problem.\n\nYour problem is about string literals in SQL for examples. About the \nconfiguration of your database (I always use initdb with --locale C and \n--encoding UTF-8). Use UTF-8 in the database. Then all your issues are \nabout string literals in SQL and in JAVA and JSON and XML or whatever \nyou are using.\n\nYou have to do the right thing. If you produce any representation, \nwhether that is XML or JSON or SQL or URL query parameters, or a CSV \nfile, or anything at all, you need to escape your string values properly.\n\nThis question with no detail didn't deserve such a thorough answer, but \nit's my soap box. I do not accept people complaining about \"special \ncharacters\". My own people get that same sermon from me when they make \nthat mistake.\n\n-Gunther\n\nOn 3/15/2019 1:19, M Tarkeshwar Rao wrote:\n>\n> Hi all,\n>\n> Facing issue in using special characters. We are trying to insert \n> records to a remote Postgres Server and our application not able to \n> perform this because of errors.\n>\n> It seems that issue is because of the special characters that has been \n> used in one of the field of a row.\n>\n> Regards\n>\n> Tarkeshwar\n>\n\n\n\n\n\n\nThis is not an issue for \"hackers\" nor \"performance\" in fact even\n for \"general\" it isn't really an issue.\n\"Special characters\" is actually nonsense.\nWhen people complain about \"special characters\" they haven't\n thought things through.\nIf you are unwilling to think things through and go step by step\n to make sure you know what you are doing, then you will not get it\n and really nobody can help you.\nIn my professional experience, people who complain about \"special\n characters\" need to be cut loose or be given a chance (if they are\n established employees who carry some weight). 
If a contractor\n complains about \"special characters\" they need to be fired.\nUnderstand charsets -- character set, code point, and encoding.\n Then understand how encoding and string literals and \"escape\n sequences\" in string literals might work.\n\nKnow that UNICODE today is the one standard, and there is no more\n need to do code table switch. There is nothing special about a\n Hebrew alef or a greek lower case alpha or a latin A. Nor a hyphen\n and en-dash or an em-dash. All these characters are in the\n UNICODE. Yes, there are some Japanese who claim that they don't\n like that their Chinese character versions are put together with\n simplified reform Chinese font. But that's a font issue, not a\n character code issue. \n\n7 bit ASCII is the first page of UNICODE, even in the UTF-8\n encoding.\n ISO Latin 1, or the Windoze 123 whatever special table of ISO\n Latin 1 has the same code points as UNICODE pages 0 and 1, but not\n compatible with UTF-8 coding because of the way UTF-8 uses the 8th\n bit.\nBut none of this is likely your problem.\nYour problem is about string literals in SQL for examples. About\n the configuration of your database (I always use initdb with\n --locale C and --encoding UTF-8). Use UTF-8 in the database. Then\n all your issues are about string literals in SQL and in JAVA and\n JSON and XML or whatever you are using. \n\nYou have to do the right thing. If you produce any\n representation, whether that is XML or JSON or SQL or URL query\n parameters, or a CSV file, or anything at all, you need to escape\n your string values properly. \n\nThis question with no detail didn't deserve such a thorough\n answer, but it's my soap box. I do not accept people complaining\n about \"special characters\". My own people get that same sermon\n from me when they make that mistake.\n\n-Gunther\n\nOn 3/15/2019 1:19, M Tarkeshwar Rao\n wrote:\n\n\n\n\n\n\nHi all,\n�\nFacing issue in using special characters.\n We are trying to insert records to a remote Postgres Server\n and our application not able to perform this because of\n errors.\nIt seems that issue is because of the\n special characters that has been used in one of the field of a\n row.\n�\nRegards\nTarkeshwar",
"msg_date": "Fri, 15 Mar 2019 11:59:48 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "On 3/15/19 11:59 AM, Gunther wrote:\n> This is not an issue for \"hackers\" nor \"performance\" in fact even for\n> \"general\" it isn't really an issue.\n\nAs long as it's already been posted, may as well make it something\nhelpful to find in the archive.\n\n> Understand charsets -- character set, code point, and encoding. Then\n> understand how encoding and string literals and \"escape sequences\" in\n> string literals might work.\n\nGood advice for sure.\n\n> Know that UNICODE today is the one standard, and there is no more need\n\nI wasn't sure from the question whether the original poster was in\na position to choose the encoding of the database. Lots of things are\neasier if it can be set to UTF-8 these days, but perhaps it's a legacy\nsituation.\n\nMaybe a good start would be to go do\n\n SHOW server_encoding;\n SHOW client_encoding;\n\nand then hit the internet and look up what that encoding (or those\nencodings, if different) can and can't represent, and go from there.\n\nIt's worth knowing that, when the server encoding isn't UTF-8,\nPostgreSQL will have the obvious limitations entailed by that,\nbut also some non-obvious ones that may be surprising, e.g. [1].\n\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmobUp8Q-wcjaKvV%3DsbDcziJoUUvBCB8m%2B_xhgOV4DjiA1A%40mail.gmail.com\n\n",
"msg_date": "Fri, 15 Mar 2019 15:26:50 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "Many of us have faced character encoding issues because we are not in control of our input sources and made the common assumption that UTF-8 covers everything.\r\n\r\nIn my lab, as an example, some of our social media posts have included ZawGyi Burmese character sets rather than Unicode Burmese. (Because Myanmar developed technology In a closed to the world environment, they made up their own non-standard character set which is very common still in Mobile phones.). We had fully tested the app with Unicode Burmese, but honestly didn’t know ZawGyi was even a thing that we would see in our dataset. We’ve also had problems with non-Unicode word separators in Arabic.\r\n\r\nWhat we’ve found to be helpful is to view the troubling code in a hex editor and determine what non-standard characters may be causing the problem.\r\n\r\nIt may be some data conversion is necessary before insertion. But the first step is knowing WHICH characters are causing the issue.\r\n\r\n\n\n\n\n\n\r\nMany of us have faced character encoding issues because we are not in control of our input sources and made the common assumption that UTF-8 covers everything.\r\n\n\nIn my lab, as an example, some of our social media posts have included ZawGyi Burmese character sets rather than Unicode Burmese. (Because Myanmar developed technology In a closed to the world environment, they made up their own non-standard character\r\n set which is very common still in Mobile phones.). We had fully tested the app with Unicode Burmese, but honestly didn’t know ZawGyi was even a thing that we would see in our dataset. We’ve also had problems with non-Unicode word separators in Arabic.\n\n\nWhat we’ve found to be helpful is to view the troubling code in a hex editor and determine what non-standard characters may be causing the problem.\n\n\nIt may be some data conversion is necessary before insertion. But the first step is knowing WHICH characters are causing the issue.",
"msg_date": "Sun, 17 Mar 2019 15:01:40 +0000",
"msg_from": "\"Warner, Gary, Jr\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "On 2019-03-17 15:01:40 +0000, Warner, Gary, Jr wrote:\n> Many of us have faced character encoding issues because we are not in control\n> of our input sources and made the common assumption that UTF-8 covers\n> everything.\n\nUTF-8 covers \"everything\" in the sense that there is a round-trip from\neach character in every commonly-used charset/encoding to Unicode and\nback.\n\nThe actual code may of course be different. For example, the € sign is\n0xA4 in iso-8859-15, but U+20AC in Unicode. So you need an\nencoding/decoding step.\n\nAnd \"commonly-used\" means just that. Unicode covers a lot of character\nsets, but it can't cover every character set ever invented (I invented\nmy own character sets when I was sixteen. Nobody except me ever used\nthem and they have long succumbed to bit rot).\n\n> In my lab, as an example, some of our social media posts have included ZawGyi\n> Burmese character sets rather than Unicode Burmese. (Because Myanmar developed\n> technology In a closed to the world environment, they made up their own\n> non-standard character set which is very common still in Mobile phones.).\n\nI'd be surprised if there was a character set which is \"very common in\nMobile phones\", even in a relatively poor country like Myanmar. Does\nZawGyi actually include characters which aren't in Unicode are are they\njust encoded differently?\n\n hp\n\n-- \n _ | Peter J. Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | [email protected] | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>",
"msg_date": "Mon, 18 Mar 2019 22:19:23 +0100",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
}
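Peter's € example can be checked from psql — a minimal sketch, assuming a UTF-8 client encoding so the literal below is valid:

  -- the euro sign is 0xA4 in ISO-8859-15 (LATIN9) but U+20AC in Unicode
  SELECT encode(convert_to('€', 'LATIN9'), 'hex');   -- a4
  SELECT convert_from('\xa4'::bytea, 'LATIN9');      -- €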
] |
[
{
"msg_contents": "Hi PostgreSQL Community.\n\nI tried to rewrite some plv8 stored procedures, which process in bulk JSONB\ndocuments, to PL/pgSQL.\nA SP usually has to delete/update/add multiple key with the same document\nand do it for multiple documents (~40K) in loop.\n\nWhen updating a single key PL/pgSQL wins against plv8, but when I need to\nupdate multiple keys with *jsonb_set*, timing increase linearly with number\nof *jsonb_set*s and takes longer than similar SP in PLV8.\nBelow are test-cases I've used.\n\n*QUESTION:* Is it expected behavior or I do something wrong or there are\nsome better approaches or we can treat datum as object?\n\ntest case:\nPG 9.6, CentOS 7\n\nCREATE TABLE public.configurationj2b\n(\n id integer NOT NULL PRIMARY KEY,\n config jsonb NOT NULL\n);\nEach jsonb column has 3 top keys, and one of top-key ('data') has another\n700-900 key-value pairs e.g. {\"OID1\":\"Value1\"}\n\nPL/pgSQL SP\nCREATE OR REPLACE FUNCTION public.process_jsonb()\n RETURNS void AS\n$BODY$\nDECLARE\n r integer;\n cfg jsonb;\nBEGIN\nRAISE NOTICE 'start';\n FOR r IN\n SELECT id as device_id FROM devices\n LOOP\n select config into cfg from configurationj2b c where c.id = r;\n--select jsonb one by one\n\n -- MULTIPLE KEYs, Conditional Busiines Logic (BL) updates\n* cfg := jsonb_set(cfg, '{data,OID1}', '\"pl/pgsql1\"');*\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n* IF cfg@>'{\"data\" : { \"OID1\":\"pl/pgsql1\"} }' THEN cfg :=\njsonb_set(cfg, '{data,OID2}', '\"pl/pgsql2\"'); END IF; IF\ncfg@>'{\"data\" : { \"OID2\":\"pl/pgsql2\"} }' THEN cfg := jsonb_set(cfg,\n'{data,OID3}', '\"pl/pgsql3\"'); END IF; IF cfg@>'{\"data\" : {\n\"OID3\":\"pl/pgsql3\"} }' THEN cfg := jsonb_set(cfg, '{data,OID4}',\n'\"pl/pgsql4\"'); END IF; IF cfg@>'{\"data\" : { \"OID4\":\"pl/pgsql4\"} }'\nTHEN cfg := jsonb_set(cfg, '{data,OID5}', '\"pl/pgsql5\"'); END IF;*\n\n update configurationj2b c set config = cfg where c.id = r;\n\n END LOOP;\n RAISE NOTICE 'end';\n RETURN;\nEND\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\nor in pseudo-code I would have\n\nfor-each child_jsonb do\nbegin\n foreach (key-value in parent_jsonb) do\n begin\n* child_jsonb := jsonb_set(child_jsonb , '{key}', '\"value\"');*\n end\n update *child_jsonb * in db;\nend;\n\nplv8 snippet:\n$BODY$var ids = plv8.execute('select id from devices');\n\nvar CFG_TABLE_NAME = 'configurationj2b';\nvar selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c\nwhere c.id = $1\", ['int'] );\nvar updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1\nwhere id = $2', ['json','int'] )\n\ntry {\n\n for (var i = 0; i < ids.length; i++) {\n var db_cfg = selPlan.execute([ids[i].id]);\n var cfg = db_cfg[0].config;\n var cfg_data = cfg['data'];\n* cfg_data['OID1'] = 'plv8_01';*\n\n\n\n\n\n\n\n\n\n\n\n* if (cfg_data['OID1'] == 'plv8_01') { cfg_data['OID2'] =\n'plv8_02' }; if (cfg_data['OID2'] == 'plv8_02') {\ncfg_data['OID3'] = 'plv8_03' } if (cfg_data['OID3'] ==\n'plv8_03') { cfg_data['OID4'] = 'plv8_04' } if\n(cfg_data['OID4'] == 'plv8_04') { cfg_data['OID5'] =\n'plv8_05' }*\n\n updPlan.execute([cfg, ids[i].id]);\n plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n }\n\n} finally {\n selPlan.free();\n updPlan.free();\n}\n\nreturn;$BODY$\n\nbut for now plv8 has other issues related to resource consumption.\n\nSo could I get similar performance in PL/pgSQL?\n\nHi PostgreSQL Community. 
I tried to rewrite some plv8 stored procedures, which process in bulk JSONB documents, to PL/pgSQL.A SP usually has to delete/update/add multiple key with the same document and do it for multiple documents (~40K) in loop.When updating a single key \nPL/pgSQL wins against plv8, but when I need to update multiple keys with jsonb_set, timing increase linearly with number of jsonb_sets and takes longer than similar SP in PLV8.Below are test-cases I've used.QUESTION: Is it expected behavior or I do something wrong or there are some better approaches or we can treat datum as object?test case: PG 9.6, CentOS 7CREATE TABLE public.configurationj2b( id integer NOT NULL PRIMARY KEY, config jsonb NOT NULL);Each jsonb column has 3 top keys, and one of top-key ('data') has another 700-900 key-value pairs e.g. {\"OID1\":\"Value1\"}PL/pgSQL SP\nCREATE OR REPLACE FUNCTION public.process_jsonb() RETURNS void AS$BODY$DECLARE r integer; cfg jsonb;BEGINRAISE NOTICE 'start'; FOR r IN SELECT id as device_id FROM devices LOOP select config into cfg from configurationj2b c where c.id = r; --select jsonb one by one -- MULTIPLE KEYs, Conditional Busiines Logic (BL) updates cfg := jsonb_set(cfg, '{data,OID1}', '\"pl/pgsql1\"'); IF cfg@>'{\"data\" : { \"OID1\":\"pl/pgsql1\"} }' THEN cfg := jsonb_set(cfg, '{data,OID2}', '\"pl/pgsql2\"'); END IF; IF cfg@>'{\"data\" : { \"OID2\":\"pl/pgsql2\"} }' THEN cfg := jsonb_set(cfg, '{data,OID3}', '\"pl/pgsql3\"'); END IF; IF cfg@>'{\"data\" : { \"OID3\":\"pl/pgsql3\"} }' THEN cfg := jsonb_set(cfg, '{data,OID4}', '\"pl/pgsql4\"'); END IF; IF cfg@>'{\"data\" : { \"OID4\":\"pl/pgsql4\"} }' THEN cfg := jsonb_set(cfg, '{data,OID5}', '\"pl/pgsql5\"'); END IF; update configurationj2b c set config = cfg where c.id = r; END LOOP; RAISE NOTICE 'end'; RETURN;END$BODY$ LANGUAGE plpgsql VOLATILE COST 100;or in pseudo-code I would havefor-each child_jsonb dobegin foreach (key-value in parent_jsonb) do begin \n\n\n\nchild_jsonb \n\n:= jsonb_set(child_jsonb\n\n, '{key}', '\"value\"'); end update child_jsonb\n\n\n\nin db;\nend;plv8 snippet:\n$BODY$var ids = plv8.execute('select id from devices');var CFG_TABLE_NAME = 'configurationj2b';var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c where c.id = $1\", ['int'] );var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1 where id = $2', ['json','int'] )try { for (var i = 0; i < ids.length; i++) { var db_cfg = selPlan.execute([ids[i].id]); var cfg = db_cfg[0].config; var cfg_data = cfg['data']; cfg_data['OID1'] = 'plv8_01'; if (cfg_data['OID1'] == 'plv8_01') { cfg_data['OID2'] = 'plv8_02' }; if (cfg_data['OID2'] == 'plv8_02') { cfg_data['OID3'] = 'plv8_03' } if (cfg_data['OID3'] == 'plv8_03') { cfg_data['OID4'] = 'plv8_04' } if (cfg_data['OID4'] == 'plv8_04') { cfg_data['OID5'] = 'plv8_05' } updPlan.execute([cfg, ids[i].id]); plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id); }} finally { selPlan.free(); updPlan.free();}return;$BODY$ but for now plv8 has other issues related to resource consumption.So could I get similar performance in PL/pgSQL?",
"msg_date": "Fri, 15 Mar 2019 18:02:02 +0200",
"msg_from": "Alexandru Lazarev <[email protected]>",
"msg_from_op": true,
"msg_subject": "jsonb_set performance degradation / multiple jsonb_set on multiple\n documents"
},
{
"msg_contents": "I don't know the details of jsonb_set, Perhaps the '||' operator will\nperform better for you, it will overwrite existing keys, so you can build\nyour new values in a new object, and then || it to the original.\n\npostgres=# select '{\"a\": 1, \"b\": 2, \"c\": 3}'::jsonb || '{\"b\": 4, \"c\":\n5}'::jsonb;\n ?column?\n--------------------------\n {\"a\": 1, \"b\": 4, \"c\": 5}\n(1 row)\n\n-Michel\n\n\n\nOn Fri, Mar 15, 2019 at 9:02 AM Alexandru Lazarev <\[email protected]> wrote:\n\n> Hi PostgreSQL Community.\n>\n> I tried to rewrite some plv8 stored procedures, which process in bulk\n> JSONB documents, to PL/pgSQL.\n> A SP usually has to delete/update/add multiple key with the same document\n> and do it for multiple documents (~40K) in loop.\n>\n> When updating a single key PL/pgSQL wins against plv8, but when I need to\n> update multiple keys with *jsonb_set*, timing increase linearly with\n> number of *jsonb_set*s and takes longer than similar SP in PLV8.\n> Below are test-cases I've used.\n>\n> *QUESTION:* Is it expected behavior or I do something wrong or there are\n> some better approaches or we can treat datum as object?\n>\n> test case:\n> PG 9.6, CentOS 7\n>\n> CREATE TABLE public.configurationj2b\n> (\n> id integer NOT NULL PRIMARY KEY,\n> config jsonb NOT NULL\n> );\n> Each jsonb column has 3 top keys, and one of top-key ('data') has another\n> 700-900 key-value pairs e.g. {\"OID1\":\"Value1\"}\n>\n> PL/pgSQL SP\n> CREATE OR REPLACE FUNCTION public.process_jsonb()\n> RETURNS void AS\n> $BODY$\n> DECLARE\n> r integer;\n> cfg jsonb;\n> BEGIN\n> RAISE NOTICE 'start';\n> FOR r IN\n> SELECT id as device_id FROM devices\n> LOOP\n> select config into cfg from configurationj2b c where c.id = r;\n> --select jsonb one by one\n>\n> -- MULTIPLE KEYs, Conditional Busiines Logic (BL) updates\n> * cfg := jsonb_set(cfg, '{data,OID1}', '\"pl/pgsql1\"');*\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> * IF cfg@>'{\"data\" : { \"OID1\":\"pl/pgsql1\"} }' THEN cfg :=\n> jsonb_set(cfg, '{data,OID2}', '\"pl/pgsql2\"'); END IF; IF\n> cfg@>'{\"data\" : { \"OID2\":\"pl/pgsql2\"} }' THEN cfg := jsonb_set(cfg,\n> '{data,OID3}', '\"pl/pgsql3\"'); END IF; IF cfg@>'{\"data\" : {\n> \"OID3\":\"pl/pgsql3\"} }' THEN cfg := jsonb_set(cfg, '{data,OID4}',\n> '\"pl/pgsql4\"'); END IF; IF cfg@>'{\"data\" : { \"OID4\":\"pl/pgsql4\"} }'\n> THEN cfg := jsonb_set(cfg, '{data,OID5}', '\"pl/pgsql5\"'); END IF;*\n>\n> update configurationj2b c set config = cfg where c.id = r;\n>\n> END LOOP;\n> RAISE NOTICE 'end';\n> RETURN;\n> END\n> $BODY$\n> LANGUAGE plpgsql VOLATILE\n> COST 100;\n>\n> or in pseudo-code I would have\n>\n> for-each child_jsonb do\n> begin\n> foreach (key-value in parent_jsonb) do\n> begin\n> * child_jsonb := jsonb_set(child_jsonb , '{key}', '\"value\"');*\n> end\n> update *child_jsonb * in db;\n> end;\n>\n> plv8 snippet:\n> $BODY$var ids = plv8.execute('select id from devices');\n>\n> var CFG_TABLE_NAME = 'configurationj2b';\n> var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c\n> where c.id = $1\", ['int'] );\n> var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1\n> where id = $2', ['json','int'] )\n>\n> try {\n>\n> for (var i = 0; i < ids.length; i++) {\n> var db_cfg = selPlan.execute([ids[i].id]);\n> var cfg = db_cfg[0].config;\n> var cfg_data = cfg['data'];\n> * cfg_data['OID1'] = 'plv8_01';*\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> * if (cfg_data['OID1'] == 'plv8_01') { cfg_data['OID2']\n> = 'plv8_02' }; if (cfg_data['OID2'] == 'plv8_02') 
{\n> cfg_data['OID3'] = 'plv8_03' } if (cfg_data['OID3'] ==\n> 'plv8_03') { cfg_data['OID4'] = 'plv8_04' } if\n> (cfg_data['OID4'] == 'plv8_04') { cfg_data['OID5'] =\n> 'plv8_05' }*\n>\n> updPlan.execute([cfg, ids[i].id]);\n> plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id);\n> }\n>\n> } finally {\n> selPlan.free();\n> updPlan.free();\n> }\n>\n> return;$BODY$\n>\n> but for now plv8 has other issues related to resource consumption.\n>\n> So could I get similar performance in PL/pgSQL?\n>\n\nI don't know the details of jsonb_set, Perhaps the '||' operator will perform better for you, it will overwrite existing keys, so you can build your new values in a new object, and then || it to the original.postgres=# select '{\"a\": 1, \"b\": 2, \"c\": 3}'::jsonb || '{\"b\": 4, \"c\": 5}'::jsonb; ?column? -------------------------- {\"a\": 1, \"b\": 4, \"c\": 5}(1 row)-MichelOn Fri, Mar 15, 2019 at 9:02 AM Alexandru Lazarev <[email protected]> wrote:Hi PostgreSQL Community. I tried to rewrite some plv8 stored procedures, which process in bulk JSONB documents, to PL/pgSQL.A SP usually has to delete/update/add multiple key with the same document and do it for multiple documents (~40K) in loop.When updating a single key \nPL/pgSQL wins against plv8, but when I need to update multiple keys with jsonb_set, timing increase linearly with number of jsonb_sets and takes longer than similar SP in PLV8.Below are test-cases I've used.QUESTION: Is it expected behavior or I do something wrong or there are some better approaches or we can treat datum as object?test case: PG 9.6, CentOS 7CREATE TABLE public.configurationj2b( id integer NOT NULL PRIMARY KEY, config jsonb NOT NULL);Each jsonb column has 3 top keys, and one of top-key ('data') has another 700-900 key-value pairs e.g. 
{\"OID1\":\"Value1\"}PL/pgSQL SP\nCREATE OR REPLACE FUNCTION public.process_jsonb() RETURNS void AS$BODY$DECLARE r integer; cfg jsonb;BEGINRAISE NOTICE 'start'; FOR r IN SELECT id as device_id FROM devices LOOP select config into cfg from configurationj2b c where c.id = r; --select jsonb one by one -- MULTIPLE KEYs, Conditional Busiines Logic (BL) updates cfg := jsonb_set(cfg, '{data,OID1}', '\"pl/pgsql1\"'); IF cfg@>'{\"data\" : { \"OID1\":\"pl/pgsql1\"} }' THEN cfg := jsonb_set(cfg, '{data,OID2}', '\"pl/pgsql2\"'); END IF; IF cfg@>'{\"data\" : { \"OID2\":\"pl/pgsql2\"} }' THEN cfg := jsonb_set(cfg, '{data,OID3}', '\"pl/pgsql3\"'); END IF; IF cfg@>'{\"data\" : { \"OID3\":\"pl/pgsql3\"} }' THEN cfg := jsonb_set(cfg, '{data,OID4}', '\"pl/pgsql4\"'); END IF; IF cfg@>'{\"data\" : { \"OID4\":\"pl/pgsql4\"} }' THEN cfg := jsonb_set(cfg, '{data,OID5}', '\"pl/pgsql5\"'); END IF; update configurationj2b c set config = cfg where c.id = r; END LOOP; RAISE NOTICE 'end'; RETURN;END$BODY$ LANGUAGE plpgsql VOLATILE COST 100;or in pseudo-code I would havefor-each child_jsonb dobegin foreach (key-value in parent_jsonb) do begin \n\n\n\nchild_jsonb \n\n:= jsonb_set(child_jsonb\n\n, '{key}', '\"value\"'); end update child_jsonb\n\n\n\nin db;\nend;plv8 snippet:\n$BODY$var ids = plv8.execute('select id from devices');var CFG_TABLE_NAME = 'configurationj2b';var selPlan = plv8.prepare( \"select c.config from \" + CFG_TABLE_NAME + \" c where c.id = $1\", ['int'] );var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1 where id = $2', ['json','int'] )try { for (var i = 0; i < ids.length; i++) { var db_cfg = selPlan.execute([ids[i].id]); var cfg = db_cfg[0].config; var cfg_data = cfg['data']; cfg_data['OID1'] = 'plv8_01'; if (cfg_data['OID1'] == 'plv8_01') { cfg_data['OID2'] = 'plv8_02' }; if (cfg_data['OID2'] == 'plv8_02') { cfg_data['OID3'] = 'plv8_03' } if (cfg_data['OID3'] == 'plv8_03') { cfg_data['OID4'] = 'plv8_04' } if (cfg_data['OID4'] == 'plv8_04') { cfg_data['OID5'] = 'plv8_05' } updPlan.execute([cfg, ids[i].id]); plv8.elog(NOTICE, \"UPDATED = \" + ids[i].id); }} finally { selPlan.free(); updPlan.free();}return;$BODY$ but for now plv8 has other issues related to resource consumption.So could I get similar performance in PL/pgSQL?",
"msg_date": "Fri, 15 Mar 2019 09:35:13 -0700",
"msg_from": "Michel Pelletier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: jsonb_set performance degradation / multiple jsonb_set on\n multiple documents"
}
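To spell out Michel's suggestion against the original procedure — a minimal sketch using the hypothetical OID keys from the test case; note that || merges only top-level keys, so the patch has to be applied at the 'data' level:

  -- build the patch once, then merge it in a single operation
  cfg := jsonb_set(cfg, '{data}',
           (cfg->'data') || '{"OID1":"pl/pgsql1","OID2":"pl/pgsql2","OID3":"pl/pgsql3"}'::jsonb);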
] |
[
{
"msg_contents": "We're buying some new Postgres servers with \n\n 2 x 240GB Intel SSD S4610 (RAID1 : system)\n 4 x 960GB Intel SSD S4610 (RAID10 : db)\n\nWe'll be using Postgres 11 on Debian.\n\nWe aren't sure whether to use MDRaid or a MegaRAID card\nThe MegaRAID 9271-8i with flash cache protection is available from our\nprovider. I think they may also have the 9361-8i which is 12Gb/s.\n\nOur current servers which use the 9261 with SSD don't see any IO load as\nwe are in RAM most of the time and the RAID card seems to flatten out\nany spikes.\n\nWe use MDRaid but we've never used it for our databases before.\n\nAdvice gratefully received.\n\nRory\n \n\n\n",
"msg_date": "Sat, 16 Mar 2019 18:58:55 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "MDRaid or LSI MegaRAID?"
}
] |
[
{
"msg_contents": "Hi,\n\nI would like to overcome an issue which occurs only in case with *order by *\nclause.\n\nDetails:\nI am trying to insert into a temporary table 50 rows from a joined table\nordered by a modification time column which is inserted by the current time\nso it is ordered ascending.\n\nEach table has index on the following columns: PRIMARY KEY(SystemID,\nObjectID, ElementID, ModificationTime)\n\nStatement:\n\n*sqlString := 'INSERT INTO ResultTable (*\n\n*SELECT * FROM \"TABLE\" a LEFT OUTER JOIN \"TABLE_Text\" l1031 ON\na.ModificationTime = l1031.ModificationTime AND a.SystemID = l1031.SystemID\nAND a.ObjectID = l1031.ObjectID AND a.ElementID = l1031.ElementID AND\nl1031.LCID = 1031 LEFT OUTER JOIN ( SELECT * AS CommentNumber FROM\n\"TABLE_Comment\" v1 GROUP BY v1.ModificationTime, v1.SystemID, v1.ObjectID,\nv1.ElementID ) c ON a.ModificationTime = c.ModificationTime AND a.SystemID\n= c.SystemID AND a.ObjectID = c.ObjectID AND a.ElementID = c.ElementID\nWHERE a.ModificationTime BETWEEN $1 AND $2 AND ( a.Enabled = 1 ) ORDER BY\na.ModificationTime DESC LIMIT 50));*\n\n*EXECUTE sqlString USING StartTime,EndTime; *\n\n\nnode typecountsum of times% of query\nHash 1 8.844 ms 10.0 %\nHash Left Join 1 33.715 ms 38.0 %\nInsert 1 0.734 ms 0.8 %\nLimit 1 0.003 ms 0.0 %\nSeq Scan 2 22.735 ms 25.6 %\nSort 1 22.571 ms 25.5 %\nSubquery Scan 1 0.046 ms 0.1 %\n\n\nExecution Plan: https://explain.depesz.com/s/S96g (Obfuscated)\n\n\nIf I remove the order by clause I get the following results:\n\nnode type\n\ncount\n\nsum of times\n\n% of query\n\n*Index Scan*\n\n2\n\n27.632 ms\n\n94.9 %\n\nInsert\n\n1\n\n0.848 ms\n\n2.9 %\n\nLimit\n\n1\n\n0.023 ms\n\n0.1 %\n\nMerge Left Join\n\n1\n\n0.423 ms\n\n1.5 %\n\nResult\n\n1\n\n0.000 ms\n\n0.0 %\n\nSubquery Scan\n\n1\n\n0.186 ms\n\n0.6 %\n\nWhich is pointing me to a problem with the sorting. Is there any way that I\ncould improve the performance with order by clause?\n\nTo make the problem more transparent I ran a long run test where you can\nsee that with order by clause the performance is linearly getting worse:\n\n[image: image.png]\n\n\nPostgresql version: \"PostgreSQL 11.1, compiled by Visual C++ build 1914,\n64-bit\"\n\nIstalled by: With EnterpriseDB One-click installer from EDB's offical site.\n\nPostgresql.conf changes: Used pgtune suggestions:\n# DB Version: 11\n# OS Type: windows\n# DB Type: desktop\n# Total Memory (RAM): 8 GB\n# CPUs num: 4\n# Connections num: 25\n# Data Storage: hdd\nmax_connections = 25\nshared_buffers = 512MB\neffective_cache_size = 2GB\nmaintenance_work_mem = 512MB\ncheckpoint_completion_target = 0.5\nwal_buffers = 16MB\ndefault_statistics_target = 100\nrandom_page_cost = 4\nwork_mem = 8738kB\nmin_wal_size = 100MB\nmax_wal_size = 1GB\nmax_worker_processes = 4\nmax_parallel_workers_per_gather = 2\nmax_parallel_workers = 4\n\nOperating System: Windows 10 x64, Version: 1607\n\nThanks in advance,\nBest Regards,\nTom Nay",
"msg_date": "Wed, 20 Mar 2019 12:05:15 +0100",
"msg_from": "=?UTF-8?B?TWFyYWNza2Egw4Fkw6Ft?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue with order by clause on"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 9:36 AM Maracska Ádám <[email protected]> wrote:\n\n> Hi,\n>\n> I would like to overcome an issue which occurs only in case with *order\n> by *clause.\n>\n> Details:\n> I am trying to insert into a temporary table 50 rows from a joined table\n> ordered by a modification time column which is inserted by the current time\n> so it is ordered ascending.\n>\n> Each table has index on the following columns: PRIMARY KEY(SystemID,\n> ObjectID, ElementID, ModificationTime)\n>\n> Statement:\n>\n> *sqlString := 'INSERT INTO ResultTable (*\n>\n> *SELECT * FROM \"TABLE\" a LEFT OUTER JOIN \"TABLE_Text\" l1031 ON\n> a.ModificationTime = l1031.ModificationTime AND a.SystemID = l1031.SystemID\n> AND a.ObjectID = l1031.ObjectID AND a.ElementID = l1031.ElementID AND\n> l1031.LCID = 1031 LEFT OUTER JOIN ( SELECT * AS CommentNumber FROM\n> \"TABLE_Comment\" v1 GROUP BY v1.ModificationTime, v1.SystemID, v1.ObjectID,\n> v1.ElementID ) c ON a.ModificationTime = c.ModificationTime AND a.SystemID\n> = c.SystemID AND a.ObjectID = c.ObjectID AND a.ElementID = c.ElementID\n> WHERE a.ModificationTime BETWEEN $1 AND $2 AND ( a.Enabled = 1 ) ORDER BY\n> a.ModificationTime DESC LIMIT 50));*\n>\n> *EXECUTE sqlString USING StartTime,EndTime; *\n>\n>\n> node typecountsum of times% of query\n> Hash 1 8.844 ms 10.0 %\n> Hash Left Join 1 33.715 ms 38.0 %\n> Insert 1 0.734 ms 0.8 %\n> Limit 1 0.003 ms 0.0 %\n> Seq Scan 2 22.735 ms 25.6 %\n> Sort 1 22.571 ms 25.5 %\n> Subquery Scan 1 0.046 ms 0.1 %\n>\n>\n> Execution Plan: https://explain.depesz.com/s/S96g (Obfuscated)\n>\n>\n> If I remove the order by clause I get the following results:\n>\n> node type\n>\n> count\n>\n> sum of times\n>\n> % of query\n>\n> *Index Scan*\n>\n> 2\n>\n> 27.632 ms\n>\n> 94.9 %\n>\n> Insert\n>\n> 1\n>\n> 0.848 ms\n>\n> 2.9 %\n>\n> Limit\n>\n> 1\n>\n> 0.023 ms\n>\n> 0.1 %\n>\n> Merge Left Join\n>\n> 1\n>\n> 0.423 ms\n>\n> 1.5 %\n>\n> Result\n>\n> 1\n>\n> 0.000 ms\n>\n> 0.0 %\n>\n> Subquery Scan\n>\n> 1\n>\n> 0.186 ms\n>\n> 0.6 %\n>\n> Which is pointing me to a problem with the sorting. Is there any way that\n> I could improve the performance with order by clause?\n>\n> To make the problem more transparent I ran a long run test where you can\n> see that with order by clause the performance is linearly getting worse:\n>\n> [image: image.png]\n>\n>\n> Postgresql version: \"PostgreSQL 11.1, compiled by Visual C++ build 1914,\n> 64-bit\"\n>\n> Istalled by: With EnterpriseDB One-click installer from EDB's offical\n> site.\n>\n> Postgresql.conf changes: Used pgtune suggestions:\n> # DB Version: 11\n> # OS Type: windows\n> # DB Type: desktop\n> # Total Memory (RAM): 8 GB\n> # CPUs num: 4\n> # Connections num: 25\n> # Data Storage: hdd\n> max_connections = 25\n> shared_buffers = 512MB\n> effective_cache_size = 2GB\n> maintenance_work_mem = 512MB\n> checkpoint_completion_target = 0.5\n> wal_buffers = 16MB\n> default_statistics_target = 100\n> random_page_cost = 4\n> work_mem = 8738kB\n> min_wal_size = 100MB\n> max_wal_size = 1GB\n> max_worker_processes = 4\n> max_parallel_workers_per_gather = 2\n> max_parallel_workers = 4\n>\n> Operating System: Windows 10 x64, Version: 1607\n>\n> Thanks in advance,\n> Best Regards,\n> Tom Nay\n>\n\nThe queries are not equivalent. 
One returns the first 50 rows it finds\nregardless of what qualities they possess, and the other one must fetch all\nrows and then decide which 50 are the most recent.\n\nThey're the difference between:\nFind any 10 people in your city.\nFind the TALLEST 10 people in your city. This will scale poorly in large\ncities.\n\nIf you have an index on ModificationTime, then the query can seek to the\nhighest row matching the between clause, and walk backwards looking for\nrows that match any other criteria, so that will help, because it will\navoid the sort.",
"msg_date": "Wed, 20 Mar 2019 13:34:05 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with order by clause on"
},
{
"msg_contents": "Have you tried creating an sorted index like\n\nCREATE INDEX table_modificationtime_idx ON “TABLE“ USING btree(modificationtime DESC) WHERE (enabled=1)?\n\nBest Regards\nStephan\n\n\nVon: Corey Huinker<mailto:[email protected]>\nGesendet: Mittwoch, 20. März 2019 18:34\nAn: Maracska Ádám<mailto:[email protected]>\nCc: [email protected]<mailto:[email protected]>\nBetreff: Re: Performance issue with order by clause on\n\nOn Wed, Mar 20, 2019 at 9:36 AM Maracska Ádám <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\nI would like to overcome an issue which occurs only in case with order by clause.\n\nDetails:\nI am trying to insert into a temporary table 50 rows from a joined table ordered by a modification time column which is inserted by the current time so it is ordered ascending.\n\nEach table has index on the following columns: PRIMARY KEY(SystemID, ObjectID, ElementID, ModificationTime)\n\nStatement:\n\nsqlString := 'INSERT INTO ResultTable (\nSELECT * FROM \"TABLE\" a LEFT OUTER JOIN \"TABLE_Text\" l1031 ON a.ModificationTime = l1031.ModificationTime AND a.SystemID = l1031.SystemID AND a.ObjectID = l1031.ObjectID AND a.ElementID = l1031.ElementID AND l1031.LCID = 1031 LEFT OUTER JOIN ( SELECT * AS CommentNumber FROM \"TABLE_Comment\" v1 GROUP BY v1.ModificationTime, v1.SystemID, v1.ObjectID, v1.ElementID ) c ON a.ModificationTime = c.ModificationTime AND a.SystemID = c.SystemID AND a.ObjectID = c.ObjectID AND a.ElementID = c.ElementID WHERE a.ModificationTime BETWEEN $1 AND $2 AND ( a.Enabled = 1 ) ORDER BY a.ModificationTime DESC LIMIT 50));\n\nEXECUTE sqlString USING StartTime,EndTime;\n\n\nnode type\n\ncount\n\nsum of times\n\n% of query\n\nHash\n\n1\n\n8.844 ms\n\n10.0 %\n\nHash Left Join\n\n1\n\n33.715 ms\n\n38.0 %\n\nInsert\n\n1\n\n0.734 ms\n\n0.8 %\n\nLimit\n\n1\n\n0.003 ms\n\n0.0 %\n\nSeq Scan\n\n2\n\n22.735 ms\n\n25.6 %\n\nSort\n\n1\n\n22.571 ms\n\n25.5 %\n\nSubquery Scan\n\n1\n\n0.046 ms\n\n0.1 %\n\n\n\n\nExecution Plan: https://explain.depesz.com/s/S96g (Obfuscated)\n\n\nIf I remove the order by clause I get the following results:\nnode type\n\ncount\n\nsum of times\n\n% of query\n\nIndex Scan\n\n2\n\n27.632 ms\n\n94.9 %\n\nInsert\n\n1\n\n0.848 ms\n\n2.9 %\n\nLimit\n\n1\n\n0.023 ms\n\n0.1 %\n\nMerge Left Join\n\n1\n\n0.423 ms\n\n1.5 %\n\nResult\n\n1\n\n0.000 ms\n\n0.0 %\n\nSubquery Scan\n\n1\n\n0.186 ms\n\n0.6 %\n\n\nWhich is pointing me to a problem with the sorting. Is there any way that I could improve the performance with order by clause?\n\nTo make the problem more transparent I ran a long run test where you can see that with order by clause the performance is linearly getting worse:\n\n[image.png]\n\n\nPostgresql version: \"PostgreSQL 11.1, compiled by Visual C++ build 1914, 64-bit\"\n\nIstalled by: With EnterpriseDB One-click installer from EDB's offical site.\n\nPostgresql.conf changes: Used pgtune suggestions:\n# DB Version: 11\n# OS Type: windows\n# DB Type: desktop\n# Total Memory (RAM): 8 GB\n# CPUs num: 4\n# Connections num: 25\n# Data Storage: hdd\nmax_connections = 25\nshared_buffers = 512MB\neffective_cache_size = 2GB\nmaintenance_work_mem = 512MB\ncheckpoint_completion_target = 0.5\nwal_buffers = 16MB\ndefault_statistics_target = 100\nrandom_page_cost = 4\nwork_mem = 8738kB\nmin_wal_size = 100MB\nmax_wal_size = 1GB\nmax_worker_processes = 4\nmax_parallel_workers_per_gather = 2\nmax_parallel_workers = 4\n\nOperating System: Windows 10 x64, Version: 1607\n\nThanks in advance,\nBest Regards,\nTom Nay\n\nThe queries are not equivalent. 
One returns the first 50 rows it finds regardless of what qualities they possess, and the other one must fetch all rows and then decide which 50 are the most recent.\n\nThey're the difference between:\nFind any 10 people in your city.\nFind the TALLEST 10 people in your city. This will scale poorly in large cities.\n\nIf you have an index on ModificationTime, then the query can seek to the highest row matching the between clause, and walk backwards looking for rows that match any other criteria, so that will help, because it will avoid the sort.",
"msg_date": "Wed, 20 Mar 2019 21:49:12 +0000",
"msg_from": "Stephan Schmidt <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: Performance issue with order by clause on"
}
] |
[
{
"msg_contents": "Hi all, look at this short story please:\n\nfoo=# CREATE TABLE Test(id int NOT NULL PRIMARY KEY);\nCREATE TABLE\nfoo=# INSERT INTO test SELECT row_number() OVER() FROM pg_class a CROSS JOIN pg_class b;\nINSERT 0 388129\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n QUERY PLAN\n---------------------------------------------------------------------------\n Index Only Scan using test_pkey on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\nfoo=# ALTER TABLE Test DROP CONSTRAINT Test_pkey;\nALTER TABLE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n QUERY PLAN\n-------------------------------------------------------\n Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n Filter: (id = 8934)\n(2 rows)\n\nfoo=# SELECT max(id)/2 FROM Test;\n ?column?\n----------\n 194064\n(1 row)\n\nfoo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE id < 194064;\nCREATE INDEX\nfoo=# CREATE UNIQUE INDEX Test_pk1 ON Test(id) WHERE id >= 194064;\nCREATE INDEX\nfoo=# ANALYZE Test;\nANALYZE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = 8934;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Only Scan using test_pk0 on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\n\nfoo=# DROP INDEX Test_pk0;\nDROP INDEX\nfoo=# DROP INDEX Test_pk1;\nDROP INDEX\n\nfoo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE mod(id,2) = 0;\nCREATE INDEX\nfoo=# CREATE UNIQUE INDEX Test_pk1 ON Test(id) WHERE mod(id,2) = 1;\nCREATE INDEX\nfoo=# ANALYZE Test;\nANALYZE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n QUERY PLAN\n-------------------------------------------------------\n Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n Filter: (id = 8934)\n(2 rows)\n\nWhy is that index never used?\n\nPS: there is a performance question behind this, big table, heavily used index,\nthe hope was that with this simple scheme of partitioning just the index one might\ndistribute the load better. I know, if the load really is so big, why not partition\nthe entire table. 
But just for hecks, why not this way?\n\nregards,\n-Gunther\n\n\n\n\n\n\n\nHi all, look at this short story please:\n\nfoo=# CREATE TABLE Test(id int NOT NULL PRIMARY KEY);\nCREATE TABLE\nfoo=# INSERT INTO test SELECT row_number() OVER() FROM pg_class a CROSS JOIN pg_class b;\nINSERT 0 388129\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n QUERY PLAN\n---------------------------------------------------------------------------\n Index Only Scan using test_pkey on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\nfoo=# ALTER TABLE Test DROP CONSTRAINT Test_pkey;\nALTER TABLE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n QUERY PLAN\n-------------------------------------------------------\n Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n Filter: (id = 8934)\n(2 rows)\n\nfoo=# SELECT max(id)/2 FROM Test;\n ?column?\n----------\n 194064\n(1 row)\n\nfoo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE id < 194064;\nCREATE INDEX\nfoo=# CREATE UNIQUE INDEX Test_pk1 ON Test(id) WHERE id >= 194064;\nCREATE INDEX\nfoo=# ANALYZE Test;\nANALYZE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = 8934;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Only Scan using test_pk0 on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\n\nfoo=# DROP INDEX Test_pk0;\nDROP INDEX\nfoo=# DROP INDEX Test_pk1;\nDROP INDEX\n\nfoo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE mod(id,2) = 0;\nCREATE INDEX\nfoo=# CREATE UNIQUE INDEX Test_pk1 ON Test(id) WHERE mod(id,2) = 1;\nCREATE INDEX\nfoo=# ANALYZE Test;\nANALYZE\nfoo=# EXPLAIN SELECT * FROM Test WHERE id = '8934'; \n QUERY PLAN\n-------------------------------------------------------\n Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n Filter: (id = 8934)\n(2 rows)\n\nWhy is that index never used?\n\nPS: there is a performance question behind this, big table, heavily used index, \nthe hope was that with this simple scheme of partitioning just the index one might\ndistribute the load better. I know, if the load really is so big, why not partition\nthe entire table. But just for hecks, why not this way?\n\nregards,\n-Gunther",
"msg_date": "Wed, 20 Mar 2019 22:45:07 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor man's partitioned index .... not being used?"
},
{
"msg_contents": "On Thu, 21 Mar 2019 at 15:51, Gunther <[email protected]> wrote:\n> foo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE mod(id,2) = 0;\n> CREATE INDEX\n> foo=# CREATE UNIQUE INDEX Test_pk1 ON Test(id) WHERE mod(id,2) = 1;\n> CREATE INDEX\n> foo=# ANALYZE Test;\n> ANALYZE\n> foo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n> QUERY PLAN\n> -------------------------------------------------------\n> Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n> Filter: (id = 8934)\n> (2 rows)\n>\n> Why is that index never used?\n\nWhen the planner looks at partial indexes to see if they'll suit the\nscan, the code that does the matching (predicate_implied_by()) simply\ndoes not go to that much trouble to determine if it matches. If you\nlook at operator_predicate_proof() you'll see it requires the\nexpression on at least one side of the OpExpr to match your predicate.\nYours matches on neither side since \"id\" is wrapped up in a mod()\nfunction call.\n\nCertainly, predicate_implied_by() is by no means finished, new smarts\nhave been added to it over the years to allow it to prove more cases,\nbut each time something is added we still need to carefully weigh up\nthe additional overhead of the new code vs. possible benefits.\n\nIt may be possible to do something with immutable functions found in\nthe expr but someone doing so might have a hard time proving that it's\nalways safe to do so. For example, arg 2 of your mod() call is a\nConst. If it had been another Var then it wouldn't be safe to use.\nWhat other unsafe cases are there? Is there a way we can always\nidentify unsafe cases during planning? ... are the sorts of questions\nsomeone implementing this would be faced with.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 21 Mar 2019 16:34:34 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor man's partitioned index .... not being used?"
},
{
"msg_contents": ">>>>> \"Gunther\" == Gunther <[email protected]> writes:\n\n Gunther> foo=# CREATE UNIQUE INDEX Test_pk0 ON Test(id) WHERE mod(id,2) = 0;\n Gunther> CREATE INDEX\n\n Gunther> foo=# EXPLAIN SELECT * FROM Test WHERE id = '8934';\n Gunther> QUERY PLAN\n Gunther> -------------------------------------------------------\n Gunther> Seq Scan on test (cost=0.00..6569.61 rows=1 width=4)\n Gunther> Filter: (id = 8934)\n Gunther> (2 rows)\n\n Gunther> Why is that index never used?\n\nBecause the expression mod(id,2) does not appear in the query, and there\nis no logic in the implication prover to prove that (mod(id,2) = 0) is\nimplied by (id = 8934).\n\nIf you did WHERE mod(id,2) = mod(8934,2) AND id = 8934\n\nthen the index would likely be used - because the prover can then treat\nmod(id,2) as an atom (call it X), constant-fold mod(8934,2) to 0 because\nmod() is immutable, and then observe that (X = 0) proves that (X = 0).\n\nPretty much the only simple implications that the prover can currently\ndeduce are:\n\n - identical immutable subexpressions are equivalent\n\n - strict operator expressions imply scalar IS NOT NULL\n\n - (A op1 B) implies (B op2 A) if op2 is op1's declared commutator\n\n - Btree semantics: if <, <=, =, >=, > are all members of a btree\n opfamily, and <> is the declared negator of =, then implications\n like (X < A) and (A <= B) implies (X < B) can be deduced.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 21 Mar 2019 03:46:58 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor man's partitioned index .... not being used?"
},
{
"msg_contents": "Thanks David Rowley and Andrew Gierth.\n\nOn 3/20/2019 23:46, Andrew Gierth wrote:\n> If you did WHERE mod(id,2) = mod(8934,2) AND id = 8934\n>\n> then the index would likely be used - because the prover can then treat\n> mod(id,2) as an atom (call it X), constant-fold mod(8934,2) to 0 because\n> mod() is immutable, and then observe that (X = 0) proves that (X = 0).\n\nfoo=# EXPLAIN SELECT * FROM Test WHERE mod(id,2) = mod(8934,2) AND id = 8934;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Only Scan using test_pk0 on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\nYes indeed! It's being used that way! Interesting. Only that we can't \nuse it if id was a variable? Hmm ...\n\nfoo=# PREPARE testplan(int) AS\nfoo-# SELECT * FROM Test WHERE mod(id,2) = mod($1,2) AND id = $1;\nPREPARE\nfoo=# EXPLAIN EXECUTE testplan(8934);\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Only Scan using test_pk0 on test (cost=0.42..8.44 rows=1 width=4)\n Index Cond: (id = 8934)\n(2 rows)\n\nThat's quite alright actually. Now the questions is, could we use this \nin a nested loop query plan? That's where I think it can't work:\n\nfoo=# CREATE TABLE Test2 AS SELECT * FROM Test WHERE random() < 0.01 ORDER BY id DESC;\nSELECT 3730\nintegrator=# ANALYZE Test2;\nANALYZE\nfoo=# EXPLAIN SELECT * FROM Test2 a LEFT OUTER JOIN Test b ON( mod(b.id,2) = mod(a.id,2) AND b.id = a.id) LIMIT 10;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Limit (cost=110.25..135.67 rows=10 width=8)\n -> Hash Right Join (cost=110.25..9591.02 rows=3730 width=8)\n Hash Cond: ((mod(b.id, 2) = mod(a.id, 2)) AND (b.id = a.id))\n -> Seq Scan on test b (cost=0.00..5599.29 rows=388129 width=4)\n -> Hash (cost=54.30..54.30 rows=3730 width=4)\n -> Seq Scan on test2 a (cost=0.00..54.30 rows=3730 width=4)\n(6 rows)\n\nfoo=# SET enable_hashjoin TO off;\nSET\nfoo=# EXPLAIN SELECT * FROM Test2 a LEFT OUTER JOIN Test b ON( mod(b.id,2) = mod(a.id,2) AND b.id = a.id) LIMIT 10;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Limit (cost=47214.73..47227.86 rows=10 width=8)\n -> Merge Right Join (cost=47214.73..52113.16 rows=3730 width=8)\n Merge Cond: (((mod(b.id, 2)) = (mod(a.id, 2))) AND (b.id = a.id))\n -> Sort (cost=46939.15..47909.47 rows=388129 width=4)\n Sort Key: (mod(b.id, 2)), b.id\n -> Seq Scan on test b (cost=0.00..5599.29 rows=388129 width=4)\n -> Sort (cost=275.58..284.91 rows=3730 width=4)\n Sort Key: (mod(a.id, 2)), a.id\n -> Seq Scan on test2 a (cost=0.00..54.30 rows=3730 width=4)\n(9 rows)\n\nfoo=# SET enable_mergejoin TO off;\nSET\nfoo=# EXPLAIN SELECT * FROM Test2 a LEFT OUTER JOIN Test b ON( mod(b.id,2) = mod(a.id,2) AND b.id = a.id) LIMIT 10;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Limit (cost=0.00..102516.78 rows=10 width=8)\n -> Nested Loop Left Join (cost=0.00..38238760.24 rows=3730 width=8)\n Join Filter: ((b.id = a.id) AND (mod(b.id, 2) = mod(a.id, 2)))\n -> Seq Scan on test2 a (cost=0.00..54.30 rows=3730 width=4)\n -> Materialize (cost=0.00..9056.93 rows=388129 width=4)\n -> Seq Scan on test b (cost=0.00..5599.29 rows=388129 width=4)\n(6 rows)\n\nIt looks like it doesn't want to evaluate the mod(a.id, 2) before it \nmoves to the index query for the nested loop.\n\nNotably the partitioned table approach should do that, but it has a \ndifferent 
expression for the partition. No mod function but MODULUS and \nREMAINDER.\n\nI wonder if there was a way of marking such expressions as safe in the \nquery, like suggesting a certain evaluation order, i.e.,\n\nSELECT * FROM Test2 a LEFT OUTER JOIN Test b ON(mod(b.id,2) = EVAL(mod(a.id,2)) AND b.id = a.id) LIMIT 10;\n\nIt's OK though. It just goes to show that in a case like this, it is \nbest to just go with the partitioned table anyway.\n\nregards,\n-Gunther",
"msg_date": "Thu, 21 Mar 2019 14:57:38 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor man's partitioned index .... not being used?"
},
{
"msg_contents": "On Fri, 22 Mar 2019 at 07:57, Gunther <[email protected]> wrote:\n> foo=# PREPARE testplan(int) AS\n> foo-# SELECT * FROM Test WHERE mod(id,2) = mod($1,2) AND id = $1;\n> PREPARE\n> foo=# EXPLAIN EXECUTE testplan(8934);\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Index Only Scan using test_pk0 on test (cost=0.42..8.44 rows=1 width=4)\n> Index Cond: (id = 8934)\n> (2 rows)\n>\n> That's quite alright actually. Now the questions is, could we use this in a nested loop query plan? That's where I think it can't work:\n\nNot really. In that case, the parameters were replaced with the\nspecified values (a.k.a custom plan). That happens for the first 5\nexecutions of a prepared statement, and in this case likely the\nplanner will continue to use the custom plan since the generic plan\nwon't know that the partial index is okay to use and the plan costs\nwould likely go up enough that the custom plan would continue to be\nfavoured.\n\n> foo=# SET enable_mergejoin TO off;\n> SET\n> foo=# EXPLAIN SELECT * FROM Test2 a LEFT OUTER JOIN Test b ON( mod(b.id,2) = mod(a.id,2) AND b.id = a.id) LIMIT 10;\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Limit (cost=0.00..102516.78 rows=10 width=8)\n> -> Nested Loop Left Join (cost=0.00..38238760.24 rows=3730 width=8)\n> Join Filter: ((b.id = a.id) AND (mod(b.id, 2) = mod(a.id, 2)))\n> -> Seq Scan on test2 a (cost=0.00..54.30 rows=3730 width=4)\n> -> Materialize (cost=0.00..9056.93 rows=388129 width=4)\n> -> Seq Scan on test b (cost=0.00..5599.29 rows=388129 width=4)\n> (6 rows)\n>\n> It looks like it doesn't want to evaluate the mod(a.id, 2) before it moves to the index query for the nested loop.\n\nWhether partial indexes can be used are not is determined using only\nquals that can be applied at the scan level. In this case your qual\nis a join qual, and since no other qual exists that can be evaluated\nat the scan level where the index can be used, then it's not\nconsidered. In any case, nothing there guarantees that one of your\nindexes will match all records. For it to work, both of you indexes\nwould have to be scanned. It's not clear why you think that would be\nany better than scanning just one index. I imagine it would only ever\nbe a win if you could eliminate one of the index scans with some qual\nthat guarantees that the index can't contain any records matching your\nquery.\n\n> I wonder if there was a way of marking such expressions as safe in the query, like suggesting a certain evaluation order, i.e.,\n>\n> SELECT * FROM Test2 a LEFT OUTER JOIN Test b ON(mod(b.id,2) = EVAL(mod(a.id,2)) AND b.id = a.id) LIMIT 10;\n>\n> It's OK though. It just goes to show that in a case like this, it is best to just go with the partitioned table anyway.\n\nIt sounds like you might want something like partition-wise join that\nexists in PG11.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 23 Mar 2019 01:56:57 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor man's partitioned index .... not being used?"
}
] |
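The thread above ends with the conclusion that a real partitioned table beats mod()-based partial indexes. As a hedged sketch of what that looks like (not taken from the thread; it assumes PostgreSQL 11 or later, since hash partitioning with MODULUS and REMAINDER is not available in version 10, and the table names are invented):

CREATE TABLE test (id int NOT NULL, PRIMARY KEY (id)) PARTITION BY HASH (id);
CREATE TABLE test_p0 PARTITION OF test FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE test_p1 PARTITION OF test FOR VALUES WITH (MODULUS 2, REMAINDER 1);
-- The partitioned primary key creates a matching index on each partition,
-- and a point lookup is pruned to a single partition without mod() tricks:
EXPLAIN SELECT * FROM test WHERE id = 8934;

If Test2 were partitioned the same way, a join between the two could also use partition-wise join (enable_partitionwise_join, PostgreSQL 11), which is the feature David Rowley points to at the end of the thread.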
[
{
"msg_contents": "Hi,\n\nI have 250 rows to delete, but they are a target to a bunch of child \ntables with foreign key on delete cascade.\n\nEXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage);\n\nshows me that it uses the nested loop by Foo_pkey index to find the 250 \nitems from Garbage to be deleted.\n\nBut once that starts, I see HUGE amount of read activity from the \ntablespace Foo_main that contains the Foo table, and only the Foo table, \nnot the Foo_pkey, not any other index, not any other child table, not \neven the toast table for Foo is contained in that tablespace (I have the \ntoast table diverted with symlinks to another volume).\n\nI see the read activity with iostat, reading heavily at 130 MB/s for a \nlong time until my burst balance is used up, then continuing to churn \nwith 32 MB/s.\n\nI also see the read activity with iotop, that tells me that it is that \npostgres backend running the DELETE query that is doing this, not some \nautovacuum nor anything else.\n\nIt looks to me that in actuality it is doing a sequential scan for each \nof the 250 rows, despite it EPLAINing to me that it was going to use \nthat index.\n\nIt would really be good to know what it is churning so heavily?\n\nI have seen some ways of using dtrace or things like that to do some \nmeasurement points. But I haven't seen how this is done to inspect the \neffective execution plan and where in that plan it is, i.e., which \niteration. It would be nice if there was some way of doing a \"deep \nexplain plan\" or even better, having an idea of the execution plan which \nthe executor is actually following, and a way to report on the current \nstatus of work according to this plan.\n\nHow else do I figure out what causes this heavy read activity on the \nmain Foo table?\n\nThis is something I might even want to contribute. For many years I am \nannoyed by this waiting for long running statement without any idea \nwhere it is and how much is there still to go. If I have a plan \nstructure and an executor that follows the plan structure, there must be \na way to dump it out.\n\nThe pg_stat_activity table might contain a current statement id, and \nthen a superuser might ask EXPLAIN STATEMENT :statementId. Or just a \npg_plantrace command which would dump the current plan with an \nindication of completion % of each step.\n\nBut also delete cascades and triggers should be viewable from this, they \nshould be traced, I am sure that somewhere inside there is some data \nstructure representing this activity and all it would take is to dump it?\n\nregards,\n-Gunther\n\n\n",
"msg_date": "Thu, 21 Mar 2019 15:31:42 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index\n despite EXPLAINing that it would?"
},
{
"msg_contents": "Gunther <[email protected]> writes:\n> I have 250 rows to delete, but they are a target to a bunch of child \n> tables with foreign key on delete cascade.\n\n> EXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage);\n\n> shows me that it uses the nested loop by Foo_pkey index to find the 250 \n> items from Garbage to be deleted.\n\n> But once that starts, I see HUGE amount of read activity from the \n> tablespace Foo_main that contains the Foo table, and only the Foo table, \n> not the Foo_pkey, not any other index, not any other child table, not \n> even the toast table for Foo is contained in that tablespace (I have the \n> toast table diverted with symlinks to another volume).\n\nI'm betting you neglected to index the referencing column for one\nor more of the foreign keys involved. You can get away with that\nas long as you're not concerned with the speed of DELETE ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Mar 2019 17:16:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index\n despite EXPLAINing that it would?"
},
{
"msg_contents": "\nOn 3/21/2019 17:16, Tom Lane wrote:\n> Gunther <[email protected]> writes:\n>> I have 250 rows to delete, but they are a target to a bunch of child\n>> tables with foreign key on delete cascade.\n>> EXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage);\n>> shows me that it uses the nested loop by Foo_pkey index to find the 250\n>> items from Garbage to be deleted.\n>> But once that starts, I see HUGE amount of read activity from the\n>> tablespace Foo_main that contains the Foo table, and only the Foo table,\n>> not the Foo_pkey, not any other index, not any other child table, not\n>> even the toast table for Foo is contained in that tablespace (I have the\n>> toast table diverted with symlinks to another volume).\n> I'm betting you neglected to index the referencing column for one\n> or more of the foreign keys involved. You can get away with that\n> as long as you're not concerned with the speed of DELETE ...\n>\n> \t\t\tregards, tom lane\n\nI had the same suspicion. But firstly my schema is generated \nautomatically and all foreign keys have the indexes.\n\nBut what is even more stunning is that the table where this massive read \nactivity happens is the Foo heap table. I verified that by using strace \nwhere all the massive amounts of reads are on those files for the main \nFoo table. And this doesn't make sense, since any foreign key targets \nits primary key. The foreign keys of the child tables are also indexed \nand there is no io on the volumes that hold these child tables, nor is \nthe io on the volume that holds the Foo_pkey.\n\n\n\n",
"msg_date": "Fri, 22 Mar 2019 16:01:38 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index\n despite EXPLAINing that it would?"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 03:31:42PM -0400, Gunther wrote:\n> Hi,\n> \n> I have 250 rows to delete, but they are a target to a bunch of child tables\n> with foreign key on delete cascade.\n> \n> EXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage);\n\nProbably because:\nhttps://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-FK\n\"Since a DELETE of a row from the referenced table [...] will require a scan of\nthe referencing table for rows matching the old value, it is often a good idea\nto index the referencing columns too.\"\n\nCan you show \"\\d+ foo\", specifically its FKs ?\n\nJustin\n\n",
"msg_date": "Fri, 22 Mar 2019 15:07:19 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index\n despite EXPLAINing that it would?"
}
] |
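Both replies above point at an unindexed referencing column as the usual cause of this pattern. A rough catalog query along the following lines can help confirm or rule that out; this is only a sketch, not something from the thread, and it assumes simple single-column foreign keys while ignoring expression and partial indexes:

-- Single-column foreign keys whose referencing column is not the leading
-- column of any index on the referencing table:
SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS fk_column,
       c.conname            AS constraint_name
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid AND a.attnum = c.conkey[1]
WHERE c.contype = 'f'
  AND array_length(c.conkey, 1) = 1
  AND NOT EXISTS (SELECT 1 FROM pg_index i
                  WHERE i.indrelid = c.conrelid
                    AND i.indkey[0] = c.conkey[1]);

Any cascade path that shows up here is a candidate for the index Tom and Justin describe, e.g. CREATE INDEX ON some_child_table (referencing_column) with the real table and column names.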
[
{
"msg_contents": "Hi all,\nI'm trying to analyze a deadlock that I have in one of our environments.\nThe deadlock message :\n\n06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on\ntransaction 1017405468; blocked by process 36589.\nProcess 36589 waits for ShareLock on transaction 1017403840; blocked by\nprocess 14563.\nProcess 14563: delete from tableB where a in (select id from tableA where c\nin (....)\nProcess 36589: delete from tableA where c in (....)\n06:15:49 EET db 14563 HINT: See server log for query details.\n06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id\nfrom tableA where c in (....)\n06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on\ntransaction 1017403840 after 1110158.778 ms\n06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>:\ndelete from tableA where c in (...)\n\ntableA : (id int, c int references c(id))\ntableB : (id int, a int references a(id) on delete cascade)\ntableC(id int...)\n\nOne A can have Many B`s connected to (One A to Many B).\n\ndeadlock_timeout is set to 5s.\n\nNow I'm trying to understand what might cause this deadlock. I think that\nits related to the foreign keys... I tried to do a simulation in my env :\n\ntransaction 1 :\ndelete from a;\n<left in the background, no commit yet >\n\ntransaction 2 :\ndelete from b;\n\nbut I couldnt recreate the deadlock, I only had some raw exclusive locks :\n\npostgres=# select\nlocktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted\nfrom pg_locks where database=12870;\n locktype | relation | page | tuple | virtualxid | transactionid |\nvirtualtransaction | mode | granted\n----------+----------+------+-------+------------+---------------+--------------------+------------------+---------\n relation | b | | | | |\n51/156937 | RowExclusiveLock | t\n relation | a_a_idx | | | | |\n51/156937 | RowExclusiveLock | t\n relation | a | | | | |\n51/156937 | RowExclusiveLock | t\n relation | pg_locks | | | | |\n53/39101 | AccessShareLock | t\n relation | a_a_idx | | | | |\n52/29801 | AccessShareLock | t\n relation | a | | | | |\n52/29801 | AccessShareLock | t\n relation | b | | | | |\n52/29801 | RowExclusiveLock | t\n tuple | b | 0 | 1 | | |\n51/156937 | ExclusiveLock | t\n(8 rows)\n\n\nWhat do you guys think ?\n\nHi all,I'm trying to analyze a deadlock that I have in one of our environments.The deadlock message : 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on transaction 1017405468; blocked by process 36589. Process 36589 waits for ShareLock on transaction 1017403840; blocked by process 14563. Process 14563: delete from tableB where a in (select id from tableA where c in (....) Process 36589: delete from tableA where c in (....) 06:15:49 EET db 14563 HINT: See server log for query details.06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id from tableA where c in (....)06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on transaction 1017403840 after 1110158.778 ms06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>: delete from tableA where c in (...)tableA : (id int, c int references c(id))tableB : (id int, a int references a(id) on delete cascade)tableC(id int...)One A can have Many B`s connected to (One A to Many B).deadlock_timeout is set to 5s.Now I'm trying to understand what might cause this deadlock. 
I think that its related to the foreign keys... I tried to do a simulation in my env :transaction 1 : delete from a;<left in the background, no commit yet > transaction 2 : delete from b;but I couldnt recreate the deadlock, I only had some raw exclusive locks : postgres=# select locktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted from pg_locks where database=12870; locktype | relation | page | tuple | virtualxid | transactionid | virtualtransaction | mode | granted----------+----------+------+-------+------------+---------------+--------------------+------------------+--------- relation | b | | | | | 51/156937 | RowExclusiveLock | t relation | a_a_idx | | | | | 51/156937 | RowExclusiveLock | t relation | a | | | | | 51/156937 | RowExclusiveLock | t relation | pg_locks | | | | | 53/39101 | AccessShareLock | t relation | a_a_idx | | | | | 52/29801 | AccessShareLock | t relation | a | | | | | 52/29801 | AccessShareLock | t relation | b | | | | | 52/29801 | RowExclusiveLock | t tuple | b | 0 | 1 | | | 51/156937 | ExclusiveLock | t(8 rows)What do you guys think ?",
"msg_date": "Wed, 27 Mar 2019 14:52:28 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "trying to analyze deadlock"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> Hi all,\n> I'm trying to analyze a deadlock that I have in one of our environments.\n> The deadlock message : \n> \n> 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on transaction 1017405468; blocked by process 36589.\n> \tProcess 36589 waits for ShareLock on transaction 1017403840; blocked by process 14563.\n> \tProcess 14563: delete from tableB where a in (select id from tableA where c in (....)\n> \tProcess 36589: delete from tableA where c in (....)\n> \t06:15:49 EET db 14563 HINT: See server log for query details.\n> 06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on transaction 1017403840 after 1110158.778 ms\n> 06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>: delete from tableA where c in (...)\n> \n> tableA : (id int, c int references c(id))\n> tableB : (id int, a int references a(id) on delete cascade)\n> tableC(id int...)\n> \n> One A can have Many B`s connected to (One A to Many B).\n> \n> deadlock_timeout is set to 5s.\n> \n> Now I'm trying to understand what might cause this deadlock. I think that its related to the foreign keys...\n\nYou can get that if the foreign key is defined as ON CASCADE DELETE or ON CASCADE SET NULL:\n\n CREATE TABLE a (a_id integer PRIMARY KEY);\n\n INSERT INTO a VALUES (1), (2);\n\n CREATE TABLE b (b_id integer PRIMARY KEY, a_id integer NOT NULL REFERENCES a ON DELETE CASCADE);\n\n INSERT INTO b VALUES (100, 1), (101, 1), (102, 2), (103, 2);\n\nTransaction 1:\n\n BEGIN;\n\n DELETE FROM b WHERE b_id = 100;\n\nTransaction 2:\n\n BEGIN;\n\n DELETE FROM a WHERE a_id = 2;\n\n DELETE FROM a WHERE a_id = 1; -- hangs\n\nTransaction 1:\n\n DELETE FROM b WHERE b_id = 102;\n\nERROR: deadlock detected\nDETAIL: Process 10517 waits for ShareLock on transaction 77325; blocked by process 10541.\nProcess 10541 waits for ShareLock on transaction 77323; blocked by process 10517.\nHINT: See server log for query details.\nCONTEXT: while deleting tuple (0,3) in relation \"b\"\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 27 Mar 2019 14:19:57 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to analyze deadlock"
},
{
"msg_contents": "Hi all,\nI'm trying to analyze a deadlock that I have in one of our environments.\nThe deadlock message :\n\n06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on\ntransaction 1017405468; blocked by process 36589.\nProcess 36589 waits for ShareLock on transaction 1017403840; blocked by\nprocess 14563.\nProcess 14563: delete from tableB where a in (select id from tableA where c\nin (....)\nProcess 36589: delete from tableA where c in (....)\n06:15:49 EET db 14563 HINT: See server log for query details.\n06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id\nfrom tableA where c in (....)\n06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on\ntransaction 1017403840 after 1110158.778 ms\n06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>:\ndelete from tableA where c in (...)\n\ntableA : (id int, c int references c(id))\ntableB : (id int, a int references a(id) on delete cascade)\ntableC(id int...)\n\nOne A can have Many B`s connected to (One A to Many B).\n\ndeadlock_timeout is set to 5s.\n\nNow I'm trying to understand what might cause this deadlock. I think that\nits related to the foreign keys... I tried to do a simulation in my env :\n\ntransaction 1 :\ndelete from a;\n<left in the background, no commit yet >\n\ntransaction 2 :\ndelete from b;\n\nbut I couldnt recreate the deadlock, I only had some raw exclusive locks :\n\npostgres=# select\nlocktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted\nfrom pg_locks where database=12870;\n locktype | relation | page | tuple | virtualxid | transactionid |\nvirtualtransaction | mode | granted\n----------+----------+------+-------+------------+---------------+--------------------+------------------+---------\n relation | b | | | | |\n51/156937 | RowExclusiveLock | t\n relation | a_a_idx | | | | |\n51/156937 | RowExclusiveLock | t\n relation | a | | | | |\n51/156937 | RowExclusiveLock | t\n relation | pg_locks | | | | |\n53/39101 | AccessShareLock | t\n relation | a_a_idx | | | | |\n52/29801 | AccessShareLock | t\n relation | a | | | | |\n52/29801 | AccessShareLock | t\n relation | b | | | | |\n52/29801 | RowExclusiveLock | t\n tuple | b | 0 | 1 | | |\n51/156937 | ExclusiveLock | t\n(8 rows)\n\n\nWhat do you guys think ?\n\nHi all,I'm trying to analyze a deadlock that I have in one of our environments.The deadlock message : 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on transaction 1017405468; blocked by process 36589. Process 36589 waits for ShareLock on transaction 1017403840; blocked by process 14563. Process 14563: delete from tableB where a in (select id from tableA where c in (....) Process 36589: delete from tableA where c in (....) 06:15:49 EET db 14563 HINT: See server log for query details.06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id from tableA where c in (....)06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on transaction 1017403840 after 1110158.778 ms06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>: delete from tableA where c in (...)tableA : (id int, c int references c(id))tableB : (id int, a int references a(id) on delete cascade)tableC(id int...)One A can have Many B`s connected to (One A to Many B).deadlock_timeout is set to 5s.Now I'm trying to understand what might cause this deadlock. 
I think that its related to the foreign keys... I tried to do a simulation in my env :transaction 1 : delete from a;<left in the background, no commit yet > transaction 2 : delete from b;but I couldnt recreate the deadlock, I only had some raw exclusive locks : postgres=# select locktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted from pg_locks where database=12870; locktype | relation | page | tuple | virtualxid | transactionid | virtualtransaction | mode | granted----------+----------+------+-------+------------+---------------+--------------------+------------------+--------- relation | b | | | | | 51/156937 | RowExclusiveLock | t relation | a_a_idx | | | | | 51/156937 | RowExclusiveLock | t relation | a | | | | | 51/156937 | RowExclusiveLock | t relation | pg_locks | | | | | 53/39101 | AccessShareLock | t relation | a_a_idx | | | | | 52/29801 | AccessShareLock | t relation | a | | | | | 52/29801 | AccessShareLock | t relation | b | | | | | 52/29801 | RowExclusiveLock | t tuple | b | 0 | 1 | | | 51/156937 | ExclusiveLock | t(8 rows)What do you guys think ?",
"msg_date": "Mon, 1 Apr 2019 13:16:22 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: trying to analyze deadlock"
},
{
"msg_contents": "Got it, thanks Laurenz !\n\nבתאריך יום ד׳, 27 במרץ 2019 ב-15:20 מאת Laurenz Albe <\[email protected]>:\n\n> Mariel Cherkassky wrote:\n> > Hi all,\n> > I'm trying to analyze a deadlock that I have in one of our environments.\n> > The deadlock message :\n> >\n> > 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on\n> transaction 1017405468; blocked by process 36589.\n> > Process 36589 waits for ShareLock on transaction 1017403840;\n> blocked by process 14563.\n> > Process 14563: delete from tableB where a in (select id from\n> tableA where c in (....)\n> > Process 36589: delete from tableA where c in (....)\n> > 06:15:49 EET db 14563 HINT: See server log for query details.\n> > 06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select\n> id from tableA where c in (....)\n> > 06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on\n> transaction 1017403840 after 1110158.778 ms\n> > 06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n> > 06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute\n> <unnamed>: delete from tableA where c in (...)\n> >\n> > tableA : (id int, c int references c(id))\n> > tableB : (id int, a int references a(id) on delete cascade)\n> > tableC(id int...)\n> >\n> > One A can have Many B`s connected to (One A to Many B).\n> >\n> > deadlock_timeout is set to 5s.\n> >\n> > Now I'm trying to understand what might cause this deadlock. I think\n> that its related to the foreign keys...\n>\n> You can get that if the foreign key is defined as ON CASCADE DELETE or ON\n> CASCADE SET NULL:\n>\n> CREATE TABLE a (a_id integer PRIMARY KEY);\n>\n> INSERT INTO a VALUES (1), (2);\n>\n> CREATE TABLE b (b_id integer PRIMARY KEY, a_id integer NOT NULL\n> REFERENCES a ON DELETE CASCADE);\n>\n> INSERT INTO b VALUES (100, 1), (101, 1), (102, 2), (103, 2);\n>\n> Transaction 1:\n>\n> BEGIN;\n>\n> DELETE FROM b WHERE b_id = 100;\n>\n> Transaction 2:\n>\n> BEGIN;\n>\n> DELETE FROM a WHERE a_id = 2;\n>\n> DELETE FROM a WHERE a_id = 1; -- hangs\n>\n> Transaction 1:\n>\n> DELETE FROM b WHERE b_id = 102;\n>\n> ERROR: deadlock detected\n> DETAIL: Process 10517 waits for ShareLock on transaction 77325; blocked\n> by process 10541.\n> Process 10541 waits for ShareLock on transaction 77323; blocked by process\n> 10517.\n> HINT: See server log for query details.\n> CONTEXT: while deleting tuple (0,3) in relation \"b\"\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nGot it, thanks Laurenz !בתאריך יום ד׳, 27 במרץ 2019 ב-15:20 מאת Laurenz Albe <[email protected]>:Mariel Cherkassky wrote:\n> Hi all,\n> I'm trying to analyze a deadlock that I have in one of our environments.\n> The deadlock message : \n> \n> 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on transaction 1017405468; blocked by process 36589.\n> Process 36589 waits for ShareLock on transaction 1017403840; blocked by process 14563.\n> Process 14563: delete from tableB where a in (select id from tableA where c in (....)\n> Process 36589: delete from tableA where c in (....)\n> 06:15:49 EET db 14563 HINT: See server log for query details.\n> 06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on transaction 1017403840 after 1110158.778 ms\n> 06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>: delete from tableA 
where c in (...)\n> \n> tableA : (id int, c int references c(id))\n> tableB : (id int, a int references a(id) on delete cascade)\n> tableC(id int...)\n> \n> One A can have Many B`s connected to (One A to Many B).\n> \n> deadlock_timeout is set to 5s.\n> \n> Now I'm trying to understand what might cause this deadlock. I think that its related to the foreign keys...\n\nYou can get that if the foreign key is defined as ON CASCADE DELETE or ON CASCADE SET NULL:\n\n CREATE TABLE a (a_id integer PRIMARY KEY);\n\n INSERT INTO a VALUES (1), (2);\n\n CREATE TABLE b (b_id integer PRIMARY KEY, a_id integer NOT NULL REFERENCES a ON DELETE CASCADE);\n\n INSERT INTO b VALUES (100, 1), (101, 1), (102, 2), (103, 2);\n\nTransaction 1:\n\n BEGIN;\n\n DELETE FROM b WHERE b_id = 100;\n\nTransaction 2:\n\n BEGIN;\n\n DELETE FROM a WHERE a_id = 2;\n\n DELETE FROM a WHERE a_id = 1; -- hangs\n\nTransaction 1:\n\n DELETE FROM b WHERE b_id = 102;\n\nERROR: deadlock detected\nDETAIL: Process 10517 waits for ShareLock on transaction 77325; blocked by process 10541.\nProcess 10541 waits for ShareLock on transaction 77323; blocked by process 10517.\nHINT: See server log for query details.\nCONTEXT: while deleting tuple (0,3) in relation \"b\"\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Mon, 1 Apr 2019 14:20:25 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to analyze deadlock"
},
{
"msg_contents": "Hi Mariel,\n\nCommands in the same transaction will see the effects of the committed\nconcurrent transaction in any case.\n\nGo through below link hope this will help you.\nhttps://severalnines.com/blog/understanding-deadlocks-mysql-postgresql\n\nThanks & Regards,\n*Shreeyansh DBA Team*\nwww.shreeyansh.com\n\n\nOn Mon, Apr 1, 2019 at 3:46 PM Mariel Cherkassky <\[email protected]> wrote:\n\n>\n> Hi all,\n> I'm trying to analyze a deadlock that I have in one of our environments.\n> The deadlock message :\n>\n> 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on\n> transaction 1017405468; blocked by process 36589.\n> Process 36589 waits for ShareLock on transaction 1017403840; blocked by\n> process 14563.\n> Process 14563: delete from tableB where a in (select id from tableA where\n> c in (....)\n> Process 36589: delete from tableA where c in (....)\n> 06:15:49 EET db 14563 HINT: See server log for query details.\n> 06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select\n> id from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on\n> transaction 1017403840 after 1110158.778 ms\n> 06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)\n> 06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>:\n> delete from tableA where c in (...)\n>\n> tableA : (id int, c int references c(id))\n> tableB : (id int, a int references a(id) on delete cascade)\n> tableC(id int...)\n>\n> One A can have Many B`s connected to (One A to Many B).\n>\n> deadlock_timeout is set to 5s.\n>\n> Now I'm trying to understand what might cause this deadlock. I think that\n> its related to the foreign keys... I tried to do a simulation in my env :\n>\n> transaction 1 :\n> delete from a;\n> <left in the background, no commit yet >\n>\n> transaction 2 :\n> delete from b;\n>\n> but I couldnt recreate the deadlock, I only had some raw exclusive locks :\n>\n> postgres=# select\n> locktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted\n> from pg_locks where database=12870;\n> locktype | relation | page | tuple | virtualxid | transactionid |\n> virtualtransaction | mode | granted\n>\n> ----------+----------+------+-------+------------+---------------+--------------------+------------------+---------\n> relation | b | | | | |\n> 51/156937 | RowExclusiveLock | t\n> relation | a_a_idx | | | | |\n> 51/156937 | RowExclusiveLock | t\n> relation | a | | | | |\n> 51/156937 | RowExclusiveLock | t\n> relation | pg_locks | | | | |\n> 53/39101 | AccessShareLock | t\n> relation | a_a_idx | | | | |\n> 52/29801 | AccessShareLock | t\n> relation | a | | | | |\n> 52/29801 | AccessShareLock | t\n> relation | b | | | | |\n> 52/29801 | RowExclusiveLock | t\n> tuple | b | 0 | 1 | | |\n> 51/156937 | ExclusiveLock | t\n> (8 rows)\n>\n>\n> What do you guys think ?\n>\n>\n\nHi Mariel,Commands in the same transaction will see the effects of the committed concurrent transaction in any case.Go through below link hope this will help you.https://severalnines.com/blog/understanding-deadlocks-mysql-postgresqlThanks & Regards,Shreeyansh DBA Teamwww.shreeyansh.comOn Mon, Apr 1, 2019 at 3:46 PM Mariel Cherkassky <[email protected]> wrote:Hi all,I'm trying to analyze a deadlock that I have in one of our environments.The deadlock message : 06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on transaction 1017405468; blocked by process 36589. 
Process 36589 waits for ShareLock on transaction 1017403840; blocked by process 14563. Process 14563: delete from tableB where a in (select id from tableA where c in (....) Process 36589: delete from tableA where c in (....) 06:15:49 EET db 14563 HINT: See server log for query details.06:15:49 EET db 14563 STATEMENT: delete from tableB where a in (select id from tableA where c in (....)06:15:49 EET db 36589 LOG: process 36589 acquired ShareLock on transaction 1017403840 after 1110158.778 ms06:15:49 EET db 36589 STATEMENT: delete from tableA where c in (....)06:15:49 EET db 36589 LOG: duration: 1110299.539 ms execute <unnamed>: delete from tableA where c in (...)tableA : (id int, c int references c(id))tableB : (id int, a int references a(id) on delete cascade)tableC(id int...)One A can have Many B`s connected to (One A to Many B).deadlock_timeout is set to 5s.Now I'm trying to understand what might cause this deadlock. I think that its related to the foreign keys... I tried to do a simulation in my env :transaction 1 : delete from a;<left in the background, no commit yet > transaction 2 : delete from b;but I couldnt recreate the deadlock, I only had some raw exclusive locks : postgres=# select locktype,relation::regclass,page,tuple,virtualxid,transactionid,virtualtransaction,mode,granted from pg_locks where database=12870; locktype | relation | page | tuple | virtualxid | transactionid | virtualtransaction | mode | granted----------+----------+------+-------+------------+---------------+--------------------+------------------+--------- relation | b | | | | | 51/156937 | RowExclusiveLock | t relation | a_a_idx | | | | | 51/156937 | RowExclusiveLock | t relation | a | | | | | 51/156937 | RowExclusiveLock | t relation | pg_locks | | | | | 53/39101 | AccessShareLock | t relation | a_a_idx | | | | | 52/29801 | AccessShareLock | t relation | a | | | | | 52/29801 | AccessShareLock | t relation | b | | | | | 52/29801 | RowExclusiveLock | t tuple | b | 0 | 1 | | | 51/156937 | ExclusiveLock | t(8 rows)What do you guys think ?",
"msg_date": "Tue, 2 Apr 2019 17:51:38 +0530",
"msg_from": "Shreeyansh Dba <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to analyze deadlock"
}
] |
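In Laurenz's reproduction the two sessions end up waiting on each other's row locks in the child table, one directly and one through the ON DELETE CASCADE. A common mitigation, sketched here with placeholder values rather than anything from the thread, is to have every competing job lock the affected parent rows first, in a consistent order; it only helps if all jobs follow the same protocol and can tolerate the extra locking:

BEGIN;
-- Lock the parent rows in a deterministic order first, so concurrent jobs
-- serialize here instead of deadlocking on the cascaded child-row locks:
SELECT id FROM tableA WHERE c IN (1, 2, 3) ORDER BY id FOR UPDATE;
-- ... then delete from the child and/or parent as before:
DELETE FROM tableB WHERE a IN (SELECT id FROM tableA WHERE c IN (1, 2, 3));
COMMIT;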
[
{
"msg_contents": "Hey,\nI was searching for a solution to scale my postgresql instance in the\ncloud. I'm aware of that that I can create many read only replicas in the\ncloud and it would improve my reading performance. I wanted to hear what\nsolution are you familiar with ? Are there any sharding solution that are\ncommonly used (citus ? pg_shard ?) My instance has many dbs (one per\ncustomer) and big customers can generate a load of load on others..\n\n\n\nThanks.\n\nHey,I was searching for a solution to scale my postgresql instance in the cloud. I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..Thanks.",
"msg_date": "Thu, 28 Mar 2019 12:10:05 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scale out postgresql"
},
{
"msg_contents": "Hi!\nWith following kinds of keywords, it is possible to find / search for cloud native (SQL) implementations e.g. with google: \ncloud native sql database\nE.g. CockroachDB, YugaByteDB.\nI do not know are you planning to do it by other means (by yourself).\nI myself would be interested, has someone had experiences with such? Is HA provided \"ready made? Is HA working fine and does it recover/handle all situations well, or is additional algorithms needed to be implemented in addition on top e.g. for automatic recovery (by \"myself\").\nI could start an other email chain, if this chain is meant more for something else.\nBest RegardsSam\n\n \n \n On to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky<[email protected]> wrote: Hey,I was searching for a solution to scale my postgresql instance in the cloud. I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..\n\n\nThanks. \n\nHi!With following kinds of keywords, it is possible to find / search for cloud native (SQL) implementations e.g. with google: cloud native sql databaseE.g. CockroachDB, YugaByteDB.I do not know are you planning to do it by other means (by yourself).I myself would be interested, has someone had experiences with such? Is HA provided \"ready made? Is HA working fine and does it recover/handle all situations well, or is additional algorithms needed to be implemented in addition on top e.g. for automatic recovery (by \"myself\").I could start an other email chain, if this chain is meant more for something else.Best RegardsSam On to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky<[email protected]> wrote: Hey,I was searching for a solution to scale my postgresql instance in the cloud. I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..Thanks.",
"msg_date": "Thu, 28 Mar 2019 16:10:26 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scale out postgresql"
},
{
"msg_contents": "Hey Sam,\nAre you familiar with scale solutions that arent in the cloud ?\n\nבתאריך יום ה׳, 28 במרץ 2019 ב-18:10 מאת Sam R. <[email protected]\n>:\n\n> Hi!\n>\n> With following kinds of keywords, it is possible to find / search for\n> cloud native (SQL) implementations e.g. with google:\n>\n> cloud native sql database\n>\n> E.g. CockroachDB, YugaByteDB.\n>\n> I do not know are you planning to do it by other means (by yourself).\n>\n> I myself would be interested, has someone had experiences with such? Is HA\n> provided \"ready made? Is HA working fine and does it recover/handle all\n> situations well, or is additional algorithms needed to be implemented in\n> addition on top e.g. for automatic recovery (by \"myself\").\n>\n> I could start an other email chain, if this chain is meant more for\n> something else.\n>\n> Best Regards\n> Sam\n>\n>\n> On to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky\n> <[email protected]> wrote:\n> Hey,\n> I was searching for a solution to scale my postgresql instance in the\n> cloud. I'm aware of that that I can create many read only replicas in the\n> cloud and it would improve my reading performance. I wanted to hear what\n> solution are you familiar with ? Are there any sharding solution that are\n> commonly used (citus ? pg_shard ?) My instance has many dbs (one per\n> customer) and big customers can generate a load of load on others..\n>\n>\n>\n> Thanks.\n>\n>\n\nHey Sam,Are you familiar with scale solutions that arent in the cloud ?בתאריך יום ה׳, 28 במרץ 2019 ב-18:10 מאת Sam R. <[email protected]>:Hi!With following kinds of keywords, it is possible to find / search for cloud native (SQL) implementations e.g. with google: cloud native sql databaseE.g. CockroachDB, YugaByteDB.I do not know are you planning to do it by other means (by yourself).I myself would be interested, has someone had experiences with such? Is HA provided \"ready made? Is HA working fine and does it recover/handle all situations well, or is additional algorithms needed to be implemented in addition on top e.g. for automatic recovery (by \"myself\").I could start an other email chain, if this chain is meant more for something else.Best RegardsSam On to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky<[email protected]> wrote: Hey,I was searching for a solution to scale my postgresql instance in the cloud. I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..Thanks.",
"msg_date": "Thu, 28 Mar 2019 18:19:00 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scale out postgresql"
},
{
"msg_contents": "Disclaimer, I work for Citus Data.\r\n\r\nYou can check out www.citusdata.com<http://www.citusdata.com>. We aim to scale out PostgreSQL, and offer it both in the cloud and on-prem. Happy to put you in touch with the right people if you have more questions.\r\n\r\nThanks,\r\nSumedh\r\n\r\n________________________________\r\nFrom: Mariel Cherkassky <[email protected]>\r\nSent: Thursday, March 28, 2019 9:19 AM\r\nTo: [email protected]\r\nCc: PostgreSQL mailing lists\r\nSubject: Re: Scale out postgresql\r\n\r\nHey Sam,\r\nAre you familiar with scale solutions that arent in the cloud ?\r\n\r\nבתאריך יום ה׳, 28 במרץ 2019 ב-18:10 מאת Sam R. <[email protected]<mailto:[email protected]>>:\r\nHi!\r\n\r\nWith following kinds of keywords, it is possible to find / search for cloud native (SQL) implementations e.g. with google:\r\n\r\ncloud native sql database\r\n\r\nE.g. CockroachDB, YugaByteDB.\r\n\r\nI do not know are you planning to do it by other means (by yourself).\r\n\r\nI myself would be interested, has someone had experiences with such? Is HA provided \"ready made? Is HA working fine and does it recover/handle all situations well, or is additional algorithms needed to be implemented in addition on top e.g. for automatic recovery (by \"myself\").\r\n\r\nI could start an other email chain, if this chain is meant more for something else.\r\n\r\nBest Regards\r\nSam\r\n\r\n\r\nOn to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky\r\n<[email protected]<mailto:[email protected]>> wrote:\r\nHey,\r\nI was searching for a solution to scale my postgresql instance in the cloud. I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..\r\n\r\n\r\n\r\nThanks.\r\n\n\n\n\n\n\n\n\r\nDisclaimer, I work for Citus Data.\n\n\n\n\r\nYou can check out www.citusdata.com. We aim to scale out PostgreSQL, and offer it both in the cloud and on-prem. Happy to put you in touch with the right people if you have more questions.\n\n\n\n\r\nThanks,\n\r\nSumedh\n\n\n\nFrom: Mariel Cherkassky <[email protected]>\nSent: Thursday, March 28, 2019 9:19 AM\nTo: [email protected]\nCc: PostgreSQL mailing lists\nSubject: Re: Scale out postgresql\n \n\n\n\nHey Sam,\nAre you familiar with scale solutions that arent in the cloud ?\n\n\n\nבתאריך יום ה׳, 28 במרץ 2019 ב-18:10 מאת Sam R. <[email protected]>:\n\n\r\nHi!\r\n\n\nWith following kinds of keywords, it is possible to find / search for cloud native (SQL) implementations e.g. with google: \n\r\ncloud native sql database\n\n\nE.g. CockroachDB, YugaByteDB.\n\n\nI do not know are you planning to do it by other means (by yourself).\n\n\nI myself would be interested, has someone had experiences with such? Is HA provided \"ready made? Is HA working fine and does it recover/handle all situations well, or is additional\r\n algorithms needed to be implemented in addition on top e.g. for automatic recovery (by \"myself\").\n\n\nI could start an other email chain, if this chain is meant more for something else.\n\n\nBest Regards\nSam\n\n\n\n\n\nOn to, maalisk. 28, 2019 at 12:10, Mariel Cherkassky\n<[email protected]> wrote:\n\n\n\n\nHey,\nI was searching for a solution to scale my postgresql instance in the cloud. 
I'm aware of that that I can create many read only replicas in the cloud and it would improve my reading performance. I wanted to hear what solution are you familiar\r\n with ? Are there any sharding solution that are commonly used (citus ? pg_shard ?) My instance has many dbs (one per customer) and big customers can generate a load of load on others..\n\n\n\n\n\n\nThanks.",
"msg_date": "Thu, 28 Mar 2019 16:26:23 +0000",
"msg_from": "Sumedh Pathak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scale out postgresql"
}
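For readers weighing the sharding route mentioned in this thread, here is a minimal sketch of how a per-customer workload is typically distributed with the Citus extension; the orders table, its columns, and the cluster itself are assumptions for illustration, not something from this thread:

CREATE EXTENSION citus;

CREATE TABLE orders (
    order_id    bigint,
    customer_id bigint,
    total       numeric,
    PRIMARY KEY (customer_id, order_id)  -- must include the distribution column
);

-- Spread the table across worker nodes on the tenant column, so one
-- heavy customer's load lands on its own shards instead of on everyone.
SELECT create_distributed_table('orders', 'customer_id');

Queries filtered on customer_id are then routed to the shards owning that customer; cross-customer queries fan out to all workers.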
] |
[
{
"msg_contents": "Hi everyone,\n\n\n\nI’m using LIMIT offset with DB view. Looks like query planner is applying\nthe LIMIT for DB view at the end after processing all rows.\n\nWhen running same SQL that was used to create the DB view, LIMIT is applied\nearlier so the query is much faster.\n\n\n\nExplain plan using DB view\n\nhttps://explain.depesz.com/s/gzjQ\n\n\n\nExplain plan using raw SQL\n\nhttps://explain.depesz.com/s/KgwO\n\n\n\nIn both tests LIMIT was 100 with offset = 0.\n\nIs there any way to force DB view to apply limit earlier?\n\n\n\nThanks,\n\nRaj\n\nHi everyone,\n \nI’m using LIMIT offset with DB view. Looks like query\nplanner is applying the LIMIT for DB view at the end after processing all rows.\nWhen running same SQL that was used to create the DB view,\nLIMIT is applied earlier so the query is much faster.\n \nExplain plan using DB view \nhttps://explain.depesz.com/s/gzjQ\n \nExplain plan using raw SQL\nhttps://explain.depesz.com/s/KgwO\n \nIn both tests LIMIT was 100 with offset = 0.\nIs there any way to force DB view to apply limit earlier? \n \nThanks,\nRaj",
"msg_date": "Thu, 28 Mar 2019 18:41:13 -0400",
"msg_from": "Raj Gandhi <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIMIT OFFSET with DB view vs plain SQL"
},
{
"msg_contents": "+ pgsql-performance\n\nOn Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n\n> Hi everyone,\n>\n>\n>\n> I’m using LIMIT offset with DB view. Looks like query planner is applying\n> the LIMIT for DB view at the end after processing all rows.\n>\n> When running same SQL that was used to create the DB view, LIMIT is\n> applied earlier so the query is much faster.\n>\n>\n>\n> Explain plan using DB view\n>\n> https://explain.depesz.com/s/gzjQ\n>\n>\n>\n> Explain plan using raw SQL\n>\n> https://explain.depesz.com/s/KgwO\n>\n>\n>\n> In both tests LIMIT was 100 with offset = 0.\n>\n> Is there any way to force DB view to apply limit earlier?\n>\n>\n>\n> Thanks,\n>\n> Raj\n>\n\n+\n\npgsql-performance On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:Hi everyone,\n \nI’m using LIMIT offset with DB view. Looks like query\nplanner is applying the LIMIT for DB view at the end after processing all rows.\nWhen running same SQL that was used to create the DB view,\nLIMIT is applied earlier so the query is much faster.\n \nExplain plan using DB view \nhttps://explain.depesz.com/s/gzjQ\n \nExplain plan using raw SQL\nhttps://explain.depesz.com/s/KgwO\n \nIn both tests LIMIT was 100 with offset = 0.\nIs there any way to force DB view to apply limit earlier? \n \nThanks,\nRaj",
"msg_date": "Thu, 28 Mar 2019 18:43:45 -0400",
"msg_from": "Raj Gandhi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
{
"msg_contents": "Raj Gandhi wrote:\n> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.\n> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.\n> \n> Explain plan using DB view\n> https://explain.depesz.com/s/gzjQ\n> \n> Explain plan using raw SQL\n> https://explain.depesz.com/s/KgwO\n> \n> In both tests LIMIT was 100 with offset = 0.\n> Is there any way to force DB view to apply limit earlier?\n\nPlease show\n\n- the view definition\n- the query on the view\n- the query without the view\n\nYours,\nLaurenz Albe\n-- \n+43-670-6056265\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 14:53:10 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n>\n> + pgsql-performance\n>\n> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>>\n>>\n>> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.\n>>\n>> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.\n>>\n>>\n>>\n>> Explain plan using DB view\n>>\n>> https://explain.depesz.com/s/gzjQ\n>>\n>>\n>>\n>> Explain plan using raw SQL\n>>\n>> https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>> In both tests LIMIT was 100 with offset = 0.\n>>\n>> Is there any way to force DB view to apply limit earlier?\n\nhuh. OFFSET does indeed force a materialize plan. This is a widely\nused tactic to hack the planner ('OFFSET 0').\n\nMaybe try converting your query from something like:\n\nSELECT * FROM foo LIMIT m OFFSET N;\nto\nWITH data AS\n(\n SELECT * FROM foo LIMIT m + n\n)\nSELECT * FROM foo OFFSET n;\n\nI didn't try this, and it may not help, but it's worth a shot.\n\nmerlin\n\n\n",
"msg_date": "Fri, 29 Mar 2019 10:28:20 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
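A runnable version of the CTE idea above, with a throwaway foo table standing in for the real view and the thread's values of limit = 100 and offset = 0 spelled out; note that for the trick to have any effect the outer query must read from the CTE, not from the base table again:

-- Disposable data to experiment with.
CREATE TEMP TABLE foo AS
SELECT g AS id, now() AS start_time
FROM generate_series(1, 10000) g;

-- Pull LIMIT m+n inside the CTE, apply the OFFSET outside it.
WITH data AS (
    SELECT * FROM foo ORDER BY id LIMIT 100 + 0
)
SELECT * FROM data ORDER BY id OFFSET 0;

Whether this changes the plan for the view in question can only be verified with EXPLAIN (ANALYZE) on the real schema.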
{
"msg_contents": "Merlin, I tried the hack you suggested but that didn't work. Planner used\nthe same path.\n\nThe same query works much faster when using the raw SQL instead of DB view:\n\nHere is the definition of DB View ‘job’\n\n SELECT w.id,\n\n w.parent_id,\n\n w.status AS state,\n\n w.percent_complete AS progress_percentage,\n\n w.start_time,\n\n w.end_time,\n\n w.est_completion_time AS estimated_completion_time,\n\n w.root_id,\n\n w.internal AS is_internal,\n\n w.order_id AS step_order,\n\n c.resource_type,\n\n c.resource_id,\n\n c.id AS command_id,\n\n c.client_cookie,\n\n c.user_name AS \"user\",\n\n c.metadata,\n\n c.client_address,\n\n response_body(r.*, w.*) AS response_body\n\n FROM work_unit w\n\n LEFT JOIN command c ON c.work_unit_id = w.id\n\n LEFT JOIN command_response r ON r.command_id::text = c.id::text;\n\n\n\n\n\n*Query that uses the DB view:*\n\nSELECT id, start_time\n\nFROM job\n\norder by id LIMIT 101 OFFSET 0;\n\n\n\nExplain plan: https://explain.depesz.com/s/gzjQ\n\n\n *Query using the raw SQL*\n\n<SQL from Job DB View definition>\n\nORDER BY id LIMIT 101 OFFSET 0;\n\n\n\nExplain plan:https://explain.depesz.com/s/KgwO\n\n\n\n\n\nOn Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n> >\n> > + pgsql-performance\n> >\n> > On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]>\n> wrote:\n> >>\n> >> Hi everyone,\n> >>\n> >>\n> >>\n> >> I’m using LIMIT offset with DB view. Looks like query planner is\n> applying the LIMIT for DB view at the end after processing all rows.\n> >>\n> >> When running same SQL that was used to create the DB view, LIMIT is\n> applied earlier so the query is much faster.\n> >>\n> >>\n> >>\n> >> Explain plan using DB view\n> >>\n> >> https://explain.depesz.com/s/gzjQ\n> >>\n> >>\n> >>\n> >> Explain plan using raw SQL\n> >>\n> >> https://explain.depesz.com/s/KgwO\n> >>\n> >>\n> >>\n> >> In both tests LIMIT was 100 with offset = 0.\n> >>\n> >> Is there any way to force DB view to apply limit earlier?\n>\n> huh. OFFSET does indeed force a materialize plan. This is a widely\n> used tactic to hack the planner ('OFFSET 0').\n>\n> Maybe try converting your query from something like:\n>\n> SELECT * FROM foo LIMIT m OFFSET N;\n> to\n> WITH data AS\n> (\n> SELECT * FROM foo LIMIT m + n\n> )\n> SELECT * FROM foo OFFSET n;\n>\n> I didn't try this, and it may not help, but it's worth a shot.\n>\n> merlin\n>\n\nMerlin, I tried the hack you suggested but that didn't work. 
Planner used the same path.The same query works much faster when using the raw SQL instead of DB view:\nHere is the definition of DB View ‘job’\n SELECT w.id,\n w.parent_id,\n w.status AS state,\n w.percent_complete\nAS progress_percentage,\n w.start_time,\n w.end_time,\n \nw.est_completion_time AS estimated_completion_time,\n w.root_id,\n w.internal AS is_internal,\n w.order_id AS\nstep_order,\n c.resource_type,\n c.resource_id,\n c.id AS\ncommand_id,\n c.client_cookie,\n c.user_name AS\n\"user\",\n c.metadata,\n c.client_address,\n response_body(r.*,\nw.*) AS response_body\n FROM work_unit w\n LEFT JOIN command\nc ON c.work_unit_id = w.id\n LEFT JOIN\ncommand_response r ON r.command_id::text = c.id::text;\n \n \nQuery that uses the DB view:\nSELECT id, start_time\nFROM job\norder by id LIMIT 101\nOFFSET 0;\n \nExplain plan: https://explain.depesz.com/s/gzjQ\n Query using the raw SQL<SQL from Job DB View definition>\nORDER BY id LIMIT 101 OFFSET 0;\n \nExplain plan:https://explain.depesz.com/s/KgwO\n \nOn Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]> wrote:On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n>\n> + pgsql-performance\n>\n> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>>\n>>\n>> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.\n>>\n>> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.\n>>\n>>\n>>\n>> Explain plan using DB view\n>>\n>> https://explain.depesz.com/s/gzjQ\n>>\n>>\n>>\n>> Explain plan using raw SQL\n>>\n>> https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>> In both tests LIMIT was 100 with offset = 0.\n>>\n>> Is there any way to force DB view to apply limit earlier?\n\nhuh. OFFSET does indeed force a materialize plan. This is a widely\nused tactic to hack the planner ('OFFSET 0').\n\nMaybe try converting your query from something like:\n\nSELECT * FROM foo LIMIT m OFFSET N;\nto\nWITH data AS\n(\n SELECT * FROM foo LIMIT m + n\n)\nSELECT * FROM foo OFFSET n;\n\nI didn't try this, and it may not help, but it's worth a shot.\n\nmerlin",
"msg_date": "Fri, 29 Mar 2019 19:38:09 -0400",
"msg_from": "Raj Gandhi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
{
"msg_contents": "Any other idea how to resolve the performance issue with the database view?\n\nOn Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:\n\n> Merlin, I tried the hack you suggested but that didn't work. Planner used\n> the same path.\n>\n> The same query works much faster when using the raw SQL instead of DB\n> view:\n>\n> Here is the definition of DB View ‘job’\n>\n> SELECT w.id,\n>\n> w.parent_id,\n>\n> w.status AS state,\n>\n> w.percent_complete AS progress_percentage,\n>\n> w.start_time,\n>\n> w.end_time,\n>\n> w.est_completion_time AS estimated_completion_time,\n>\n> w.root_id,\n>\n> w.internal AS is_internal,\n>\n> w.order_id AS step_order,\n>\n> c.resource_type,\n>\n> c.resource_id,\n>\n> c.id AS command_id,\n>\n> c.client_cookie,\n>\n> c.user_name AS \"user\",\n>\n> c.metadata,\n>\n> c.client_address,\n>\n> response_body(r.*, w.*) AS response_body\n>\n> FROM work_unit w\n>\n> LEFT JOIN command c ON c.work_unit_id = w.id\n>\n> LEFT JOIN command_response r ON r.command_id::text = c.id::text;\n>\n>\n>\n>\n>\n> *Query that uses the DB view:*\n>\n> SELECT id, start_time\n>\n> FROM job\n>\n> order by id LIMIT 101 OFFSET 0;\n>\n>\n>\n> Explain plan: https://explain.depesz.com/s/gzjQ\n>\n>\n> *Query using the raw SQL*\n>\n> <SQL from Job DB View definition>\n>\n> ORDER BY id LIMIT 101 OFFSET 0;\n>\n>\n>\n> Explain plan:https://explain.depesz.com/s/KgwO\n>\n>\n>\n>\n>\n> On Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]>\n> wrote:\n>\n>> On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n>> >\n>> > + pgsql-performance\n>> >\n>> > On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]>\n>> wrote:\n>> >>\n>> >> Hi everyone,\n>> >>\n>> >>\n>> >>\n>> >> I’m using LIMIT offset with DB view. Looks like query planner is\n>> applying the LIMIT for DB view at the end after processing all rows.\n>> >>\n>> >> When running same SQL that was used to create the DB view, LIMIT is\n>> applied earlier so the query is much faster.\n>> >>\n>> >>\n>> >>\n>> >> Explain plan using DB view\n>> >>\n>> >> https://explain.depesz.com/s/gzjQ\n>> >>\n>> >>\n>> >>\n>> >> Explain plan using raw SQL\n>> >>\n>> >> https://explain.depesz.com/s/KgwO\n>> >>\n>> >>\n>> >>\n>> >> In both tests LIMIT was 100 with offset = 0.\n>> >>\n>> >> Is there any way to force DB view to apply limit earlier?\n>>\n>> huh. OFFSET does indeed force a materialize plan. This is a widely\n>> used tactic to hack the planner ('OFFSET 0').\n>>\n>> Maybe try converting your query from something like:\n>>\n>> SELECT * FROM foo LIMIT m OFFSET N;\n>> to\n>> WITH data AS\n>> (\n>> SELECT * FROM foo LIMIT m + n\n>> )\n>> SELECT * FROM foo OFFSET n;\n>>\n>> I didn't try this, and it may not help, but it's worth a shot.\n>>\n>> merlin\n>>\n>\n\nAny other idea how to resolve the performance issue with the database view?On Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:Merlin, I tried the hack you suggested but that didn't work. 
Planner used the same path.The same query works much faster when using the raw SQL instead of DB view:\nHere is the definition of DB View ‘job’\n SELECT w.id,\n w.parent_id,\n w.status AS state,\n w.percent_complete\nAS progress_percentage,\n w.start_time,\n w.end_time,\n \nw.est_completion_time AS estimated_completion_time,\n w.root_id,\n w.internal AS is_internal,\n w.order_id AS\nstep_order,\n c.resource_type,\n c.resource_id,\n c.id AS\ncommand_id,\n c.client_cookie,\n c.user_name AS\n\"user\",\n c.metadata,\n c.client_address,\n response_body(r.*,\nw.*) AS response_body\n FROM work_unit w\n LEFT JOIN command\nc ON c.work_unit_id = w.id\n LEFT JOIN\ncommand_response r ON r.command_id::text = c.id::text;\n \n \nQuery that uses the DB view:\nSELECT id, start_time\nFROM job\norder by id LIMIT 101\nOFFSET 0;\n \nExplain plan: https://explain.depesz.com/s/gzjQ\n Query using the raw SQL<SQL from Job DB View definition>\nORDER BY id LIMIT 101 OFFSET 0;\n \nExplain plan:https://explain.depesz.com/s/KgwO\n \nOn Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]> wrote:On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n>\n> + pgsql-performance\n>\n> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>>\n>>\n>> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.\n>>\n>> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.\n>>\n>>\n>>\n>> Explain plan using DB view\n>>\n>> https://explain.depesz.com/s/gzjQ\n>>\n>>\n>>\n>> Explain plan using raw SQL\n>>\n>> https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>> In both tests LIMIT was 100 with offset = 0.\n>>\n>> Is there any way to force DB view to apply limit earlier?\n\nhuh. OFFSET does indeed force a materialize plan. This is a widely\nused tactic to hack the planner ('OFFSET 0').\n\nMaybe try converting your query from something like:\n\nSELECT * FROM foo LIMIT m OFFSET N;\nto\nWITH data AS\n(\n SELECT * FROM foo LIMIT m + n\n)\nSELECT * FROM foo OFFSET n;\n\nI didn't try this, and it may not help, but it's worth a shot.\n\nmerlin",
"msg_date": "Mon, 1 Apr 2019 09:56:05 -0400",
"msg_from": "Raj Gandhi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
{
"msg_contents": "Thanks Rui. The performance of using function is close to the plain SQL.\n\nWhy Query planner is choosing different path with DB view?\n\n\nexplain analyze select foo(101,0);\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------\n ProjectSet (cost=0.00..5.27 rows=1000 width=32) (actual\ntime=10.340..10.374 rows=101 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001\nrows=1 loops=1)\n Planning time: 0.035 ms\n Execution time: 10.436 ms\n(4 rows)\n\n\n\nOn Mon, Apr 1, 2019 at 4:14 PM Rui DeSousa <[email protected]> wrote:\n\n> Try using a function that returns the result set.\n>\n> i.e.\n>\n> create or replace function foo(_limit int, _offset int)\n> returns setof sample_table\n> as $$\n> begin\n> return query\n> select *\n> from sample_table\n> order by created_date\n> limit _limit\n> offset _offset\n> ;\n> end;\n> $$ language plpgsql\n> volatile\n> ;\n>\n>\n> Given your query; return a table instead of a set. i.e.:\n>\n> returns table (\n> id int\n> , parent_id int\n> .\n> .\n> .\n> , response_body text\n> )\n> as $$\n>\n>\n> Query example:\n>\n> select * from foo(100, 50);\n>\n>\n> On Apr 1, 2019, at 9:56 AM, Raj Gandhi <[email protected]> wrote:\n>\n> Any other idea how to resolve the performance issue with the database view?\n>\n> On Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:\n>\n>> Merlin, I tried the hack you suggested but that didn't work. Planner\n>> used the same path.\n>>\n>> The same query works much faster when using the raw SQL instead of DB\n>> view:\n>>\n>> Here is the definition of DB View ‘job’\n>> SELECT w.id,\n>> w.parent_id,\n>> w.status AS state,\n>> w.percent_complete AS progress_percentage,\n>> w.start_time,\n>> w.end_time,\n>> w.est_completion_time AS estimated_completion_time,\n>> w.root_id,\n>> w.internal AS is_internal,\n>> w.order_id AS step_order,\n>> c.resource_type,\n>> c.resource_id,\n>> c.id AS command_id,\n>> c.client_cookie,\n>> c.user_name AS \"user\",\n>> c.metadata,\n>> c.client_address,\n>> response_body(r.*, w.*) AS response_body\n>> FROM work_unit w\n>> LEFT JOIN command c ON c.work_unit_id = w.id\n>> LEFT JOIN command_response r ON r.command_id::text = c.id::text;\n>>\n>>\n>>\n>>\n>> *Query that uses the DB view:*\n>> SELECT id, start_time\n>> FROM job\n>> order by id LIMIT 101 OFFSET 0;\n>>\n>>\n>> Explain plan: https://explain.depesz.com/s/gzjQ\n>>\n>> *Query using the raw SQL*\n>> <SQL from Job DB View definition>\n>> ORDER BY id LIMIT 101 OFFSET 0;\n>>\n>>\n>> Explain plan:https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>>\n>> On Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]>\n>> wrote:\n>>\n>>> On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]>\n>>> wrote:\n>>> >\n>>> > + pgsql-performance\n>>> >\n>>> > On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]>\n>>> wrote:\n>>> >>\n>>> >> Hi everyone,\n>>> >>\n>>> >>\n>>> >>\n>>> >> I’m using LIMIT offset with DB view. 
Looks like query planner is\n>>> applying the LIMIT for DB view at the end after processing all rows.\n>>> >>\n>>> >> When running same SQL that was used to create the DB view, LIMIT is\n>>> applied earlier so the query is much faster.\n>>> >>\n>>> >>\n>>> >>\n>>> >> Explain plan using DB view\n>>> >>\n>>> >> https://explain.depesz.com/s/gzjQ\n>>> >>\n>>> >>\n>>> >>\n>>> >> Explain plan using raw SQL\n>>> >>\n>>> >> https://explain.depesz.com/s/KgwO\n>>> >>\n>>> >>\n>>> >>\n>>> >> In both tests LIMIT was 100 with offset = 0.\n>>> >>\n>>> >> Is there any way to force DB view to apply limit earlier?\n>>>\n>>> huh. OFFSET does indeed force a materialize plan. This is a widely\n>>> used tactic to hack the planner ('OFFSET 0').\n>>>\n>>> Maybe try converting your query from something like:\n>>>\n>>> SELECT * FROM foo LIMIT m OFFSET N;\n>>> to\n>>> WITH data AS\n>>> (\n>>> SELECT * FROM foo LIMIT m + n\n>>> )\n>>> SELECT * FROM foo OFFSET n;\n>>>\n>>> I didn't try this, and it may not help, but it's worth a shot.\n>>>\n>>> merlin\n>>>\n>>\n>\n\nThanks Rui. The performance of using function is close to the plain SQL.Why Query planner is choosing different path with DB view?explain analyze select foo(101,0); QUERY PLAN ------------------------------------------------------------------------------------------------ ProjectSet (cost=0.00..5.27 rows=1000 width=32) (actual time=10.340..10.374 rows=101 loops=1) -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1) Planning time: 0.035 ms Execution time: 10.436 ms(4 rows)On Mon, Apr 1, 2019 at 4:14 PM Rui DeSousa <[email protected]> wrote:Try using a function that returns the result set.i.e. create or replace function foo(_limit int, _offset int) returns setof sample_tableas $$begin return query select * from sample_table order by created_date limit _limit offset _offset ;end;$$ language plpgsql volatile; Given your query; return a table instead of a set. i.e.: returns table ( id int , parent_id int . . . , response_body text)as $$Query example: select * from foo(100, 50);On Apr 1, 2019, at 9:56 AM, Raj Gandhi <[email protected]> wrote:Any other idea how to resolve the performance issue with the database view?On Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:Merlin, I tried the hack you suggested but that didn't work. 
Planner used the same path.The same query works much faster when using the raw SQL instead of DB view:Here is the definition of DB View ‘job’ SELECT w.id, w.parent_id, w.status AS state, w.percent_complete\nAS progress_percentage, w.start_time, w.end_time, \nw.est_completion_time AS estimated_completion_time, w.root_id, w.internal AS is_internal, w.order_id AS\nstep_order, c.resource_type, c.resource_id, c.id AS\ncommand_id, c.client_cookie, c.user_name AS\n\"user\", c.metadata, c.client_address, response_body(r.*,\nw.*) AS response_body FROM work_unit w LEFT JOIN command\nc ON c.work_unit_id = w.id LEFT JOIN\ncommand_response r ON r.command_id::text = c.id::text; Query that uses the DB view:SELECT id, start_timeFROM joborder by id LIMIT 101\nOFFSET 0; Explain plan: https://explain.depesz.com/s/gzjQ Query using the raw SQL<SQL from Job DB View definition>ORDER BY id LIMIT 101 OFFSET 0; Explain plan:https://explain.depesz.com/s/KgwO \nOn Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]> wrote:On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:\n>\n> + pgsql-performance\n>\n> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>>\n>>\n>> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.\n>>\n>> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.\n>>\n>>\n>>\n>> Explain plan using DB view\n>>\n>> https://explain.depesz.com/s/gzjQ\n>>\n>>\n>>\n>> Explain plan using raw SQL\n>>\n>> https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>> In both tests LIMIT was 100 with offset = 0.\n>>\n>> Is there any way to force DB view to apply limit earlier?\n\nhuh. OFFSET does indeed force a materialize plan. This is a widely\nused tactic to hack the planner ('OFFSET 0').\n\nMaybe try converting your query from something like:\n\nSELECT * FROM foo LIMIT m OFFSET N;\nto\nWITH data AS\n(\n SELECT * FROM foo LIMIT m + n\n)\nSELECT * FROM foo OFFSET n;\n\nI didn't try this, and it may not help, but it's worth a shot.\n\nmerlin",
"msg_date": "Mon, 1 Apr 2019 22:06:11 -0400",
"msg_from": "Raj Gandhi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
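A self-contained sketch of the set-returning-function approach discussed above, using a throwaway sample_table and a hypothetical job_page() function rather than the poster's real schema. The call style matters: a set-returning function is normally invoked in the FROM clause; putting it in the select list, as in the EXPLAIN above, is what produces the ProjectSet node:

CREATE TABLE sample_table (
    id           int PRIMARY KEY,
    created_date timestamptz
);

INSERT INTO sample_table
SELECT g, now() - g * interval '1 minute'
FROM generate_series(1, 10000) g;

CREATE OR REPLACE FUNCTION job_page(_limit int, _offset int)
RETURNS SETOF sample_table
AS $$
BEGIN
    RETURN QUERY
    SELECT s.*
    FROM sample_table s
    ORDER BY s.created_date
    LIMIT _limit
    OFFSET _offset;
END;
$$ LANGUAGE plpgsql;

-- Invoked in FROM, each output column is a real column:
SELECT * FROM job_page(101, 0);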
{
"msg_contents": "Thanks,\nSameer\n+65 81100350\n*Please consider the environment before printing this e-mail!*\n\n\nOn Mon, Apr 1, 2019 at 9:57 PM Raj Gandhi <[email protected]> wrote:\n\n> Any other idea how to resolve the performance issue with the database view?\n>\n> On Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:\n>\n>> Merlin, I tried the hack you suggested but that didn't work. Planner\n>> used the same path.\n>>\n>> The same query works much faster when using the raw SQL instead of DB\n>> view:\n>>\n>> Here is the definition of DB View ‘job’\n>>\n>> SELECT w.id,\n>>\n>> w.parent_id,\n>>\n>> w.status AS state,\n>>\n>> w.percent_complete AS progress_percentage,\n>>\n>> w.start_time,\n>>\n>> w.end_time,\n>>\n>> w.est_completion_time AS estimated_completion_time,\n>>\n>> w.root_id,\n>>\n>> w.internal AS is_internal,\n>>\n>> w.order_id AS step_order,\n>>\n>> c.resource_type,\n>>\n>> c.resource_id,\n>>\n>> c.id AS command_id,\n>>\n>> c.client_cookie,\n>>\n>> c.user_name AS \"user\",\n>>\n>> c.metadata,\n>>\n>> c.client_address,\n>>\n>> response_body(r.*, w.*) AS response_body\n>>\n>> FROM work_unit w\n>>\n>> LEFT JOIN command c ON c.work_unit_id = w.id\n>>\n>> LEFT JOIN command_response r ON r.command_id::text = c.id::text;\n>>\n>>\n>>\n>>\n>>\n>> *Query that uses the DB view:*\n>>\n>> SELECT id, start_time\n>>\n>> FROM job\n>>\n>> order by id LIMIT 101 OFFSET 0;\n>>\n>>\n>>\n>> Explain plan: https://explain.depesz.com/s/gzjQ\n>>\n>>\n>> *Query using the raw SQL*\n>>\n>> <SQL from Job DB View definition>\n>>\n>> ORDER BY id LIMIT 101 OFFSET 0;\n>>\n>>\n>>\n>> Explain plan:https://explain.depesz.com/s/KgwO\n>>\n>\nI think the row count on both you explain plan does not go well with what\nwas anticipated by the planner.\n\ncan you run analyze on all the tables in your view query and try both the\nqueries again?\n\n\n\n>\n>>\n>>\n>>\n>> On Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]>\n>> wrote:\n>>\n>>> On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]>\n>>> wrote:\n>>> >\n>>> > + pgsql-performance\n>>> >\n>>> > On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]>\n>>> wrote:\n>>> >>\n>>> >> Hi everyone,\n>>> >>\n>>> >>\n>>> >>\n>>> >> I’m using LIMIT offset with DB view. Looks like query planner is\n>>> applying the LIMIT for DB view at the end after processing all rows.\n>>> >>\n>>> >> When running same SQL that was used to create the DB view, LIMIT is\n>>> applied earlier so the query is much faster.\n>>> >>\n>>> >>\n>>> >>\n>>> >> Explain plan using DB view\n>>> >>\n>>> >> https://explain.depesz.com/s/gzjQ\n>>> >>\n>>> >>\n>>> >>\n>>> >> Explain plan using raw SQL\n>>> >>\n>>> >> https://explain.depesz.com/s/KgwO\n>>> >>\n>>> >>\n>>> >>\n>>> >> In both tests LIMIT was 100 with offset = 0.\n>>> >>\n>>> >> Is there any way to force DB view to apply limit earlier?\n>>>\n>>> huh. OFFSET does indeed force a materialize plan. 
This is a widely\n>>> used tactic to hack the planner ('OFFSET 0').\n>>>\n>>> Maybe try converting your query from something like:\n>>>\n>>> SELECT * FROM foo LIMIT m OFFSET N;\n>>> to\n>>> WITH data AS\n>>> (\n>>> SELECT * FROM foo LIMIT m + n\n>>> )\n>>> SELECT * FROM foo OFFSET n;\n>>>\n>>> I didn't try this, and it may not help, but it's worth a shot.\n>>>\n>>> merlin\n>>>\n>>\n\nThanks,Sameer+65 81100350Please consider the environment before printing this e-mail!On Mon, Apr 1, 2019 at 9:57 PM Raj Gandhi <[email protected]> wrote:Any other idea how to resolve the performance issue with the database view?On Fri, Mar 29, 2019 at 7:38 PM Raj Gandhi <[email protected]> wrote:Merlin, I tried the hack you suggested but that didn't work. Planner used the same path.The same query works much faster when using the raw SQL instead of DB view:\nHere is the definition of DB View ‘job’\n SELECT w.id,\n w.parent_id,\n w.status AS state,\n w.percent_complete AS progress_percentage,\n w.start_time,\n w.end_time,\n w.est_completion_time AS estimated_completion_time,\n w.root_id,\n w.internal AS is_internal,\n w.order_id AS step_order,\n c.resource_type,\n c.resource_id,\n c.id AS command_id,\n c.client_cookie,\n c.user_name AS \"user\",\n c.metadata,\n c.client_address,\n response_body(r.*, w.*) AS response_body\n FROM work_unit w\n LEFT JOIN command c ON c.work_unit_id = w.id\n LEFT JOIN command_response r ON r.command_id::text = c.id::text;\n \n \nQuery that uses the DB view:\nSELECT id, start_time\nFROM job\norder by id LIMIT 101 OFFSET 0;\n \nExplain plan: https://explain.depesz.com/s/gzjQ\n Query using the raw SQL<SQL from Job DB View definition>\nORDER BY id LIMIT 101 OFFSET 0;\n \nExplain plan:https://explain.depesz.com/s/KgwOI think the row count on both you explain plan does not go well with what was anticipated by the planner. can you run analyze on all the tables in your view query and try both the queries again? \n \nOn Fri, Mar 29, 2019 at 11:26 AM Merlin Moncure <[email protected]> wrote:On Thu, Mar 28, 2019 at 5:44 PM Raj Gandhi <[email protected]> wrote:>> + pgsql-performance>> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:>>>> Hi everyone,>>>>>>>> I’m using LIMIT offset with DB view. Looks like query planner is applying the LIMIT for DB view at the end after processing all rows.>>>> When running same SQL that was used to create the DB view, LIMIT is applied earlier so the query is much faster.>>>>>>>> Explain plan using DB view>>>> https://explain.depesz.com/s/gzjQ>>>>>>>> Explain plan using raw SQL>>>> https://explain.depesz.com/s/KgwO>>>>>>>> In both tests LIMIT was 100 with offset = 0.>>>> Is there any way to force DB view to apply limit earlier?\nhuh. OFFSET does indeed force a materialize plan. This is a widelyused tactic to hack the planner ('OFFSET 0').\nMaybe try converting your query from something like:\nSELECT * FROM foo LIMIT m OFFSET N;toWITH data AS( SELECT * FROM foo LIMIT m + n)SELECT * FROM foo OFFSET n;\nI didn't try this, and it may not help, but it's worth a shot.\nmerlin",
"msg_date": "Tue, 2 Apr 2019 12:16:32 +0800",
"msg_from": "SAMEER KUMAR <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
},
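Refreshing statistics on the three tables behind the view is cheap to try before anything more invasive; a sketch using the table names from the view definition quoted above:

ANALYZE work_unit;
ANALYZE command;
ANALYZE command_response;

-- Confirm when the planner statistics were last refreshed:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('work_unit', 'command', 'command_response');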
{
"msg_contents": "Hi Raj,\n\nI have long time without working on pgsql performance, but you can try\nmaterialized views or if you are already using its try apply some\nperformance tips...\n\nThis are some link i found in a fast search, but if you solution is going\nby this way this can be a kickstart to solve your problem..\n\nhttps://thoughtbot.com/blog/advanced-postgres-performance-tips\n\nhttp://www.postgresqltutorial.com/postgresql-materialized-views/\n\nTake in account that materialized views have to be filled and use\nadditional space..\n\nHope this can help you solving you issue\n\n\n\nOn Thu, Mar 28, 2019, 7:44 PM Raj Gandhi <[email protected]> wrote:\n\n> + pgsql-performance\n>\n> On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:\n>\n>> Hi everyone,\n>>\n>>\n>>\n>> I’m using LIMIT offset with DB view. Looks like query planner is applying\n>> the LIMIT for DB view at the end after processing all rows.\n>>\n>> When running same SQL that was used to create the DB view, LIMIT is\n>> applied earlier so the query is much faster.\n>>\n>>\n>>\n>> Explain plan using DB view\n>>\n>> https://explain.depesz.com/s/gzjQ\n>>\n>>\n>>\n>> Explain plan using raw SQL\n>>\n>> https://explain.depesz.com/s/KgwO\n>>\n>>\n>>\n>> In both tests LIMIT was 100 with offset = 0.\n>>\n>> Is there any way to force DB view to apply limit earlier?\n>>\n>>\n>>\n>> Thanks,\n>>\n>> Raj\n>>\n>\n\nHi Raj,I have long time without working on pgsql performance, but you can try materialized views or if you are already using its try apply some performance tips... This are some link i found in a fast search, but if you solution is going by this way this can be a kickstart to solve your problem.. https://thoughtbot.com/blog/advanced-postgres-performance-tipshttp://www.postgresqltutorial.com/postgresql-materialized-views/Take in account that materialized views have to be filled and use additional space.. Hope this can help you solving you issueOn Thu, Mar 28, 2019, 7:44 PM Raj Gandhi <[email protected]> wrote:+\n\npgsql-performance On Thu, Mar 28, 2019 at 6:41 PM Raj Gandhi <[email protected]> wrote:Hi everyone,\n \nI’m using LIMIT offset with DB view. Looks like query\nplanner is applying the LIMIT for DB view at the end after processing all rows.\nWhen running same SQL that was used to create the DB view,\nLIMIT is applied earlier so the query is much faster.\n \nExplain plan using DB view \nhttps://explain.depesz.com/s/gzjQ\n \nExplain plan using raw SQL\nhttps://explain.depesz.com/s/KgwO\n \nIn both tests LIMIT was 100 with offset = 0.\nIs there any way to force DB view to apply limit earlier? \n \nThanks,\nRaj",
"msg_date": "Tue, 2 Apr 2019 01:49:31 -0300",
"msg_from": "=?UTF-8?Q?Ram=C3=B3n_Bastidas?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT OFFSET with DB view vs plain SQL"
}
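A sketch of the materialized-view route suggested above, applied to the job view from this thread; it assumes id is unique (required for REFRESH ... CONCURRENTLY) and that somewhat stale data between refreshes is acceptable:

CREATE MATERIALIZED VIEW job_mv AS
SELECT * FROM job;

-- A unique index speeds up the paged query and is required for
-- REFRESH MATERIALIZED VIEW CONCURRENTLY.
CREATE UNIQUE INDEX job_mv_id_idx ON job_mv (id);

-- The paged query now reads precomputed rows:
SELECT id, start_time
FROM job_mv
ORDER BY id
LIMIT 101 OFFSET 0;

-- Refresh periodically without blocking readers:
REFRESH MATERIALIZED VIEW CONCURRENTLY job_mv;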
] |
[
{
"msg_contents": "PostgreSQL version: 11.2\nOperating system: Linux\nDescription:\n\nWe have a wuite complex CTE which collects data fast enough for us and has a ok execution plan.\n\nWhen we insert the result into a table like\n\nWith _some_data AS (\nSELECT….\n), _some_other_data AS (\nSELECT ….\n)\nINSERT INTO table1\n SELECT *\n FROM _some_other_data\n;\n\nIt works quite well and we are happy with it’s performance (arround 10 seconds).\nBut as soon as we add an ON CONFLICT clause (like below) the queries runs for ages and doesnt seem to stop. We usually terminate it after 12 Hours\n\nWith _some_data AS (\nSELECT….\n), _some_other_data AS (\nSELECT ….\n)\nINSERT INTO table1\n SELECT *\n FROM _some_other_data\nON CONFLICT (column1, column2) DO\nUPDATE\n SET column1 = excluded.columnA,\ncolumn2 = excluded.columnB,\n.\n.\n.\n;\n\n\nWhere is the Problem?\n\n\n\n\n\n\n\n\n\nPostgreSQL version: 11.2\nOperating system: Linux\nDescription: \n \nWe have a wuite complex CTE which collects data fast enough for us and has a ok execution plan.\n \nWhen we insert the result into a table like \n \nWith _some_data AS (\nSELECT….\n), _some_other_data AS (\nSELECT ….\n)\nINSERT INTO table1\n SELECT *\n FROM _some_other_data\n;\n \nIt works quite well and we are happy with it’s performance (arround 10 seconds).\nBut as soon as we add an ON CONFLICT clause (like below) the queries runs for ages and doesnt seem to stop. We usually terminate it after 12 Hours\n\n \nWith _some_data AS (\nSELECT….\n), _some_other_data AS (\nSELECT ….\n)\nINSERT INTO table1\n SELECT *\n FROM _some_other_data\nON CONFLICT (column1, column2) DO\nUPDATE \n SET column1 = excluded.columnA,\ncolumn2 = excluded.columnB,\n.\n.\n.\n;\n \n \nWhere is the Problem?",
"msg_date": "Fri, 29 Mar 2019 14:29:02 +0000",
"msg_from": "Stephan Schmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "endless quere when upsert with ON CONFLICT clause"
},
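For context, ON CONFLICT (column1, column2) can only arbitrate on a unique index or constraint matching exactly those columns; a minimal self-contained upsert of the same shape (all names here are placeholders, not the poster's schema):

CREATE TABLE table1 (
    column1 int,
    column2 int,
    payload text,
    UNIQUE (column1, column2)   -- the arbiter the ON CONFLICT clause relies on
);

-- The first run inserts 100 rows; running it again takes the UPDATE path.
INSERT INTO table1 (column1, column2, payload)
SELECT g, g, 'loaded at ' || clock_timestamp()
FROM generate_series(1, 100) g
ON CONFLICT (column1, column2) DO UPDATE
    SET payload = excluded.payload;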
{
"msg_contents": "\n\nAm 29.03.19 um 15:29 schrieb Stephan Schmidt:\n>\n> PostgreSQL version: 11.2\n> Operating system:�� Linux\n> Description:\n>\n> We have a wuite complex CTE which collects data fast enough for us and \n> has a ok execution plan.\n>\n> When we insert the result into a table like\n>\n> With _/some/_data AS (\n>\n> SELECT�.\n>\n> ), _/some/_other_data AS (\n>\n> SELECT �.\n>\n> )\n>\n> INSERT INTO table1\n>\n> ��������������� SELECT *\n>\n> ��������������� FROM _/some/_other_data\n>\n> ;\n>\n> It works quite well and we are happy with it�s performance (arround 10 \n> seconds).\n>\n> But as soon as we add an ON� CONFLICT clause �(like below) the queries \n> runs for ages and doesnt seem to stop. We usually terminate it after \n> 12 Hours\n>\n> With _/some/_data AS (\n>\n> SELECT�.\n>\n> ), _/some/_other_data AS (\n>\n> SELECT �.\n>\n> )\n>\n> INSERT INTO table1\n>\n> ��������������� SELECT *\n>\n> ��������������� FROM _/some/_other_data\n>\n> ON CONFLICT (column1, column2) DO\n>\n> UPDATE\n>\n> ��������SET column1 = excluded.columnA,\n>\n> column2 = excluded.columnB,\n>\n> .\n>\n> .\n>\n> .\n>\n> ;\n>\n> Where is the Problem?\n>\n\ncan you show us the explain (analyse) - plan?\n\ni have tried to reproduce, but it seems okay for me.\n\ntest=*# create table bla (i int primary key, t text);\nCREATE TABLE\ntest=*# insert into bla select s, 'name ' || s::text from \ngenerate_series(1, 100000) s;\nINSERT 0 100000\ntest=*# commit;\nCOMMIT\n\ntest=*# explain analyse with foo as (select x.* as i from \ngenerate_series(1, 1000) x) insert into bla select * from foo on \nconflict (i) do update set t=excluded.i::text;\n ��������������������������������������������������������� QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n �Insert on bla� (cost=10.00..30.00 rows=1000 width=36) (actual \ntime=16.789..16.789 rows=0 loops=1)\n �� Conflict Resolution: UPDATE\n �� Conflict Arbiter Indexes: bla_pkey\n �� Tuples Inserted: 0\n �� Conflicting Tuples: 1000\n �� CTE foo\n ���� ->� Function Scan on generate_series x� (cost=0.00..10.00 \nrows=1000 width=4) (actual time=0.214..0.443 rows=1000 loops=1)\n �� ->� CTE Scan on foo� (cost=0.00..20.00 rows=1000 width=36) (actual \ntime=0.220..1.124 rows=1000 loops=1)\n �Planning Time: 0.104 ms\n �Execution Time: 16.860 ms\n(10 rows)\n\ntest=*# explain analyse with foo as (select x.* + 10000000 as i from \ngenerate_series(1, 1000) x) insert into bla select * from foo on \nconflict (i) do update set t=excluded.i::text;\n ��������������������������������������������������������� QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n �Insert on bla� (cost=12.50..32.50 rows=1000 width=36) (actual \ntime=13.424..13.424 rows=0 loops=1)\n �� Conflict Resolution: UPDATE\n �� Conflict Arbiter Indexes: bla_pkey\n �� Tuples Inserted: 1000\n �� Conflicting Tuples: 0\n �� CTE foo\n ���� ->� Function Scan on generate_series x� (cost=0.00..12.50 \nrows=1000 width=4) (actual time=0.079..0.468 rows=1000 loops=1)\n �� ->� CTE Scan on foo� (cost=0.00..20.00 rows=1000 width=36) (actual \ntime=0.081..1.325 rows=1000 loops=1)\n �Planning Time: 0.052 ms\n �Execution Time: 13.471 ms\n(10 rows)\n\ntest=*#\n\n\nas you can see, no big difference between the 2 plans.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 17:00:19 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: endless quere when upsert with ON CONFLICT clause"
}
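When a statement appears to run forever, a quick first check (a general diagnostic, not specific to this report) is whether the backend is still executing or is waiting, for example on a lock held by another session:

-- What is each active backend doing right now?
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

-- Are there lock requests that have not been granted?
SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks
WHERE NOT granted;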
] |
[
{
"msg_contents": "We noticed that the following SQL query runs 3 times slower on the latest\nversion of PostgreSQL with parallel query execution. It would be helpful if\nsomeone could shed light on why this is happening.\n\nHere’s the time taken to execute them on older (v9.5.16) and newer versions\n(v11.2) of PostgreSQL (in milliseconds):\n\n+-------------+--------+---------+---------+-----------+\n| | scale1 | scale10 | scale50 | scale 300 |\n+-------------+--------+---------+---------+-----------+\n| v9.5.16 | 88 | 937 | 4721 | 27241 |\n| v11.2 | 288 | 2822 | 13838 | 85081 |\n+-------------+--------+---------+---------+-----------+\n\nWe have shared the following details below:\n1) the associated query,\n2) the commit that activated it,\n3) our high-level analysis,\n4) query execution plans in old and new versions of PostgreSQL, and\n5) information on reproducing these regressions.\n\n### QUERY\n\nselect\n ref_0.ol_delivery_d as c1\nfrom\n public.order_line as ref_0\nwhere EXISTS (\n select\n ref_1.i_im_id as c0\n from\n public.item as ref_1\n where ref_0.ol_d_id <= ref_1.i_im_id\n)\n\n### COMMIT\n\n77cd477 (Enable parallel query by default.)\nWe found several other queries exhibiting performance regression related to\nthis commit.\n\n### HIGH-LEVEL ANALYSIS\n\nWe believe that this regression is due to parallel queries being enabled by\ndefault. Surprisingly, we found that even on a larger TPC-C database (scale\nfactor of 50, roughly 4GB of size), parallel scan is still slower than the\nnon-parallel execution plan in the old version, when the query is not\nreturning any tuples.\n\n### QUERY EXECUTION PLANS\n\n[OLD version]\nNested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8)\n(actual time=0.034..24981.895 rows=90017507 loops=1)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504\nwidth=12) (actual time=0.022..7145.811 rows=90017507 loops=1)\n -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual\ntime=0.000..0.000 rows=1 loops=90017507)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4)\n(actual time=0.006..0.006 rows=1 loops=1)\n\nPlanning time: 0.290 ms\nExecution time: 27241.239 ms\n\n[NEW version]\nGather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual\ntime=0.265..82355.289 rows=90017507 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431\nwidth=8) (actual time=0.033..68529.259 rows=30005836 loops=3)\n Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)\n -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93\nrows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3)\n -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4)\n(actual time=0.001..0.001 rows=1 loops=90017507)\n\nPlanning Time: 0.319 ms\nExecution Time: 85081.158 ms\n\n### REPRODUCING REGRESSION\n\n* The queries can be downloaded here:\nhttps://gts3.org/~/jjung/tpcc/case4.tar.gz\n\n* You can reproduce these results by using the setup described in:\nhttps://www.postgresql.org/message-id/BN6PR07MB3409922471073F2B619A8CA4EE640%40BN6PR07MB3409.namprd07.prod.outlook.com\n\nThanks for the pointers!\n\nBest regards,\nJinho Jung\n\nWe noticed that the following SQL query runs 3 times slower on the latest version of PostgreSQL with parallel query execution. 
It would be helpful if someone could shed light on why this is happening.Here’s the time taken to execute them on older (v9.5.16) and newer versions (v11.2) of PostgreSQL (in milliseconds):+-------------+--------+---------+---------+-----------+| | scale1 | scale10 | scale50 | scale 300 |+-------------+--------+---------+---------+-----------+| v9.5.16 | 88 | 937 | 4721 | 27241 || v11.2 | 288 | 2822 | 13838 | 85081 |+-------------+--------+---------+---------+-----------+We have shared the following details below:1) the associated query,2) the commit that activated it,3) our high-level analysis,4) query execution plans in old and new versions of PostgreSQL, and5) information on reproducing these regressions.### QUERYselect ref_0.ol_delivery_d as c1from public.order_line as ref_0where EXISTS ( select ref_1.i_im_id as c0 from public.item as ref_1 where ref_0.ol_d_id <= ref_1.i_im_id)### COMMIT77cd477 (Enable parallel query by default.)We found several other queries exhibiting performance regression related to this commit.### HIGH-LEVEL ANALYSISWe believe that this regression is due to parallel queries being enabled by default. Surprisingly, we found that even on a larger TPC-C database (scale factor of 50, roughly 4GB of size), parallel scan is still slower than the non-parallel execution plan in the old version, when the query is not returning any tuples.### QUERY EXECUTION PLANS[OLD version]Nested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8) (actual time=0.034..24981.895 rows=90017507 loops=1) Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id) -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504 width=12) (actual time=0.022..7145.811 rows=90017507 loops=1) -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual time=0.000..0.000 rows=1 loops=90017507) -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.006..0.006 rows=1 loops=1)Planning time: 0.290 msExecution time: 27241.239 ms[NEW version]Gather (cost=1000.00..88047487498.82 rows=30005835 width=8) (actual time=0.265..82355.289 rows=90017507 loops=1) Workers Planned: 2 Workers Launched: 2 -> Nested Loop Semi Join (cost=0.00..88044485915.32 rows=12502431 width=8) (actual time=0.033..68529.259 rows=30005836 loops=3) Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id) -> Parallel Seq Scan on order_line ref_0 (cost=0.00..1486400.93 rows=37507293 width=12) (actual time=0.023..2789.901 rows=30005836 loops=3) -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4) (actual time=0.001..0.001 rows=1 loops=90017507)Planning Time: 0.319 msExecution Time: 85081.158 ms### REPRODUCING REGRESSION* The queries can be downloaded here:https://gts3.org/~/jjung/tpcc/case4.tar.gz* You can reproduce these results by using the setup described in:https://www.postgresql.org/message-id/BN6PR07MB3409922471073F2B619A8CA4EE640%40BN6PR07MB3409.namprd07.prod.outlook.comThanks for the pointers!Best regards,Jinho Jung",
"msg_date": "Fri, 29 Mar 2019 12:06:42 -0400",
"msg_from": "Jinho Jung <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need advice: Parallel query execution introduces performance\n regression"
}
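For anyone reproducing this, the parallel and non-parallel plans can be compared on the same 11.x server by disabling parallelism for the session only; a sketch using the query from the report (it assumes the TPC-C tables described above are loaded):

EXPLAIN (ANALYZE, BUFFERS)
SELECT ref_0.ol_delivery_d AS c1
FROM public.order_line AS ref_0
WHERE EXISTS (
    SELECT ref_1.i_im_id AS c0
    FROM public.item AS ref_1
    WHERE ref_0.ol_d_id <= ref_1.i_im_id
);

-- Same query with parallel query turned off for this session:
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE, BUFFERS)
SELECT ref_0.ol_delivery_d AS c1
FROM public.order_line AS ref_0
WHERE EXISTS (
    SELECT ref_1.i_im_id AS c0
    FROM public.item AS ref_1
    WHERE ref_0.ol_d_id <= ref_1.i_im_id
);
RESET max_parallel_workers_per_gather;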
] |
[
{
"msg_contents": "1、postgresql version\n\nqis3_dp2=> select * from version();\n version \n---------------------------------------------------------------------------------------------------------\n PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-28), 64-bit\n(1 row)\n\nqis3_dp2=> \n\n2、postgresql work_mem \n\n\nqis3_dp2=> SHOW work_mem;\n work_mem \n----------\n 2GB\n(1 row)\n\nqis3_dp2=> SHOW shared_buffers;\n shared_buffers \n----------------\n 4028MB\n(1 row)\n\nqis3_dp2=> \n\n3、Table count\n\nqis3_dp2=> select count(*) from QIS_CARPASSEDSTATION;\n count \n----------\n 11453079\n(1 row)\n\nqis3_dp2=> \n\n4、table desc\n\nqis3_dp2=> \\dS QIS_CARPASSEDSTATION;\n Table \"qis_schema.qis_carpassedstation\"\n Column | Type | Collation | Nullable | Default \n--------------+-----------------------------+-----------+----------+---------\n iid | integer | | not null | \n scartypecd | character varying(50) | | | \n svin | character varying(20) | | | \n sstationcd | character varying(50) | | | \n dpassedtime | timestamp(6) with time zone | | | \n dworkdate | date | | | \n iworkyear | integer | | | \n iworkmonth | integer | | | \n iweek | integer | | | \n sinputteamcd | character varying(20) | | | \n sinputdutycd | character varying(20) | | | \n smtoc | character varying(50) | | | \n slineno | character varying(18) | | | \nIndexes:\n \"qis_carpassedstation_pkey\" PRIMARY KEY, btree (iid)\n \"q_carp_dworkdate\" btree (dworkdate)\n \"q_carp_smtoc\" btree (smtoc)\n\nqis3_dp2=> \n\n5、Execute SQL:\nqis3_dp2=> EXPLAIN (analyze true,buffers true) SELECT COUNT(DISTINCT SVIN)\nAS CHECKCARNUM ,SMTOC FROM QIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD\n= 'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\nTO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD') group by SMTOC\n;\n \nQUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------\n GroupAggregate (cost=697738.61..714224.02 rows=372 width=30) (actual\ntime=5908.786..32420.412 rows=410 loops=1)\n Group Key: smtoc\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Sort (cost=697738.61..703232.51 rows=2197559 width=40) (actual\ntime=5907.791..6139.351 rows=2142215 loops=1)\n Sort Key: smtoc\n Sort Method: quicksort Memory: 265665kB\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Gather (cost=1000.00..466253.56 rows=2197559 width=40) (actual\ntime=0.641..1934.614 rows=2142215 loops=1)\n Workers Planned: 5\n Workers Launched: 5\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Parallel Seq Scan on qis_carpassedstation a \n(cost=0.00..245497.66 rows=439512 width=40) (actual time=0.245..1940.527\nrows=357036 loops=6)\n Filter: (((sstationcd)::text = 'VQ3_LYG'::text) AND\n((slineno)::text = ANY ('{1F,2F,3F}'::text[])) AND (dworkdate >=\nto_date('2017-02-11'::text, 'YYYY-MM-DD'::text)) AND (dworkdate <= to_da\nte('2019-03-11'::text, 'YYYY-MM-DD'::text)))\n Rows Removed by Filter: 1551811\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n Planning Time: 0.393 ms\n Execution Time: 32439.704 ms\n(21 rows)\n\nqis3_dp2=> \n\n\n6、Why does sort take a long time to execute and how can you optimize it?\nThanks!!!\n\n\n\n\n\n\n--\nSent from: 
http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Mon, 1 Apr 2019 02:45:14 -0700 (MST)",
"msg_from": "\"tank.zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Sort cost Poor performance?"
},
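One rewrite that is sometimes worth trying when COUNT(DISTINCT ...) forces a large sort is to de-duplicate in a subquery first, which lets the planner consider hash aggregation instead; whether it wins here depends on the data, so treat it as an experiment. It assumes svin is never NULL (otherwise add AND svin IS NOT NULL to match COUNT(DISTINCT svin) exactly):

SELECT COUNT(*) AS checkcarnum, smtoc
FROM (
    SELECT DISTINCT smtoc, svin
    FROM qis_carpassedstation a
    WHERE a.sstationcd = 'VQ3_LYG'
      AND a.slineno IN ('1F', '2F', '3F')
      AND a.dworkdate >= TO_DATE('2017-02-11', 'YYYY-MM-DD')
      AND a.dworkdate <= TO_DATE('2019-03-11', 'YYYY-MM-DD')
) t
GROUP BY smtoc;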
{
"msg_contents": ">>>>> \"tank\" == tank zhang <[email protected]> writes:\n\n tank> smtoc | character varying(50) | | | \n tank> Sort Key: smtoc\n\nWhat is the output of SHOW lc_collate;\n\nOne of the most common reasons for slow sorting is that you're sorting a\ntext/varchar field in a locale other than C. The slowdown for using\nother locales varies according to the data, the locale, and the\noperating system, but 8-20x slowdowns are very common, 50-100x slowdowns\nare not unusual, and there have been reports of even worse cases with\nunusual script combinations.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 01 Apr 2019 11:20:48 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
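A quick way to see how much of the time is collation cost is to time the same sort under the C collation; if the C ordering is acceptable for this report, an expression index can be built to match (a sketch only; column and table names are taken from the thread):

-- Sort under the database collation:
EXPLAIN ANALYZE
SELECT smtoc FROM qis_carpassedstation ORDER BY smtoc;

-- The same sort using the C collation, for comparison:
EXPLAIN ANALYZE
SELECT smtoc FROM qis_carpassedstation ORDER BY smtoc COLLATE "C";

-- An index matching the C ordering, if that ordering is usable:
CREATE INDEX q_carp_smtoc_c ON qis_carpassedstation (smtoc COLLATE "C");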
{
"msg_contents": "Thank you for your reply. \n\nqis3_dp2=> SHOW lc_collate;\n lc_collate \n-------------\n en_US.UTF-8\n(1 row)\n\nTime: 0.311 ms\nqis3_dp2=> \n\nqis3_dp2=> SELECT COUNT(DISTINCT SVIN) AS CHECKCARNUM ,SMTOC FROM\nQIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD = 'VQ3_LYG' AND\nA.SLINYYY-MM-DD') AND A.DWORKDATE <= TO_DATE('2019-03-11','YYYY-MM-DD')\ngroup by SMTOC\n;\n checkcarnum | smtoc \n-------------+-----------------------\n 90 | HT6LHD700 NH731P A\n 690 | HT6LHD700 NH788P A\n 90 | HT6LHD700 R550P A\n 30 | HT6LHD700 YR615M A\n 1141 | HT6MHB700 NH731P A\n\n\n\nIs there any possibility of optimization?\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Mon, 1 Apr 2019 03:30:09 -0700 (MST)",
"msg_from": "\"tank.zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
{
"msg_contents": "On 01/04/2019 23:20, Andrew Gierth wrote:\n>>>>>> \"tank\" == tank zhang <[email protected]> writes:\n> tank> smtoc | character varying(50) | | |\n> tank> Sort Key: smtoc\n>\n> What is the output of SHOW lc_collate;\n>\n> One of the most common reasons for slow sorting is that you're sorting a\n> text/varchar field in a locale other than C. The slowdown for using\n> other locales varies according to the data, the locale, and the\n> operating system, but 8-20x slowdowns are very common, 50-100x slowdowns\n> are not unusual, and there have been reports of even worse cases with\n> unusual script combinations.\n>\nJust wondering...\n\nWould it be possible to optionally enable the system to create a hidden \nsystem column for the text field to be sorted, the new column would be \nthe original column preprocessed to sort correctly & efficiently.ᅵ This \nwould seem to lead to a massive improvement in performance.\n\nDepending relative tradeoffs disk storage vs processing:\n(A) create hidden system column for each sort invocation\n(B) at table creation\n(C) other possibilities\n\n(A) could be done automatically, and possibly controlled via a GUC parameter\n(B) might require a change to the CREATE TABLE syntax\n\nAnyhow, just some thoughts...\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Mon, 1 Apr 2019 23:32:40 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
{
"msg_contents": "Re: Gavin Flower 2019-04-01 <[email protected]>\n> Would it be possible to optionally enable the system to create a hidden\n> system column for the text field to be sorted, the new column would be the\n> original column preprocessed to sort correctly & efficiently.\n\nThat's the idea behind the strxfrm(3) optimization that ultimately got\ndisabled again for non-C locales because glibc fails to implement it\ncorrectly.\n\nChristoph\n\n\n",
"msg_date": "Mon, 1 Apr 2019 12:54:35 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
{
"msg_contents": "Hi,\n\nIf your problem is the sort, try creating an index on the Field that you\nconsider thst could be needed (you can star with smtoc that is the one you\nare grouping and sorting)\n\nAnother thing that i noticed is your work_mem, I thing is too high for a\nglobal config (if you think 2gb can hel for this operation you can set it\nbefore execute the query but only for that session), but generally this\nvalue most be smaller depending on the commons query every sub query uses\nthat amount of mem (i.e if you have a query that have 3 subqueries and each\none with a sort operation and a grouping operation, you can be using 12 gb\nof mem in that only big query, and it doesn't mean it will be faster).. try\nto monitor the uses of ram by pgsql maybe you can be suffering paging\nproblems because os the size of you work_mem and that make the dbms slow too\n\n\n\nOn Mon, Apr 1, 2019, 6:45 AM tank.zhang <[email protected]> wrote:\n\n> 1、postgresql version\n>\n> qis3_dp2=> select * from version();\n> version\n>\n>\n> ---------------------------------------------------------------------------------------------------------\n> PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n> 20150623 (Red Hat 4.8.5-28), 64-bit\n> (1 row)\n>\n> qis3_dp2=>\n>\n> 2、postgresql work_mem\n>\n>\n> qis3_dp2=> SHOW work_mem;\n> work_mem\n> ----------\n> 2GB\n> (1 row)\n>\n> qis3_dp2=> SHOW shared_buffers;\n> shared_buffers\n> ----------------\n> 4028MB\n> (1 row)\n>\n> qis3_dp2=>\n>\n> 3、Table count\n>\n> qis3_dp2=> select count(*) from QIS_CARPASSEDSTATION;\n> count\n> ----------\n> 11453079\n> (1 row)\n>\n> qis3_dp2=>\n>\n> 4、table desc\n>\n> qis3_dp2=> \\dS QIS_CARPASSEDSTATION;\n> Table \"qis_schema.qis_carpassedstation\"\n> Column | Type | Collation | Nullable |\n> Default\n>\n> --------------+-----------------------------+-----------+----------+---------\n> iid | integer | | not null |\n> scartypecd | character varying(50) | | |\n> svin | character varying(20) | | |\n> sstationcd | character varying(50) | | |\n> dpassedtime | timestamp(6) with time zone | | |\n> dworkdate | date | | |\n> iworkyear | integer | | |\n> iworkmonth | integer | | |\n> iweek | integer | | |\n> sinputteamcd | character varying(20) | | |\n> sinputdutycd | character varying(20) | | |\n> smtoc | character varying(50) | | |\n> slineno | character varying(18) | | |\n> Indexes:\n> \"qis_carpassedstation_pkey\" PRIMARY KEY, btree (iid)\n> \"q_carp_dworkdate\" btree (dworkdate)\n> \"q_carp_smtoc\" btree (smtoc)\n>\n> qis3_dp2=>\n>\n> 5、Execute SQL:\n> qis3_dp2=> EXPLAIN (analyze true,buffers true) SELECT COUNT(DISTINCT\n> SVIN)\n> AS CHECKCARNUM ,SMTOC FROM QIS_CARPASSEDSTATION A WHERE 1=1 AND\n> A.SSTATIONCD\n> = 'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\n> TO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\n> TO_DATE('2019-03-11','YYYY-MM-DD') group by SMTOC\n> ;\n>\n> QUERY PLAN\n>\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ---------------------------------------------\n> GroupAggregate (cost=697738.61..714224.02 rows=372 width=30) (actual\n> time=5908.786..32420.412 rows=410 loops=1)\n> Group Key: smtoc\n> Buffers: shared hit=401 read=184983\n> I/O Timings: read=1377.762\n> -> Sort (cost=697738.61..703232.51 rows=2197559 width=40) (actual\n> time=5907.791..6139.351 rows=2142215 loops=1)\n> Sort Key: 
smtoc\n> Sort Method: quicksort Memory: 265665kB\n> Buffers: shared hit=401 read=184983\n> I/O Timings: read=1377.762\n> -> Gather (cost=1000.00..466253.56 rows=2197559 width=40)\n> (actual\n> time=0.641..1934.614 rows=2142215 loops=1)\n> Workers Planned: 5\n> Workers Launched: 5\n> Buffers: shared hit=401 read=184983\n> I/O Timings: read=1377.762\n> -> Parallel Seq Scan on qis_carpassedstation a\n> (cost=0.00..245497.66 rows=439512 width=40) (actual time=0.245..1940.527\n> rows=357036 loops=6)\n> Filter: (((sstationcd)::text = 'VQ3_LYG'::text) AND\n> ((slineno)::text = ANY ('{1F,2F,3F}'::text[])) AND (dworkdate >=\n> to_date('2017-02-11'::text, 'YYYY-MM-DD'::text)) AND (dworkdate <= to_da\n> te('2019-03-11'::text, 'YYYY-MM-DD'::text)))\n> Rows Removed by Filter: 1551811\n> Buffers: shared hit=401 read=184983\n> I/O Timings: read=1377.762\n> Planning Time: 0.393 ms\n> Execution Time: 32439.704 ms\n> (21 rows)\n>\n> qis3_dp2=>\n>\n>\n> 6、Why does sort take a long time to execute and how can you optimize it?\n> Thanks!!!\n>\n>\n>\n>\n>\n>\n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\n>\n\nHi, If your problem is the sort, try creating an index on the Field that you consider thst could be needed (you can star with smtoc that is the one you are grouping and sorting) Another thing that i noticed is your work_mem, I thing is too high for a global config (if you think 2gb can hel for this operation you can set it before execute the query but only for that session), but generally this value most be smaller depending on the commons query every sub query uses that amount of mem (i.e if you have a query that have 3 subqueries and each one with a sort operation and a grouping operation, you can be using 12 gb of mem in that only big query, and it doesn't mean it will be faster).. 
try to monitor the uses of ram by pgsql maybe you can be suffering paging problems because os the size of you work_mem and that make the dbms slow tooOn Mon, Apr 1, 2019, 6:45 AM tank.zhang <[email protected]> wrote:1、postgresql version\n\nqis3_dp2=> select * from version();\n version \n---------------------------------------------------------------------------------------------------------\n PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-28), 64-bit\n(1 row)\n\nqis3_dp2=> \n\n2、postgresql work_mem \n\n\nqis3_dp2=> SHOW work_mem;\n work_mem \n----------\n 2GB\n(1 row)\n\nqis3_dp2=> SHOW shared_buffers;\n shared_buffers \n----------------\n 4028MB\n(1 row)\n\nqis3_dp2=> \n\n3、Table count\n\nqis3_dp2=> select count(*) from QIS_CARPASSEDSTATION;\n count \n----------\n 11453079\n(1 row)\n\nqis3_dp2=> \n\n4、table desc\n\nqis3_dp2=> \\dS QIS_CARPASSEDSTATION;\n Table \"qis_schema.qis_carpassedstation\"\n Column | Type | Collation | Nullable | Default \n--------------+-----------------------------+-----------+----------+---------\n iid | integer | | not null | \n scartypecd | character varying(50) | | | \n svin | character varying(20) | | | \n sstationcd | character varying(50) | | | \n dpassedtime | timestamp(6) with time zone | | | \n dworkdate | date | | | \n iworkyear | integer | | | \n iworkmonth | integer | | | \n iweek | integer | | | \n sinputteamcd | character varying(20) | | | \n sinputdutycd | character varying(20) | | | \n smtoc | character varying(50) | | | \n slineno | character varying(18) | | | \nIndexes:\n \"qis_carpassedstation_pkey\" PRIMARY KEY, btree (iid)\n \"q_carp_dworkdate\" btree (dworkdate)\n \"q_carp_smtoc\" btree (smtoc)\n\nqis3_dp2=> \n\n5、Execute SQL:\nqis3_dp2=> EXPLAIN (analyze true,buffers true) SELECT COUNT(DISTINCT SVIN)\nAS CHECKCARNUM ,SMTOC FROM QIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD\n= 'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\nTO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD') group by SMTOC\n;\n\nQUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------\n GroupAggregate (cost=697738.61..714224.02 rows=372 width=30) (actual\ntime=5908.786..32420.412 rows=410 loops=1)\n Group Key: smtoc\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Sort (cost=697738.61..703232.51 rows=2197559 width=40) (actual\ntime=5907.791..6139.351 rows=2142215 loops=1)\n Sort Key: smtoc\n Sort Method: quicksort Memory: 265665kB\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Gather (cost=1000.00..466253.56 rows=2197559 width=40) (actual\ntime=0.641..1934.614 rows=2142215 loops=1)\n Workers Planned: 5\n Workers Launched: 5\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n -> Parallel Seq Scan on qis_carpassedstation a \n(cost=0.00..245497.66 rows=439512 width=40) (actual time=0.245..1940.527\nrows=357036 loops=6)\n Filter: (((sstationcd)::text = 'VQ3_LYG'::text) AND\n((slineno)::text = ANY ('{1F,2F,3F}'::text[])) AND (dworkdate >=\nto_date('2017-02-11'::text, 'YYYY-MM-DD'::text)) AND (dworkdate <= to_da\nte('2019-03-11'::text, 'YYYY-MM-DD'::text)))\n Rows Removed by Filter: 1551811\n Buffers: shared hit=401 read=184983\n I/O Timings: read=1377.762\n Planning Time: 0.393 ms\n Execution 
Time: 32439.704 ms\n(21 rows)\n\nqis3_dp2=> \n\n\n6、Why does sort take a long time to execute and how can you optimize it?\nThanks!!!\n\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Tue, 2 Apr 2019 02:25:54 -0300",
"msg_from": "=?UTF-8?Q?Ram=C3=B3n_Bastidas?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
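A minimal sketch of the per-session work_mem approach suggested above (the 512MB figure is an assumption, sized to keep the roughly 260MB quicksort reported in the plan in memory; SET LOCAL lasts only for the enclosing transaction):

    BEGIN;
    SET LOCAL work_mem = '512MB';  -- overrides the global setting for this transaction only
    SELECT COUNT(DISTINCT SVIN) AS CHECKCARNUM, SMTOC
      FROM QIS_CARPASSEDSTATION A
     WHERE A.SSTATIONCD = 'VQ3_LYG'
       AND A.SLINENO IN ('1F', '2F', '3F')
       AND A.DWORKDATE >= TO_DATE('2017-02-11', 'YYYY-MM-DD')
       AND A.DWORKDATE <= TO_DATE('2019-03-11', 'YYYY-MM-DD')
     GROUP BY SMTOC;
    COMMIT;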
{
"msg_contents": "1、DISTINCT response time is fast without being added\n \nqis3_dp2=# SELECT COUNT(*) AS CHECKCARNUM FROM QIS_CARPASSEDSTATION A WHERE\n1=1 AND A.SSTATIONCD = 'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND\nA.DWORKDATE >= TO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD');\n checkcarnum \n-------------\n 2142215\n(1 row)\n\n*Time: 2237.970 ms (00:02.238)*\nqis3_dp2=# \n\n2、 Adding a DISTINCT response time was very slow\n\nqis3_dp2=# SELECT COUNT(DISTINCT SVIN) AS CHECKCARNUM FROM\nQIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD = 'VQ3_LYG' AND A.SLINENO\nIN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\nTO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD');\n checkcarnum \n-------------\n 1071367\n(1 row)\n\n*Time: 38979.246 ms (00:38.979)*\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Tue, 2 Apr 2019 00:00:09 -0700 (MST)",
"msg_from": "\"tank.zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
{
"msg_contents": "On Tue, 2 Apr 2019 at 20:00, tank.zhang <[email protected]> wrote:\n> 2、 Adding a DISTINCT response time was very slow\n>\n> qis3_dp2=# SELECT COUNT(DISTINCT SVIN) AS CHECKCARNUM FROM\n> QIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD = 'VQ3_LYG' AND A.SLINENO\n> IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\n> TO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\n> TO_DATE('2019-03-11','YYYY-MM-DD');\n> checkcarnum\n> -------------\n> 1071367\n> (1 row)\n\nThat's because of how DISTINCT is implemented within an aggregate\nfunction in PostgreSQL. Internally within the aggregate code in the\nexecutor, a sort is performed on the entire input to the aggregate\nnode. The planner is currently unable to make use of any indexes that\nprovide pre-sorted input.\n\nOne way to work around this would be to perform the DISTINCT and\nCOUNT(*) in separate stages using a subquery.\n\n From your original query, something like:\n\nSELECT COUNT(SVIN) AS CHECKCARNUM,SMTOC\nFROM (\nSELECT SMTOC,SVIN\nFROM QIS_CARPASSEDSTATION A\nWHERE 1=1 AND A.SSTATIONCD = 'VQ3_LYG'\nAND A.SLINENO IN ( '1F' , '2F' , '3F' )\nAND A.DWORKDATE >= TO_DATE('2017-02-11','YYYY-MM-DD')\nAND A.DWORKDATE <= TO_DATE('2019-03-11','YYYY-MM-DD')\nGROUP BY SMTOC,SVIN\n) A GROUP BY SMTOC;\n\nAn index something like:\nCREATE INDEX ON QIS_CARPASSEDSTATION (SMTOC, SVIN, SSTATIONCD, DWORKDATE);\n\nShould help speed up the subquery and provide pre-sorted input to the\nouter aggregate. If you like, you could add SLINENO to the end of the\nindex to allow an index-only scan which may result in further\nperformance improvements.\n\nWithout the index, you're forced to sort, but at least it's just one\nsort instead of two.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 20:42:07 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
},
{
"msg_contents": "Thank you replay!\n\nI tried to use the TMP table is very fast . thank you\n\n\nqis3_dp2=# explain analyze SELECT COUNT(*),SMTOC FROM ( SELECT\nDISTINCT(SVIN) AS CHECKCARNUM,SMTOC FROM QIS_CARPASSEDSTATION A WHERE 1=1\nAND A.SSTATIONCD = 'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND\nA.DWORKDATE >= TO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD')) AS TEMP group by SMTOC;\n \nQUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------\n HashAggregate (cost=691386.41..691388.41 rows=200 width=30) (actual\ntime=4090.951..4091.027 rows=410 loops=1)\n Group Key: a.smtoc\n -> HashAggregate (cost=666561.44..676491.43 rows=992999 width=40)\n(actual time=3481.712..3794.213 rows=1071367 loops=1)\n Group Key: a.svin, a.smtoc\n -> Gather (cost=1000.00..656098.93 rows=2092501 width=40) (actual\ntime=0.657..1722.814 rows=2142215 loops=1)\n Workers Planned: 4\n Workers Launched: 4\n -> Parallel Seq Scan on qis_carpassedstation a \n(cost=0.00..445848.83 rows=523125 width=40) (actual time=65.187..2287.739\nrows=428443 loops=5)\n Filter: (((sstationcd)::text = 'VQ3_LYG'::text) AND\n((slineno)::text = ANY ('{1F,2F,3F}'::text[])) AND (dworkdate >=\nto_date('2017-02-11'::text, 'YYYY-MM-DD'::text)) AND (dworkdate <= to_da\nte('2019-03-11'::text, 'YYYY-MM-DD'::text)))\n Rows Removed by Filter: 1862173\n Planning Time: 0.513 ms\n Execution Time: 4147.542 ms\n(12 rows)\n\nTime: 4148.852 ms (00:04.149)\nqis3_dp2=# \n\n\nqis3_dp2=# SELECT COUNT(*),SMTOC FROM ( SELECT DISTINCT(SVIN) AS\nCHECKCARNUM,SMTOC FROM QIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD =\n'VQ3_LYG' AND A.SLINENO IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\nTO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD')) AS TEMP group by SMTOC;\n\n**Time: 3223.935 ms (00:03.224)**\n\n\n2、 Before \n\nqis3_dp2=# explain analyze SELECT COUNT(DISTINCT SVIN) AS CHECKCARNUM ,SMTOC\nFROM QIS_CARPASSEDSTATION A WHERE 1=1 AND A.SSTATIONCD = 'VQ3_LYG' AND\nA.SLINENO IN ( '1F' , '2F' , '3F' ) AND A.DWORKDATE >=\nTO_DATE('2017-02-11','YYYY-MM-DD') AND A.DWORKDATE <=\nTO_DATE('2019-03-11','YYYY-MM-DD') group by SMTOC;\n \nQUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------\n GroupAggregate (cost=875778.02..891475.55 rows=377 width=30) (actual\ntime=6400.991..33314.132 rows=410 loops=1)\n Group Key: smtoc\n -> Sort (cost=875778.02..881009.28 rows=2092501 width=40) (actual\ntime=6399.993..6626.151 rows=2142215 loops=1)\n Sort Key: smtoc\n Sort Method: quicksort Memory: 265665kB\n -> Gather (cost=1000.00..656098.93 rows=2092501 width=40) (actual\ntime=0.557..2467.778 rows=2142215 loops=1)\n Workers Planned: 4\n Workers Launched: 4\n -> Parallel Seq Scan on qis_carpassedstation a \n(cost=0.00..445848.83 rows=523125 width=40) (actual time=66.908..2428.397\nrows=428443 loops=5)\n Filter: (((sstationcd)::text = 'VQ3_LYG'::text) AND\n((slineno)::text = ANY ('{1F,2F,3F}'::text[])) AND (dworkdate >=\nto_date('2017-02-11'::text, 'YYYY-MM-DD'::text)) AND (dworkdate <= to_da\nte('2019-03-11'::text, 'YYYY-MM-DD'::text)))\n Rows Removed by Filter: 
1862173\n Planning Time: 0.457 ms\n Execution Time: 33335.429 ms\n(13 rows)\n*\nTime: 33336.720 ms (00:33.337)*\nqis3_dp2=# \n\n\n\n\n\n\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Tue, 2 Apr 2019 01:23:50 -0700 (MST)",
"msg_from": "\"tank.zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Sort cost Poor performance?"
}
] |
[
{
"msg_contents": "Hey,\nI wanted to a few questions regarding the parallel parameters :\nmax_worker_processes and max_parallel_workers_per_gather.\n\n1)Basically, max_worker_processes should be set to the number of cpus I\nhave in the machine ?\n2)If I set max_worker_processes to X and max_parallel_workers_per_gather to\nY (X>Y) it means that I will have at max (X/2) queries that can run in\nparallel. Am I right ? For example, max_worker_processes\n=8,max_parallel_workers_per_gather =4, it means that at max I can have 4\nqueries that are running in parallel ? and at min 2 queries (or none) can\nrun in parallel ?\n3)So If I calculate my work_mem based on the number of sessions I have :\n(TOTAL_MEM/2/NUM_OF_CONNECTIONS)\nI should add 8 to the NUM_OF_CONNECTIONS to have a new value for the\nwork_mem in order to consider queries that run in parallel..\n\nThanks.\n\nHey,I wanted to a few questions regarding the parallel parameters : max_worker_processes and max_parallel_workers_per_gather.1)Basically, max_worker_processes should be set to the number of cpus I have in the machine ?2)If I set max_worker_processes to X and max_parallel_workers_per_gather to Y (X>Y) it means that I will have at max (X/2) queries that can run in parallel. Am I right ? For example, max_worker_processes =8,max_parallel_workers_per_gather =4, it means that at max I can have 4 queries that are running in parallel ? and at min 2 queries (or none) can run in parallel ?3)So If I calculate my work_mem based on the number of sessions I have : (TOTAL_MEM/2/NUM_OF_CONNECTIONS)I should add 8 to the NUM_OF_CONNECTIONS to have a new value for the work_mem in order to consider queries that run in parallel..Thanks.",
"msg_date": "Tue, 2 Apr 2019 11:32:23 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> I wanted to a few questions regarding the parallel parameters : max_worker_processes and max_parallel_workers_per_gather.\n> \n> 1)Basically, max_worker_processes should be set to the number of cpus I have in the machine ?\n\nSetting it higher would not be smart.\nSetting it lower can also be a good idea; it depends\non your workload.\n\n> 2)If I set max_worker_processes to X and max_parallel_workers_per_gather to Y (X>Y)\n> it means that I will have at max (X/2) queries that can run in parallel. Am I right ?\n> For example, max_worker_processes =8,max_parallel_workers_per_gather =4, it means\n> that at max I can have 4 queries that are running in parallel ? and at min 2 queries\n> (or none) can run in parallel ?\n\nThat is correct, but unless you set \"max_parallel_workers_per_gather\" to 1, one\nquery can use more than one parallel worker, and then you can have fewer\nconcurrent queries.\n\nIt also depends on the size of the table or index how many workers PostgreSQL will use.\n\n> 3)So If I calculate my work_mem based on the number of sessions I have : (TOTAL_MEM/2/NUM_OF_CONNECTIONS)\n> I should add 8 to the NUM_OF_CONNECTIONS to have a new value for the work_mem in order to consider queries that run in parallel..\n\nYes, but don't forget that one query can use \"work_mem\" several times if the\nexecution plan has several memory intensive nodes.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 02 Apr 2019 13:28:50 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parallel query"
}
] |
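A quick way to see how the settings discussed in the thread above interact is to inspect them and override the per-query cap for a single session; the table name below is hypothetical and the values are illustrative, not recommendations:

    SHOW max_worker_processes;             -- cluster-wide worker cap, changeable only at server start
    SHOW max_parallel_workers_per_gather;  -- per-Gather-node cap, changeable per session

    SET max_parallel_workers_per_gather = 4;                  -- session-level override
    EXPLAIN (ANALYZE) SELECT count(*) FROM some_large_table;  -- shows how many workers were actually launched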
[
{
"msg_contents": "We have a very simple table, whose DDL is as follows: \n\n CREATE TABLE public.next_id (\n id varchar(255) NOT NULL,\n next_value int8 NOT NULL,\n CONSTRAINT next_id_pk PRIMARY KEY (id)\n ); \n\nThe table only has about 125 rows, and there are no indexes apart from the primary key constraint.\n\nIn DBeaver I am executing the following UPDATE query:\n\n UPDATE next_id SET next_value=next_value+1 WHERE id='Session';\n\nIf I point DBeaver to a server (localhost) running version:\n 11.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20181127, 64-bit\nit executes on average in about 50ms.\n\nthe EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:\n\n Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057 rows=0 loops=1)\n -> Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: ((id)::text = 'Session'::text)\n Planning Time: 0.083 ms\n Execution Time: 0.089 ms\n\nwhich is significantly less than 50ms.\n\nNow, if I point DBeaver to a VM server on the same gigabit network switch, running version: \n 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch 142363-07 2010/12/09, 64-bit\nthen the same query executes in about 2-3ms\n\nThe EXPLAIN output when executing the query on this server is:\n\n Update on next_id (cost=0.27..8.29 rows=1 width=36) (actual time=0.062..0.062 rows=0 loops=1)\n -> Index Scan using next_id_pkey on next_id (cost=0.27..8.29 rows=1 width=36) (actual time=0.025..0.026 rows=1 loops=1)\n Index Cond: ((id)::text = 'Session'::text)\n Planning time: 0.083 ms\n Execution time: 0.096 ms\n\nwhich you will see is virtually identical to the slower version.\n\nWhy is the query taking so much longer on the localhost server?\n\nNot that the localhost machine is significantly faster in other metrics (CPU, file system, etc.)\n\nI have also tried the query on another server on the same network switch running version: \n 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.0, 64-bit\nand the timings are very similar to those for 'localhost'. That is, approx 50ms on average.\n\nNow, if I run the following FOR LOOP query:\n\n do $$\n begin\n\tfor i in 1..10000 loop\n\t update NEXT_ID set next_value=next_value+1 where id='Session';\n\tend loop;\n end;\n $$;\n\nThen this completes in about the same time on ALL of the servers - approximately 1.7s - which makes sense as 10,000 times the above plan/execute times is approx 1.7s.\n\nSo, to me this feels like some kind of COMMIT overhead of approx 50ms that the version 10 and version 11 servers are experiencing. But I have no idea where to look to try and find where this time is being spent.\n\nNote that the schemas of the databases on the 3 servers involved are virtually identical. The schema for this table is exactly the same.\n\nHoping that someone can give me an idea about where to go looking.\n\n\nRegards, \n\nDuncan Kinnear \n\t\nFloor 1, 100 McLeod St, Hastings 4120, New Zealand \nPO Box 2006, Hastings 4153, New Zealand \nP: +64 6 871 5700 F: +64 6 871 5709 E: [email protected]\n\n\n",
"msg_date": "Thu, 4 Apr 2019 10:59:21 +1300 (NZDT)",
"msg_from": "Duncan Kinnear <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commit(?) overhead"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear <[email protected]>\nwrote:\n\n>\n> the EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:\n>\n> Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual\n> time=0.057..0.057 rows=0 loops=1)\n> -> Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1\n> width=36) (actual time=0.039..0.040 rows=1 loops=1)\n> Index Cond: ((id)::text = 'Session'::text)\n> Planning Time: 0.083 ms\n> Execution Time: 0.089 ms\n>\n> which is significantly less than 50ms.\n>\n\nThe EXPLAIN ANALYZE doesn't include the time needed to fsync the\ntransaction logs. It measures only the update itself, not the implicit\ncommit at the end. DBeaver is seeing the fsync-inclusive time. 50ms is\npretty long, but some file systems and OSes seem to be pretty inefficient\nat this and take several disk revolutions to get the data down.\n\n\n>\n> Now, if I point DBeaver to a VM server on the same gigabit network switch,\n> running version:\n> 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386\n> Patch 142363-07 2010/12/09, 64-bit\n> then the same query executes in about 2-3ms\n>\n\nThat machine probably has hardware to do a fast fsync, has fsync turned\noff, or is lying about the safety of its data.\n\nCheers,\n\nJeff\n\nOn Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear <[email protected]> wrote:\nthe EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:\n\n Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057 rows=0 loops=1)\n -> Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: ((id)::text = 'Session'::text)\n Planning Time: 0.083 ms\n Execution Time: 0.089 ms\n\nwhich is significantly less than 50ms.The EXPLAIN ANALYZE doesn't include the time needed to fsync the transaction logs. It measures only the update itself, not the implicit commit at the end. DBeaver is seeing the fsync-inclusive time. 50ms is pretty long, but some file systems and OSes seem to be pretty inefficient at this and take several disk revolutions to get the data down. \n\nNow, if I point DBeaver to a VM server on the same gigabit network switch, running version: \n 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch 142363-07 2010/12/09, 64-bit\nthen the same query executes in about 2-3msThat machine probably has hardware to do a fast fsync, has fsync turned off, or is lying about the safety of its data.Cheers,Jeff",
"msg_date": "Thu, 4 Apr 2019 11:14:07 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit(?) overhead"
},
{
"msg_contents": "----- On 5 Apr, 2019, at 4:14 AM, Jeff Janes <[email protected]> wrote: \n\n> On Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear < [\n> mailto:[email protected] | [email protected] ] > wrote:\n\n>> the EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:\n\n>> Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057\n>> rows=0 loops=1)\n>> -> Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1 width=36)\n>> (actual time=0.039..0.040 rows=1 loops=1)\n>> Index Cond: ((id)::text = 'Session'::text)\n>> Planning Time: 0.083 ms\n>> Execution Time: 0.089 ms\n\n>> which is significantly less than 50ms.\n\n> The EXPLAIN ANALYZE doesn't include the time needed to fsync the transaction\n> logs. It measures only the update itself, not the implicit commit at the end.\n> DBeaver is seeing the fsync-inclusive time. 50ms is pretty long, but some file\n> systems and OSes seem to be pretty inefficient at this and take several disk\n> revolutions to get the data down.\n\n>> Now, if I point DBeaver to a VM server on the same gigabit network switch,\n>> running version:\n>> 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch\n>> 142363-07 2010/12/09, 64-bit\n>> then the same query executes in about 2-3ms\n\n> That machine probably has hardware to do a fast fsync, has fsync turned off, or\n> is lying about the safety of its data.\n\nJust a quick update. I tried performing a sequence of BEGIN; UPDATE ...; COMMIT; and I got the following log entries:\n\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG: duration: 0.025 ms parse <unnamed>: begin\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG: duration: 0.014 ms bind <unnamed>: begin\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG: duration: 0.003 ms execute <unnamed>: begin\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG: duration: 0.045 ms parse <unnamed>: update NEXT_ID set next_value=next_value+1 where id='Session'\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG: duration: 0.055 ms bind <unnamed>: update NEXT_ID set next_value=next_value+1 where id='Session'\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG: duration: 0.059 ms execute <unnamed>: update NEXT_ID set next_value=next_value+1 where id='Session'\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG: duration: 0.004 ms parse <unnamed>: commit\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG: duration: 0.003 ms bind <unnamed>: commit\nApr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.690 NZST [29887] LOG: duration: 50.237 ms execute <unnamed>: commit\n\nSo this confirms that the overhead is indeed happening in the COMMIT part. But how do I get more detailed logging to see what it is doing?\n\nNote, in a previous reply to Jeff (which I forgot to CC to the list) I explained that the slow machines are both using BTRFS as the filesystem, and a bit of googling has revealed that using PostgreSQL on BTRFS filesystems is (don't cross the streams) bad.\n\nJeff, I will try adding the wait event stuff to see if that it what it is doing.\n\n\nCheers, Duncan\n\n\n",
"msg_date": "Wed, 10 Apr 2019 09:26:22 +1200 (NZST)",
"msg_from": "Duncan Kinnear <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commit(?) overhead"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 09:26:22AM +1200, Duncan Kinnear wrote:\n> ----- On 5 Apr, 2019, at 4:14 AM, Jeff Janes <[email protected]> wrote: \n> \n> > On Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear <[email protected] | [email protected] ] > wrote:\n> \n> >> the EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:\n> \n> >> Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057\n> >> rows=0 loops=1)\n> >> -> Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1 width=36)\n> >> (actual time=0.039..0.040 rows=1 loops=1)\n> >> Index Cond: ((id)::text = 'Session'::text)\n> >> Planning Time: 0.083 ms\n> >> Execution Time: 0.089 ms\n> \n> >> which is significantly less than 50ms.\n> \n> > The EXPLAIN ANALYZE doesn't include the time needed to fsync the transaction\n> > logs. It measures only the update itself, not the implicit commit at the end.\n> > DBeaver is seeing the fsync-inclusive time. 50ms is pretty long, but some file\n> > systems and OSes seem to be pretty inefficient at this and take several disk\n> > revolutions to get the data down.\n> \n> >> Now, if I point DBeaver to a VM server on the same gigabit network switch,\n> >> running version:\n> >> 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch\n> >> 142363-07 2010/12/09, 64-bit\n> >> then the same query executes in about 2-3ms\n> \n> > That machine probably has hardware to do a fast fsync, has fsync turned off, or\n> > is lying about the safety of its data.\n> \n> Just a quick update. I tried performing a sequence of BEGIN; UPDATE ...; COMMIT; and I got the following log entries:\n\n> Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG: duration: 0.003 ms bind <unnamed>: commit\n> Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.690 NZST [29887] LOG: duration: 50.237 ms execute <unnamed>: commit\n> \n> So this confirms that the overhead is indeed happening in the COMMIT part. But how do I get more detailed logging to see what it is doing?\n\ncommit is causing the fsync() Jeff mentioned.\n\nYou could test that's the issue by comparing with fsync=off (please read what\nthat means and don't run your production cluster like that).\nhttps://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FSYNC\n\nYou could also put your XLOG on a separate FS (as a test or otherwise).\n\nJustin\n\n\n",
"msg_date": "Tue, 9 Apr 2019 17:12:27 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit(?) overhead"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-09 17:12:27 -0500, Justin Pryzby wrote:\n> You could test that's the issue by comparing with fsync=off (please read what\n> that means and don't run your production cluster like that).\n> https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FSYNC\n\nI suggest testing it with synchronous_commit=off instead. That's about\nas fast for this type of workload, doesn't have cluster corruption\nissues, the window of a transaction not persisting in case of a crash is\nvery small, and it can just set by any user in individual sessions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Apr 2019 15:23:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit(?) overhead"
},
{
"msg_contents": "----- On 10 Apr, 2019, at 10:23 AM, Andres Freund [email protected] wrote:\n\n> On 2019-04-09 17:12:27 -0500, Justin Pryzby wrote:\n>> You could test that's the issue by comparing with fsync=off (please read what\n>> that means and don't run your production cluster like that).\n>> https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FSYNC\n> \n> I suggest testing it with synchronous_commit=off instead. That's about\n> as fast for this type of workload, doesn't have cluster corruption\n> issues, the window of a transaction not persisting in case of a crash is\n> very small, and it can just set by any user in individual sessions.\n\nBingo! Adding 'SET LOCAL synchronous_commit TO OFF;' to my 'BEGIN; UPDATE ....; COMMIT;' block has given me sub-1ms timings! Thanks Andres.\n\nI'll probably leave the setting as that on my local machine. The option appears to be relatively safe, but my machine is just a dev test machine anyway.\n\n\nRegards, \n\nDuncan Kinnear \n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:42:33 +1200 (NZST)",
"msg_from": "Duncan Kinnear <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commit(?) overhead"
},
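For reference, the per-transaction form described above looks like this, using the table from the original post:

    BEGIN;
    SET LOCAL synchronous_commit TO OFF;  -- commit returns before the WAL flush; no corruption risk,
                                          -- only a small window of possible transaction loss on a crash
    UPDATE next_id SET next_value = next_value + 1 WHERE id = 'Session';
    COMMIT;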
{
"msg_contents": "Duncan Kinnear wrote:\n> Bingo! Adding 'SET LOCAL synchronous_commit TO OFF;' to my 'BEGIN; UPDATE ....; COMMIT;'\n> block has given me sub-1ms timings! Thanks Andres.\n\nThat's a pretty clear indication that your I/O subsystem was overloaded.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:16:50 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit(?) overhead"
}
] |
[
{
"msg_contents": "Hi,\n\nI have installed PostgreSQL 9.4 (open source) version on my CentOS Linux\nRed Hat 7 production server and kept default parameters which are in\npostgresql.conf file.So my basic question is, once I start using postgres\nhow much RAM the postgres processes consumes (postgres related processes\nonly).\n\nThere are lot of allocations in postgresql.conf file, for example\nshared_buffers, work_mem...etc.\n\nAs per my knowledge, all postgres processes should not consume the RAM more\nthan the value assigned in shared_buffers.Please clarify and let me know if\nI misunderstand the concept..\n\n-- \nThanks,\nVenkata Prasad\n\nHi,I have installed PostgreSQL 9.4 (open source) version on my CentOS Linux Red Hat 7 production server and kept default parameters which are in postgresql.conf file.So my basic question is, once I start using postgres how much RAM the postgres processes consumes (postgres related processes only). There are lot of allocations in postgresql.conf file, for example shared_buffers, work_mem...etc.As per my knowledge, all postgres processes should not consume the RAM more than the value assigned in shared_buffers.Please clarify and let me know if I misunderstand the concept..-- Thanks,Venkata Prasad",
"msg_date": "Thu, 4 Apr 2019 20:18:01 +0530",
"msg_from": "Prasad <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAM usage of PostgreSql"
},
{
"msg_contents": "Hi,\n\n|Cc: [email protected], [email protected]\nPlease don't cross post to multiple lists.\n\nOn Thu, Apr 04, 2019 at 08:18:01PM +0530, Prasad wrote:\n> There are lot of allocations in postgresql.conf file, for example\n> shared_buffers, work_mem...etc.\n> \n> As per my knowledge, all postgres processes should not consume the RAM more\n> than the value assigned in shared_buffers.Please clarify and let me know if\n> I misunderstand the concept..\n\nshared_buffers is what's *reserved* for postgres and unavailable for other\nprocesses whenever PG is running.\n\nwork_mem is what each postgres process might use, if needed. When complete,\nthat's returned to the OS. Note that an expensive query might actually use\nsome multiple of work_mem (it's per sort/hash node and also per parallel\nprocess, and also hash aggregate can sometimes use more than work_mem).\n\nJustin\n\n\n",
"msg_date": "Thu, 4 Apr 2019 12:46:30 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAM usage of PostgreSql"
},
{
"msg_contents": "Prasad wrote:\n> I have installed PostgreSQL 9.4 (open source) version on my CentOS\n> Linux Red Hat 7 production server and kept default parameters which\n> are in postgresql.conf file.So my basic question is, once I start\n> using postgres how much RAM the postgres processes consumes\n> (postgres related processes only). \n> \n> There are lot of allocations in postgresql.conf file, for example\n> shared_buffers, work_mem...etc.\n> \n> As per my knowledge, all postgres processes should not consume the\n> RAM more than the value assigned in shared_buffers.Please clarify\n> and let me know if I misunderstand the concept..\n\nshared_buffers only determines the shared memory cache, each database\nprocess still needs private memory.\n\nAs a rule of thumb, start with shared_buffers set to 1/4 of your\navailable RAM, but no more than 8GB.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 04 Apr 2019 20:08:05 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAM usage of PostgreSql"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 10:48 AM Prasad <[email protected]> wrote:\n\n> Hi,\n>\n> I have installed PostgreSQL 9.4 (open source) version on my CentOS Linux\n> Red Hat 7 production server and kept default parameters which are in\n> postgresql.conf file.So my basic question is, once I start using postgres\n> how much RAM the postgres processes consumes (postgres related processes\n> only).\n>\n> There are lot of allocations in postgresql.conf file, for example\n> shared_buffers, work_mem...etc.\n>\n> As per my knowledge, all postgres processes should not consume the RAM\n> more than the value assigned in shared_buffers.Please clarify and let me\n> know if I misunderstand the concept..\n>\n> --\n> Thanks,\n> Venkata Prasad\n>\n>\n>\nshared_buffers is just the shared memory segment. work_mem &\nmaintenance_work mem are used in addition to shared_buffers and there can\nbe multiples of those values in use at the same time. The PostgreSQL wiki\nhas some good guidance on what the different memory settings mean and how\nto tune them\n\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nOn Thu, Apr 4, 2019 at 10:48 AM Prasad <[email protected]> wrote:Hi,I have installed PostgreSQL 9.4 (open source) version on my CentOS Linux Red Hat 7 production server and kept default parameters which are in postgresql.conf file.So my basic question is, once I start using postgres how much RAM the postgres processes consumes (postgres related processes only). There are lot of allocations in postgresql.conf file, for example shared_buffers, work_mem...etc.As per my knowledge, all postgres processes should not consume the RAM more than the value assigned in shared_buffers.Please clarify and let me know if I misunderstand the concept..-- Thanks,Venkata Prasad\nshared_buffers is just the shared memory segment. work_mem & maintenance_work mem are used in addition to shared_buffers and there can be multiples of those values in use at the same time. The PostgreSQL wiki has some good guidance on what the different memory settings mean and how to tune themhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server",
"msg_date": "Thu, 4 Apr 2019 14:09:56 -0400",
"msg_from": "Keith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAM usage of PostgreSql"
}
] |
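As a rough illustration of the sizing advice in the thread above, a starting point for a dedicated 16GB server might look like the lines below; the numbers are assumptions meant to show the proportions, not measured recommendations:

    shared_buffers = 4GB          # about 1/4 of RAM, per the rule of thumb above
    work_mem = 32MB               # per sort/hash node per backend, so keep the global value modest
    maintenance_work_mem = 512MB  # used by VACUUM, CREATE INDEX and similar maintenance work
    effective_cache_size = 12GB   # a planner hint about OS caching, not an allocation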
[
{
"msg_contents": "Hi there,\n\nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n\nI came across\n\n* pg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don't know the exact time it happened and the parameters used\n\n* log_statement but this time I don't see how I would filter on \"slow\" queries and it seems dumped into the RDS log... not very easy to use and maybe too heavy for a production system\n\n* pg_hero is great but looks like an interactive tool (unless I missed something) and I don't think it gives me the exact parameters and time (not sure...)\n\nIs there a tool I could use to achieve that?\n\n\n\nThanks\n\nEric\n\n\n\n\n\n\n\n\n\nHi there, \n \nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n \nI came across \n· \npg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don’t know the exact time it happened and the parameters used\n· \nlog_statement but this time I don’t see how I would filter on “slow” queries and it seems dumped into the RDS log… not very easy to use and maybe too heavy for a production system\n· \npg_hero is great but looks like an interactive tool (unless I missed something) and I don’t think it gives me the exact parameters and time (not sure…)\n \nIs there a tool I could use to achieve that?\n\n\n\n \nThanks\n \nEric",
"msg_date": "Thu, 4 Apr 2019 16:28:04 +0000",
"msg_from": "\"Mamet, Eric (GfK)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "monitoring options for postgresql under AWS/RDS?"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 04:28:04PM +0000, Mamet, Eric (GfK) wrote:\n> I would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\n...\n> * log_statement but this time I don't see how I would filter on \"slow\" queries and it seems dumped into the RDS log... not very easy to use and maybe too heavy for a production system\n\nYou can set log_min_duration_statement='1500ms'\n\nJustin\n\n\n",
"msg_date": "Thu, 4 Apr 2019 12:43:22 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: monitoring options for postgresql under AWS/RDS?"
},
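On RDS these are set through the DB parameter group rather than by editing postgresql.conf directly; the equivalent settings would be something like the lines below (values illustrative). Statements slower than the threshold are logged with their duration, and for parameterized statements the bound values appear in a DETAIL line, which covers the "exact parameters" requirement:

    log_min_duration_statement = 1500   # in milliseconds: log any statement running longer than 1.5 s
    log_statement = 'none'              # keep the default so only the slow statements are logged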
{
"msg_contents": "Hello Eric,\nTo start with, you can set log_min_duration_statement to 1500ms and log_statement to the required one which will give you the statement that ran for more than 1.5 s. Then you know what to do!\nFor tools: 1. pgcluu2. PoWA\nBest Regards,Rijo Roy\n\n On Thursday, 4 April, 2019, 11:07:35 pm IST, Mamet, Eric (GfK) <[email protected]> wrote: \n \n \nHi there, \n \n \n \nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\n \nI would like to know which query took over that time (and how long), when and which parameters it used.\n \nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\n \nI would like to know when it happened to be able to correlate it with the overall system activity.\n \n \n \nI came across \n \n· pg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don’t know the exact time it happened and the parameters used\n \n· log_statement but this time I don’t see how I would filter on “slow” queries and it seems dumped into the RDS log… not very easy to use and maybe too heavy for a production system\n \n· pg_hero is great but looks like an interactive tool (unless I missed something) and I don’t think it gives me the exact parameters and time (not sure…)\n \n \n \nIs there a tool I could use to achieve that?\n\n\n\n \n \n \nThanks\n \n \n \nEric\n \n\n Hello Eric,To start with, you can set log_min_duration_statement to 1500ms and log_statement to the required one which will give you the statement that ran for more than 1.5 s. Then you know what to do!For tools: 1. pgcluu2. PoWABest Regards,Rijo Roy\n\n\n\n On Thursday, 4 April, 2019, 11:07:35 pm IST, Mamet, Eric (GfK) <[email protected]> wrote:\n \n\n\n\n\n\nHi there, \n \nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n \nI came across \n· \npg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don’t know the exact time it happened and the parameters used\n· \nlog_statement but this time I don’t see how I would filter on “slow” queries and it seems dumped into the RDS log… not very easy to use and maybe too heavy for a production system\n· \npg_hero is great but looks like an interactive tool (unless I missed something) and I don’t think it gives me the exact parameters and time (not sure…)\n \nIs there a tool I could use to achieve that?\n\n\n\n \nThanks\n \nEric",
"msg_date": "Thu, 4 Apr 2019 18:03:44 +0000 (UTC)",
"msg_from": "Rijo Roy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: monitoring options for postgresql under AWS/RDS?"
}
] |
[
{
"msg_contents": "It looks like I missed some functionality of LOG_STATEMENT such as filtering on the duration (log_min_duration_statement)\n\nSo maybe log_statement is what I am looking for, combined with some cloudwatch monitoring on the log?\n\n\n\nFrom: Mamet, Eric (GfK)\nSent: 04 April 2019 17:28\nTo: '[email protected]' <[email protected]>\nSubject: monitoring options for postgresql under AWS/RDS?\n\nHi there,\n\nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n\nI came across\n\n* pg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don't know the exact time it happened and the parameters used\n\n* log_statement but this time I don't see how I would filter on \"slow\" queries and it seems dumped into the RDS log... not very easy to use and maybe too heavy for a production system\n\n* pg_hero is great but looks like an interactive tool (unless I missed something) and I don't think it gives me the exact parameters and time (not sure...)\n\nIs there a tool I could use to achieve that?\n\n\nThanks\n\nEric\n\n\n\n\n\n\n\n\n\nIt looks like I missed some functionality of LOG_STATEMENT such as filtering on the duration (log_min_duration_statement)\n \nSo maybe log_statement is what I am looking for, combined with some cloudwatch monitoring on the log?\n \n \n \n\n\nFrom: Mamet, Eric\n (GfK) \nSent: 04 April 2019 17:28\nTo: '[email protected]' <[email protected]>\nSubject: monitoring options for postgresql under AWS/RDS?\n\n\n \nHi there, \n \nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n \nI came across \n· \npg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don’t know the exact time it happened and the parameters used\n· \nlog_statement but this time I don’t see how I would filter on “slow” queries and it seems dumped into the RDS log… not very easy to use and maybe too heavy for a production system\n· \npg_hero is great but looks like an interactive tool (unless I missed something) and I don’t think it gives me the exact parameters and time (not sure…)\n \nIs there a tool I could use to achieve that?\n\n\n \nThanks\n \nEric",
"msg_date": "Fri, 5 Apr 2019 09:46:12 +0000",
"msg_from": "\"Mamet, Eric (GfK)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: monitoring options for postgresql under AWS/RDS?"
},
{
"msg_contents": "Hello Mike,\n\nYou could also try exploring “Performance Insights” for the RDS instances.\nPersonally I found that helpful when debugging some issues.\n\nRegards,\nPraveen\n\nOn Fri, Apr 5, 2019 at 6:54 AM Mamet, Eric (GfK) <[email protected]> wrote:\n\n> It looks like I missed some functionality of LOG_STATEMENT such as\n> filtering on the duration (log_min_duration_statement)\n>\n>\n>\n> So maybe log_statement is what I am looking for, combined with some\n> cloudwatch monitoring on the log?\n>\n>\n>\n>\n>\n>\n>\n> *From:* Mamet, Eric (GfK)\n> *Sent:* 04 April 2019 17:28\n> *To:* '[email protected]' <[email protected]\n> >\n> *Subject:* monitoring options for postgresql under AWS/RDS?\n>\n>\n>\n> Hi there,\n>\n>\n>\n> I would like to monitor our postgresql instance under AWS-RDS to get some\n> alert (or log) if any query runs over a certain amount of time, like 1.5\n> seconds.\n>\n> I would like to know which query took over that time (and how long), when\n> and which parameters it used.\n>\n> The exact parameters are important because the amount of data retrieved\n> varies a lot depending on parameters.\n>\n> I would like to know when it happened to be able to correlate it with the\n> overall system activity.\n>\n>\n>\n> I came across\n>\n> · pg_stat_statements is very useful BUT it gives me stats rather\n> than specific executions.\n> In particular, I don’t know the exact time it happened and the parameters\n> used\n>\n> · log_statement but this time I don’t see how I would filter on\n> “slow” queries and it seems dumped into the RDS log… not very easy to use\n> and maybe too heavy for a production system\n>\n> · pg_hero is great but looks like an interactive tool (unless I\n> missed something) and I don’t think it gives me the exact parameters and\n> time (not sure…)\n>\n>\n>\n> Is there a tool I could use to achieve that?\n>\n>\n>\n> Thanks\n>\n>\n>\n> Eric\n>\n\nHello Mike,You could also try exploring “Performance Insights” for the RDS instances. 
Personally I found that helpful when debugging some issues.Regards,PraveenOn Fri, Apr 5, 2019 at 6:54 AM Mamet, Eric (GfK) <[email protected]> wrote:\n\n\nIt looks like I missed some functionality of LOG_STATEMENT such as filtering on the duration (log_min_duration_statement)\n \nSo maybe log_statement is what I am looking for, combined with some cloudwatch monitoring on the log?\n \n \n \n\n\nFrom: Mamet, Eric\n (GfK) \nSent: 04 April 2019 17:28\nTo: '[email protected]' <[email protected]>\nSubject: monitoring options for postgresql under AWS/RDS?\n\n\n \nHi there, \n \nI would like to monitor our postgresql instance under AWS-RDS to get some alert (or log) if any query runs over a certain amount of time, like 1.5 seconds.\nI would like to know which query took over that time (and how long), when and which parameters it used.\nThe exact parameters are important because the amount of data retrieved varies a lot depending on parameters.\nI would like to know when it happened to be able to correlate it with the overall system activity.\n \nI came across \n· \npg_stat_statements is very useful BUT it gives me stats rather than specific executions.\nIn particular, I don’t know the exact time it happened and the parameters used\n· \nlog_statement but this time I don’t see how I would filter on “slow” queries and it seems dumped into the RDS log… not very easy to use and maybe too heavy for a production system\n· \npg_hero is great but looks like an interactive tool (unless I missed something) and I don’t think it gives me the exact parameters and time (not sure…)\n \nIs there a tool I could use to achieve that?\n\n\n \nThanks\n \nEric",
"msg_date": "Fri, 5 Apr 2019 08:57:01 -0400",
"msg_from": "Praveen Duraiswami <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: monitoring options for postgresql under AWS/RDS?"
}
] |
[
{
"msg_contents": "Hi team,\n\nPlease confirm ! Can we migrate Oracle 12c database (12.1.0.1.0) running on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the tools and pre-requisites.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\nHi team,\n \nPlease confirm ! Can we migrate Oracle 12c database (12.1.0.1.0) running on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the tools and pre-requisites. \n\n \nRegards,\nDaulat",
"msg_date": "Mon, 8 Apr 2019 10:23:40 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oracle to postgres migration"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 1:49 PM Daulat Ram <[email protected]> wrote:\n>\n> Please confirm ! Can we migrate Oracle 12c database (12.1.0.1.0) running on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the tools and pre-requisites.\nA database migration is likely feasible, but might require quite a lot\nof work depending on what features you're using, and the amount of PL\ncode. Also, obviously migrating the database is only a part of the\noverall migration process, as you'll also need to take care of the\napplication(s), the backup/restore, monitoring and all other tools you\nneed.\n\nConcerning the database migration, the best tool is probably Gilles\nDarold's ora2pg. The tool also provides a migration cost assessment\nreport, to evaluate the difficulty of the database migration. More\ninformation on http://ora2pg.darold.net/\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:04:01 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
},
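A typical way to run the assessment mentioned above is shown below; the flags match recent ora2pg releases, but verify them against your installed version, and the config path is a placeholder:

    ora2pg -c /etc/ora2pg/ora2pg.conf -t SHOW_REPORT --estimate_cost --dump_as_html > migration_report.html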
{
"msg_contents": "On Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud <[email protected]> wrote:\n\n> On Mon, Apr 8, 2019 at 1:49 PM Daulat Ram <[email protected]>\n> wrote:\n> >\n> > Please confirm ! Can we migrate Oracle 12c database (12.1.0.1.0) running\n> on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the\n> tools and pre-requisites.\n> A database migration is likely feasible, but might require quite a lot\n> of work depending on what features you're using, and the amount of PL\n> code. Also, obviously migrating the database is only a part of the\n> overall migration process, as you'll also need to take care of the\n> application(s), the backup/restore, monitoring and all other tools you\n> need.\n>\n> Concerning the database migration, the best tool is probably Gilles\n> Darold's ora2pg. The tool also provides a migration cost assessment\n> report, to evaluate the difficulty of the database migration. More\n> information on http://ora2pg.darold.net/\n>\n>\n>\nThe last big Oracle to PG migration that I did was several years ago. We\nstood up the PostgreSQL instance(s) and then used SymmetricDS to\nsynchronize the Oracle and PG databases. After tuning and testing the\npostgresql side, we cut over the applications live - with minimal downtime\n- by releasing the updated application code and configuration. If we\nneeded to fail back, it was also pretty easy to undo the release and\nconfiguration changes.\n\nAnother approach you can play with is to leverage Foreign Data Wrappers.\nIn that scenario, you can run queries on your Oracle database from within\nPostgreSQL. You can use those queries to copy data directly into new\ntables without any interim files, or as a hybrid transition while you get\nthe new database set up.\n\nAt the time I was working on that migration, we had too many\ndata-edge-cases for ora2pg to be very useful. It has come a long ways\nsince then. I'm not sure it can do a live cutover, so you may need to plan\na bit of downtime if you have a lot of data to move into the new database.\n\nNote that you will also almost certainly want to use a connection pooler\nlike PGBouncer and/or PGPool II (or both at the same time), so be sure to\ninclude that in your plans from the beginning.\n\nThat said, none of this is on topic for the performance mailing list.\nPlease try to direct your questions to the right group next time.\n\nOn Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud <[email protected]> wrote:On Mon, Apr 8, 2019 at 1:49 PM Daulat Ram <[email protected]> wrote:\n>\n> Please confirm ! Can we migrate Oracle 12c database (12.1.0.1.0) running on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the tools and pre-requisites.\nA database migration is likely feasible, but might require quite a lot\nof work depending on what features you're using, and the amount of PL\ncode. Also, obviously migrating the database is only a part of the\noverall migration process, as you'll also need to take care of the\napplication(s), the backup/restore, monitoring and all other tools you\nneed.\n\nConcerning the database migration, the best tool is probably Gilles\nDarold's ora2pg. The tool also provides a migration cost assessment\nreport, to evaluate the difficulty of the database migration. More\ninformation on http://ora2pg.darold.net/\n\nThe last big Oracle to PG migration that I did was several years ago. We stood up the PostgreSQL instance(s) and then used SymmetricDS to synchronize the Oracle and PG databases. 
After tuning and testing the postgresql side, we cut over the applications live - with minimal downtime - by releasing the updated application code and configuration. If we needed to fail back, it was also pretty easy to undo the release and configuration changes.Another approach you can play with is to leverage Foreign Data Wrappers. In that scenario, you can run queries on your Oracle database from within PostgreSQL. You can use those queries to copy data directly into new tables without any interim files, or as a hybrid transition while you get the new database set up.At the time I was working on that migration, we had too many data-edge-cases for ora2pg to be very useful. It has come a long ways since then. I'm not sure it can do a live cutover, so you may need to plan a bit of downtime if you have a lot of data to move into the new database.Note that you will also almost certainly want to use a connection pooler like PGBouncer and/or PGPool II (or both at the same time), so be sure to include that in your plans from the beginning. That said, none of this is on topic for the performance mailing list. Please try to direct your questions to the right group next time.",
"msg_date": "Mon, 8 Apr 2019 08:24:39 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
},
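If you take the foreign-data-wrapper route described above, oracle_fdw is the usual extension for Oracle sources; a minimal sketch, in which the server name, connect string, credentials and schema names are all placeholders:

    CREATE EXTENSION oracle_fdw;
    CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
        OPTIONS (dbserver '//oracle-host:1521/ORCL');
    CREATE USER MAPPING FOR postgres SERVER oradb
        OPTIONS (user 'app_owner', password 'secret');
    CREATE SCHEMA ora_stage;
    IMPORT FOREIGN SCHEMA "APP_OWNER" FROM SERVER oradb INTO ora_stage;
    -- then copy data across without intermediate files, e.g.:
    -- INSERT INTO local_table SELECT * FROM ora_stage.some_table;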
{
"msg_contents": "Le 08/04/2019 à 14:24, Rick Otten a écrit :\n>\n>\n> On Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On Mon, Apr 8, 2019 at 1:49 PM Daulat Ram\n> <[email protected] <mailto:[email protected]>>\n> wrote:\n> >\n> > Please confirm ! Can we migrate Oracle 12c database (12.1.0.1.0)\n> running on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also,\n> please suggest the tools and pre-requisites.\n> A database migration is likely feasible, but might require quite a lot\n> of work depending on what features you're using, and the amount of PL\n> code. Also, obviously migrating the database is only a part of the\n> overall migration process, as you'll also need to take care of the\n> application(s), the backup/restore, monitoring and all other tools you\n> need.\n>\n> Concerning the database migration, the best tool is probably Gilles\n> Darold's ora2pg. The tool also provides a migration cost assessment\n> report, to evaluate the difficulty of the database migration. More\n> information on http://ora2pg.darold.net/\n>\n>\n>\n> The last big Oracle to PG migration that I did was several years ago. \n> We stood up the PostgreSQL instance(s) and then used SymmetricDS to \n> synchronize the Oracle and PG databases. After tuning and testing \n> the postgresql side, we cut over the applications live - with minimal \n> downtime - by releasing the updated application code and \n> configuration. If we needed to fail back, it was also pretty easy to \n> undo the release and configuration changes.\n>\n> Another approach you can play with is to leverage Foreign Data \n> Wrappers. In that scenario, you can run queries on your Oracle \n> database from within PostgreSQL. You can use those queries to copy \n> data directly into new tables without any interim files, or as a \n> hybrid transition while you get the new database set up.\n>\n> At the time I was working on that migration, we had too many \n> data-edge-cases for ora2pg to be very useful. It has come a long ways \n> since then. I'm not sure it can do a live cutover, so you may need to \n> plan a bit of downtime if you have a lot of data to move into the new \n> database.\n>\n> Note that you will also almost certainly want to use a connection \n> pooler like PGBouncer and/or PGPool II (or both at the same time), so \n> be sure to include that in your plans from the beginning.\n>\n> That said, none of this is on topic for the performance mailing list. \n> Please try to direct your questions to the right group next time.\n>\nJust a few additional pieces of information.\n1) migration from one DBMS to another must always be lead as a project \n(because your data are always important ;-)\n2) a migration project always has the following main tasks:\n- setting a proper postgres platform (with all softwares, procedures and \ndocumentation needed to provide a good PostgreSQL service to your \napplications/clients) (you may already have such a platform).\n- migrating the data. This concerns both the structure (DDL) and the \ndata content.\n- migration the stored procedures, if any. In Oracle migrations, this is \noften a big workload in the project.\n- adapting the client application. The needed effort here can be huge or \n... null, depending on the used languages, whether the data access API \nare compatible or whether an ORM is used.\n- when all this has been prepared, a test phase can start. 
This is very \noften the most costly part of the project, in particular for mission \ncritical databases.\n- then, you are ready to switch to Postgres.\n3) do not hesitate to invest in education and external professional support.\n4) before launching such a project, it is highly recommended to perform \na preliminary study. For this purpose, as Julien said, ora2pg brings a \nbig help in analysing the Oracle database content. The cost estimates \nare pretty well computed, which gives you very quickly an idea of the \nglobal cost of the database migration. For the application side, you may \nalso have a look at code2pg.\n\nKR. Philippe.",
"msg_date": "Mon, 8 Apr 2019 19:45:32 +0200",
"msg_from": "phb07 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
},
{
"msg_contents": "Rick Otten-2 wrote\n> On Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud <\n\n> rjuju123@\n\n> > wrote:\n> \n> [...]\n> \n> That said, none of this is on topic for the performance mailing list.\n> Please try to direct your questions to the right group next time.\n\nIs \"general\" the correct one ?\nor should a \"migration\" group be created ;^> \n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Mon, 8 Apr 2019 13:02:36 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
},
{
"msg_contents": "On 09/04/2019 08:02, legrand legrand wrote:\n> Rick Otten-2 wrote\n>> On Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud <\n>> rjuju123@\n>> > wrote:\n>>\n>> [...]\n>>\n>> That said, none of this is on topic for the performance mailing list.\n>> Please try to direct your questions to the right group next time.\n> Is \"general\" the correct one ?\n> or should a \"migration\" group be created ;^>\n>\n> Regards\n> PAscal\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\nI think having a specific migration group would also be likely to \nimprove the visibility of pg, and the idea of migrating to pg.ᅵ As it \nhelp pg to appear in more search results.\n\n\nCheers,\nGavin\n\n\n\n\n",
"msg_date": "Tue, 9 Apr 2019 10:31:15 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 3:31 PM Gavin Flower <[email protected]>\nwrote:\n\n> I think having a specific migration group would also be likely to\n> improve the visibility of pg, and the idea of migrating to pg. As it\n> help pg to appear in more search results.\n>\n>\nI presently have qualms retaining novice, sql, performance, and probably\nsome others. I don't think adding yet another specialized low-volume list\nis of particular benefit. Nor do I think it behooves the core project to\nbe in the center of migration support anyway.\n\nThis discussion can and should be moved to -general\n\nDavid J.\n\nOn Mon, Apr 8, 2019 at 3:31 PM Gavin Flower <[email protected]> wrote:\nI think having a specific migration group would also be likely to \nimprove the visibility of pg, and the idea of migrating to pg. As it \nhelp pg to appear in more search results.I presently have qualms retaining novice, sql, performance, and probably some others. I don't think adding yet another specialized low-volume list is of particular benefit. Nor do I think it behooves the core project to be in the center of migration support anyway.This discussion can and should be moved to -generalDavid J.",
"msg_date": "Mon, 8 Apr 2019 15:42:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration"
}
] |
[
{
"msg_contents": "Hi\n\nWe have some very strange query planning problem. Long story short it \ntakes 67626.278ms just to plan. Query execution takes 12ms.\n\nQuery has 7 joins and 2 subselects.\nIt looks like the issue is not deterministic, sometimes is takes few ms \nto plan the query.\n\nOne of the tables has 550,485,942 live tuples and 743,504,012 dead \ntuples. Running ANALYZE on that tables solves the problem only temporarily.\n\nQuestion is how can we debug what is going on?\n\nBest Regards,\nKrzysztof Płocharz\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:10:52 +0200",
"msg_from": "Krzysztof Plocharz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planning performance problem (67626.278ms)"
},
{
"msg_contents": "\r\n-----Original Message-----\r\nFrom: Krzysztof Plocharz [mailto:[email protected]] \r\nSent: Monday, April 08, 2019 10:11 AM\r\nTo: [email protected]\r\nSubject: Planning performance problem (67626.278ms)\r\n\r\nHi\r\n\r\nWe have some very strange query planning problem. Long story short it takes 67626.278ms just to plan. Query execution takes 12ms.\r\n\r\nQuery has 7 joins and 2 subselects.\r\nIt looks like the issue is not deterministic, sometimes is takes few ms to plan the query.\r\n\r\nOne of the tables has 550,485,942 live tuples and 743,504,012 dead tuples. Running ANALYZE on that tables solves the problem only temporarily.\r\n\r\nQuestion is how can we debug what is going on?\r\n\r\nBest Regards,\r\nKrzysztof Płocharz\r\n\r\n_______________________________________________________________________________________________\r\n\r\nWhy do you have to run Analyze? Did you turn off Autovacuum?\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Mon, 8 Apr 2019 14:18:11 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "No, Autovacuum is running.\n\nOn 2019/04/08 16:18, Igor Neyman wrote:\n> \n> -----Original Message-----\n> From: Krzysztof Plocharz [mailto:[email protected]]\n> Sent: Monday, April 08, 2019 10:11 AM\n> To: [email protected]\n> Subject: Planning performance problem (67626.278ms)\n> \n> Hi\n> \n> We have some very strange query planning problem. Long story short it takes 67626.278ms just to plan. Query execution takes 12ms.\n> \n> Query has 7 joins and 2 subselects.\n> It looks like the issue is not deterministic, sometimes is takes few ms to plan the query.\n> \n> One of the tables has 550,485,942 live tuples and 743,504,012 dead tuples. Running ANALYZE on that tables solves the problem only temporarily.\n> \n> Question is how can we debug what is going on?\n> \n> Best Regards,\n> Krzysztof Płocharz\n> \n> _______________________________________________________________________________________________\n> \n> Why do you have to run Analyze? Did you turn off Autovacuum?\n> \n> Regards,\n> Igor Neyman\n> \n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:28:02 +0200",
"msg_from": "Krzysztof Plocharz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz <[email protected]>\nnapsal:\n\n> Hi\n>\n> We have some very strange query planning problem. Long story short it\n> takes 67626.278ms just to plan. Query execution takes 12ms.\n>\n> Query has 7 joins and 2 subselects.\n> It looks like the issue is not deterministic, sometimes is takes few ms\n> to plan the query.\n>\n> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> tuples. Running ANALYZE on that tables solves the problem only temporarily.\n>\n> Question is how can we debug what is going on?\n>\n\nplease check your indexes against bloating. Planner get min and max from\nindexes and this operation is slow on bloat indexes.\n\nbut 67 sec is really slow - it can be some other other problem - it is real\ncomputer or virtual?\n\n\n\n>\n> Best Regards,\n> Krzysztof Płocharz\n>\n>\n>\n\npo 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz <[email protected]> napsal:Hi\n\nWe have some very strange query planning problem. Long story short it \ntakes 67626.278ms just to plan. Query execution takes 12ms.\n\nQuery has 7 joins and 2 subselects.\nIt looks like the issue is not deterministic, sometimes is takes few ms \nto plan the query.\n\nOne of the tables has 550,485,942 live tuples and 743,504,012 dead \ntuples. Running ANALYZE on that tables solves the problem only temporarily.\n\nQuestion is how can we debug what is going on?please check your indexes against bloating. Planner get min and max from indexes and this operation is slow on bloat indexes.but 67 sec is really slow - it can be some other other problem - it is real computer or virtual? \n\nBest Regards,\nKrzysztof Płocharz",
"msg_date": "Mon, 8 Apr 2019 16:33:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n> po 8. 4. 2019 v 16:11 odes�latel Krzysztof Plocharz <[email protected]> napsal:\n> \n> > We have some very strange query planning problem. Long story short it\n> > takes 67626.278ms just to plan. Query execution takes 12ms.\n> >\n> > Query has 7 joins and 2 subselects.\n> > It looks like the issue is not deterministic, sometimes is takes few ms\n> > to plan the query.\n> >\n> > One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> > tuples. Running ANALYZE on that tables solves the problem only temporarily.\n> >\n> > Question is how can we debug what is going on?\n> \n> please check your indexes against bloating. Planner get min and max from\n> indexes and this operation is slow on bloat indexes.\n\nI think that's from get_actual_variable_range(), right ?\n\nIf it's due to bloating, I think the first step would be to 1) vacuum right\nnow; and, 2) set more aggressive auto-vacuum, like ALTER TABLE t SET\n(AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005).\n\nWhat version postgres server ?\n\nJustin\n\n\n",
"msg_date": "Mon, 8 Apr 2019 09:42:10 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "\n\nOn 2019/04/08 16:42, Justin Pryzby wrote:\n> On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n>> po 8. 4. 2019 v 16:11 odes�latel Krzysztof Plocharz <[email protected]> napsal:\n>>\n>>> We have some very strange query planning problem. Long story short it\n>>> takes 67626.278ms just to plan. Query execution takes 12ms.\n>>>\n>>> Query has 7 joins and 2 subselects.\n>>> It looks like the issue is not deterministic, sometimes is takes few ms\n>>> to plan the query.\n>>>\n>>> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n>>> tuples. Running ANALYZE on that tables solves the problem only temporarily.\n>>>\n>>> Question is how can we debug what is going on?\n>>\n>> please check your indexes against bloating. Planner get min and max from\n>> indexes and this operation is slow on bloat indexes.\n\nYes, we thought about this, there are over 700,000,000 dead tuples. But \nas you said, it should not result in 67 second planning...\n\n> \n> I think that's from get_actual_variable_range(), right ?\n> \n> If it's due to bloating, I think the first step would be to 1) vacuum right\n> now; and, 2) set more aggressive auto-vacuum, like ALTER TABLE t SET\n> (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005).\n> \n\nWe did pgrepack and it did help, but is it possible for \nget_actual_variable_range to take over 60 seconds?\nIs there any other workaround for this except for pgrepack/vacuum?\n\nAnyway to actually debug this?\n\n> What version postgres server ?\n> \n> Justin\n> \n> \n\n\n\n\nOn 2019/04/08 16:33, Pavel Stehule wrote:>\n >\n > po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz\n > <[email protected] <mailto:[email protected]>> napsal:\n >\n > Hi\n >\n > We have some very strange query planning problem. Long story short it\n > takes 67626.278ms just to plan. Query execution takes 12ms.\n >\n > Query has 7 joins and 2 subselects.\n > It looks like the issue is not deterministic, sometimes is takes \nfew ms\n > to plan the query.\n >\n > One of the tables has 550,485,942 live tuples and 743,504,012 dead\n > tuples. Running ANALYZE on that tables solves the problem only\n > temporarily.\n >\n > Question is how can we debug what is going on?\n >\n >\n > please check your indexes against bloating. Planner get min and max from\n > indexes and this operation is slow on bloat indexes.\n >\nYes, we thought about this, there are over 700,000,000 dead tuples. But \nas you said, it should not result in 67 second planning...\n\n > but 67 sec is really slow - it can be some other other problem - it is\n > real computer or virtual?\n >\nreal, with pretty good specs: NVME drives, Six-Core AMD Opteron, 64GB of \nram. During testing system was mostly idle.\n\n\n >\n > Best Regards,\n > Krzysztof Płocharz\n >\n >\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:55:36 +0200",
"msg_from": "Krzysztof Plocharz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "\n\nOn 2019/04/08 16:33, Pavel Stehule wrote:\n> \n> \n> po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz \n> <[email protected] <mailto:[email protected]>> napsal:\n> \n> Hi\n> \n> We have some very strange query planning problem. Long story short it\n> takes 67626.278ms just to plan. Query execution takes 12ms.\n> \n> Query has 7 joins and 2 subselects.\n> It looks like the issue is not deterministic, sometimes is takes few ms\n> to plan the query.\n> \n> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> tuples. Running ANALYZE on that tables solves the problem only\n> temporarily.\n> \n> Question is how can we debug what is going on?\n> \n> \n> please check your indexes against bloating. Planner get min and max from \n> indexes and this operation is slow on bloat indexes.\n> \n\nYes, we thought about this, there are over 700,000,000 dead tuples. But \nas you said, it should not result in 67 second planning...\n\n> but 67 sec is really slow - it can be some other other problem - it is \n> real computer or virtual?\n> \n\nreal, with pretty good specs: NVME drives, Six-Core AMD Opteron, 64GB of \nram. During testing system was mostly idle.\n\n> \n> Best Regards,\n> Krzysztof Płocharz\n> \n> \n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:58:55 +0200",
"msg_from": "Krzysztof Plocharz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "po 8. 4. 2019 v 16:55 odesílatel Krzysztof Plocharz <[email protected]>\nnapsal:\n\n>\n>\n> On 2019/04/08 16:42, Justin Pryzby wrote:\n> > On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n> >> po 8. 4. 2019 v 16:11 odes�latel Krzysztof Plocharz <\n> [email protected]> napsal:\n> >>\n> >>> We have some very strange query planning problem. Long story short it\n> >>> takes 67626.278ms just to plan. Query execution takes 12ms.\n> >>>\n> >>> Query has 7 joins and 2 subselects.\n> >>> It looks like the issue is not deterministic, sometimes is takes few ms\n> >>> to plan the query.\n> >>>\n> >>> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> >>> tuples. Running ANALYZE on that tables solves the problem only\n> temporarily.\n> >>>\n> >>> Question is how can we debug what is going on?\n> >>\n> >> please check your indexes against bloating. Planner get min and max from\n> >> indexes and this operation is slow on bloat indexes.\n>\n> Yes, we thought about this, there are over 700,000,000 dead tuples. But\n> as you said, it should not result in 67 second planning...\n>\n> >\n> > I think that's from get_actual_variable_range(), right ?\n> >\n> > If it's due to bloating, I think the first step would be to 1) vacuum\n> right\n> > now; and, 2) set more aggressive auto-vacuum, like ALTER TABLE t SET\n> > (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005).\n> >\n>\n> We did pgrepack and it did help, but is it possible for\n> get_actual_variable_range to take over 60 seconds?\n> Is there any other workaround for this except for pgrepack/vacuum?\n>\n> Anyway to actually debug this?\n>\n\nyou can use perf and get a profile.\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\n\n\n> > What version postgres server ?\n> >\n> > Justin\n> >\n> >\n>\n>\n>\n>\n> On 2019/04/08 16:33, Pavel Stehule wrote:>\n> >\n> > po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz\n> > <[email protected] <mailto:[email protected]>> napsal:\n> >\n> > Hi\n> >\n> > We have some very strange query planning problem. Long story short\n> it\n> > takes 67626.278ms just to plan. Query execution takes 12ms.\n> >\n> > Query has 7 joins and 2 subselects.\n> > It looks like the issue is not deterministic, sometimes is takes\n> few ms\n> > to plan the query.\n> >\n> > One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> > tuples. Running ANALYZE on that tables solves the problem only\n> > temporarily.\n> >\n> > Question is how can we debug what is going on?\n> >\n> >\n> > please check your indexes against bloating. Planner get min and max from\n> > indexes and this operation is slow on bloat indexes.\n> >\n> Yes, we thought about this, there are over 700,000,000 dead tuples. But\n> as you said, it should not result in 67 second planning...\n>\n> > but 67 sec is really slow - it can be some other other problem - it is\n> > real computer or virtual?\n> >\n> real, with pretty good specs: NVME drives, Six-Core AMD Opteron, 64GB of\n> ram. During testing system was mostly idle.\n>\n>\n> >\n> > Best Regards,\n> > Krzysztof Płocharz\n> >\n> >\n>\n>\n>\n\npo 8. 4. 2019 v 16:55 odesílatel Krzysztof Plocharz <[email protected]> napsal:\n\nOn 2019/04/08 16:42, Justin Pryzby wrote:\n> On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n>> po 8. 4. 2019 v 16:11 odes�latel Krzysztof Plocharz <[email protected]> napsal:\n>>\n>>> We have some very strange query planning problem. Long story short it\n>>> takes 67626.278ms just to plan. 
Query execution takes 12ms.\n>>>\n>>> Query has 7 joins and 2 subselects.\n>>> It looks like the issue is not deterministic, sometimes is takes few ms\n>>> to plan the query.\n>>>\n>>> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n>>> tuples. Running ANALYZE on that tables solves the problem only temporarily.\n>>>\n>>> Question is how can we debug what is going on?\n>>\n>> please check your indexes against bloating. Planner get min and max from\n>> indexes and this operation is slow on bloat indexes.\n\nYes, we thought about this, there are over 700,000,000 dead tuples. But \nas you said, it should not result in 67 second planning...\n\n> \n> I think that's from get_actual_variable_range(), right ?\n> \n> If it's due to bloating, I think the first step would be to 1) vacuum right\n> now; and, 2) set more aggressive auto-vacuum, like ALTER TABLE t SET\n> (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005).\n> \n\nWe did pgrepack and it did help, but is it possible for \nget_actual_variable_range to take over 60 seconds?\nIs there any other workaround for this except for pgrepack/vacuum?\n\nAnyway to actually debug this?you can use perf and get a profile. https://wiki.postgresql.org/wiki/Profiling_with_perf\n\n> What version postgres server ?\n> \n> Justin\n> \n> \n\n\n\n\nOn 2019/04/08 16:33, Pavel Stehule wrote:>\n >\n > po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz\n > <[email protected] <mailto:[email protected]>> napsal:\n >\n > Hi\n >\n > We have some very strange query planning problem. Long story short it\n > takes 67626.278ms just to plan. Query execution takes 12ms.\n >\n > Query has 7 joins and 2 subselects.\n > It looks like the issue is not deterministic, sometimes is takes \nfew ms\n > to plan the query.\n >\n > One of the tables has 550,485,942 live tuples and 743,504,012 dead\n > tuples. Running ANALYZE on that tables solves the problem only\n > temporarily.\n >\n > Question is how can we debug what is going on?\n >\n >\n > please check your indexes against bloating. Planner get min and max from\n > indexes and this operation is slow on bloat indexes.\n >\nYes, we thought about this, there are over 700,000,000 dead tuples. But \nas you said, it should not result in 67 second planning...\n\n > but 67 sec is really slow - it can be some other other problem - it is\n > real computer or virtual?\n >\nreal, with pretty good specs: NVME drives, Six-Core AMD Opteron, 64GB of \nram. During testing system was mostly idle.\n\n\n >\n > Best Regards,\n > Krzysztof Płocharz\n >\n >",
"msg_date": "Mon, 8 Apr 2019 17:07:04 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 04:55:36PM +0200, Krzysztof Plocharz wrote:\n> We did pgrepack and it did help, but is it possible for\n> get_actual_variable_range to take over 60 seconds?\n\nYou have many tables being joined, perhaps in exhaustive search, so maybe\nthat's being called many times.\n\nWhat version postgres server ?\n\nJustin\n\n\n",
"msg_date": "Mon, 8 Apr 2019 17:10:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "On 4/8/19 07:42, Justin Pryzby wrote:\n> On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n>> po 8. 4. 2019 v 16:11 odesílatel Krzysztof Plocharz <[email protected]> napsal:\n>>\n>>> We have some very strange query planning problem. Long story short it\n>>> takes 67626.278ms just to plan. Query execution takes 12ms.\n>>>\n>>> Query has 7 joins and 2 subselects.\n>>> It looks like the issue is not deterministic, sometimes is takes few ms\n>>> to plan the query.\n>>>\n>>> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n>>> tuples. Running ANALYZE on that tables solves the problem only temporarily.\n>>>\n>>> Question is how can we debug what is going on?\n>>\n>> please check your indexes against bloating. Planner get min and max from\n>> indexes and this operation is slow on bloat indexes.\n> \n> I think that's from get_actual_variable_range(), right ?\n\nFor what it's worth, I have seen a similar issue on Aurora PG 9.6 where\nquery planning took a very long time (multiple minutes). In this\nparticular case, there wasn't anything Aurora-specific about the call to\nget_actual_variable_range. We weren't able to distinctly identify the\nroot cause or build a reproducible test case -- but we suspect that an\ninefficiency might exist in community PostgreSQL code.\n\nFor debugging, a few ideas:\n\n1) capture a stack with pstack or perf record --call-graph\n\n2) capture the execution plan of the SQL w slow planning\n\n3) capture detailed stats for all relations and objects involved\n\n4) capture the usual info for bug reporting (preface section in docs)\n\nA reproducible test case is the gold standard; I'm keeping my eyes open\nfor another case too.\n\nFor the slow planning case that I saw, the slow process was almost\nentirely in this call stack (captured with perf record --call-graph):\n...\nindex_fetch_heap\nindex_getnext\nget_actual_variable_range\nineq_histogram_selectivity\nscalarineqsel\nmergejoinscansel\ninitial_cost_mergejoin\ntry_mergejoin_path\nadd_paths_to_joinrel\nmake_join_rel\njoin_search_one_level\nstandard_join_search\nmake_one_rel\nquery_planner\n...\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:10:17 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 16:10:17 -0700, Jeremy Schneider wrote:\n> On 4/8/19 07:42, Justin Pryzby wrote:\n> > On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:\n> >> po 8. 4. 2019 v 16:11 odes�latel Krzysztof Plocharz <[email protected]> napsal:\n> >>\n> >>> We have some very strange query planning problem. Long story short it\n> >>> takes 67626.278ms just to plan. Query execution takes 12ms.\n> >>>\n> >>> Query has 7 joins and 2 subselects.\n> >>> It looks like the issue is not deterministic, sometimes is takes few ms\n> >>> to plan the query.\n> >>>\n> >>> One of the tables has 550,485,942 live tuples and 743,504,012 dead\n> >>> tuples. Running ANALYZE on that tables solves the problem only temporarily.\n> >>>\n> >>> Question is how can we debug what is going on?\n> >>\n> >> please check your indexes against bloating. Planner get min and max from\n> >> indexes and this operation is slow on bloat indexes.\n> > \n> > I think that's from get_actual_variable_range(), right ?\n> \n> For what it's worth, I have seen a similar issue on Aurora PG 9.6 where\n> query planning took a very long time (multiple minutes). In this\n> particular case, there wasn't anything Aurora-specific about the call to\n> get_actual_variable_range. We weren't able to distinctly identify the\n> root cause or build a reproducible test case -- but we suspect that an\n> inefficiency might exist in community PostgreSQL code.\n> \n> For debugging, a few ideas:\n> \n> 1) capture a stack with pstack or perf record --call-graph\n> \n> 2) capture the execution plan of the SQL w slow planning\n> \n> 3) capture detailed stats for all relations and objects involved\n> \n> 4) capture the usual info for bug reporting (preface section in docs)\n> \n> A reproducible test case is the gold standard; I'm keeping my eyes open\n> for another case too.\n> \n> For the slow planning case that I saw, the slow process was almost\n> entirely in this call stack (captured with perf record --call-graph):\n> ...\n> index_fetch_heap\n> index_getnext\n> get_actual_variable_range\n> ineq_histogram_selectivity\n> scalarineqsel\n> mergejoinscansel\n> initial_cost_mergejoin\n> try_mergejoin_path\n> add_paths_to_joinrel\n> make_join_rel\n> join_search_one_level\n> standard_join_search\n> make_one_rel\n> query_planner\n> ...\n\nI suspect some of this might be related to < 11 not having the following\ncommit:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:26:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Two years later, I still remember this. And today I just confirmed\nsomeone hitting this on open source PG13.\n\nOn the incident I saw today, the tuple reads during planning were\nleading to excessive mxid waits - I suspect because memory pressure\nprobably caused the mxid offsets/member files to get flushed and start\nreading from disk. It was the mxid waits that actually made the system\ngo sideways - I was just surprised that we saw most of the system CPU in\nplanning time (rather than execution).\n\nI really think there's something here in the planner that's still\ncausing headaches for people; I've this seen a few times now. Looks to\nme like the common theme is:\n\ntry_mergejoin_path ->\ninitial_cost_mergejoin ->\nmergejoinscansel ->\nscalarineqsel ->\nineq_histogram_selectivity ->\nget_actual_variable_range\n\nAnd from here it starts calling index_getnext() which can go on for a\nvery long time and the system seems to fall over if it begins to involve\nmuch physical I/O.\n\nI'll continue to keep an eye out for this, and keep this thread updated\nif I find anything else that might move the understanding forward.\n\nThanks,\nJeremy Schneider\n\n\nOn 4/8/19 16:26, Andres Freund wrote:\n> Hi,\n> \n> On 2019-04-08 16:10:17 -0700, Jeremy Schneider wrote:\n>>\n>> For what it's worth, I have seen a similar issue on Aurora PG 9.6 where\n>> query planning took a very long time (multiple minutes). In this\n>> particular case, there wasn't anything Aurora-specific about the call to\n>> get_actual_variable_range. We weren't able to distinctly identify the\n>> root cause or build a reproducible test case -- but we suspect that an\n>> inefficiency might exist in community PostgreSQL code.\n>>\n>> For debugging, a few ideas:\n>>\n>> 1) capture a stack with pstack or perf record --call-graph\n>> 2) capture the execution plan of the SQL w slow planning\n>> 3) capture detailed stats for all relations and objects involved\n>> 4) capture the usual info for bug reporting (preface section in docs)\n>>\n>> A reproducible test case is the gold standard; I'm keeping my eyes open\n>> for another case too.\n>>\n>> For the slow planning case that I saw, the slow process was almost\n>> entirely in this call stack (captured with perf record --call-graph):\n>> ...\n>> index_fetch_heap\n>> index_getnext\n>> get_actual_variable_range\n>> ineq_histogram_selectivity\n>> scalarineqsel\n>> mergejoinscansel\n>> initial_cost_mergejoin\n>> try_mergejoin_path\n>> add_paths_to_joinrel\n>> make_join_rel\n>> join_search_one_level\n>> standard_join_search\n>> make_one_rel\n>> query_planner\n>> ...\n> \n> I suspect some of this might be related to < 11 not having the following\n> commit:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 20 Apr 2021 14:00:33 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "On Thu, 22 Apr 2021 at 00:03, Jeremy Schneider <[email protected]> wrote:\n>\n> Two years later, I still remember this. And today I just confirmed\n> someone hitting this on open source PG13.\n\nThe only thing that changed about get_actual_variable_range() is that\nit now uses a SnapshotNonVacuumable snapshot. Previously a\nlong-running transaction could have caused vacuum to be unable to\nremove tuples which could have caused get_actual_variable_range() to\nbe slow if it had to skip the unvacuumable tuples.\n\nThat's now changed as the SnapshotNonVacuumable will see any tuples\nrequired by that long-running transaction and use that to determine\nthe range instead of skipping over it.\n\nAnyone with a large number of tuples that vacuum can remove that are\nat either end of the range on a column that is indexed by a btree\nindex could still have issues. Vacuuming more often might be a good\nthing to consider. With the original report on this thread there were\nmore dead tuples in the table than live tuples. Disabling auto-vacuum\nor tuning it so it waits that long is likely a bad idea.\n\nFWIW, here's a simple test case that shows the problem in current master.\n\ncreate table a (a int primary key) with (autovacuum_enabled = off);\ninsert into a select x from generate_series(1,10000000) x;\nanalyze a;\ndelete from a;\n\\timing on\nexplain select * from a where a < 10000000;\n QUERY PLAN\n------------------------------------------------------------\n Seq Scan on a (cost=0.00..169247.71 rows=9998977 width=4)\n Filter: (a < 10000000)\n(2 rows)\n\n\nTime: 9062.600 ms (00:09.063)\n\nvacuum a;\nexplain select * from a where a < 10000000;\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on a (cost=0.00..0.00 rows=1 width=4)\n Filter: (a < 10000000)\n(2 rows)\n\nTime: 2.665 ms\n\nNotice that it became faster again after I did a vacuum.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Apr 2021 02:14:21 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> FWIW, here's a simple test case that shows the problem in current master.\n\nThis isn't telling the whole story. That first EXPLAIN did set the killed\nbits in the index, so that subsequent ones are fairly fast, even without\nVACUUM:\n\nregression=# explain select * from a where a < 10000000;\n QUERY PLAN \n------------------------------------------------------------\n Seq Scan on a (cost=0.00..169247.71 rows=9998977 width=4)\n Filter: (a < 10000000)\n(2 rows)\n\nTime: 3711.089 ms (00:03.711)\nregression=# explain select * from a where a < 10000000;\n QUERY PLAN \n------------------------------------------------------------\n Seq Scan on a (cost=0.00..169247.71 rows=9998977 width=4)\n Filter: (a < 10000000)\n(2 rows)\n\nTime: 230.094 ms\n\nAdmittedly this is still more than after VACUUM gets rid of the\nindex entries altogether:\n\nregression=# vacuum a;\nVACUUM\nTime: 2559.571 ms (00:02.560)\nregression=# explain select * from a where a < 10000000;\n QUERY PLAN \n-------------------------------------------------\n Seq Scan on a (cost=0.00..0.00 rows=1 width=4)\n Filter: (a < 10000000)\n(2 rows)\n\nTime: 0.698 ms\n\nHowever, I'm skeptical that any problem actually remains in\nreal-world use cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 10:37:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "> However, I'm skeptical that any problem actually remains in\n> real-world use cases.\n\nHello Tom,\n\nWe also had some issues with planning and get_actual_variable_range(). We\nactually found some interesting behaviour that probably requires an eye with\nbetter expertise in how the planner works.\nFor the example being discussed you can add some joins into the equation and\nplanning times deteriorate quite a bit.\nI'll just skip posting the first executions as it is already established that\na subsequent one will be faster.\n\n\ncreate table b (b int primary key, a int references a(a))\nwith (autovacuum_enabled=off);\n\ninsert into a select x from generate_series(1,10000000) x;\ninsert into b select x, x from generate_series(1,10000000) x;\ncreate index b_a_idx on b(a);\nanalyze a, b;\n\n\nFor our case a rollback of a bulk insert causes bloat on the index.\n\n\nbegin;\ninsert into a select x from generate_series(10000001,20000000) x;\nrollback;\n\nexplain (analyze, buffers)\nselect * from a\njoin b on (b.a = a.a)\nwhere b.a in (1,100,10000,1000000,1000001);\n\n Planning:\n Buffers: shared hit=9 read=27329\n Planning Time: 134.560 ms\n Execution Time: 0.100 ms\n\n\nI see a lot of buffers being read for some reason (wasn't this fixed?). And\ntimes are slow too. But it get's worse with each join added to the select.\n\n\nexplain (analyze, buffers)\nselect * from a\njoin b b1 on (b1.a = a.a)\njoin b b2 on (b2.a = a.a)\nwhere b1.a in (1,100,10000,1000000,1000001);\n\n Planning:\n Buffers: shared hit=38 read=81992\n Planning Time: 312.826 ms\n Execution Time: 0.131 ms\n\nJust add a few more joins and it is a recipe for disaster.\nApparently, the planner isn't reusing the data boundaries across alternative\nplans. It would be nicer if the planner remembered each column boundaries\nfor later reuse (within the same planner execution).\n\nAnother thing that worries me is that even the second run has faster planning\nit is still way slower than the case without lots of bloat in the index. And\nI don't think this is just an edge case. Rollbacks on bulk inserts can be\nquite common, and joins are expected in a SQL database.\n\nWe had downtime due to how the planner works on this case. Unfortunately\nsetting more aggressive vacuum settings won't fix our problems. Most of the\nread queries are being issued to a replica. When the issues with the planner\nstart happening, CPU usage on that node goes to 100% which interferes with the\nreplication process.\nThis means the replica cannot get to a new checkpoint with a new live\nmax value in the index nor can it delete the bloat that vacuum has already\ncleaned on the leader server.\n\nOh, by the way, we're running version 13.2\n\n\nRegards,\n\nManuel\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 13:25:38 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Hello everyone,\n\n> Apparently, the planner isn't reusing the data boundaries across alternative\n> plans. It would be nicer if the planner remembered each column boundaries\n> for later reuse (within the same planner execution).\n\nI've written a very naive (and crappy) patch to show how adding\nmemorization to get_actual_variable_range() could help the planner on\nscenarios with a big number of joins.\n\nFor the previous example,\n\n> explain (analyze, buffers)\n> select * from a\n> join b b1 on (b1.a = a.a)\n> join b b2 on (b2.a = a.a)\n> where b1.a in (1,100,10000,1000000,1000001);\n\neach time you add a join clause the planner has to read an extra ~5[K]\nbuffers and gets about 200[ms] slower.\n\n1 join\n Planning:\n Buffers: shared hit=9 read=27329\n Planning Time: 101.745 ms\n Execution Time: 0.082 ms\n\n2 joins\n Planning:\n Buffers: shared hit=42 read=81988\n Planning Time: 303.237 ms\n Execution Time: 0.102 ms\n\n3 joins\n Planning:\n Buffers: shared hit=94 read=136660\n Planning Time: 508.947 ms\n Execution Time: 0.155 ms\n\n4 joins\n Planning:\n Buffers: shared hit=188 read=191322\n Planning Time: 710.981 ms\n Execution Time: 0.168 ms\n\n\nAfter adding memorization the cost in buffers remains constant and the\nlatency deteriorates only marginally (as expected) with each join.\n\n1 join\n Planning:\n Buffers: shared hit=10 read=27328\n Planning Time: 97.889 ms\n Execution Time: 0.066 ms\n\n2 joins\n Planning:\n Buffers: shared hit=7 read=27331\n Planning Time: 100.589 ms\n Execution Time: 0.111 ms\n\n3 joins\n Planning:\n Buffers: shared hit=9 read=27329\n Planning Time: 105.669 ms\n Execution Time: 0.134 ms\n\n4 joins\n Planning:\n Buffers: shared hit=132 read=27370\n Planning Time: 155.716 ms\n Execution Time: 0.219 ms\n\n\nI'd be happy to improve this patch into something better. Though I'd\nlike suggestions on how to do it:\nI have this idea of creating a local \"memorization\" struct instance within\nstandard_planner(). That would require passing on a pointer down until\nit reaches get_actual_variable_range(), which I think would be quite\nugly, if done just to improve the planner for this scenario.\nIs there any better mechanism I could reuse from other modules? (utils\nor cache, for example).\n\n\nRegards,\nManuel",
"msg_date": "Sat, 19 Jun 2021 20:09:58 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Manuel Weitzman <[email protected]> writes:\n> I've written a very naive (and crappy) patch to show how adding\n> memorization to get_actual_variable_range() could help the planner on\n> scenarios with a big number of joins.\n\nSo ... the reason why there's not caching of get_actual_variable_range\nresults already is that I'd supposed it wouldn't be necessary given\nthe caching of selectivity estimates that happens at the RestrictInfo\nlevel. I don't have any objection in principle to adding another\ncaching layer if that one's not working well enough, but I think it'd\nbe wise to first understand why it's needed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Jun 2021 17:06:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Em dom., 20 de jun. de 2021 às 14:50, Manuel Weitzman <\[email protected]> escreveu:\n\n> Hello everyone,\n>\n> > Apparently, the planner isn't reusing the data boundaries across\n> alternative\n> > plans. It would be nicer if the planner remembered each column boundaries\n> > for later reuse (within the same planner execution).\n>\n> I've written a very naive (and crappy) patch to show how adding\n> memorization to get_actual_variable_range() could help the planner on\n> scenarios with a big number of joins.\n>\n> For the previous example,\n>\n> > explain (analyze, buffers)\n> > select * from a\n> > join b b1 on (b1.a = a.a)\n> > join b b2 on (b2.a = a.a)\n> > where b1.a in (1,100,10000,1000000,1000001);\n>\n> each time you add a join clause the planner has to read an extra ~5[K]\n> buffers and gets about 200[ms] slower.\n>\n> 1 join\n> Planning:\n> Buffers: shared hit=9 read=27329\n> Planning Time: 101.745 ms\n> Execution Time: 0.082 ms\n>\n> 2 joins\n> Planning:\n> Buffers: shared hit=42 read=81988\n> Planning Time: 303.237 ms\n> Execution Time: 0.102 ms\n>\n> 3 joins\n> Planning:\n> Buffers: shared hit=94 read=136660\n> Planning Time: 508.947 ms\n> Execution Time: 0.155 ms\n>\n> 4 joins\n> Planning:\n> Buffers: shared hit=188 read=191322\n> Planning Time: 710.981 ms\n> Execution Time: 0.168 ms\n>\n>\n> After adding memorization the cost in buffers remains constant and the\n> latency deteriorates only marginally (as expected) with each join.\n>\n> 1 join\n> Planning:\n> Buffers: shared hit=10 read=27328\n> Planning Time: 97.889 ms\n> Execution Time: 0.066 ms\n>\n> 2 joins\n> Planning:\n> Buffers: shared hit=7 read=27331\n> Planning Time: 100.589 ms\n> Execution Time: 0.111 ms\n>\n> 3 joins\n> Planning:\n> Buffers: shared hit=9 read=27329\n> Planning Time: 105.669 ms\n> Execution Time: 0.134 ms\n>\n> 4 joins\n> Planning:\n> Buffers: shared hit=132 read=27370\n> Planning Time: 155.716 ms\n> Execution Time: 0.219 ms\n>\n>\n> I'd be happy to improve this patch into something better. Though I'd\n> like suggestions on how to do it:\n> I have this idea of creating a local \"memorization\" struct instance within\n> standard_planner(). That would require passing on a pointer down until\n> it reaches get_actual_variable_range(), which I think would be quite\n> ugly, if done just to improve the planner for this scenario.\n> Is there any better mechanism I could reuse from other modules? (utils\n> or cache, for example).\n>\nWithout going into the merits of whether this cache will be adopted or not,\nI have some comments about the code.\n\n1. Prefer to use .patch instead of .diff, it makes it easier for browsers\nsuch as firefox to read and show the content automatically.\n2. New struct?\n Oid is unsigned int, lower than int64.\n Better struct is:\n+struct ActualVariableRangeCache {\n+ int64 min_value; /* 8 bytes */\n+ int64 max_value; /* 8 bytes */\n+ Oid indexoid; /* 4 bytes */\n+ bool has_min; /* 1 byte */\n+ bool has_max; /*1 byte */\n+};\nTakes up less space.\n\n3. Avoid use of type *long*, it is very problematic with 64 bits.\nWindows 64 bits, long is 4 (four) bytes.\nLinux 64 bits, long is 8 (eight) bytes.\n\n4. Avoid C99 style declarations\n for(unsigned long i = 0;)\nPrefer:\n size_t i;\n for(i = 0;)\nHelps backpatching to C89 versions.\n\nregards,\nRanier Vilela\n\nEm dom., 20 de jun. de 2021 às 14:50, Manuel Weitzman <[email protected]> escreveu:Hello everyone,\n\n> Apparently, the planner isn't reusing the data boundaries across alternative\n> plans. 
It would be nicer if the planner remembered each column boundaries\n> for later reuse (within the same planner execution).\n\nI've written a very naive (and crappy) patch to show how adding\nmemorization to get_actual_variable_range() could help the planner on\nscenarios with a big number of joins.\n\nFor the previous example,\n\n> explain (analyze, buffers)\n> select * from a\n> join b b1 on (b1.a = a.a)\n> join b b2 on (b2.a = a.a)\n> where b1.a in (1,100,10000,1000000,1000001);\n\neach time you add a join clause the planner has to read an extra ~5[K]\nbuffers and gets about 200[ms] slower.\n\n1 join\n Planning:\n Buffers: shared hit=9 read=27329\n Planning Time: 101.745 ms\n Execution Time: 0.082 ms\n\n2 joins\n Planning:\n Buffers: shared hit=42 read=81988\n Planning Time: 303.237 ms\n Execution Time: 0.102 ms\n\n3 joins\n Planning:\n Buffers: shared hit=94 read=136660\n Planning Time: 508.947 ms\n Execution Time: 0.155 ms\n\n4 joins\n Planning:\n Buffers: shared hit=188 read=191322\n Planning Time: 710.981 ms\n Execution Time: 0.168 ms\n\n\nAfter adding memorization the cost in buffers remains constant and the\nlatency deteriorates only marginally (as expected) with each join.\n\n1 join\n Planning:\n Buffers: shared hit=10 read=27328\n Planning Time: 97.889 ms\n Execution Time: 0.066 ms\n\n2 joins\n Planning:\n Buffers: shared hit=7 read=27331\n Planning Time: 100.589 ms\n Execution Time: 0.111 ms\n\n3 joins\n Planning:\n Buffers: shared hit=9 read=27329\n Planning Time: 105.669 ms\n Execution Time: 0.134 ms\n\n4 joins\n Planning:\n Buffers: shared hit=132 read=27370\n Planning Time: 155.716 ms\n Execution Time: 0.219 ms\n\n\nI'd be happy to improve this patch into something better. Though I'd\nlike suggestions on how to do it:\nI have this idea of creating a local \"memorization\" struct instance within\nstandard_planner(). That would require passing on a pointer down until\nit reaches get_actual_variable_range(), which I think would be quite\nugly, if done just to improve the planner for this scenario.\nIs there any better mechanism I could reuse from other modules? (utils\nor cache, for example).Without going into the merits of whether this cache will be adopted or not, I have some comments about the code. 1. Prefer to use .patch instead of .diff, it makes it easier for browsers such as firefox to read and show the content automatically.2. New struct? Oid is unsigned int, lower than int64. Better struct is:+struct ActualVariableRangeCache {+\tint64\tmin_value; /* 8 bytes */+\tint64\tmax_value; \n/* 8 bytes */\n\n\n+\tOid\t\tindexoid; \n/* 4 bytes */\n\n\n\n+\tbool\thas_min; /* 1 byte */+\tbool\thas_max; /*1 byte */+};Takes up less space.3. Avoid use of type *long*, it is very problematic with 64 bits.Windows 64 bits, long is 4 (four) bytes.Linux 64 bits, long is 8 (eight) bytes.4. Avoid C99 style declarations for(unsigned long i = 0;)Prefer: size_t i; for(i = 0;)Helps backpatching to C89 versions.regards,Ranier Vilela",
"msg_date": "Sun, 20 Jun 2021 20:23:40 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Ranier Vilela <[email protected]> writes:\n> 3. Avoid use of type *long*, it is very problematic with 64 bits.\n> Windows 64 bits, long is 4 (four) bytes.\n> Linux 64 bits, long is 8 (eight) bytes.\n\nAgreed.\n\n> 4. Avoid C99 style declarations\n> for(unsigned long i = 0;)\n> Prefer:\n> size_t i;\n> for(i = 0;)\n> Helps backpatching to C89 versions.\n\nIt seems unlikely that we'd consider back-patching this into pre-C99\nbranches, so I see no reason not to use C99 loop style. (But do\nkeep in mind that we avoid most other C99-isms, such as intermixed\ndecls and code.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Jun 2021 20:17:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "\n> On 20-06-2021, at 17:06, Tom Lane <[email protected]> wrote:\n> \n> So ... the reason why there's not caching of get_actual_variable_range\n> results already is that I'd supposed it wouldn't be necessary given\n> the caching of selectivity estimates that happens at the RestrictInfo\n> level. I don't have any objection in principle to adding another\n> caching layer if that one's not working well enough, but I think it'd\n> be wise to first understand why it's needed.\n\nFor what I could make out from the code, the caching done at the\nRestrictInfo level is already saving a lot of work, but there's a\ndifferent RestrictInfo instance for each alternative path created by\nmake_one_rel().\nI wouldn't know how to reuse instances of RestrictInfo or if that\nwould even be the correct approach. My guess is that doing so would\nbe incorrect.\n\nI'll improve the patch with the suggestions from Ranier and you in\nthe meantime.\n\n\nBest regards,\nManuel\n\n\n",
"msg_date": "Tue, 29 Jun 2021 15:26:20 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Manuel Weitzman <[email protected]> writes:\n>> On 20-06-2021, at 17:06, Tom Lane <[email protected]> wrote:\n>> So ... the reason why there's not caching of get_actual_variable_range\n>> results already is that I'd supposed it wouldn't be necessary given\n>> the caching of selectivity estimates that happens at the RestrictInfo\n>> level. I don't have any objection in principle to adding another\n>> caching layer if that one's not working well enough, but I think it'd\n>> be wise to first understand why it's needed.\n\n> For what I could make out from the code, the caching done at the\n> RestrictInfo level is already saving a lot of work, but there's a\n> different RestrictInfo instance for each alternative path created by\n> make_one_rel().\n\nThat seems a bit broken; a given WHERE clause should produce only one\nRestrictInfo. Can you provide a more concrete example?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Jun 2021 15:43:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "> On 29-06-2021, at 15:43, Tom Lane <[email protected]> wrote:\n> \n> Manuel Weitzman <[email protected]> writes:\n>>> On 20-06-2021, at 17:06, Tom Lane <[email protected]> wrote:\n>>> So ... the reason why there's not caching of get_actual_variable_range\n>>> results already is that I'd supposed it wouldn't be necessary given\n>>> the caching of selectivity estimates that happens at the RestrictInfo\n>>> level. I don't have any objection in principle to adding another\n>>> caching layer if that one's not working well enough, but I think it'd\n>>> be wise to first understand why it's needed.\n> \n>> For what I could make out from the code, the caching done at the\n>> RestrictInfo level is already saving a lot of work, but there's a\n>> different RestrictInfo instance for each alternative path created by\n>> make_one_rel().\n> \n> That seems a bit broken; a given WHERE clause should produce only one\n> RestrictInfo. Can you provide a more concrete example?\n> \n\nI added some logging to see hits and misses on cached_scansel() for\nthis query\n> explain (analyze, buffers)\n> select * from a\n> join b b1 on (b1.a = a.a)\n> join b b2 on (b2.a = a.a)\n> where b1.a in (1,100,10000,1000000,1000001);\n\nApparently there's a RestrictInfo for each possible way of doing merge\njoin (are those created dynamically for planning?), for example:\n- a join (b1 join b2)\n- b1 join (a join b2)\n- b2 join (a join b1)\n\nWhen the cost of a possible mergejoin path hasn't been computed yet,\nthen mergejoinscansel() would have to check the bloated index again.\n\nI attached a patch so you can see the hits and misses on cached_scansel().\nEach time there's a miss logged, there's also a different RestrictInfo\npointer involved.\n\nBest regards,\nManuel",
"msg_date": "Tue, 29 Jun 2021 17:35:38 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "Manuel Weitzman <[email protected]> writes:\n> On 29-06-2021, at 15:43, Tom Lane <[email protected]> wrote:\n>> That seems a bit broken; a given WHERE clause should produce only one\n>> RestrictInfo. Can you provide a more concrete example?\n\n>> explain (analyze, buffers)\n>> select * from a\n>> join b b1 on (b1.a = a.a)\n>> join b b2 on (b2.a = a.a)\n>> where b1.a in (1,100,10000,1000000,1000001);\n\nHm. By my count, this example generates 3 RestrictInfos during\ndeconstruct_jointree, representing the three original clauses\nfrom the query, and then 4 more in generate_join_implied_equalities,\nrepresenting the EC-derived clauses\n\n\ta.a = b1.a\n\ta.a = b2.a\n\tb1.a = b2.a\n\tb1.a = a.a\n\nThe third of these seems legit enough; it's a new fact that we\nwant to apply while considering joining b1 directly to b2.\nThe other ones get made despite create_join_clause's efforts\nto avoid making duplicate clauses, because of two things:\n\n1. create_join_clause doesn't trouble to look for commuted\nequivalents, which perhaps is penny-wise and pound-foolish.\nThe cost of re-deriving selectivity estimates could be way\nmore than the cost of checking this.\n\n2. Although these look like they ought to be equivalent to the\noriginal clauses (modulo commutation, for some of them), they don't\nlook that way to create_join_clause, because it's also checking\nfor parent_ec equality. Per the comment,\n\n * parent_ec is either equal to ec (if the clause is a potentially-redundant\n * join clause) or NULL (if not). We have to treat this as part of the\n * match requirements --- it's possible that a clause comparing the same two\n * EMs is a join clause in one join path and a restriction clause in another.\n\nIt might be worth digging into the git history to see why that\nbecame a thing and then considering whether there's a way around it.\n(I'm pretty sure that comment is mine, but I don't recall the details\nanymore.)\n\nAnyway, it's certainly not the case that we're making new\nRestrictInfos for every pair of rels. It looks that way in this\nexample because the join vars all belong to the same EC, but\nthat typically wouldn't be the case in more complex queries.\n\nSo we could look into whether this code can be improved to share\nRestrictInfos across more cases. Another thought is that even\nif we need to keep original and derived clauses separate, maybe it'd\nbe all right to copy previously-determined cached selectivity values\nfrom an original clause to an otherwise-identical derived clause\n(cf. commute_restrictinfo()). I'm not sure though whether it's\nreliably the case that we'd have filled in selectivities for the\noriginal clauses before this code wants to clone them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Jun 2021 18:31:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "> 1. create_join_clause doesn't trouble to look for commuted\n> equivalents, which perhaps is penny-wise and pound-foolish.\n> The cost of re-deriving selectivity estimates could be way\n> more than the cost of checking this.\n\nAgreed.\n\n> 2. Although these look like they ought to be equivalent to the\n> original clauses (modulo commutation, for some of them), they don't\n> look that way to create_join_clause, because it's also checking\n> for parent_ec equality. Per the comment,\n> \n> * parent_ec is either equal to ec (if the clause is a potentially-redundant\n> * join clause) or NULL (if not). We have to treat this as part of the\n> * match requirements --- it's possible that a clause comparing the same two\n> * EMs is a join clause in one join path and a restriction clause in another.\n> \n> It might be worth digging into the git history to see why that\n> became a thing and then considering whether there's a way around it.\n> (I'm pretty sure that comment is mine, but I don't recall the details\n> anymore.)\n\nTo me that sounds OK, I cannot prove that they're equivalent to the\noriginal clauses so I think it is fine to assume they're not (not an\nexpert here, quite the opposite).\n\n> Anyway, it's certainly not the case that we're making new\n> RestrictInfos for every pair of rels. It looks that way in this\n> example because the join vars all belong to the same EC, but\n> that typically wouldn't be the case in more complex queries.\n\nGood to know, this wasn't clear to me.\n\n> So we could look into whether this code can be improved to share\n> RestrictInfos across more cases. Another thought is that even\n> if we need to keep original and derived clauses separate, maybe it'd\n> be all right to copy previously-determined cached selectivity values\n> from an original clause to an otherwise-identical derived clause\n> (cf. commute_restrictinfo()). I'm not sure though whether it's\n> reliably the case that we'd have filled in selectivities for the\n> original clauses before this code wants to clone them.\n\nTo be honest, even if that sounds like a good idea to dig on, I think\nit wouldn't completely solve the problem with repeated calls to\nget_actual_variable_range().\n\nThe example query I gave is doing a lot of simple auto-joins which\nmakes the thought process simpler, but I worry more about the more\n\"common\" case in which there is more than 2 distinct tables involved\nin the query\n\nFor example, instead of having \"b1, b2, ..., bn\" as aliases of \"b\" in\nthis query\n\n>> explain (analyze, buffers)\n>> select * from a\n>> join b b1 on (b1.a = a.a)\n>> join b b2 on (b2.a = a.a)\n>> where b1.a in (1,100,10000,1000000,1000001);\n\nit is also possible to reproduce the increasing cost in planning\nbuffers for each new join on a distinct table being added:\n\nexplain (analyze, buffers)\nselect * from a\njoin b on (b.a = a.a)\njoin c on (c.a = a.a)\n-- ... (etc)\nwhere c.a in (1,100,10000,1000000,1000001);\n\nI can imagine that deconstruct_jointree() and\ngenerate_join_implied_equalities() would generate multiple\nRestrictInfos, in which many of them a constraint on a.a would be\ninvolved (each involving a different table).\n\nb.a = a.a\nc.a = a.a\nc.a = b.a\na.a = b.a\na.a = c.a\n... 
(etc)\n\n(if we wanted, we could also add a different WHERE clause on each of\nthe tables involved to make really sure all RestrictInfos are\ndifferent).\n\nFor each of these RestrictInfos there *could* be one cache miss on\ncached_scansel() that *could* force the planner to compute\nget_actual_variable_range() for the same variable (a.a) over and over,\nas mergejoinscansel() always computes the selectivity for the\nintervals that require actual extremal values. In practice this\nre-computing of the variable range seems to happen a lot.\n\nOne way I can see to share this kind of information (the extremal\nvalues) across RestrictInfos would be to store the known variable\nranges in PlannerInfo (or within a member of such struct), which seems\nto be around everywhere it would be needed.\n\n\nBest regards,\nManuel\n\n\n\n",
"msg_date": "Wed, 30 Jun 2021 16:56:14 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
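For anyone trying to follow along, a self-contained sketch of the kind of setup being discussed might look like the following. The table and column names are assumptions taken from the aliases in the quoted queries, and the unvacuumed mass DELETE is only one plausible way to leave dead tuples at the end of the index so that get_actual_variable_range() has extra work to do at plan time; it is not the original reporter's schema.

create table a (a bigint primary key);
create table b (a bigint references a (a), b bigint);
create index on b (a);
insert into a select generate_series(1, 1000000);
insert into b select i, i from generate_series(1, 1000000) i;
analyze a;
analyze b;
-- a mass delete without a following vacuum leaves dead tuples
-- at the upper end of the index on b(a)
delete from b where a > 900000;

explain (analyze, buffers)
select * from a
join b b1 on (b1.a = a.a)
join b b2 on (b2.a = a.a)
where b1.a in (1, 100, 10000, 1000000, 1000001);

Repeating the EXPLAIN (ANALYZE, BUFFERS) and watching the buffer counts reported for planning is the same measurement used earlier in the thread.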
{
"msg_contents": "> On 30-06-2021, at 16:56, Manuel Weitzman <[email protected]> wrote:\n> \n> One way in which I see possible to share this kind of information (of\n> extremal values) across RestrictInfos is to store the known variable\n> ranges in PlannerInfo (or within a member of such struct), which seems\n> to be around everywhere it would be needed.\n\nI have written a new patch that's (hopefully) better than the first\none I sent, to store the extremal index values within PlannerInfo.\n\n> it is also possible to reproduce the increasing cost in planning\n> buffers for each new join on a distinct table being added:\n> \n> [...]\n> \n\n> I can imagine that deconstruct_jointree() and\n> generate_join_implied_equalities() would generate multiple\n> RestrictInfos, in which many of them a constraint on a.a would be\n> involved (each involving a different table).\n\nI also attached an example in which there are RestrictInfos generated\nfor multiple tables instead of just a single aliased one. The buffers\nread for planning also increase with each join added to the query.\n\n\nBest regards,\nManuel",
"msg_date": "Thu, 1 Jul 2021 16:49:44 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
},
{
"msg_contents": "On Thu, 1 Jul 2021 at 08:56, Manuel Weitzman <[email protected]> wrote:\n> For each of these RestrictInfos there *could* be one cache miss on\n> cached_scansel() that *could* force the planner to compute\n> get_actual_variable_range() for the same variable (a.a) over and over,\n> as mergejoinscansel() always computes the selectivity for the\n> intervals that require actual extremal values. In practice this\n> re-computing of the variable range seems to happen a lot.\n\nRecently, for some other topic, I was thinking about if we were ever\nto have the executor give up on a plan because something did not play\nout the way the planner expected it to, that if the executor to ever\nbe given that ability to throw the plan back at the planner with some\nhints about where it went wrong, then I wondered what exactly these\nhints would look like.\n\nI guessed these would be a list of some rough guidelines saying that x\nJOIN y ON x.a=y.b produces at least Z rows. Then the planner would\nfind that when estimating the size of the join between x and y and\ntake the maximum of two values. That would need to be designed in\nsuch a way that the planner could quickly consult the feedback, e.g\nhashtable lookup on Oid.\n\nAnyway, I don't really have any clearly thought-through plans for that\nidea as it would be a large project that would need a lot of changes\nbefore it could be even thought about seriously. However, it did cause\nme to think about that again when reading this thread as it seems\nsimilar. You've learned the actual variable range through actual\nexecution, so it does not seem too unreasonable that information might\nget stored in a similar place, if that place were to exist.\n\nI'm not saying that's what needs to happen here. It's more just food\nfor thought. The caching you need does seem more specific to Oid than\nrelid.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Jul 2021 21:55:36 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning performance problem (67626.278ms)"
}
] |
[
{
"msg_contents": "Hi all, I am sure this should be a FAQ, but I can't see a definitive \nanswer, only chatter on various lists and forums.\n\nDefault page size of PostgreSQL is 8192 bytes.\n\nDefault IO block size in Linux is 4096 bytes.\n\nI can set an XFS file system with 8192 bytes block size, but then it \ndoes not mount on Linux, because the VM page size is the limit, 4096 again.\n\nThere seems to be no way to change that in (most, common) Linux \nvariants. In FreeBSD there appears to be a way to change that.\n\nBut then, there is a hardware limit also, as far as the VM memory page \nallocation is concerned. Apparently most i386 / amd64 architectures the \nVM page sizes are 4k, 2M, and 1G. The latter, I believe, are called \n\"hugepages\" and I only ever see that discussed in the PostgreSQL manuals \nfor Linux, not for FreeBSD.\n\nPeople have asked: does it matter? And then there is all that chatter \nabout \"why don't you run a benchmark and report back to us\" -- \"OK, will \ndo\" -- and then it's crickets.\n\nBut why is this such a secret?\n\nOn Amazon AWS there is the following very simple situation: IO is capped \non IO operations per second (IOPS). Let's say, on a smallish volume, I \nget 300 IOPS (once my burst balance is used up.)\n\nNow my simple theoretical reasoning is this: one IO call transfers 1 \nblock of 4k size. That means, with a cap of 300 IOPS, I get to send 1.17 \nMB per second. That would be the absolute limit. BUT, if I could double \nthe transfer size to 8k, I should be able to move 2.34 MB per second. \nShouldn't I?\n\nThat might well depend on whether AWS' virtual device paths would \nsupport these 8k block sizes.\n\nBut something tells me that my reasoning here is totally off. Because I \nget better IO throughput that that. Even on 3000 IOPS I would only get \n11 MB per second, and I am sure I am getting rather 50-100 MB/s, no? So \nmy simplistic logic is false.\n\nWhat really is the theoretical issue with the file system block size? \nWhere does -- in theory -- the benefit come from of using an XFS block \nsize of 8 kB, or even increasing the PostgreSQL page size to 16 kB and \nthen the XFS block size also to 16 kB? I remember having seen standard \nUFS block sizes of 16 kB. But then why is Linux so tough on refusing to \nmount an 8 kB XFS because it's VM page size is only 4 kB?\n\nDoesn't this all have one straight explanation?\n\nIf you have a link that I can just read, I appreciate you sharing that. \nI think that should be on some Wiki or FAQ somewhere. If I get a quick \nand dirty explanation with some pointers, I can try to write it out into \na more complete answer that might be added into some documentation or \nFAQ somewhere.\n\nthanks & regards,\n-Gunther\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 11:09:07 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Block / Page Size Optimization"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 11:09:07 -0400, Gunther wrote:\n> I can set an XFS file system with 8192 bytes block size, but then it does\n> not mount on Linux, because the VM page size is the limit, 4096 again.\n> \n> There seems to be no way to change that in (most, common) Linux variants. In\n> FreeBSD there appears to be a way to change that.\n> \n> But then, there is a hardware limit also, as far as the VM memory page\n> allocation is concerned. Apparently most i386 / amd64 architectures the VM\n> page sizes are 4k, 2M, and 1G. The latter, I believe, are called \"hugepages\"\n> and I only ever see that discussed in the PostgreSQL manuals for Linux, not\n> for FreeBSD.\n> \n> People have asked: does it matter? And then there is all that chatter about\n> \"why don't you run a benchmark and report back to us\" -- \"OK, will do\" --\n> and then it's crickets.\n> \n> But why is this such a secret?\n> \n> On Amazon AWS there is the following very simple situation: IO is capped on\n> IO operations per second (IOPS). Let's say, on a smallish volume, I get 300\n> IOPS (once my burst balance is used up.)\n> \n> Now my simple theoretical reasoning is this: one IO call transfers 1 block\n> of 4k size. That means, with a cap of 300 IOPS, I get to send 1.17 MB per\n> second. That would be the absolute limit. BUT, if I could double the\n> transfer size to 8k, I should be able to move 2.34 MB per second. Shouldn't\n> I?\n\nThe kernel collapses consecutive write requests. You can see the\naverage sizes of IO requests using iostat -xm 1. When e.g. bulk loading\ninto postgres I see:\n\nDevice r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsda 4.00 696.00 0.02 471.05 0.00 80.00 0.00 10.31 8.50 7.13 4.64 4.00 693.03 0.98 68.50\n\nso the average write request size was 693.03 kb. Thus I got 470 MB/sec\ndespite there only being ~700 IOPS. That's with 4KB page sizes, 4KB FS\nblocks, and 8KB postgres block size.\n\n\nThere still might be some benefit of different FS block sizes, but it's\nnot going to be related directly to IOPS.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 09:28:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block / Page Size Optimization"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 11:09:07AM -0400, Gunther wrote:\n>Hi all, I am sure this should be a FAQ, but I can't see a definitive \n>answer, only chatter on various lists and forums.\n>\n>Default page size of PostgreSQL is 8192 bytes.\n>\n>Default IO block size in Linux is 4096 bytes.\n>\n>I can set an XFS file system with 8192 bytes block size, but then it \n>does not mount on Linux, because the VM page size is the limit, 4096 \n>again.\n>\n>There seems to be no way to change that in (most, common) Linux \n>variants. In FreeBSD there appears to be a way to change that.\n>\n>But then, there is a hardware limit also, as far as the VM memory page \n>allocation is concerned. Apparently most i386 / amd64 architectures \n>the VM page sizes are 4k, 2M, and 1G. The latter, I believe, are \n>called \"hugepages\" and I only ever see that discussed in the \n>PostgreSQL manuals for Linux, not for FreeBSD.\n>\n\nYou're mixing page sizes at three different levels\n\n1) memory (usually 4kB on x86, although we now have hugepages too)\n\n2) filesystem (generally needs to be smaller than memory page, at least\nfor native filesystems, 4kB by default for most filesystems on x86)\n\n3) database (8kB by default)\n\nThen there's also the \"hardware page\" (sectors) which used to be 512B,\nthen it got increased to 4kB, and then SSDs entirely changed how all\nthat works and it's quite specific to individual devices / models.\n\nOf course, the exact behavior depends on sizes used at each level, and\nit may interfere in unexpected ways.\n\n>People have asked: does it matter? And then there is all that chatter \n>about \"why don't you run a benchmark and report back to us\" -- \"OK, \n>will do\" -- and then it's crickets.\n>\n>But why is this such a secret?\n>\n\nWhat is a secret? That I/O request size affects performance? That's\npretty obvious fact, I think. Years ago I did exactly that kind of\nbenchmark, and the results are just as expected - smaller pages are\nbetter for random I/O, larger pages are better for sequential access.\nEssentially, throughput vs. latency kind of trade-off.\n\nThe other thing of course is that page size affects how adaptive the\ncache can be - even if you keep the cache size the same, doubling the\npage size means you only have 1/2 of \"slots\" that you used to have. So\nyou're more likely to evict stuff that you'll need soon, negatively\naffecting the cache hit ratio.\n\nOTOH if you decrease the page size, you increase the \"overhead\" fraction\n(because each page has a fixed-size header). So while you get more\nslots, a bigger fraction will be used for this metadata.\n\nIn practice, it probably does not matter much whether you have 4kB, 8kB\nor 16kB pages. It will make a difference for some workloads, especially\nif you align the sizes to e.g. match SSD page sizes etc.\n\nBut frankly, there are probably better/cheaper ways to achieve the same\nbenefits. And it's usually the case that systems are a mix of workloads\nand what improves one is bad for another one.\n\n>On Amazon AWS there is the following very simple situation: IO is \n>capped on IO operations per second (IOPS). Let's say, on a smallish \n>volume, I get 300 IOPS (once my burst balance is used up.)\n>\n>Now my simple theoretical reasoning is this: one IO call transfers 1 \n>block of 4k size. That means, with a cap of 300 IOPS, I get to send \n>1.17 MB per second. That would be the absolute limit. BUT, if I could \n>double the transfer size to 8k, I should be able to move 2.34 MB per \n>second. 
Shouldn't I?\n>\n\nUmmm, I'm no expert on Amazon, but AFAIK the I/O limits are specified\nassuming requests of a specific size (16kB IIRC). So doubling the I/O\nrequest size may not actually help much, the throughput limit will\nremain the same.\n\n>That might well depend on whether AWS' virtual device paths would \n>support these 8k block sizes.\n>\n>But something tells me that my reasoning here is totally off. Because \n>I get better IO throughput that that. Even on 3000 IOPS I would only \n>get 11 MB per second, and I am sure I am getting rather 50-100 MB/s, \n>no? So my simplistic logic is false.\n>\n\nThere's a difference between guaranteed and actual throughput. If you\nrun the workload long enough, chances are the numbers will go down.\n\n>What really is the theoretical issue with the file system block size? \n>Where does -- in theory -- the benefit come from of using an XFS block \n>size of 8 kB, or even increasing the PostgreSQL page size to 16 kB and \n>then the XFS block size also to 16 kB? I remember having seen standard \n>UFS block sizes of 16 kB. But then why is Linux so tough on refusing \n>to mount an 8 kB XFS because it's VM page size is only 4 kB?\n>\n>Doesn't this all have one straight explanation?\n>\n\nNot really. AFAICS the limitation is due to a mix of reasons, and is\nmostly a trade-off between code complexity and potential benefits. It's\nprobably simpler to manage filesystems with pages smaller than a memory\npage, and removing the limitation did not seem very useful compared to\nthe added complexity. But it's probably a question for kernel hackers.\n\n>If you have a link that I can just read, I appreciate you sharing \n>that. I think that should be on some Wiki or FAQ somewhere. If I get a \n>quick and dirty explanation with some pointers, I can try to write it \n>out into a more complete answer that might be added into some \n>documentation or FAQ somewhere.\n>\n\nMaybe read this famous paper by Jim Gray & Franco Putzolu. It's not\nexactly about the thing you're asking about, but it's related. It\nessentially deals with sizing memory vs. disk I/O, and page size plays\nan important role in that too.\n\n[1] https://www.hpl.hp.com/techreports/tandem/TR-86.1.pdf\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:19:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block / Page Size Optimization"
},
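For reference, the database-level numbers from the three levels described above can be read straight off a running server, since they are exposed as read-only settings (block_size is 8192 unless the server was compiled differently). The memory page and filesystem block sizes have to come from the OS instead, e.g. getconf PAGE_SIZE or xfs_info on Linux.

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('block_size', 'wal_block_size', 'segment_size', 'wal_segment_size');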
{
"msg_contents": "On Mon, Apr 15, 2019 at 06:19:06PM +0200, Tomas Vondra wrote:\n> On Mon, Apr 08, 2019 at 11:09:07AM -0400, Gunther wrote:\n> > What really is the theoretical issue with the file system block size?\n> > Where does -- in theory -- the benefit come from of using an XFS block\n> > size of 8 kB, or even increasing the PostgreSQL page size to 16 kB and\n> > then the XFS block size also to 16 kB? I remember having seen standard\n> > UFS block sizes of 16 kB. But then why is Linux so tough on refusing to\n> > mount an 8 kB XFS because it's VM page size is only 4 kB?\n> > \n> > Doesn't this all have one straight explanation?\n> > \n> \n> Not really. AFAICS the limitation is due to a mix of reasons, and is\n> mostly a trade-off between code complexity and potential benefits. It's\n> probably simpler to manage filesystems with pages smaller than a memory\n> page, and removing the limitation did not seem very useful compared to\n> the added complexity. But it's probably a question for kernel hackers.\n\nMy guess is that having the file system block size be the same as the\nvirtual memory page size allows flipping pages from kernel to userspace\nmemory much simpler.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 17 Apr 2019 16:14:59 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block / Page Size Optimization"
}
] |
[
{
"msg_contents": "Hello team.\n\nWe have two node postgresql database version 9.6 with streaming replication which is running on docker environment, os Linux (Ubuntu) and we have to migrate on PostgresQL11. I need your suggestions & steps to compete the upgrade process successfully.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\nHello team.\n \nWe have two node postgresql database version 9.6 with streaming replication which is running on docker environment, os Linux (Ubuntu) and we have to migrate on PostgresQL11. I need your suggestions & steps to compete the upgrade process\n successfully.\n \nRegards,\nDaulat",
"msg_date": "Wed, 10 Apr 2019 05:40:25 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL upgrade."
},
{
"msg_contents": "\n\nAm 10.04.19 um 07:40 schrieb Daulat Ram:\n> We have two node postgresql database version 9.6 with streaming \n> replication which is running on docker environment, os Linux (Ubuntu) \n> and we have to migrate on PostgresQL11. I need your suggestions & \n> steps to compete the upgrade �process successfully.\n\nthere are exists several ways to do that. You can take a normal dump and \nreplay it in the new version, you can use pg_upgrade, and you can use a \nlogical replication (using slony, londiste or pg_logical from \n2ndQuadrant). There is no 'standard way' to do that, all depends on your \nrequirements and knowledge how to work with that tools.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:20:58 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL upgrade."
},
{
"msg_contents": "\nOn 10/04/19 8:20 PM, Andreas Kretschmer wrote:\n>\n>\n> Am 10.04.19 um 07:40 schrieb Daulat Ram:\n>> We have two node postgresql database version 9.6 with streaming \n>> replication which is running on docker environment, os Linux (Ubuntu) \n>> and we have to migrate on PostgresQL11. I need your suggestions & \n>> steps to compete the upgrade �process successfully.\n>\n> there are exists several ways to do that. You can take a normal dump \n> and replay it in the new version, you can use pg_upgrade, and you can \n> use a logical replication (using slony, londiste or pg_logical from \n> 2ndQuadrant). There is no 'standard way' to do that, all depends on \n> your requirements and knowledge how to work with that tools.\n>\n>\n>\n\nThe docker environment makes using pg_upgrade more difficult, as you \nneed to modify (or build a new) container with the old and new Postgres \nversions installed. I'm interested in seeing how hard that would be \n(will update this thread if I find anything useful).\n\nregards\n\nMark\n\n\n\n",
"msg_date": "Mon, 15 Apr 2019 14:26:46 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL upgrade."
},
{
"msg_contents": "On 15/04/19 2:26 PM, Mark Kirkwood wrote:\n>\n> On 10/04/19 8:20 PM, Andreas Kretschmer wrote:\n>>\n>>\n>> Am 10.04.19 um 07:40 schrieb Daulat Ram:\n>>> We have two node postgresql database version 9.6 with streaming\n>>> replication which is running on docker environment, os Linux\n>>> (Ubuntu) and we have to migrate on PostgresQL11. I need your\n>>> suggestions & steps to compete the upgrade process successfully.\n>>\n>> there are exists several ways to do that. You can take a normal dump\n>> and replay it in the new version, you can use pg_upgrade, and you can\n>> use a logical replication (using slony, londiste or pg_logical from\n>> 2ndQuadrant). There is no 'standard way' to do that, all depends on\n>> your requirements and knowledge how to work with that tools.\n>>\n>>\n>>\n>\n> The docker environment makes using pg_upgrade more difficult, as you\n> need to modify (or build a new) container with the old and new\n> Postgres versions installed. I'm interested in seeing how hard that\n> would be (will update this thread if I find anything useful).\n>\n>\n>\n\nIt transpires that it is not too tricky to build a 'migration' container:\n\n- get relevant Postgres Dockerfile from https://hub.docker.com/_/postgres\n\n- Amend it to install 2 versions of Postgres\n\n- Change ENTRYPOINT to run something non Postgres related (I used 'top')\n\n- Build it\n\n\nTo use pg_upgrade the process is:\n\n- stop your original Postgres container\n\n- run the migration one, attaching volume from the Postgres container +\na new one\n\n- enter the migration container and initialize the new version's datadir\n\n- run pg_upgrade from old to new version\n\n- tidy up config and pg_hba for the upgraded datadir\n\n- exit and stop the migration container\n\n\n(see attached for notes and Dockerfile diff)\n\n\nYou can then run a new Postgres container (of the new version) using the\nnew volume.\n\nWhile the process is a bit fiddly, it is probably still way faster than\na dump and restore.\n\nregards\n\nMark",
"msg_date": "Tue, 16 Apr 2019 14:08:45 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL upgrade."
}
] |
[
{
"msg_contents": "For weeks now, I am banging my head at an \"out of memory\" situation. \nThere is only one query I am running on an 8 GB system, whatever I try, \nI get knocked out on this out of memory. It is extremely impenetrable to \nunderstand and fix this error. I guess I could add a swap file, and then \nI would have to take the penalty of swapping. But how can I actually \naddress an out of memory condition if the system doesn't tell me where \nit is happening?\n\nYou might want to see the query, but it is a huge plan, and I can't \nreally break this down. It shouldn't matter though. But just so you can \nget a glimpse here is the plan:\n\nInsert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)\n -> Unique (cost=5358849.28..5361532.25 rows=34619 width=1197)\n -> Sort (cost=5358849.28..5358935.83 rows=34619 width=1197)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformationsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectentityemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_displayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubject_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\n -> Nested Loop Left Join (cost=2998335.54..5338133.63 rows=34619 width=1197)\n Join Filter: (((documentinformationsubject.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject.actinternalid)::text = (r.targetinternalid)::text))\n -> Merge Left Join (cost=2998334.98..3011313.54 rows=34619 width=930)\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n -> Sort (cost=1408783.87..1408870.41 rows=34619 width=882)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\n -> Seq Scan on documentinformationsubject (cost=0.00..1392681.22 rows=34619 width=882)\n Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n -> Materialize (cost=1589551.12..1594604.04 rows=1010585 width=159)\n -> Sort (cost=1589551.12..1592077.58 rows=1010585 width=159)\n Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\n -> Seq 
Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=159)\n Filter: ((participationtypecode)::text = 'PRD'::text)\n -> Materialize (cost=0.56..2318944.31 rows=13 width=341)\n -> Nested Loop Left Join (cost=0.56..2318944.24 rows=13 width=341)\n -> Nested Loop Left Join (cost=0.00..2318893.27 rows=1 width=281)\n Join Filter: ((agencyname.entityinternalid)::text = (documentinformationsubject_2.otherentityinternalid)::text)\n -> Nested Loop Left Join (cost=0.00..2286828.33 rows=1 width=291)\n Join Filter: ((agencyid.entityinternalid)::text = (documentinformationsubject_2.otherentityinternalid)::text)\n -> Nested Loop Left Join (cost=0.00..2284826.24 rows=1 width=239)\n Join Filter: (((q.documentinternalid)::text = (documentinformationsubject_2.documentinternalid)::text) AND ((q.actinternalid)::text = (documentinformationsubject_2.actinternalid)::text))\n -> Nested Loop (cost=0.00..954957.59 rows=1 width=136)\n Join Filter: ((q.actinternalid)::text = (r.sourceinternalid)::text)\n -> Seq Scan on actrelationship r (cost=0.00..456015.26 rows=1 width=74)\n Filter: ((typecode)::text = 'SUBJ'::text)\n -> Seq Scan on documentinformation q (cost=0.00..497440.84 rows=120119 width=99)\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n -> Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1329868.64 rows=1 width=177)\n Filter: ((participationtypecode)::text = 'AUT'::text)\n -> Seq Scan on entity_id agencyid (cost=0.00..1574.82 rows=34182 width=89)\n -> Seq Scan on bestname agencyname (cost=0.00..27066.08 rows=399908 width=64)\n -> Index Scan using act_id_fkidx on act_id an (cost=0.56..50.85 rows=13 width=134)\n Index Cond: ((q.actinternalid)::text = (actinternalid)::text)\n\nI have monitored the activity with vmstat and iostat, and it looks like \nthe memory grabbing happens rapidly after a Sort Merge step. 
I see in \nthe iostat a heavy read and write activity, which I attribute to a \nsort-merge step, then that is followed by a sudden spike in write \nactivity, and then the out of memory crash.\n\nprocs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu-------- -----timestamp-----\n r b swpd free buff cache si so bi bo in cs us sy id wa st UTC\n 0 2 0 119344 0 7616672 0 0 11681 3107 9 0 6 1 72 21 0 2019-04-14 16:19:52\n 0 2 0 128884 0 7607288 0 0 2712 55386 500 509 3 2 15 80 0 2019-04-14 16:19:54\n 0 2 0 116984 0 7619916 0 0 880 59241 548 525 2 2 9 87 0 2019-04-14 16:19:56\n 0 2 0 131492 0 7604816 0 0 128 56512 518 401 1 1 12 86 0 2019-04-14 16:19:58\n ...\n 0 2 0 134508 0 7601480 0 0 0 58562 428 353 0 1 4 95 0 2019-04-14 16:21:46\n 0 2 0 125360 0 7611320 0 0 0 59392 481 369 0 1 11 89 0 2019-04-14 16:21:48\n 0 2 0 122896 0 7612872 0 0 0 58564 431 342 0 1 17 82 0 2019-04-14 16:21:50\n 1 1 0 121456 0 7614248 0 0 54 57347 487 399 0 1 13 85 0 2019-04-14 16:21:52\n 0 2 0 122820 0 7613324 0 0 12 59964 460 346 0 1 20 79 0 2019-04-14 16:21:54\n 0 2 0 120344 0 7616528 0 0 1844 55691 645 676 5 3 6 85 0 2019-04-14 16:21:56\n 0 2 0 124900 0 7611404 0 0 936 58261 795 1215 2 3 13 83 0 2019-04-14 16:21:58\n 0 2 0 124572 0 7612192 0 0 1096 55340 518 487 1 2 0 97 0 2019-04-14 16:22:00\n 0 2 0 123040 0 7612740 0 0 888 57574 573 620 1 2 5 92 0 2019-04-14 16:22:02\n 0 2 0 125112 0 7610592 0 0 124 59164 498 480 1 1 13 85 0 2019-04-14 16:22:04\n 1 1 0 129440 0 7607592 0 0 568 60196 563 612 2 2 8 88 0 2019-04-14 16:22:06\n 0 2 0 124020 0 7612364 0 0 0 58260 629 725 0 1 8 91 0 2019-04-14 16:22:08\n 2 1 0 124480 0 7611848 0 0 0 58852 447 331 0 1 1 98 0 2019-04-14 16:22:10\n 0 3 0 137636 0 7598484 0 0 11908 44995 619 714 1 1 11 87 0 2019-04-14 16:22:12\n 0 2 0 123128 0 7613392 0 0 29888 28901 532 972 1 1 29 68 0 2019-04-14 16:22:14\n 0 2 0 126260 0 7609984 0 0 39872 18836 706 1435 1 2 28 70 0 2019-04-14 16:22:16\n 0 2 0 130748 0 7605536 0 0 36096 22488 658 1272 2 1 8 89 0 2019-04-14 16:22:18\n...\n 0 2 0 127216 0 7609192 0 0 29192 29696 472 949 1 1 23 75 0 2019-04-14 16:22:40\n 0 2 0 147428 0 7588556 0 0 29120 29696 523 974 1 1 19 79 0 2019-04-14 16:22:42\n 0 1 0 120644 0 7615388 0 0 32320 25276 566 998 1 2 49 47 0 2019-04-14 16:22:44\n 0 1 0 128456 0 7607904 0 0 58624 0 621 1103 3 2 49 46 0 2019-04-14 16:22:46\n 0 1 0 127836 0 7608260 0 0 58624 0 631 1119 3 2 50 46 0 2019-04-14 16:22:48\n 0 1 0 126712 0 7609616 0 0 58624 0 616 1110 2 2 50 47 0 2019-04-14 16:22:50\n...\n 0 1 0 157408 0 7578060 0 0 58628 0 736 1206 3 3 50 44 0 2019-04-14 16:27:22\n 0 1 0 142420 0 7593400 0 0 58688 0 623 1099 1 4 50 45 0 2019-04-14 16:27:24\n 0 1 0 247016 0 7488184 0 0 58568 0 649 1113 1 4 50 45 0 2019-04-14 16:27:26\n 0 1 0 123232 0 7612088 0 0 58412 215 675 1141 2 3 50 46 0 2019-04-14 16:27:28\n 0 2 0 144920 0 7586576 0 0 48376 11046 788 1455 1 5 34 60 0 2019-04-14 16:27:30\n 1 1 0 125636 0 7595704 0 0 36736 21381 702 1386 1 4 21 74 0 2019-04-14 16:27:32\n 0 3 0 156700 0 7559328 0 0 35556 23364 709 1367 1 3 22 74 0 2019-04-14 16:27:34\n 0 2 0 315580 0 7382748 0 0 33608 24731 787 1407 1 5 18 76 0 2019-04-14 16:27:36\n...\n 0 2 0 684412 0 6152040 0 0 29832 28356 528 994 1 2 32 66 0 2019-04-14 16:38:04\n 0 2 0 563512 0 6272264 0 0 29696 29506 546 987 1 2 32 65 0 2019-04-14 16:38:06\n 0 2 0 595488 0 6241068 0 0 27292 30858 549 971 1 2 26 71 0 2019-04-14 16:38:08\n 0 2 0 550120 0 6285352 0 0 28844 29696 567 995 1 2 29 68 0 2019-04-14 16:38:10\n 1 1 0 432380 0 6402964 0 0 28992 29696 557 979 1 2 
37 61 0 2019-04-14 16:38:12\n 0 2 0 445796 0 6384412 0 0 26768 32134 628 1029 1 4 27 69 0 2019-04-14 16:38:14\n 0 2 0 374972 0 6453592 0 0 28172 30839 529 962 1 2 43 54 0 2019-04-14 16:38:16\n 0 2 0 317824 0 6507992 0 0 29172 29386 560 1001 1 3 27 68 0 2019-04-14 16:38:18\n 0 3 0 215092 0 6609132 0 0 33116 25210 621 1148 1 3 19 77 0 2019-04-14 16:38:20\n 0 2 0 194836 0 6621524 0 0 27786 30959 704 1152 0 5 18 77 0 2019-04-14 16:38:22\n 0 3 0 315648 0 6500196 0 0 31434 27226 581 1073 0 3 31 65 0 2019-04-14 16:38:24\n*0 2 0 256180 0 6554676 0 0 29828 29017 668 1174 0 4 20 76 0 2019-04-14 \n16:38:26* <<< CRASH\n 0 1 0 378220 0 6552496 0 0 4348 53686 2210 3816 1 5 46 49 0 2019-04-14 16:38:28\n 0 1 0 389888 0 6536296 0 0 2704 56529 2454 4178 0 5 42 52 0 2019-04-14 16:38:30\n 0 2 0 923572 0 5998992 0 0 1612 56863 2384 3928 0 6 16 78 0 2019-04-14 16:38:32\n 0 0 0 908336 0 6006696 0 0 3584 49280 8961 17334 0 19 39 42 0 2019-04-14 16:38:34\n 0 1 0 1306480 0 5607088 0 0 264 63632 18605 37933 3 58 35 4 0 2019-04-14 16:38:36\n 2 1 0 1355448 0 5558576 0 0 8 59222 14817 30296 2 46 24 27 0 2019-04-14 16:38:38\n 2 2 0 1358224 0 5555884 0 0 0 58544 14226 28331 2 44 3 50 0 2019-04-14 16:38:40\n 2 1 0 1446348 0 5468748 0 0 0 58846 14376 29185 2 44 11 42 0 2019-04-14 16:38:42\n 0 0 0 2639648 0 4357608 0 0 0 28486 12909 26770 2 44 49 5 0 2019-04-14 16:38:44\n 0 0 0 2639524 0 4357800 0 0 0 0 158 154 0 0 100 0 0 2019-04-14 16:38:46\n 0 0 0 2687316 0 4309976 0 0 0 0 181 188 0 2 98 0 0 2019-04-14 16:38:48\n 0 0 0 2706920 0 4300116 0 0 0 105 137 263 0 0 100 0 0 2019-04-14 16:38:50\n 0 0 0 2706672 0 4300232 0 0 0 0 142 204 0 0 100 0 0 2019-04-14 16:38:52\n 0 0 0 2815116 0 4191928 0 0 0 0 116 242 0 0 100 0 0 2019-04-14 16:38:54\n 0 0 0 2815364 0 4192008 0 0 0 0 116 239 0 0 100 0 0 2019-04-14 16:38:56\n 0 0 0 2815116 0 4192164 0 0 0 0 159 236 0 0 100 0 0 2019-04-14 16:38:58\n\nending after the out of memory crash, that occurred exactly at the \nmarked point 16:38:26.355 UTC.\n\nWe can't really see anything too worrisome. There is always lots of \nmemory used by cache, which could have been mobilized. The only possible \nexplanation I can think of is that in that moment of the crash the \nmemory utilization suddenly skyrocketed in less than a second, so that \nthe 2 second vmstat interval wouldn't show it??? Nah.\n\nI have already much reduced work_mem, which has helped in some other \ncases before. Now I am going to reduce the shared_buffers now, but that \nseems counter-intuitive because we are sitting on all that cache memory \nunused!\n\nMight this be a bug? It feels like a bug. It feels like those out of \nmemory issues should be handled more gracefully (garbage collection \nattempt?) and that somehow there should be more information so the \nperson can do anything about it.\n\nAny ideas?\n\n-Gunther\n\n\n\n\n\n\n\nFor weeks now, I am banging my head at an \"out of memory\"\r\n situation. There is only one query I am running on an 8 GB system,\r\n whatever I try, I get knocked out on this out of memory. It is\r\n extremely impenetrable to understand and fix this error. I guess I\r\n could add a swap file, and then I would have to take the penalty\r\n of swapping. But how can I actually address an out of memory\r\n condition if the system doesn't tell me where it is happening?\nYou might want to see the query, but it is a huge plan, and I\r\n can't really break this down. It shouldn't matter though. 
But just\r\n so you can get a glimpse here is the plan:\n\nInsert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)\r\n -> Unique (cost=5358849.28..5361532.25 rows=34619 width=1197)\r\n -> Sort (cost=5358849.28..5358935.83 rows=34619 width=1197)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformationsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectentityemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_displayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubject_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\r\n -> Nested Loop Left Join (cost=2998335.54..5338133.63 rows=34619 width=1197)\r\n Join Filter: (((documentinformationsubject.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject.actinternalid)::text = (r.targetinternalid)::text))\r\n -> Merge Left Join (cost=2998334.98..3011313.54 rows=34619 width=930)\r\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\r\n -> Sort (cost=1408783.87..1408870.41 rows=34619 width=882)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\r\n -> Seq Scan on documentinformationsubject (cost=0.00..1392681.22 rows=34619 width=882)\r\n Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\r\n -> Materialize (cost=1589551.12..1594604.04 rows=1010585 width=159)\r\n -> Sort (cost=1589551.12..1592077.58 rows=1010585 width=159)\r\n Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\r\n -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=159)\r\n Filter: ((participationtypecode)::text = 'PRD'::text)\r\n -> Materialize (cost=0.56..2318944.31 rows=13 width=341)\r\n -> Nested Loop Left Join (cost=0.56..2318944.24 rows=13 width=341)\r\n -> Nested Loop Left Join (cost=0.00..2318893.27 rows=1 width=281)\r\n Join Filter: ((agencyname.entityinternalid)::text = (documentinformationsubject_2.otherentityinternalid)::text)\r\n -> Nested Loop Left Join (cost=0.00..2286828.33 rows=1 width=291)\r\n Join 
Filter: ((agencyid.entityinternalid)::text = (documentinformationsubject_2.otherentityinternalid)::text)\r\n -> Nested Loop Left Join (cost=0.00..2284826.24 rows=1 width=239)\r\n Join Filter: (((q.documentinternalid)::text = (documentinformationsubject_2.documentinternalid)::text) AND ((q.actinternalid)::text = (documentinformationsubject_2.actinternalid)::text))\r\n -> Nested Loop (cost=0.00..954957.59 rows=1 width=136)\r\n Join Filter: ((q.actinternalid)::text = (r.sourceinternalid)::text)\r\n -> Seq Scan on actrelationship r (cost=0.00..456015.26 rows=1 width=74)\r\n Filter: ((typecode)::text = 'SUBJ'::text)\r\n -> Seq Scan on documentinformation q (cost=0.00..497440.84 rows=120119 width=99)\r\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\r\n -> Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1329868.64 rows=1 width=177)\r\n Filter: ((participationtypecode)::text = 'AUT'::text)\r\n -> Seq Scan on entity_id agencyid (cost=0.00..1574.82 rows=34182 width=89)\r\n -> Seq Scan on bestname agencyname (cost=0.00..27066.08 rows=399908 width=64)\r\n -> Index Scan using act_id_fkidx on act_id an (cost=0.56..50.85 rows=13 width=134)\r\n Index Cond: ((q.actinternalid)::text = (actinternalid)::text)\r\n\nI have monitored the activity with vmstat and iostat, and it\r\n looks like the memory grabbing happens rapidly after a Sort Merge\r\n step. I see in the iostat a heavy read and write activity, which I\r\n attribute to a sort-merge step, then that is followed by a sudden\r\n spike in write activity, and then the out of memory crash.\nprocs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu-------- -----timestamp-----\r\n r b swpd free buff cache si so bi bo in cs us sy id wa st UTC\r\n 0 2 0 119344 0 7616672 0 0 11681 3107 9 0 6 1 72 21 0 2019-04-14 16:19:52\r\n 0 2 0 128884 0 7607288 0 0 2712 55386 500 509 3 2 15 80 0 2019-04-14 16:19:54\r\n 0 2 0 116984 0 7619916 0 0 880 59241 548 525 2 2 9 87 0 2019-04-14 16:19:56\r\n 0 2 0 131492 0 7604816 0 0 128 56512 518 401 1 1 12 86 0 2019-04-14 16:19:58\r\n ...\r\n 0 2 0 134508 0 7601480 0 0 0 58562 428 353 0 1 4 95 0 2019-04-14 16:21:46\r\n 0 2 0 125360 0 7611320 0 0 0 59392 481 369 0 1 11 89 0 2019-04-14 16:21:48\r\n 0 2 0 122896 0 7612872 0 0 0 58564 431 342 0 1 17 82 0 2019-04-14 16:21:50\r\n 1 1 0 121456 0 7614248 0 0 54 57347 487 399 0 1 13 85 0 2019-04-14 16:21:52\r\n 0 2 0 122820 0 7613324 0 0 12 59964 460 346 0 1 20 79 0 2019-04-14 16:21:54\r\n 0 2 0 120344 0 7616528 0 0 1844 55691 645 676 5 3 6 85 0 2019-04-14 16:21:56\r\n 0 2 0 124900 0 7611404 0 0 936 58261 795 1215 2 3 13 83 0 2019-04-14 16:21:58\r\n 0 2 0 124572 0 7612192 0 0 1096 55340 518 487 1 2 0 97 0 2019-04-14 16:22:00\r\n 0 2 0 123040 0 7612740 0 0 888 57574 573 620 1 2 5 92 0 2019-04-14 16:22:02\r\n 0 2 0 125112 0 7610592 0 0 124 59164 498 480 1 1 13 85 0 2019-04-14 16:22:04\r\n 1 1 0 129440 0 7607592 0 0 568 60196 563 612 2 2 8 88 0 2019-04-14 16:22:06\r\n 0 2 0 124020 0 7612364 0 0 0 58260 629 725 0 1 8 91 0 2019-04-14 16:22:08\r\n 2 1 0 124480 0 7611848 0 0 0 58852 447 331 0 1 1 98 0 2019-04-14 16:22:10\r\n 0 3 0 137636 0 7598484 0 0 11908 44995 619 714 1 1 11 87 0 2019-04-14 16:22:12\r\n 0 2 0 123128 0 7613392 0 0 29888 28901 532 972 1 1 29 68 0 2019-04-14 16:22:14\r\n 0 2 0 126260 0 7609984 0 0 39872 18836 706 1435 1 2 28 70 0 2019-04-14 16:22:16\r\n 0 2 0 130748 0 7605536 0 0 36096 22488 658 1272 2 
1 8 89 0 2019-04-14 16:22:18\r\n...\r\n 0 2 0 127216 0 7609192 0 0 29192 29696 472 949 1 1 23 75 0 2019-04-14 16:22:40\r\n 0 2 0 147428 0 7588556 0 0 29120 29696 523 974 1 1 19 79 0 2019-04-14 16:22:42\r\n 0 1 0 120644 0 7615388 0 0 32320 25276 566 998 1 2 49 47 0 2019-04-14 16:22:44\r\n 0 1 0 128456 0 7607904 0 0 58624 0 621 1103 3 2 49 46 0 2019-04-14 16:22:46\r\n 0 1 0 127836 0 7608260 0 0 58624 0 631 1119 3 2 50 46 0 2019-04-14 16:22:48\r\n 0 1 0 126712 0 7609616 0 0 58624 0 616 1110 2 2 50 47 0 2019-04-14 16:22:50\r\n...\r\n 0 1 0 157408 0 7578060 0 0 58628 0 736 1206 3 3 50 44 0 2019-04-14 16:27:22\r\n 0 1 0 142420 0 7593400 0 0 58688 0 623 1099 1 4 50 45 0 2019-04-14 16:27:24\r\n 0 1 0 247016 0 7488184 0 0 58568 0 649 1113 1 4 50 45 0 2019-04-14 16:27:26\r\n 0 1 0 123232 0 7612088 0 0 58412 215 675 1141 2 3 50 46 0 2019-04-14 16:27:28\r\n 0 2 0 144920 0 7586576 0 0 48376 11046 788 1455 1 5 34 60 0 2019-04-14 16:27:30\r\n 1 1 0 125636 0 7595704 0 0 36736 21381 702 1386 1 4 21 74 0 2019-04-14 16:27:32\r\n 0 3 0 156700 0 7559328 0 0 35556 23364 709 1367 1 3 22 74 0 2019-04-14 16:27:34\r\n 0 2 0 315580 0 7382748 0 0 33608 24731 787 1407 1 5 18 76 0 2019-04-14 16:27:36\r\n...\r\n 0 2 0 684412 0 6152040 0 0 29832 28356 528 994 1 2 32 66 0 2019-04-14 16:38:04\r\n 0 2 0 563512 0 6272264 0 0 29696 29506 546 987 1 2 32 65 0 2019-04-14 16:38:06\r\n 0 2 0 595488 0 6241068 0 0 27292 30858 549 971 1 2 26 71 0 2019-04-14 16:38:08\r\n 0 2 0 550120 0 6285352 0 0 28844 29696 567 995 1 2 29 68 0 2019-04-14 16:38:10\r\n 1 1 0 432380 0 6402964 0 0 28992 29696 557 979 1 2 37 61 0 2019-04-14 16:38:12\r\n 0 2 0 445796 0 6384412 0 0 26768 32134 628 1029 1 4 27 69 0 2019-04-14 16:38:14\r\n 0 2 0 374972 0 6453592 0 0 28172 30839 529 962 1 2 43 54 0 2019-04-14 16:38:16\r\n 0 2 0 317824 0 6507992 0 0 29172 29386 560 1001 1 3 27 68 0 2019-04-14 16:38:18\r\n 0 3 0 215092 0 6609132 0 0 33116 25210 621 1148 1 3 19 77 0 2019-04-14 16:38:20\r\n 0 2 0 194836 0 6621524 0 0 27786 30959 704 1152 0 5 18 77 0 2019-04-14 16:38:22\r\n 0 3 0 315648 0 6500196 0 0 31434 27226 581 1073 0 3 31 65 0 2019-04-14 16:38:24\r\n 0 2 0 256180 0 6554676 0 0 29828 29017 668 1174 0 4 20 76 0 2019-04-14 16:38:26 <<< CRASH\r\n 0 1 0 378220 0 6552496 0 0 4348 53686 2210 3816 1 5 46 49 0 2019-04-14 16:38:28\r\n 0 1 0 389888 0 6536296 0 0 2704 56529 2454 4178 0 5 42 52 0 2019-04-14 16:38:30\r\n 0 2 0 923572 0 5998992 0 0 1612 56863 2384 3928 0 6 16 78 0 2019-04-14 16:38:32\r\n 0 0 0 908336 0 6006696 0 0 3584 49280 8961 17334 0 19 39 42 0 2019-04-14 16:38:34\r\n 0 1 0 1306480 0 5607088 0 0 264 63632 18605 37933 3 58 35 4 0 2019-04-14 16:38:36\r\n 2 1 0 1355448 0 5558576 0 0 8 59222 14817 30296 2 46 24 27 0 2019-04-14 16:38:38\r\n 2 2 0 1358224 0 5555884 0 0 0 58544 14226 28331 2 44 3 50 0 2019-04-14 16:38:40\r\n 2 1 0 1446348 0 5468748 0 0 0 58846 14376 29185 2 44 11 42 0 2019-04-14 16:38:42\r\n 0 0 0 2639648 0 4357608 0 0 0 28486 12909 26770 2 44 49 5 0 2019-04-14 16:38:44\r\n 0 0 0 2639524 0 4357800 0 0 0 0 158 154 0 0 100 0 0 2019-04-14 16:38:46\r\n 0 0 0 2687316 0 4309976 0 0 0 0 181 188 0 2 98 0 0 2019-04-14 16:38:48\r\n 0 0 0 2706920 0 4300116 0 0 0 105 137 263 0 0 100 0 0 2019-04-14 16:38:50\r\n 0 0 0 2706672 0 4300232 0 0 0 0 142 204 0 0 100 0 0 2019-04-14 16:38:52\r\n 0 0 0 2815116 0 4191928 0 0 0 0 116 242 0 0 100 0 0 2019-04-14 16:38:54\r\n 0 0 0 2815364 0 4192008 0 0 0 0 116 239 0 0 100 0 0 2019-04-14 16:38:56\r\n 0 0 0 2815116 0 4192164 0 0 0 0 159 236 0 0 100 0 0 2019-04-14 16:38:58\nending after the out of memory crash, that 
occurred exactly at\r\n the marked point 16:38:26.355 UTC. \n\nWe can't really see anything too worrisome. There is always lots\r\n of memory used by cache, which could have been mobilized. The only\r\n possible explanation I can think of is that in that moment of the\r\n crash the memory utilization suddenly skyrocketed in less than a\r\n second, so that the 2 second vmstat interval wouldn't show it???\r\n Nah. \n\nI have already much reduced work_mem, which has helped in some\r\n other cases before. Now I am going to reduce the shared_buffers\r\n now, but that seems counter-intuitive because we are sitting on\r\n all that cache memory unused!\nMight this be a bug? It feels like a bug. It feels like those out\r\n of memory issues should be handled more gracefully (garbage\r\n collection attempt?) and that somehow there should be more\r\n information so the person can do anything about it. \n\nAny ideas?\n-Gunther",
"msg_date": "Sun, 14 Apr 2019 16:23:34 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Gunther <[email protected]> writes:\n> For weeks now, I am banging my head at an \"out of memory\" situation. \n> There is only one query I am running on an 8 GB system, whatever I try, \n> I get knocked out on this out of memory. It is extremely impenetrable to \n> understand and fix this error. I guess I could add a swap file, and then \n> I would have to take the penalty of swapping. But how can I actually \n> address an out of memory condition if the system doesn't tell me where \n> it is happening?\n\n> You might want to see the query, but it is a huge plan, and I can't \n> really break this down. It shouldn't matter though. But just so you can \n> get a glimpse here is the plan:\n\nIs that the whole plan? With just three sorts and two materializes,\nit really shouldn't use more than more-or-less 5X work_mem. What do\nyou have work_mem set to, anyway? Is this a 64-bit build of PG?\n\nAlso, are the estimated rowcounts shown here anywhere close to what\nyou expect in reality? If there are any AFTER INSERT triggers on the\ninsertion target table, you'd also be accumulating per-row trigger\nqueue entries ... but if there's only circa 35K rows to be inserted,\nit's hard to credit that eating more than a couple hundred KB, either.\n\n> Might this be a bug?\n\nIt's conceivable that you've hit some memory-leakage bug, but if so you\nhaven't provided nearly enough info for anyone else to reproduce it.\nYou haven't even shown us the actual error message :-(\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Apr 2019 17:19:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
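A minimal way to gather the details asked about here, assuming the insertion target really is a plain table named businessoperation as the plan suggests; the pg_trigger query lists only user-level triggers, since internal FK-enforcement triggers are filtered out:

SHOW work_mem;
SELECT version();  -- also shows whether this is a 64-bit build

SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'businessoperation'::regclass
  AND NOT tgisinternal;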
{
"msg_contents": "On Sun, Apr 14, 2019 at 4:51 PM Gunther <[email protected]> wrote:\n\n> For weeks now, I am banging my head at an \"out of memory\" situation. There\n> is only one query I am running on an 8 GB system, whatever I try, I get\n> knocked out on this out of memory.\n>\nIs PostgreSQL throwing an error with OOM, or is getting killed -9 by the\nOOM killer? Do you get a core file you can inspect with gdb?\n\nYou might want to see the query, but it is a huge plan, and I can't really\n> break this down. It shouldn't matter though. But just so you can get a\n> glimpse here is the plan:\n>\n> Insert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)\n> -> Unique (cost=5358849.28..5361532.25 rows=34619 width=1197)\n>\n>\n>\nMaybe it is memory for trigger or constraint checking, although I don't\nknow why that would appear instantly. What triggers or constraints do you\nhave on businessoperation?\n\nWhat if you just run the SELECT without the INSERT? Or insert into a temp\ntable rather than into businessoperation? And if that doesn't crash, what\nif you then insert to businessoperation from the temp table?\n\nAlso, what version?\n\nCheers,\n\nJeff\n\nOn Sun, Apr 14, 2019 at 4:51 PM Gunther <[email protected]> wrote:\n\nFor weeks now, I am banging my head at an \"out of memory\"\n situation. There is only one query I am running on an 8 GB system,\n whatever I try, I get knocked out on this out of memory. Is PostgreSQL throwing an error with OOM, or is getting killed -9 by the OOM killer? Do you get a core file you can inspect with gdb?\nYou might want to see the query, but it is a huge plan, and I\n can't really break this down. It shouldn't matter though. But just\n so you can get a glimpse here is the plan:\n\nInsert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)\n -> Unique (cost=5358849.28..5361532.25 rows=34619 width=1197)\n Maybe it is memory for trigger or constraint checking, although I don't know why that would appear instantly. What triggers or constraints do you have on businessoperation? What if you just run the SELECT without the INSERT? Or insert into a temp table rather than into businessoperation? And if that doesn't crash, what if you then insert to businessoperation from the temp table? Also, what version?Cheers,Jeff",
"msg_date": "Sun, 14 Apr 2019 17:19:50 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
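A rough shape for the divide-and-conquer test suggested above; the SELECT body is a placeholder (the full query is not quoted in this thread), so treat this as an outline rather than the reporter's actual statement:

-- step 1: run just the SELECT, materializing the result outside the target
CREATE TEMP TABLE businessoperation_stage AS
SELECT * FROM v_businessoperation;  -- stand-in for the big SELECT feeding the INSERT

-- step 2: only if step 1 survives, load the real target from the staging table
INSERT INTO businessoperation
SELECT * FROM businessoperation_stage;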
{
"msg_contents": "Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not \nhaving given enough detail.\n\nThe version is 10.2 latest. The database was originally built with 10.1 \nand then just started with 10.2. No dump and reload or pg_upgrade. \nUnderlying system is 64bit Amazon Linux (CentOS like) running on an \nAMD64 VM (m5a) right now.\n\nI said \"crash\" and that is wrong. Not a signal nor core dump. It is the \nERROR: out of memory. Only the query crashes. Although I don't know if \nmay be the backend server might have left a core dump? Where would that \nbe? Would it help anyone if I started the server with the -c option to \nget a core dump? I guess I could re-compile with gcc -g debugging \nsymbols all on and then run with that -c option, and then use gdb to \nfind out which line it was failing at and then inspect the query plan \ndata structure? Would that be helpful? Does anyone want the coredump to \ninspect?\n\nThe short version is:\n\nGrand total: 1437014672 bytes in 168424 blocks; 11879744 free (3423 chunks); 1425134928 used\n2019-04-14 16:38:26.355 UTC [11061] ERROR: out of memory\n2019-04-14 16:38:26.355 UTC [11061] DETAIL: Failed on request of size 8272 in memory context \"ExecutorState\".\n\nHere is the out of memory error dump in its full glory.\n\nTopMemoryContext: 2197400 total in 7 blocks; 42952 free (15 chunks); 2154448 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 7720 free (2 chunks); 472 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 2097152 total in 9 blocks; 396480 free (10 chunks); 1700672 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalContext: 1024 total in 1 blocks; 624 free (0 chunks); 400 used:\n ExecutorState: 1416621920 total in 168098 blocks; 8494152 free (3102 chunks); 1408127768 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n HashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); 8389408 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n Caller tuples: 4194304 total in 10 blocks; 1434888 free (20 chunks); 2759416 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 
8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 386840 free (1 chunks); 714488 used\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: businessop_docid_ndx\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: businessop_sbjentityidroot_ndx\n index info: 2048 total in 2 blocks; 704 free (1 chunks); 1344 used: businessop_sbjroleiid_ndx\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: act_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: act_id_idx\n index info: 2048 total in 2 blocks; 592 free (1 chunks); 1456 used: pg_constraint_conrelid_contypid_conname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: actrelationship_pkey\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_target_idx\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_source_idx\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: documentinformation_pk\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: docinfsubj_ndx_seii\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: docinfsubj_ndx_sbjentcodeonly\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2618_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index\n relation rules: 229376 total in 31 blocks; 5136 free (0 chunks); 224240 used: v_businessoperation\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: 
pg_db_role_setting_databaseid_rol_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_enum_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_class_relname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_pubname_index\n index info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used: pg_statistic_relid_att_inh_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_cast_source_target_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_transform_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_collation_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indexrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_template_tmplname_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_ts_config_map_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_opclass_oid_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_evtname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_statistic_ext_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_dict_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_conversion_default_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_operator_oprname_l_r_n_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_trigger_tgrelid_tgname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_enum_typid_label_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_user_mapping_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_opfamily_am_name_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_table_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_type_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_aggregate_fnoid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_constraint_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_rewrite_rel_rulename_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_config_cfgname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 
976 used: pg_ts_parser_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_publication_rel_prrelid_prpubid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_namespace_nspname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_template_oid_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index\n index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: pg_default_acl_role_nsp_obj_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_rel_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_range_rngtypid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_dict_dictname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_type_typname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_proc_proname_args_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_partitioned_table_partrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_transform_type_lang_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_proc_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_namespace_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amproc_fam_proc_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_attribute_relid_attnam_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_conversion_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_user_mapping_user_server_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_subscription_rel_srrelid_srsubid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_conversion_name_nsp_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_member_role_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_subscription_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_tablespace_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_shseclabel_object_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_datname_index\n index info: 2048 
total in 2 blocks; 760 free (2 chunks); 1288 used: pg_subscription_subname_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roiident_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_role_member_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_rolname_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 7256 free (1 chunks); 936 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 1437014672 bytes in 168424 blocks; 11879744 free (3423 chunks); 1425134928 used\n2019-04-14 16:38:26.355 UTC [11061] ERROR: out of memory\n2019-04-14 16:38:26.355 UTC [11061] DETAIL: Failed on request of size 8272 in memory context \"ExecutorState\".\n\nI am delighted that Tom Lane is hinting that my query plan doesn't look \nso crazy, and it isn't. And delighted there may be a known bug involved.\n\nI wonder if this is one data issue. May be a few rows have excessively \nlong text fields? But even checking for that is rather difficult because \nthere are many tables and columns involved. Would really be nice if the \nerror would say exactly what plan step that ExecutorState referred to, \nso one could narrow it down.\n\nregards,\n-Gunther\n\nOn 4/14/2019 17:19, Tom Lane wrote:\n> Gunther <[email protected]> writes:\n>> For weeks now, I am banging my head at an \"out of memory\" situation.\n>> There is only one query I am running on an 8 GB system, whatever I try,\n>> I get knocked out on this out of memory. It is extremely impenetrable to\n>> understand and fix this error. I guess I could add a swap file, and then\n>> I would have to take the penalty of swapping. But how can I actually\n>> address an out of memory condition if the system doesn't tell me where\n>> it is happening?\n>> You might want to see the query, but it is a huge plan, and I can't\n>> really break this down. It shouldn't matter though. But just so you can\n>> get a glimpse here is the plan:\n> Is that the whole plan? With just three sorts and two materializes,\n> it really shouldn't use more than more-or-less 5X work_mem. What do\n> you have work_mem set to, anyway? Is this a 64-bit build of PG?\n>\n> Also, are the estimated rowcounts shown here anywhere close to what\n> you expect in reality? If there are any AFTER INSERT triggers on the\n> insertion target table, you'd also be accumulating per-row trigger\n> queue entries ... but if there's only circa 35K rows to be inserted,\n> it's hard to credit that eating more than a couple hundred KB, either.\n>\n>> Might this be a bug?\n> It's conceivable that you've hit some memory-leakage bug, but if so you\n> haven't provided nearly enough info for anyone else to reproduce it.\n> You haven't even shown us the actual error message :-(\n>\n> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n> \t\t\tregards, tom lane\n\n\n\n\n\n\nThanks for looking at my problem Tom Lane and Jeff Janes. Sorry\n for not having given enough detail.\n\n The version is 10.2 latest. The database was originally built with\n 10.1 and then just started with 10.2. 
",
"msg_date": "Sun, 14 Apr 2019 21:05:48 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
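Gunther's message above wonders whether a few rows with excessively long text fields might be involved, but notes that checking this across many tables and columns is awkward. One possible way to check is sketched below; documentinformationsubject is used only because it appears in the query plan, not because it is known to be the culprit, and any of the other tables could be substituted:

-- List the physically widest rows of one suspect table.
-- pg_column_size() on a whole-row reference reports the stored (possibly compressed) tuple size.
SELECT ctid, pg_column_size(d.*) AS row_bytes
FROM documentinformationsubject AS d
ORDER BY row_bytes DESC
LIMIT 20;

The same idea works per column (e.g. octet_length(subjectentitytel)) if a specific text field is suspected.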
{
"msg_contents": "On Sun, Apr 14, 2019 at 05:19:11PM -0400, Tom Lane wrote:\n! Gunther <[email protected]> writes:\n! > For weeks now, I am banging my head at an \"out of memory\" situation. \n! > There is only one query I am running on an 8 GB system, whatever I try, \n! > I get knocked out on this out of memory. It is extremely impenetrable to \n! > understand and fix this error. I guess I could add a swap file, and then \n! > I would have to take the penalty of swapping. But how can I actually \n! > address an out of memory condition if the system doesn't tell me where \n! > it is happening?\n\nWell, esactly with a swap space. No offense intended, but if You\ndon't have a swap space, You should not complain about unintellegibe\nOut-of-memory situations.\nSwapspace is not usually used to run applications from (that would\nindeed give horrible performance), it is used to not get out-of-memory\nerrors. With a swapspace, the out-of-memory situation will persist,\nand so one has time to take measurements and analyze system\nbehaviour and from that, one can better understand what is causing \nthe problem, and decide what actions should be taken, on an informed\nbase (e.g. correct flaws in the system tuning, fix bad query, buy \nmore memory, or what may be applicable)\n\nIf I remember correctly, I did even see a DTRACE flag in my build,\nso what more is to wish? :))\n\nP.\n\n\n",
"msg_date": "Mon, 15 Apr 2019 03:29:55 +0200",
"msg_from": "Peter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 09:05:48PM -0400, Gunther wrote:\n> Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not\n> having given enough detail.\n> \n> The version is 10.2 latest.\n\nv10.7 is available; could you upgrade ?\n\nWhat are these set to ? shared_buffers? work_mem?\n\nWas postgres locally compiled, packaged by distribution, or PGDG RPM/DEB ?\n\nCan you show \\d businessoperation ?\n\n> The short version is:\n> \n> Grand total: 1437014672 bytes in 168424 blocks; 11879744 free (3423 chunks); 1425134928 used\n> 2019-04-14 16:38:26.355 UTC [11061] ERROR: out of memory\n> 2019-04-14 16:38:26.355 UTC [11061] DETAIL: Failed on request of size 8272 in memory context \"ExecutorState\".\n\nCould you rerun the query with \\set VERBOSITY verbose to show the file/line\nthat's failing ?\n\nIf you wanted to show a stack trace, you could attach gdb to PID from SELECT\npg_backend_pid(), \"b\"reak on errdetail, run the query, and then \"bt\" when it\nfails.\n\nJustin\n\n\n",
"msg_date": "Sun, 14 Apr 2019 20:48:11 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
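Justin's questions above translate into a short psql prelude that could be run in the same session before re-executing the failing INSERT. This is only a sketch of the sequence he describes, not anything specific to Gunther's setup:

-- Settings Justin asks about, as seen by this session.
SHOW shared_buffers;
SHOW work_mem;

-- PID of this backend, i.e. the process gdb would attach to.
SELECT pg_backend_pid();

-- Have psql report the source file/line of any error, as requested.
\set VERBOSITY verbose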
{
"msg_contents": "On Sun, Apr 14, 2019 at 9:06 PM Gunther <[email protected]> wrote:\n\n> Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not\n> having given enough detail.\n>\n> The version is 10.2 latest. The database was originally built with 10.1\n> and then just started with 10.2.\n>\nDo you mean 11.2? The latest in the 10 series is 10.7. If you do mean\n10.2, there a fix for a memory leak bug since then that might plausibly be\nrelevant (bdc7f686d1b8f423cb)\n\n>\n> I said \"crash\" and that is wrong. Not a signal nor core dump. It is the\n> ERROR: out of memory. Only the query crashes. Although I don't know if may\n> be the backend server might have left a core dump?\n>\nI don't think there would be a core dump on only an ERROR, and probably not\nworthwhile to trick it into generating one.\n\n\n> The short version is:\n>\n> Grand total: 1437014672 bytes in 168424 blocks; 11879744 free (3423 chunks); 1425134928 used\n> 2019-04-14 16:38:26.355 UTC [11061] ERROR: out of memory\n> 2019-04-14 16:38:26.355 UTC [11061] DETAIL: Failed on request of size 8272 in memory context \"ExecutorState\".\n>\n> I don't know why a 8GB system with a lot of cache that could be evicted\nwould get an OOM when something using 1.5GB asks for 8272 bytes more. But\nthat is a question of how the kernel works, rather than how PostgreSQL\nworks. But I also think the log you quote above belongs to a different\nevent than the vmstat trace in your first email.\n\n\n> ExecutorState: 1416621920 total in 168098 blocks; 8494152 free (3102 chunks); 1408127768 used\n> HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n> HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n>\n>\nThis does not seem to match your query plan. Why would a plan with no Hash\nJoins have a HashBatchContext? I think this log must be from a different\nquery than the one that generated the plan you posted. Perhaps one was\nbefore you lowered work_mem and one was after?\n\nCheers,\n\nJeff\n\nOn Sun, Apr 14, 2019 at 9:06 PM Gunther <[email protected]> wrote:\n\nThanks for looking at my problem Tom Lane and Jeff Janes. Sorry\n for not having given enough detail.\n\n The version is 10.2 latest. The database was originally built with\n 10.1 and then just started with 10.2. Do you mean 11.2? The latest in the 10 series is 10.7. If you do mean 10.2, there a fix for a memory leak bug since then that might plausibly be relevant (bdc7f686d1b8f423cb)\n\nI said \"crash\" and that is wrong. Not a signal nor core dump. It\n is the ERROR: out of memory. Only the query crashes. Although I\n don't know if may be the backend server might have left a core\n dump? I don't think there would be a core dump on only an ERROR, and probably not worthwhile to trick it into generating one. The short version is:\n\nGrand total: 1437014672 bytes in 168424 blocks; 11879744 free (3423 chunks); 1425134928 used\n2019-04-14 16:38:26.355 UTC [11061] ERROR: out of memory\n2019-04-14 16:38:26.355 UTC [11061] DETAIL: Failed on request of size 8272 in memory context \"ExecutorState\". I don't know why a 8GB system with a lot of cache that could be evicted would get an OOM when something using 1.5GB asks for 8272 bytes more. But that is a question of how the kernel works, rather than how PostgreSQL works. But I also think the log you quote above belongs to a different event than the vmstat trace in your first email. 
",
"msg_date": "Sun, 14 Apr 2019 22:48:49 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
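Both Tom Lane and Jeff Janes ask whether the estimated rowcounts are anywhere near reality. One way to probe that without running the whole failing INSERT is to EXPLAIN ANALYZE an individual scan lifted from the plan; the example below is only a sketch, reusing the documentinformationsubject filter (participationtypecode = 'PRD') that appears in the plans posted in this thread:

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM documentinformationsubject
WHERE participationtypecode = 'PRD';

If the actual count came back wildly different from the planner's estimate, that would suggest the planner is working from bad statistics, independently of any memory leak.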
{
"msg_contents": "Thanks Justin Pryzby too, and Jeff Janes, responding to both of you for \nefficiency. Answers and more logs and the gdb backtrace below.\n>> The version is 10.2 latest.\n> v10.7 is available; could you upgrade ?\nSorry I meant 11.2 actually latest.\n> What are these set to ? shared_buffers? work_mem?\n\nshared_buffers=2G (of 8 total), then 1G, didn't help.\n\nwork_mem=4M by now (I had once been successful of avoiding out of memory \nby reducing work mem from 64M to 8M. But as Tom Lane says, it shouldn't \nbe using more than 5 x work_mem in this query plan.\n\nJeff Janes said:\n\n> I don't know why a 8GB system with a lot of cache that could be \n> evicted would get an OOM when something using 1.5GB asks for 8272 \n> bytes more. But that is a question of how the kernel works, rather \n> than how PostgreSQL works. But I also think the log you quote above \n> belongs to a different event than the vmstat trace in your first email.\nand I agree, except that the vmstat log and the error really belong \ntogether, same timestamp. Nothing else running on that machine this \nSunday. Yes I ran this several times with different parameters, so some \nmixup is possible, but always ending in the same crash anyway. So here \nagain, without the vmstat log, which really wouldn't be any different \nthan I showed you. (See below for the ENABLE_NESTLOOP=off setting, not \nhaving those settings same between explain and actual execution might \naccount for the discrepancy that you saw.)\n\nintegrator=# SET ENABLE_NESTLOOP TO OFF;\nSET\nintegrator=# \\set VERBOSITY verbose\nintegrator=# explain INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\nintegrator=# \\pset pager off\nPager usage is off.\nintegrator=# \\pset format unaligned\nOutput format is unaligned.\nintegrator=# explain INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\nQUERY PLAN\nInsert on businessoperation (cost=5850091.58..5853120.74 rows=34619 width=1197)\n -> Unique (cost=5850091.58..5852774.55 rows=34619 width=1197)\n -> Sort (cost=5850091.58..5850178.13 rows=34619 width=1197)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformationsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectentityemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_displayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubject_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\n -> Hash Right Join (cost=4489522.06..5829375.93 rows=34619 width=1197)\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND 
((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n -> Hash Right Join (cost=1473632.24..2808301.92 rows=13 width=341)\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n -> Hash Left Join (cost=38864.03..1373533.69 rows=1 width=219)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyname.entityinternalid)::text)\n -> Hash Left Join (cost=2503.10..1332874.75 rows=1 width=229)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyid.entityinternalid)::text)\n -> Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1329868.64 rows=1 width=177)\n Filter: ((participationtypecode)::text = 'AUT'::text)\n -> Hash (cost=1574.82..1574.82 rows=34182 width=89)\n -> Seq Scan on entity_id agencyid (cost=0.00..1574.82 rows=34182 width=89)\n -> Hash (cost=27066.08..27066.08 rows=399908 width=64)\n -> Seq Scan on bestname agencyname (cost=0.00..27066.08 rows=399908 width=64)\n -> Hash (cost=1434768.02..1434768.02 rows=13 width=233)\n -> Hash Right Join (cost=953906.58..1434768.02 rows=13 width=233)\n Hash Cond: ((an.actinternalid)::text = (q.actinternalid)::text)\n -> Seq Scan on act_id an (cost=0.00..425941.04 rows=14645404 width=134)\n -> Hash (cost=953906.57..953906.57 rows=1 width=136)\n -> Hash Join (cost=456015.28..953906.57 rows=1 width=136)\n Hash Cond: ((q.actinternalid)::text = (r.sourceinternalid)::text)\n -> Seq Scan on documentinformation q (cost=0.00..497440.84 rows=120119 width=99)\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n -> Hash (cost=456015.26..456015.26 rows=1 width=74)\n -> Seq Scan on actrelationship r (cost=0.00..456015.26 rows=1 width=74)\n Filter: ((typecode)::text = 'SUBJ'::text)\n -> Hash (cost=3011313.54..3011313.54 rows=34619 width=930)\n -> Merge Left Join (cost=2998334.98..3011313.54 rows=34619 width=930)\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n -> Sort (cost=1408783.87..1408870.41 rows=34619 width=882)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\n -> Seq Scan on documentinformationsubject (cost=0.00..1392681.22 rows=34619 width=882)\n Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n -> Materialize (cost=1589551.12..1594604.04 rows=1010585 width=159)\n -> Sort (cost=1589551.12..1592077.58 rows=1010585 width=159)\n Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\n -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=159)\n Filter: ((participationtypecode)::text = 'PRD'::text)\n\nand the error memory status dump (I hope my grey boxes help a bit to \nlighten this massive amount of data...\n\nTopMemoryContext: 
4294552 total in 7 blocks; 42952 free (15 chunks); 4251600 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 7720 free (2 chunks); 472 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 2097152 total in 9 blocks; 396480 free (10 chunks); 1700672 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalContext: 1024 total in 1 blocks; 624 free (0 chunks); 400 used:\n ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n HashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); 8389408 used\n HashBatchContext: 100711712 total in 3065 blocks; 7936 free (0 chunks); 100703776 used\n TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n Caller tuples: 1048576 total in 8 blocks; 21608 free (14 chunks); 1026968 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n 
ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 386840 free (1 chunks); 714488 used\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: businessop_docid_ndx\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: businessop_sbjentityidroot_ndx\n index info: 2048 total in 2 blocks; 704 free (1 chunks); 1344 used: businessop_sbjroleiid_ndx\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: act_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: act_id_idx\n index info: 2048 total in 2 blocks; 592 free (1 chunks); 1456 used: pg_constraint_conrelid_contypid_conname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: actrelationship_pkey\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_target_idx\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_source_idx\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: documentinformation_pk\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: docinfsubj_ndx_seii\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: docinfsubj_ndx_sbjentcodeonly\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2618_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index\n relation rules: 229376 total in 31 blocks; 5136 free (0 chunks); 224240 used: v_businessoperation\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_db_role_setting_databaseid_rol_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_enum_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_class_relname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_pubname_index\n index info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used: pg_statistic_relid_att_inh_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_cast_source_target_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_transform_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_collation_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indexrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 
chunks); 1288 used: pg_ts_template_tmplname_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_ts_config_map_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_opclass_oid_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_evtname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_statistic_ext_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_dict_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_conversion_default_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_operator_oprname_l_r_n_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_trigger_tgrelid_tgname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_enum_typid_label_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_user_mapping_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_opfamily_am_name_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_table_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_type_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_aggregate_fnoid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_constraint_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_rewrite_rel_rulename_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_config_cfgname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_parser_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_publication_rel_prrelid_prpubid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_namespace_nspname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_template_oid_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index\n index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: pg_default_acl_role_nsp_obj_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_rel_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_range_rngtypid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_dict_dictname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_type_typname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_oid_index\n index info: 2048 total in 2 blocks; 952 free 
(1 chunks); 1096 used: pg_class_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_proc_proname_args_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_partitioned_table_partrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_transform_type_lang_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_proc_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_namespace_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amproc_fam_proc_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_attribute_relid_attnam_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_conversion_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_user_mapping_user_server_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_subscription_rel_srrelid_srsubid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_conversion_name_nsp_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_member_role_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_subscription_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_tablespace_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_shseclabel_object_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_datname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_subscription_subname_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roiident_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_role_member_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_rolname_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 7256 free (1 chunks); 936 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 2354072632 bytes in 269647 blocks; 5754640 free (17559 chunks); 2348317992 used\n\n> Was postgres locally compiled, packaged by distribution, or PGDG RPM/DEB ?\nLocally compiled. 
I just recompiled with --enable-debug, ready to deploy \nthat to create a core dump and check it out.\n> Can you show \\d businessoperation ?\n\n Table \"reports.businessoperation\"\n Column | Type | Modifiers\n---------------------------+------------------------+-----------\n documentinternalid | character varying(255) |\n is_current | character(1) |\n documentid | character varying(555) |\n documenttypecode | character varying(512) |\n subjectroleinternalid | character varying(255) |\n subjectentityinternalid | character varying(255) |\n subjectentityid | character varying(555) |\n subjectentityidroot | character varying(555) |\n subjectentityname | character varying |\n subjectentitytel | text |\n subjectentityemail | text |\n otherentityinternalid | character varying(255) |\n confidentialitycode | character varying(512) |\n actinternalid | character varying(255) |\n operationcode | character varying(512) |\n operationname | text |\n operationqualifiercode | character varying(512) |\n operationqualifiername | character varying(512) |\n approvalnumber | character varying(555) |\n approvalnumbersystem | character varying(555) |\n approvalstatecode | character varying(512) |\n approvalstatecodesystem | character varying(512) |\n approvaleffectivetimelow | character varying(512) |\n approvaleffectivetimehigh | character varying(512) |\n approvalstatuscode | character varying(32) |\n licensecode | character varying(512) |\n agencyid | character varying(555) |\n agencyname | text |\n productitemcode | character varying(512) |\n productinternalid | character varying(255) |\n\n> Could you rerun the query with \\set VERBOSITY verbose to show the file/line\n> that's failing ?\n\nHere goes:\n\nintegrator=# \\set VERBOSITY verbose\nintegrator=# SET ENABLE_NESTLOOP TO OFF;\nSET\nintegrator=# INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\nERROR: 53200: out of memory\nDETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\nLOCATION: MemoryContextAlloc, mcxt.c:798\n\nyou notice that I set ENABLE_NESTLOOP to off, that is because the \nplanner goes off thinking the NL plan is marginally more efficient, but \nin fact it will take 5 hours to get to the same out of memory crash, \nwhile the no NL plan gets there in half an hour. 
That verbose setting \ndidn't help much I guess.\n\n> If you wanted to show a stack trace, you could attach gdb to PID from SELECT\n> pg_backend_pid(), \"b\"reak on errdetail, run the query, and then \"bt\" when it\n> fails.\n\ngdb -p 27930\nGNU gdb (GDB) Red Hat Enterprise Linux 8.0.1-30.amzn2.0.3\n...\nAttaching to process 27930\nReading symbols from /usr/local/pgsql/bin/postgres...done.\n...\n(gdb) b errdetail\nBreakpoint 1 at 0x82b210: file elog.c, line 872.\n(gdb) cont\nContinuing.\nBreakpoint 1, errdetail (fmt=fmt@entry=0x9d9958 \"Failed on request of size %zu in memory context \\\"%s\\\".\") at elog.c:872\n872 {\n(gdb) bt\n#0 errdetail (fmt=fmt@entry=0x9d9958 \"Failed on request of size %zu in memory context \\\"%s\\\".\") at elog.c:872\n#1 0x000000000084e320 in MemoryContextAlloc (context=0x1111600, size=size@entry=32800) at mcxt.c:794\n#2 0x000000000060ce7a in dense_alloc (size=384, size@entry=381, hashtable=<optimized out>, hashtable=<optimized out>)\n at nodeHash.c:2696\n#3 0x000000000060d788 in ExecHashTableInsert (hashtable=hashtable@entry=0x10ead08, slot=<optimized out>, hashvalue=194758122)\n at nodeHash.c:1614\n#4 0x0000000000610c6f in ExecHashJoinNewBatch (hjstate=0x10806b0) at nodeHashjoin.c:1051\n#5 ExecHashJoinImpl (parallel=false, pstate=0x10806b0) at nodeHashjoin.c:539\n#6 ExecHashJoin (pstate=0x10806b0) at nodeHashjoin.c:565\n#7 0x000000000061ce4e in ExecProcNode (node=0x10806b0) at ../../../src/include/executor/executor.h:247\n#8 ExecSort (pstate=0x1080490) at nodeSort.c:107\n#9 0x000000000061d2c4 in ExecProcNode (node=0x1080490) at ../../../src/include/executor/executor.h:247\n#10 ExecUnique (pstate=0x107ff60) at nodeUnique.c:73\n#11 0x0000000000619732 in ExecProcNode (node=0x107ff60) at ../../../src/include/executor/executor.h:247\n#12 ExecModifyTable (pstate=0x107fd20) at nodeModifyTable.c:2025\n#13 0x00000000005f75ba in ExecProcNode (node=0x107fd20) at ../../../src/include/executor/executor.h:247\n#14 ExecutePlan (execute_once=<optimized out>, dest=0x7f0442721998, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_INSERT, use_parallel_mode=<optimized out>, planstate=0x107fd20, estate=0x107f830)\n at execMain.c:1723\n#15 standard_ExecutorRun (queryDesc=0x1086880, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#16 0x000000000072a972 in ProcessQuery (plan=<optimized out>,\n sourceText=0xf4a710 \"INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\", params=0x0,\n queryEnv=0x0, dest=0x7f0442721998, completionTag=0x7fff2e4cad30 \"\") at pquery.c:161\n#17 0x000000000072abb0 in PortalRunMulti (portal=portal@entry=0xfb06b0, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x7f0442721998, altdest=altdest@entry=0x7f0442721998,\n completionTag=completionTag@entry=0x7fff2e4cad30 \"\") at pquery.c:1286\n#18 0x000000000072b661 in PortalRun (portal=portal@entry=0xfb06b0, count=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x7f0442721998,\n altdest=altdest@entry=0x7f0442721998, completionTag=0x7fff2e4cad30 \"\") at pquery.c:799\n#19 0x00000000007276e8 in exec_simple_query (\n query_string=0xf4a710 \"INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\") at postgres.c:1145\n#20 0x0000000000729534 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xf76ce8, dbname=<optimized out>,\n username=<optimized out>) at postgres.c:4182\n#21 
0x00000000006be215 in BackendRun (port=0xf6dfe0) at postmaster.c:4361\n#22 BackendStartup (port=0xf6dfe0) at postmaster.c:4033\n#23 ServerLoop () at postmaster.c:1706\n#24 0x00000000006bf122 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xf45320) at postmaster.c:1379\n#25 0x00000000004822dc in main (argc=3, argv=0xf45320) at main.c:228\n\nThat's it.\n\nThank you all very much for your interest in this case.\n\n-Gunther",
"msg_date": "Sun, 14 Apr 2019 23:04:18 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Gunther <[email protected]> writes:\n> ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n\nOooh, that looks like a memory leak right enough. The ExecutorState\nshould not get that big for any reasonable query.\n\nYour error and stack trace show a failure in HashBatchContext,\nwhich is probably the last of these four:\n\n> HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n> HashBatchContext: 100711712 total in 3065 blocks; 7936 free (0 chunks); 100703776 used\n\nPerhaps that's more than it should be, but it's silly to obsess over 100M\nwhen there's a 2.2G problem elsewhere. I think it's likely that it was\njust coincidence that the failure happened right there. Unfortunately,\nthat leaves us with no info about where the actual leak is coming from.\n\nThe memory map shows that there were three sorts and four hashes going\non, so I'm not sure I believe that this corresponds to the query plan\nyou showed us before.\n\nAny chance of extracting a self-contained test case that reproduces this?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Apr 2019 23:24:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/14/2019 23:24, Tom Lane wrote:\n>> ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n> Oooh, that looks like a memory leak right enough. The ExecutorState\n> should not get that big for any reasonable query.\n2.2 GB is massive yes.\n> Your error and stack trace show a failure in HashBatchContext,\n> which is probably the last of these four:\n>\n>> HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n>> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n>> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n>> HashBatchContext: 100711712 total in 3065 blocks; 7936 free (0 chunks); 100703776 used\n> Perhaps that's more than it should be, but it's silly to obsess over 100M\n> when there's a 2.2G problem elsewhere.\nYes.\n> I think it's likely that it was\n> just coincidence that the failure happened right there. Unfortunately,\n> that leaves us with no info about where the actual leak is coming from.\n\nStrange though, that the vmstat tracking never showed that the cache \nallocated memory goes much below 6 GB. Even if this 2.2 GB memory leak \nis there, and even if I had 2 GB of shared_buffers, I would still have \nenough for the OS to give me.\n\nIs there any doubt that this might be a problem with Linux? Because if \nyou want, I can whip out a FreeBSD machine, compile pgsql, and attach \nthe same disk, and try it there. I am longing to have a reason to move \nback to FreeBSD anyway. But I have tons of stuff to do, so if you do not \nhave reason to suspect Linux to do wrong here, I prefer skipping that \nfutile attempt\n\n> The memory map shows that there were three sorts and four hashes going\n> on, so I'm not sure I believe that this corresponds to the query plan\n> you showed us before.\nLike I said, the first explain was not using the same constraints (no \nNL). Now what I sent last should all be consistent. Memory dump and \nexplain plan and gdb backtrace.\n> Any chance of extracting a self-contained test case that reproduces this?\n\nWith 18 million rows involved in the base tables, hardly.\n\nBut I am ready to try some other things with the debugger that you want \nme to try. If we have a memory leak issue, we might just as well try to \nplug it!\n\nI could even to give someone of you access to the system that runs this.\n\nthanks,\n-Gunther\n\n\n",
"msg_date": "Sun, 14 Apr 2019 23:59:45 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 11:59:45PM -0400, Gunther wrote:\n> On 4/14/2019 23:24, Tom Lane wrote:\n> >Any chance of extracting a self-contained test case that reproduces this?\n> With 18 million rows involved in the base tables, hardly.\n\nWere you able to reproduce the problem with SELECT (without INSERT) ?\nHow many rows does it output ? Show explain analyze if possible. If that\nstill errors, can you make it work with a small enough LIMIT ?\n\nWe haven't seen the view - maybe it's very complicated, but can you reproduce\nwith a simpler one ? Fewer joins ? Or fewer join conditions ?\n\nJustin\n\n\n",
"msg_date": "Sun, 14 Apr 2019 23:15:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 11:04 PM Gunther <[email protected]> wrote:\n\n> Could you rerun the query with \\set VERBOSITY verbose to show the file/line\n> that's failing ?\n>\n> Here goes:\n>\n> integrator=# \\set VERBOSITY verbose\n> integrator=# SET ENABLE_NESTLOOP TO OFF;\n> SET\n> integrator=# INSERT INTO reports.BusinessOperation SELECT * FROM reports.v_BusinessOperation;\n> ERROR: 53200: out of memory\n> DETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\n> LOCATION: MemoryContextAlloc, mcxt.c:798\n>\n> you notice that I set ENABLE_NESTLOOP to off, that is because the planner\n> goes off thinking the NL plan is marginally more efficient, but in fact it\n> will take 5 hours to get to the same out of memory crash, while the no NL\n> plan gets there in half an hour. That verbose setting didn't help much I\n> guess.\n>\nI think the backtrace of the enable_nestloop=on plan would be more useful.\nHere someone has filled up memory, and then we see HashBatchContext trip\nover it that. But it isn't the one the one that caused the problem, so the\nbacktrace doesn't help. With the previous plan, it was an allocation into\nExecutorState which tripped over the problem, and it is likely that it is\ncoming from the same series of allocations that caused the problem.\n\nTo get it to happen faster, maybe you could run the server with a small\nsetting of \"ulimit -v\"? Or, you could try to capture it live in gdb.\nUnfortunately I don't know how to set a breakpoint for allocations into a\nspecific context, and setting a breakpoint for any memory allocation is\nprobably going to fire too often to be useful.\n\nYes, the verbose option didn't help (but the gdb backtrace made up for\nit--kind of--we really need the backtrace of the allocations into\nExecutorState). It isn't helpful to know that a memory allocation failed\nin the mcxt.c code. To bad it doesn't report the location of the caller of\nthat code. I know in Perl you can use Carp::croak to do that, but I don't\nknow to do it in C.\n\nBut really the first thing I want to know now is what if you just do the\nselect, without the insert?\n\nexplain analyze SELECT * FROM reports.v_BusinessOperation\n\nIf that works, what about \"create temporary table foo as SELECT * FROM\nreports.v_BusinessOperation\" ?\n\nAnd if that works, what about \"INSERT INTO reports.BusinessOperation SELECT\n* FROM foo\"?\n\nIf the ERROR happens in the first or last of these, it might be much easier\nto analyze in that simplified context. If it happens in the middle one,\nthen we probably haven't achieved much. (And if there is no ERROR at all,\nthen you have workaround, but we still haven't found the fundamental bug).\n\nAre you not showing the view definition for proprietary reasons, or just\nbecause you don't think it will be useful? 
If the latter, please send it as\nan attachment, I can't guarantee it will be useful, but there is only one\nway find out.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 15 Apr 2019 09:32:09 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 11:59 PM Gunther <[email protected]> wrote:\n\n\n> Is there any doubt that this might be a problem with Linux? Because if\n> you want, I can whip out a FreeBSD machine, compile pgsql, and attach\n> the same disk, and try it there. I am longing to have a reason to move\n> back to FreeBSD anyway. But I have tons of stuff to do, so if you do not\n> have reason to suspect Linux to do wrong here, I prefer skipping that\n> futile attempt\n>\n\nI think the PostgreSQL leaking in the first place would be independent of\nLinux being ungraceful about it. So repeating it on BSD probably wouldn't\nhelp us here. If you want to take up the 2nd issue with the kernel folks,\nhaving some evidence from BSD might (but not very likely) be helpful for\nthat, but that would be for a different mailing list.\n\nCheers,\n\nJeff\n\nOn Sun, Apr 14, 2019 at 11:59 PM Gunther <[email protected]> wrote: Is there any doubt that this might be a problem with Linux? Because if \nyou want, I can whip out a FreeBSD machine, compile pgsql, and attach \nthe same disk, and try it there. I am longing to have a reason to move \nback to FreeBSD anyway. But I have tons of stuff to do, so if you do not \nhave reason to suspect Linux to do wrong here, I prefer skipping that \nfutile attemptI think the PostgreSQL leaking in the first place would be independent of Linux being ungraceful about it. So repeating it on BSD probably wouldn't help us here. If you want to take up the 2nd issue with the kernel folks, having some evidence from BSD might (but not very likely) be helpful for that, but that would be for a different mailing list.Cheers,Jeff",
"msg_date": "Mon, 15 Apr 2019 09:44:40 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 11:59:45PM -0400, Gunther wrote:\n>On 4/14/2019 23:24, Tom Lane wrote:\n>>> ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n>>Oooh, that looks like a memory leak right enough. The ExecutorState\n>>should not get that big for any reasonable query.\n>2.2 GB is massive yes.\n>>Your error and stack trace show a failure in HashBatchContext,\n>>which is probably the last of these four:\n>>\n>>> HashBatchContext: 57432 total in 3 blocks; 16072 free (6 chunks); 41360 used\n>>> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n>>> HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n>>> HashBatchContext: 100711712 total in 3065 blocks; 7936 free (0 chunks); 100703776 used\n>>Perhaps that's more than it should be, but it's silly to obsess over 100M\n>>when there's a 2.2G problem elsewhere.\n>Yes.\n>> I think it's likely that it was\n>>just coincidence that the failure happened right there. Unfortunately,\n>>that leaves us with no info about where the actual leak is coming from.\n>\n>Strange though, that the vmstat tracking never showed that the cache \n>allocated memory goes much below 6 GB. Even if this 2.2 GB memory leak \n>is there, and even if I had 2 GB of shared_buffers, I would still have \n>enough for the OS to give me.\n>\n\nDepends on how the kernel is configured. What are vm.overcommit_memory\nand vm.overcommit_ratio set to, for example?\n\nIt may easily be the case that the kernel is only allowing 50% of RAM to\nbe committed to user space, and then refusing to allocate more despite\nhaving free memory. That's fairly common issue on swapless systems.\n\nTry running the query again, watch\n\n cat /proc/meminfo | grep Commit\n\nand if it crashes when Committed_AS hits the CommitLimit.\n\nThat doesn't explain where the memory leak is, though :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 17:26:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> To get it to happen faster, maybe you could run the server with a small\n> setting of \"ulimit -v\"? Or, you could try to capture it live in gdb.\n> Unfortunately I don't know how to set a breakpoint for allocations into a\n> specific context, and setting a breakpoint for any memory allocation is\n> probably going to fire too often to be useful.\n\nIf you can use gdb at all, it's not that hard to break on allocations\ninto a specific context; I've done it many times. The strategy is\nbasically\n\n1. Let query run long enough for memory usage to start increasing,\nthen attach to backend with gdb.\n\n2. Set breakpoint at, probably, AllocSetAlloc. (In some cases,\nreallocs could be the problem, but I doubt it here.) Then \"c\".\n\n3. When it stops, \"p *context\" and see if this is the context\nyou're looking for. In this case, since we want to know about\nallocations into ExecutorState and we know there's only one\nactive one, you just have to look at the context name. In general\nyou might have to look at the backtrace. Anyway, if it isn't the\none you want, just \"c\" until you get to an allocation into the\none you do want.\n\n4. Once you have found out the address of the context you care\nabout, make the breakpoint conditional on the context argument\nbeing that one. It might look like this:\n\nBreakpoint 1, AllocSetAlloc (context=0x1483be0, size=480) at aset.c:715\n715 {\n(gdb) p *context\n$1 = {type = T_AllocSetContext, isReset = false, allowInCritSection = false, \n methods = 0xa33f40, parent = 0x0, firstchild = 0x1537f30, prevchild = 0x0, \n nextchild = 0x0, name = 0xa3483f \"TopMemoryContext\", ident = 0x0, \n reset_cbs = 0x0}\n(gdb) cond 1 context == 0x1483be0\n\n5. Now repeatedly \"c\", and check the stack trace each time, for a\ndozen or two times to get a feeling for where the allocations are\nbeing requested.\n\nIn some cases you might be able to find the context address in a\nmore efficient way than what I suggest in #3 --- for instance,\nyou could instead set a breakpoint where the context is created\nand snag its address immediately, or you could dig around in\nbackend data structures to find it. But these ways generally\nrequire more familiarity with the code than just watching the\nrequests go by.\n\n> Are you not showing the view definition for proprietary reasons, or just\n> because you don't think it will be useful?\n\nAs far as that goes, I think the most likely theory right now is that\nsome particular function being used in the view is leaking memory.\nSo yes, we need to see the view ... or you can try removing bits of\nit to see if the leak goes away.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:28:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 05:19:50PM -0400, Jeff Janes wrote:\n> On Sun, Apr 14, 2019 at 4:51 PM Gunther <[email protected]> wrote:\n>\n> For weeks now, I am banging my head at an \"out of memory\" situation.\n> There is only one query I am running on an 8 GB system, whatever I try,\n> I get knocked out on this out of memory.\n>\n> Is PostgreSQL throwing an error with OOM, or is getting killed -9 by the\n> OOM killer?� Do you get a core file you can inspect with gdb?\n>\n> You might want to see the query, but it is a huge plan, and I can't\n> really break this down. It shouldn't matter though. But just so you can\n> get a glimpse here is the plan:\n>\n> Insert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)\n> -> Unique (cost=5358849.28..5361532.25 rows=34619 width=1197)\n> \n>\n> Maybe it is memory for trigger or constraint checking, although I don't\n> know why that would appear instantly.� What triggers or constraints do you\n> have on businessoperation?�\n\nYeah, that would be my guess too. If I had to guess, something likely gets\nconfused and allocates memory in es_query_ctx instead of the per-tuple\ncontext (es_per_tuple_exprcontext).\n\nTriggers, constraints and expr evaluation all seem like a plausible\ncandidates. It's going to be hard to nail the exact place, though :-(\n\n> What if you just run the SELECT without the INSERT?� Or insert into a temp\n> table rather than into businessoperation?� And if that doesn't crash, what\n> if you then insert to businessoperation from the temp table?\n> �\n\nYeah. What's the schema of \"businessoperation\"? Anything special about\nit? Triggers, expression indexes, check constraints, ...\n\nGunther, you mentioned you build postgres from sources. Would it be\npossible to add some sort of extra debugging to see where the memory is\nallocated from? It's a bit heavy-handed, though.\n\nOr maybe splitting es_query_ctx into smaller contexts. That might be\neasier to evaluate than sifting throuht god-knows-how-many-gbs of log.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 17:38:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "I wrote:\n> If you can use gdb at all, it's not that hard to break on allocations\n> into a specific context; I've done it many times. The strategy is\n> basically\n> 1. Let query run long enough for memory usage to start increasing,\n> then attach to backend with gdb.\n\nBTW, just to clarify that strategy a bit: the usage pattern we expect\nfor ExecutorState is that there are a bunch of individual allocations\nduring executor startup, but then none while the query is running\n(or at least, if there are any in that phase, they get freed before\nmoving on to the next row). The form of the leak, almost certainly,\nis that some allocation is happening per-row and *not* getting freed.\nSo there's no point in groveling through the startup behavior. What\nwe want to know about is what happens after we reach the ought-to-be\nsteady state behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:53:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "OK Guys, you are very kind to continue taking an interest in this matter.\n\nI will try what I can to help squish the bug.\n\nTomas Vondra just added a good idea that explains why I get the out of \nmemory with still having so much cache available:\n\n# sysctl vm.overcommit_memory\nvm.overcommit_memory = 2\n# sysctl vm.overcommit_ratio\nvm.overcommit_ratio = 50\n\nas he predicted.\n\n# cat /proc/meminfo |grep Commit\nCommitLimit: 3955192 kB\nCommitted_AS: 2937352 kB\n\nSo I thing that explains why it turns into an out of memory error. We \ndon't worry or wonder about that any more. I will change that parameter \nin the future to allow for some spikes. But it's not going to resolve \nthe underlying memory leak issue.\n\nNow I run explain analyze SELECT ... without the INSERT.\n\nintegrator=# \\set VERBOSITY verbose\nintegrator=#\nintegrator=# \\pset pager off\nPager usage is off.\nintegrator=# \\pset format unaligned\nOutput format is unaligned.\nintegrator=# \\set VERBOSITY verbose\nintegrator=#\nintegrator=# SET ENABLE_NESTLOOP TO OFF;\nSET\nintegrator=# explain analyze SELECT * FROM reports.v_BusinessOperation;\nERROR: 53200: out of memory\nDETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\nLOCATION: MemoryContextAlloc, mcxt.c:798\n\nAnd since that failed already, I guess we don't need to worry about the \ntemporary table insert.\n\nAbout adding LIMIT, I don't think it makes sense in the outer query, \nsince the error is probably happening earlier. I did put a LIMIT 100 on \none of the tables we join to, and it helped. But that doesn't really \ntell us anything I think.\n\nThen yes, I can try the backtrace with the NLs enabled. It will just \ntake a long long time and unfortunately it is extremely likely that I \nlose the console and then will be unable to get back to it. OK, \nscreen(1) resolves that problem too. Will do, after I reported the above.\n\nBut now you have already produced more ideas ...\n\n>> Maybe it is memory for trigger or constraint checking, although I \n>> don't\n>> know why that would appear instantly. What triggers or constraints \n>> do you\n>> have on businessoperation? \n>\n> Yeah, that would be my guess too. If I had to guess, something likely \n> gets\n> confused and allocates memory in es_query_ctx instead of the per-tuple\n> context (es_per_tuple_exprcontext).\n>\n> Triggers, constraints and expr evaluation all seem like a plausible\n> candidates. It's going to be hard to nail the exact place, though \n\nI think triggers and constraints is ruled out, because the problem \nhappens without the INSERT.\n\nThat leaves us with expression evaluation. And OK, now you really wanna \nsee the query, although it should be in the plan too. 
But for what it is \nworth:\n\nSELECT DISTINCT\n documentInternalId, is_current,\n\tdocumentId,\n\tdocumentTypeCode,\n\tsubjectRoleInternalId,\n\tsubjectEntityInternalId,\n\tsubjectEntityId,\n\tsubjectEntityIdRoot,\n\tsubjectEntityName,\n\tsubjectEntityTel,\n\tsubjectEntityEmail,\n\totherEntityInternalId,\n\tconfidentialityCode,\n\tactInternalId,\n\tcode_code as operationCode,\n\tcode_displayName AS operationName,\n\toperationQualifierCode,\n\toperationQualifierName,\n\tapprovalNumber,\n\tapprovalNumberSystem,\n\tapprovalStateCode,\n\tapprovalStateCodeSystem,\n\tapprovalEffectiveTimeLow,\n\tapprovalEffectiveTimeHigh,\n\tapprovalStatusCode,\n\tlicenseCode,\n\tagencyId,\n\tagencyName,\n\tproductItemCode,\n\tproductInternalId\n FROM reports.DocumentInformationSubject\n LEFT OUTER JOIN (SELECT documentInternalId, documentId, actInternalId,\n\t subjectEntityCode as productItemCode,\n\t\t\t subjectEntityInternalId as productInternalId\n\t\t FROM reports.DocumentInformationSubject\n\t\t WHERE participationTypeCode = 'PRD') prd\n USING(documentInternalId, documentId, actInternalId)\n LEFT OUTER JOIN (\n SELECT documentInternalId,\n q.code_code AS operationQualifierCode,\n \t q.code_displayName AS operationQualifierName,\n \t r.targetInternalId AS actInternalId,\n\t actInternalId AS approvalInternalId,\n \t an.extension AS approvalNumber,\n \t an.root AS approvalNumberSystem,\n\t qs.subjectEntityCode AS approvalStateCode,\n\t qs.subjectEntityCodeSystem AS approvalStateCodeSystem,\n \t qs.effectivetime_low AS approvalEffectiveTimeLow,\n \t qs.effectivetime_high AS approvalEffectiveTimeHigh,\n \t qs.statusCode AS approvalStatusCode,\n\t qs.code_code AS licenseCode,\n\t agencyId.extension AS agencyId,\n\t agencyName.trivialName AS agencyName\n FROM reports.DocumentInformation q\n LEFT OUTER JOIN (SELECT * FROM reports.DocumentInformationSubject WHERE participationTypeCode = 'AUT') qs\n USING(documentInternalId, actInternalId)\n INNER JOIN integrator.ActRelationship r\n ON( r.sourceInternalId = actInternalId\n \t AND r.typeCode = 'SUBJ')\n LEFT OUTER JOIN integrator.Act_id an USING(actInternalId)\n LEFT OUTER JOIN integrator.Entity_id agencyId ON(agencyId.entityInternalId = otherEntityInternalId)\n LEFT OUTER JOIN reports.BestName agencyName ON(agencyName.entityInternalId = otherEntityInternalId)\n WHERE q.classCode = 'CNTRCT'\n AND q.moodCode = 'EVN'\n AND q.code_codeSystem = '2.16.840.1.113883.3.26.1.1'\n ) q\n USING(documentInternalId, actInternalId)\n WHERE classCode = 'ACT'\n AND moodCode = 'DEF'\n AND code_codeSystem = '2.16.840.1.113883.3.26.1.1'\n AND participationTypeCode IN ('PPRF','PRF');\n\nYou see that the expressions are all just equal operations, some IN, \nnothing outlandish.\n\nNow I will try what Tom Lane suggested. Here you go. And I have it \nstopped at this state, so if you want me to inspect anything else, I can \ndo it.\n\nWith screen(1) I can be sure I won't lose my stuff when my internet goes \ndown. Nice.\n\nI have one screen session with 3 windows:\n\n 1. psql\n 2. gdb\n 3. misc (vmstat, etc.)\n\nNow I have let this run for a good long time while setting up my screen \nstuff. And then:\n\nps -x\n\nlook for the postgres job with the EXPLAIN ... that's $PID, then:\n\ngdb -p $PID\n\nThen first I do\n\ncont\n\nbut then it stops at SIGUSR1, because of the parallel workers signalling \neach other.\n\nhandle SIGUSR1 nostop\n\nsuppresses that stopping. 
Then I break CTRL-C, and set the breakpoint \nwhere Tom Lane said:\n\nb AllocSetAlloc\n\nonce it stops there I do\n\nBreakpoint 1, AllocSetAlloc (context=0x1168230, size=8) at aset.c:715\n715 {\n(gdb) p context->name\n$4 = 0x96ce5b \"ExecutorState\"\n\nSo I should even be able to set a conditional breakpoint.\n\n(gdb) delete\nDelete all breakpoints? (y or n) y\n(gdb) b AllocSetAlloc if strcmp(context->name, \"ExecutorState\") == 0\nBreakpoint 2 at 0x848ed0: file aset.c, line 715.\n(gdb) cont\n(gdb) cont\nContinuing.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=10) at aset.c:715\n715 {\n(gdb) cont\nContinuing.\n\nProgram received signal SIGUSR1, User defined signal 1.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=152) at aset.c:715\n715 {\n(gdb) cont\nContinuing.\n\nProgram received signal SIGUSR1, User defined signal 1.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=201) at aset.c:715\n715 {\n(gdb) cont\nContinuing.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715\n715 {\n(gdb) p context->name\n$8 = 0x96ce5b \"ExecutorState\"\n\nNice. Now the question is, am I at the place where memory gets squeezed? \nAnd I think yes. With top\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n31752 postgres 20 0 2772964 1.2g 329640 t 0.0 16.5 8:46.59 postgres: postgres integrator [local] EXPLAIN\n\nI guess I should run this for a little longer. So I disable my breakpoints\n\n(gdb) info breakpoints\nNum Type Disp Enb Address What\n2 breakpoint keep y 0x0000000000848ed0 in AllocSetAlloc at aset.c:715\n stop only if strcmp(context->name, \"ExecutorState\") == 0\n breakpoint already hit 6 times\n(gdb) disable 2\n(gdb) cont\nContinuing.\n\nwhile watching top:\n\n31752 postgres 20 0 2777060 1.3g 329920 D 33.2 17.9 8:52.07 postgres: postgres integrator [local] EXPLAIN\n\n31752 postgres 20 0 2777060 1.4g 329920 D 33.2 17.9 8:52.07 postgres: postgres integrator [local] EXPLAIN\n\n31752 postgres 20 0 2777060 1.5g 329920 D 33.2 17.9 8:52.07 postgres: postgres integrator [local] EXPLAIN\n\nit went up pretty quick from 1.2 GB to 1.5 GB, but then it stopped \ngrowing fast, so now back to gdb and break:\n\n^C\nProgram received signal SIGINT, Interrupt.\n0x00007f048f336d71 in read () from /lib64/libpthread.so.0\n(gdb) enable 2\n(gdb) cont\nContinuing.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=385) at aset.c:715\n715 {\n\nNow I give you a bt so we have something to look at:\n\n#0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715\n#1 0x000000000084e6cd in palloc (size=385) at mcxt.c:938\n#2 0x000000000061019c in ExecHashJoinGetSavedTuple (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,\n tupleSlot=0x10856b8, hjstate=0x11688e0) at nodeHashjoin.c:1277\n#3 0x0000000000610c83 in ExecHashJoinNewBatch (hjstate=0x11688e0) at nodeHashjoin.c:1042\n#4 ExecHashJoinImpl (parallel=false, pstate=0x11688e0) at nodeHashjoin.c:539\n#5 ExecHashJoin (pstate=0x11688e0) at nodeHashjoin.c:565\n#6 0x00000000005fde68 in ExecProcNodeInstr (node=0x11688e0) at execProcnode.c:461\n#7 0x000000000061ce4e in ExecProcNode (node=0x11688e0) at ../../../src/include/executor/executor.h:247\n#8 ExecSort (pstate=0x11687d0) at nodeSort.c:107\n#9 0x00000000005fde68 in ExecProcNodeInstr (node=0x11687d0) at execProcnode.c:461\n#10 0x000000000061d2c4 in ExecProcNode (node=0x11687d0) at ../../../src/include/executor/executor.h:247\n#11 ExecUnique (pstate=0x11685e0) at nodeUnique.c:73\n#12 0x00000000005fde68 in ExecProcNodeInstr (node=0x11685e0) at 
execProcnode.c:461\n#13 0x00000000005f75ba in ExecProcNode (node=0x11685e0) at ../../../src/include/executor/executor.h:247\n#14 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x11685e0, estate=0x1168340)\n at execMain.c:1723\n#15 standard_ExecutorRun (queryDesc=0x119b6d8, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#16 0x000000000059c6f8 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x1199a68, into=into@entry=0x0, es=es@entry=0x1141d48,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7fff2e4ca990) at explain.c:535\n#17 0x000000000059c9ef in ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x1141d48,\n queryString=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"..., params=0x0, queryEnv=0x0)\n at explain.c:371\n#18 0x000000000059ce37 in ExplainQuery (pstate=pstate@entry=0xf74608, stmt=stmt@entry=0x11ef240,\n queryString=queryString@entry=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...,\n params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0xf74578) at explain.c:254\n#19 0x000000000072ca5d in standard_ProcessUtility (pstmt=0x11ef390,\n queryString=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0xf74578, completionTag=0x7fff2e4cab20 \"\") at utility.c:675\n#20 0x000000000072a052 in PortalRunUtility (portal=0xfb06b0, pstmt=0x11ef390, isTopLevel=<optimized out>,\n setHoldSnapshot=<optimized out>, dest=<optimized out>, completionTag=0x7fff2e4cab20 \"\") at pquery.c:1178\n#21 0x000000000072add2 in FillPortalStore (portal=portal@entry=0xfb06b0, isTopLevel=isTopLevel@entry=true) at pquery.c:1038\n#22 0x000000000072b855 in PortalRun (portal=portal@entry=0xfb06b0, count=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0xf4c570, altdest=altdest@entry=0xf4c570,\n completionTag=0x7fff2e4cad30 \"\") at pquery.c:768\n#23 0x00000000007276e8 in exec_simple_query (\n query_string=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...) at postgres.c:1145\n#24 0x0000000000729534 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xf76ce8, dbname=<optimized out>,\n username=<optimized out>) at postgres.c:4182\n#25 0x00000000006be215 in BackendRun (port=0xf6efe0) at postmaster.c:4361\n#26 BackendStartup (port=0xf6efe0) at postmaster.c:4033\n#27 ServerLoop () at postmaster.c:1706\n#28 0x00000000006bf122 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xf45320) at postmaster.c:1379\n#29 0x00000000004822dc in main (argc=3, argv=0xf45320) at main.c:228\n\nBut who knows if that's it. 
I continue and watch top again...\n\n31752 postgres 20 0 3112352 1.8g 329920 D 32.2 23.7 9:43.75 postgres: postgres integrator [local] EXPLAIN\n\nit went quickly to 1.6, then after some time to 1.7, then 1.8, and I \nstop again:\n\n^C\nProgram received signal SIGINT, Interrupt.\n0x00007f048f336d71 in read () from /lib64/libpthread.so.0\n(gdb) enable 2\n(gdb) cont\nContinuing.\n\nBreakpoint 2, AllocSetAlloc (context=0x1168230, size=375) at aset.c:715\n715 {\nbt\n#0 AllocSetAlloc (context=0x1168230, size=375) at aset.c:715\n#1 0x000000000084e6cd in palloc (size=375) at mcxt.c:938\n#2 0x000000000061019c in ExecHashJoinGetSavedTuple (file=file@entry=0x21df688, hashvalue=hashvalue@entry=0x7fff2e4ca76c,\n tupleSlot=0x10856b8, hjstate=0x11688e0) at nodeHashjoin.c:1277\n#3 0x0000000000610c83 in ExecHashJoinNewBatch (hjstate=0x11688e0) at nodeHashjoin.c:1042\n#4 ExecHashJoinImpl (parallel=false, pstate=0x11688e0) at nodeHashjoin.c:539\n#5 ExecHashJoin (pstate=0x11688e0) at nodeHashjoin.c:565\n#6 0x00000000005fde68 in ExecProcNodeInstr (node=0x11688e0) at execProcnode.c:461\n#7 0x000000000061ce4e in ExecProcNode (node=0x11688e0) at ../../../src/include/executor/executor.h:247\n#8 ExecSort (pstate=0x11687d0) at nodeSort.c:107\n#9 0x00000000005fde68 in ExecProcNodeInstr (node=0x11687d0) at execProcnode.c:461\n#10 0x000000000061d2c4 in ExecProcNode (node=0x11687d0) at ../../../src/include/executor/executor.h:247\n#11 ExecUnique (pstate=0x11685e0) at nodeUnique.c:73\n#12 0x00000000005fde68 in ExecProcNodeInstr (node=0x11685e0) at execProcnode.c:461\n#13 0x00000000005f75ba in ExecProcNode (node=0x11685e0) at ../../../src/include/executor/executor.h:247\n#14 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x11685e0, estate=0x1168340)\n at execMain.c:1723\n#15 standard_ExecutorRun (queryDesc=0x119b6d8, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#16 0x000000000059c6f8 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x1199a68, into=into@entry=0x0, es=es@entry=0x1141d48,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7fff2e4ca990) at explain.c:535\n#17 0x000000000059c9ef in ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x1141d48,\n queryString=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"..., params=0x0, queryEnv=0x0)\n at explain.c:371\n#18 0x000000000059ce37 in ExplainQuery (pstate=pstate@entry=0xf74608, stmt=stmt@entry=0x11ef240,\n queryString=queryString@entry=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...,\n params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0xf74578) at explain.c:254\n#19 0x000000000072ca5d in standard_ProcessUtility (pstmt=0x11ef390,\n queryString=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, 
dest=0xf74578, completionTag=0x7fff2e4cab20 \"\") at utility.c:675\n#20 0x000000000072a052 in PortalRunUtility (portal=0xfb06b0, pstmt=0x11ef390, isTopLevel=<optimized out>,\n setHoldSnapshot=<optimized out>, dest=<optimized out>, completionTag=0x7fff2e4cab20 \"\") at pquery.c:1178\n#21 0x000000000072add2 in FillPortalStore (portal=portal@entry=0xfb06b0, isTopLevel=isTopLevel@entry=true) at pquery.c:1038\n#22 0x000000000072b855 in PortalRun (portal=portal@entry=0xfb06b0, count=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0xf4c570, altdest=altdest@entry=0xf4c570,\n completionTag=0x7fff2e4cad30 \"\") at pquery.c:768\n#23 0x00000000007276e8 in exec_simple_query (\n query_string=0xf4af30 \"explain analyze\\nSELECT DISTINCT\\n documentInternalId, is_current,\\ndocumentId,\\ndocumentTypeCode,\\nsubjectRoleInternalId,\\nsubjectEntityInternalId,\\nsubjectEntityId,\\nsubjectEntityIdRoot,\\nsubjectEntit\"...) at postgres.c:1145\n#24 0x0000000000729534 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xf76ce8, dbname=<optimized out>,\n username=<optimized out>) at postgres.c:4182\n#25 0x00000000006be215 in BackendRun (port=0xf6efe0) at postmaster.c:4361\n#26 BackendStartup (port=0xf6efe0) at postmaster.c:4033\n#27 ServerLoop () at postmaster.c:1706\n#28 0x00000000006bf122 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xf45320) at postmaster.c:1379\n#29 0x00000000004822dc in main (argc=3, argv=0xf45320) at main.c:228\n\nGood, now I leave this all sitting like that for you to ask me what else \nyou might want to see.\n\nWe are now close to the edge of the cliff.\n\n-Gunther",
"msg_date": "Mon, 15 Apr 2019 12:34:38 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
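Not something that was run in this thread, but a cheap cross-check while attached like this is to have the backend dump its memory context tree; MemoryContextStats() writes to the server's stderr/log rather than to gdb, and the context whose "total in N blocks" figure keeps climbing between two dumps is the one that is actually leaking:

(gdb) call MemoryContextStats(TopMemoryContext)
# then look in the server log for a line of the form
#   ExecutorState: <bytes> total in <blocks> blocks; <bytes> free (...); <bytes> used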
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:28 AM Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > To get it to happen faster, maybe you could run the server with a small\n> > setting of \"ulimit -v\"? Or, you could try to capture it live in gdb.\n> > Unfortunately I don't know how to set a breakpoint for allocations into a\n> > specific context, and setting a breakpoint for any memory allocation is\n> > probably going to fire too often to be useful.\n>\n> If you can use gdb at all, it's not that hard to break on allocations\n> into a specific context; I've done it many times. The strategy is\n> basically\n>\n\n> 1. Let query run long enough for memory usage to start increasing,\n> then attach to backend with gdb.\n>\n> 2. Set breakpoint at, probably, AllocSetAlloc. (In some cases,\n> reallocs could be the problem, but I doubt it here.) Then \"c\".\n>\n> 3. When it stops, \"p *context\" and see if this is the context\n> you're looking for. In this case, since we want to know about\n> allocations into ExecutorState and we know there's only one\n> active one, you just have to look at the context name. In general\n> you might have to look at the backtrace. Anyway, if it isn't the\n> one you want, just \"c\" until you get to an allocation into the\n> one you do want.\n>\n> 4. Once you have found out the address of the context you care\n> about, make the breakpoint conditional on the context argument\n> being that one. It might look like this:\n>\n> Breakpoint 1, AllocSetAlloc (context=0x1483be0, size=480) at aset.c:715\n> 715 {\n> (gdb) p *context\n> $1 = {type = T_AllocSetContext, isReset = false, allowInCritSection =\n> false,\n> methods = 0xa33f40, parent = 0x0, firstchild = 0x1537f30, prevchild =\n> 0x0,\n> nextchild = 0x0, name = 0xa3483f \"TopMemoryContext\", ident = 0x0,\n> reset_cbs = 0x0}\n> (gdb) cond 1 context == 0x1483be0\n>\n> 5. Now repeatedly \"c\", and check the stack trace each time, for a\n> dozen or two times to get a feeling for where the allocations are\n> being requested.\n>\n> In some cases you might be able to find the context address in a\n> more efficient way than what I suggest in #3 --- for instance,\n> you could instead set a breakpoint where the context is created\n> and snag its address immediately, or you could dig around in\n> backend data structures to find it. But these ways generally\n> require more familiarity with the code than just watching the\n> requests go by.\n>\n\n\nThanks for the recipe. I can use gdb at all, just not very skillfully :)\n\nWith that as a starting point, experimentally, this seems to work to short\ncircuit the loop described in your step 3 (which I fear could be thousands\nof iterations in some situations):\n\ncond 1 strcmp(context.name,\"ExecutorState\")==0\n\nAlso, I've found that in the last few versions of PostgreSQL, processes\nmight get unreasonable numbers of SIGUSR1 (maybe due to parallelization?)\nand so to avoid having to stand on the 'c' button, you might need this:\n\nhandle SIGUSR1 noprint nostop\n\nCheers,\n\nJeff\n\nOn Mon, Apr 15, 2019 at 11:28 AM Tom Lane <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> To get it to happen faster, maybe you could run the server with a small\n> setting of \"ulimit -v\"? 
Or, you could try to capture it live in gdb.\n> Unfortunately I don't know how to set a breakpoint for allocations into a\n> specific context, and setting a breakpoint for any memory allocation is\n> probably going to fire too often to be useful.\n\nIf you can use gdb at all, it's not that hard to break on allocations\ninto a specific context; I've done it many times. The strategy is\nbasically\n1. Let query run long enough for memory usage to start increasing,\nthen attach to backend with gdb.\n\n2. Set breakpoint at, probably, AllocSetAlloc. (In some cases,\nreallocs could be the problem, but I doubt it here.) Then \"c\".\n\n3. When it stops, \"p *context\" and see if this is the context\nyou're looking for. In this case, since we want to know about\nallocations into ExecutorState and we know there's only one\nactive one, you just have to look at the context name. In general\nyou might have to look at the backtrace. Anyway, if it isn't the\none you want, just \"c\" until you get to an allocation into the\none you do want.\n\n4. Once you have found out the address of the context you care\nabout, make the breakpoint conditional on the context argument\nbeing that one. It might look like this:\n\nBreakpoint 1, AllocSetAlloc (context=0x1483be0, size=480) at aset.c:715\n715 {\n(gdb) p *context\n$1 = {type = T_AllocSetContext, isReset = false, allowInCritSection = false, \n methods = 0xa33f40, parent = 0x0, firstchild = 0x1537f30, prevchild = 0x0, \n nextchild = 0x0, name = 0xa3483f \"TopMemoryContext\", ident = 0x0, \n reset_cbs = 0x0}\n(gdb) cond 1 context == 0x1483be0\n\n5. Now repeatedly \"c\", and check the stack trace each time, for a\ndozen or two times to get a feeling for where the allocations are\nbeing requested.\n\nIn some cases you might be able to find the context address in a\nmore efficient way than what I suggest in #3 --- for instance,\nyou could instead set a breakpoint where the context is created\nand snag its address immediately, or you could dig around in\nbackend data structures to find it. But these ways generally\nrequire more familiarity with the code than just watching the\nrequests go by.Thanks for the recipe. I can use gdb at all, just not very skillfully :) With that as a starting point, experimentally, this seems to work to short circuit the loop described in your step 3 (which I fear could be thousands of iterations in some situations):cond 1 strcmp(context.name,\"ExecutorState\")==0Also, I've found that in the last few versions of PostgreSQL, processes might get unreasonable numbers of SIGUSR1 (maybe due to parallelization?) and so to avoid having to stand on the 'c' button, you might need this: handle SIGUSR1 noprint nostop Cheers,Jeff",
"msg_date": "Mon, 15 Apr 2019 12:51:56 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
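Both of those tweaks can also be handed to gdb on the command line so the session starts in the right state; a sketch only, with $PID again standing for the backend's process id:

gdb -p $PID \
    -ex 'handle SIGUSR1 noprint nostop' \
    -ex 'break AllocSetAlloc if strcmp(context->name, "ExecutorState") == 0' \
    -ex 'continue'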
{
"msg_contents": "Gunther <[email protected]> writes:\n> Now I give you a bt so we have something to look at:\n\n> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715\n> #1 0x000000000084e6cd in palloc (size=385) at mcxt.c:938\n> #2 0x000000000061019c in ExecHashJoinGetSavedTuple (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,\n> tupleSlot=0x10856b8, hjstate=0x11688e0) at nodeHashjoin.c:1277\n\nI'm pretty sure that's not the droid we're looking for.\nExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately\nsticks it into a TupleTableSlot that will be responsible for freeing\nit (when the next tuple is stuck into the same slot). I'd suggest\ncontinuing a few times and looking for other code paths leading\nto AllocSetAlloc in this context.\n\nMy first thought on noticing the SELECT DISTINCT was that you might be\nhitting the grouping-function-related leak that Andres fixed in 9cf37a527;\nbut that fix did make it into 11.2 (by just a couple of days...). Still,\nmaybe there's another issue in the same area.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:32:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 2019-Apr-15, Gunther wrote:\n\n> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715\n> #1 0x000000000084e6cd in palloc (size=385) at mcxt.c:938\n> #2 0x000000000061019c in ExecHashJoinGetSavedTuple (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,\n> tupleSlot=0x10856b8, hjstate=0x11688e0) at nodeHashjoin.c:1277\n> #3 0x0000000000610c83 in ExecHashJoinNewBatch (hjstate=0x11688e0) at nodeHashjoin.c:1042\n\nSeems that ExecHashJoinGetSavedTuple stores a minimalTuple and sets the\nshouldFree flag to \"true\", and then in ExecHashJoinNewBatch, callee\nExecFetchSlotMinimalTuple sets shouldFree to false inconditionally when\nthe slot uses minimal tuple ops. Maybe that's correct, but it does\nsound like a memory leak is not entirely impossible. I wonder if this\nfixes it, without causing crashes elsewhere.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 15 Apr 2019 13:38:48 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Seems that ExecHashJoinGetSavedTuple stores a minimalTuple and sets the\n> shouldFree flag to \"true\", and then in ExecHashJoinNewBatch, callee\n> ExecFetchSlotMinimalTuple sets shouldFree to false inconditionally when\n> the slot uses minimal tuple ops. Maybe that's correct, but it does\n> sound like a memory leak is not entirely impossible. I wonder if this\n> fixes it, without causing crashes elsewhere.\n\nThis discussion is about v11, not HEAD. Still, I agree that that\ncoding in HEAD seems a bit fishy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:45:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 12:34 PM Gunther <[email protected]> wrote:\n\n> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715\n> 715 {\n> (gdb) p context->name\n> $8 = 0x96ce5b \"ExecutorState\"\n>\n>\nI think that the above one might have been the one you wanted.\n\n\n> I guess I should run this for a little longer. So I disable my breakpoints\n>\n>\nit went up pretty quick from 1.2 GB to 1.5 GB, but then it stopped growing\n> fast, so now back to gdb and break:\n>\nUnfortunately, I think this means you missed your opportunity and are now\ngetting backtraces of the innocent bystanders.\n\nParticularly since you report that the version using nested loops rather\nthan hash joins also leaked, so it is probably not the hash-join specific\ncode that is doing it.\n\nWhat I've done before is compile with the comments removed from\nsrc/backend/utils/mmgr/aset.c:/* #define HAVE_ALLOCINFO */\n\nand then look for allocations sizes which are getting allocated but not\nfreed, and then you can go back to gdb to look for allocations of those\nspecific sizes. This generates a massive amount of output, and it bypasses\nthe logging configuration and goes directly to stderr--so it might not end\nup where you expect.\n\n\nThanks for the view definition. Nothing in it stood out to me as risky.\n\nCheers,\n\nJeff\n\nOn Mon, Apr 15, 2019 at 12:34 PM Gunther <[email protected]> wrote:Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715\n715 {\n(gdb) p context->name\n$8 = 0x96ce5b \"ExecutorState\"\n\nI think that the above one might have been the one you wanted. I guess I should run\n this for a little longer. So I disable my breakpoints it went up pretty quick\n from 1.2 GB to 1.5 GB, but then it stopped growing fast, so now\n back to gdb and break:Unfortunately, I think this means you missed your opportunity and are now getting backtraces of the innocent bystanders.Particularly since you report that the version using nested loops rather than hash joins also leaked, so it is probably not the hash-join specific code that is doing it.What I've done before is compile with the comments removed from src/backend/utils/mmgr/aset.c:/* #define HAVE_ALLOCINFO */ and then look for allocations sizes which are getting allocated but not freed, and then you can go back to gdb to look for allocations of those specific sizes. This generates a massive amount of output, and it bypasses the logging configuration and goes directly to stderr--so it might not end up where you expect.Thanks for the view definition. Nothing in it stood out to me as risky.Cheers,Jeff",
"msg_date": "Mon, 15 Apr 2019 14:14:07 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
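An untested alternative to rebuilding with HAVE_ALLOCINFO is gdb's dprintf, which can emit a similar per-allocation trace without recompiling; like the recompile approach it is very slow and very verbose, and this is only a sketch:

# trace-allocs.gdb -- load with: (gdb) source trace-allocs.gdb
dprintf AllocSetAlloc, "alloc %s %lu\n", context->name, (unsigned long) size
# $bpnum is the dprintf just created; restrict it to the suspect context
condition $bpnum strcmp(context->name, "ExecutorState") == 0
continue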
{
"msg_contents": "Wow, we are getting somewhere.\n\nTom (BTW, your mail server rejects my direct mail, but I'm glad you got \nit through the list), you say:\n\n> I'm pretty sure that's not the droid we're looking for.\n> ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately\n> sticks it into a TupleTableSlot that will be responsible for freeing\n> it (when the next tuple is stuck into the same slot). I'd suggest\n> continuing a few times and looking for other code paths leading\n> to AllocSetAlloc in this context.\n\nI did continue a \"few times\", but few as in a dozen, it's always the same\n\n(gdb) bt 6\n#0 AllocSetAlloc (context=0x1168230, size=375) at aset.c:715\n#1 0x000000000084e6cd in palloc (size=375) at mcxt.c:938\n#2 0x000000000061019c in ExecHashJoinGetSavedTuple (file=file@entry=0x21df688, hashvalue=hashvalue@entry=0x7fff2e4ca76c,\n tupleSlot=0x10856b8, hjstate=0x11688e0) at nodeHashjoin.c:1277\n#3 0x0000000000610c83 in ExecHashJoinNewBatch (hjstate=0x11688e0) at nodeHashjoin.c:1042\n#4 ExecHashJoinImpl (parallel=false, pstate=0x11688e0) at nodeHashjoin.c:539\n#5 ExecHashJoin (pstate=0x11688e0) at nodeHashjoin.c:565\n(More stack frames follow...)\n\nSo I decided to just let it go until it exits the ExecHashJoin function:\n\n#6 0x00000000005fde68 in ExecProcNodeInstr (node=0x11688e0) at execProcnode.c:461\n461 result = node->ExecProcNodeReal(node);\n(gdb) list\n456 {\n457 TupleTableSlot *result;\n458\n459 InstrStartNode(node->instrument);\n460\n461 result = node->ExecProcNodeReal(node);\n462\n463 InstrStopNode(node->instrument, TupIsNull(result) ? 0.0 : 1.0);\n464\n465 return result;\n(gdb) break 463\nBreakpoint 3 at 0x5fde68: file execProcnode.c, line 463.\n(gdb) disable 2\n(gdb) cont\nContinuing.\n\nBreakpoint 3, ExecProcNodeInstr (node=0x11688e0) at execProcnode.c:463\n463 InstrStopNode(node->instrument, TupIsNull(result) ? 0.0 : 1.0);\n\noops, that was fast, so up further ...\n\n(gdb) cont\nContinuing.\n\nBreakpoint 4, ExecSort (pstate=0x11687d0) at nodeSort.c:109\n109 if (TupIsNull(slot))\n(gdb) cont\nContinuing.\n\nBreakpoint 3, ExecProcNodeInstr (node=0x11688e0) at execProcnode.c:463\n463 InstrStopNode(node->instrument, TupIsNull(result) ? 0.0 : 1.0);\n(gdb) cont\nContinuing.\n\nBreakpoint 4, ExecSort (pstate=0x11687d0) at nodeSort.c:109\n109 if (TupIsNull(slot))\n(gdb) up\n#1 0x00000000005fde68 in ExecProcNodeInstr (node=0x11687d0) at execProcnode.c:461\n461 result = node->ExecProcNodeReal(node);\n(gdb) up\n#2 0x000000000061d2c4 in ExecProcNode (node=0x11687d0) at ../../../src/include/executor/executor.h:247\n247 return node->ExecProcNode(node);\n(gdb) up\n#3 ExecUnique (pstate=0x11685e0) at nodeUnique.c:73\n73 slot = ExecProcNode(outerPlan);\n(gdb) list\n68 for (;;)\n69 {\n70 /*\n71 * fetch a tuple from the outer subplan\n72 */\n73 slot = ExecProcNode(outerPlan);\n74 if (TupIsNull(slot))\n75 {\n76 /* end of subplan, so we're done */\n77 ExecClearTuple(resultTupleSlot);\n\n... but whatever I do, ultimately I get to that allocation routine \nthrough the same path.\n\nSince that is the bulk of the activity, and memory was still growing \nwhile we come through this path, I assume that this is it.\n\n> My first thought on noticing the SELECT DISTINCT was that you might be\n> hitting the grouping-function-related leak that Andres fixed in 9cf37a527;\n> but that fix did make it into 11.2 (by just a couple of days...). 
Still,\n> maybe there's another issue in the same area.\n\nI don't know about that one, I only know that I am running 11.2 freshly \ncompiled.\n\nThe change suggested by Alvaro Herrera wasn't applicable.\n\nJeff Janes had more\n\n> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715\n> 715 {\n> (gdb) p context->name\n> $8 = 0x96ce5b \"ExecutorState\"\n>\n>\n> I think that the above one might have been the one you wanted.\nNot sure how you could tell that? It's the same place as everything \nelse. If we can find out what you're looking for, may be we can set a \nbreak point earlier up the call chain?\n>\n> I guess I should run this for a little longer. So I disable my\n> breakpoints\n>\n> it went up pretty quick from 1.2 GB to 1.5 GB, but then it stopped\n> growing fast, so now back to gdb and break:\n>\n> Unfortunately, I think this means you missed your opportunity and are \n> now getting backtraces of the innocent bystanders.\nBut why? If I see the memory still go up insanely fast, isn't that a \nsign for the leak?\n> Particularly since you report that the version using nested loops \n> rather than hash joins also leaked, so it is probably not the \n> hash-join specific code that is doing it.\nHow about it's in the DISTINCT? I noticed while peeking up the call \nchain, that it was already in the UNIQUE sort thing also. I guess it's \nstreaming the results from the hash join right into the unique sort step.\n> What I've done before is compile with the comments removed from\n> src/backend/utils/mmgr/aset.c:/* #define HAVE_ALLOCINFO */\nI have just done that and it creates an insane amount of output from all \nthe processes, I'm afraid there will be no way to keep that stuff \nseparated. If there was a way of turning that one and off for one \nprocess only, then we could probably get more info...\n\nEverything is also extremely slow that way. Like in a half hour the \nmemory didn't even reach 100 MB.\n\n> and then look for allocations sizes which are getting allocated but \n> not freed, and then you can go back to gdb to look for allocations of \n> those specific sizes.\nI guess I should look for both, address and size to match it better.\n> This generates a massive amount of output, and it bypasses the logging \n> configuration and goes directly to stderr--so it might not end up \n> where you expect.\nYes, massive, like I said. Impossible to use. File system fills up \nrapidly. I made it so that it can be turned on and off, with the debugger.\n\nint _alloc_info = 0;\n#ifdef HAVE_ALLOCINFO\n#define AllocFreeInfo(_cxt, _chunk) \\\n if(_alloc_info) \\\n fprintf(stderr, \"AllocFree: %s: %p, %zu\\n\", \\\n (_cxt)->header.name, (_chunk), (_chunk)->size)\n#define AllocAllocInfo(_cxt, _chunk) \\\n if(_alloc_info) \\\n fprintf(stderr, \"AllocAlloc: %s: %p, %zu\\n\", \\\n (_cxt)->header.name, (_chunk), (_chunk)->size)\n#else\n#define AllocFreeInfo(_cxt, _chunk)\n#define AllocAllocInfo(_cxt, _chunk)\n#endif\n\nso with this I do\n\n(gdb) b AllocSetAlloc\n(gdb) cont\n(gdb) set _alloc_info=1\n(gdb) disable\n(gdb) cont\n\nthen I wait, ... until it crashes again ... no, it's too much. It fills \nup my filesystem in no time with the logs. It produced 3 GB in just a \nminute of run time.\n\nAnd also, I doubt we can find anything specifically by allocation size. 
\nIt's just going to be 512 or whatever.\n\nIsn't there some other way?\n\nI'm going to try without that DISTINCT step, or perhaps by dismantling \nthis query until it works without this excessive memory growth.\n\n-Gunther",
"msg_date": "Mon, 15 Apr 2019 21:49:50 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
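A lighter-weight way to get a per-allocation trace without recompiling with HAVE_ALLOCINFO, and only for the one attached backend, is gdb's dprintf command, which prints and keeps running instead of stopping. The sketch below is a gdb command file, not a tested recipe: the aset.c line number and the context->name test are taken from the transcripts in this thread, the log path is arbitrary, and the condition-on-$bpnum step may need adjusting for a particular gdb version.

# route gdb's own output (including dprintf output) to a file
set pagination off
set logging file /tmp/executorstate-allocs.log
set logging redirect on
set logging on
# print context name and request size at the top of AllocSetAlloc, without stopping
dprintf aset.c:718,"AllocAlloc: %s: %lu\n", context->name, (unsigned long) size
# restrict it to the suspect context so the log stays manageable
condition $bpnum (int)strcmp(context->name, "ExecutorState") == 0
continue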
{
"msg_contents": "Gunther <[email protected]> writes:\n> Tom (BTW, your mail server rejects my direct mail,\n\n[ raised eyebrow ] It's coming through fine AFAICS.\n\n>> I'm pretty sure that's not the droid we're looking for.\n>> ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately\n>> sticks it into a TupleTableSlot that will be responsible for freeing\n>> it (when the next tuple is stuck into the same slot).\n\n> I did continue a \"few times\", but few as in a dozen, it's always the same\n\nWell, I still don't believe that ExecHashJoinGetSavedTuple is the issue.\nIt has a mechanism for freeing the allocation at the right time, and\nif that were broken then all hash joins would be leaking. It's easy\nto prove that that's not so, both by experiment and by the lack of\nother reports.\n\nIt's barely conceivable that in your particular query, there's something\nacting to break that which doesn't manifest typically; but I think it's\nmuch more likely that you simply haven't found the culprit allocation.\nIt's quite feasible that many many ExecHashJoinGetSavedTuple calls would\ngo by in between problem allocations.\n\nAnother line of thought is that maybe the problem is with realloc'ing\nsomething larger and larger? You could try trapping AllocSetRealloc\nto see.\n\n(BTW, it looks like we *do* have a leak with simple hash joins in\nHEAD. But not v11.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:28:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
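Tom's realloc idea translates into the same kind of conditional breakpoint that was used on AllocSetAlloc earlier in the thread. A sketch, assuming AllocSetRealloc in this build takes the context as its first parameter (worth confirming against aset.c before trusting it):

# stop only when the suspect context is grown through repalloc,
# log a short backtrace, then keep the query running
break AllocSetRealloc if (int)strcmp(context->name, "ExecutorState") == 0
commands
silent
bt 6
continue
end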
{
"msg_contents": "On 2019-Apr-15, Tom Lane wrote:\n\n> It's barely conceivable that in your particular query, there's something\n> acting to break that which doesn't manifest typically; but I think it's\n> much more likely that you simply haven't found the culprit allocation.\n> It's quite feasible that many many ExecHashJoinGetSavedTuple calls would\n> go by in between problem allocations.\n\nA possibly useful thing to do is use \"commands\" in gdb to print out a\nstack trace for each allocation that touches the problem memory context\nand collect them into some output, then classify the allocations based\non the stack trace on each. No need to do it manually.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:39:12 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
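That classification does not have to be done by hand at the gdb prompt: the conditional breakpoint from the transcripts above can be combined with gdb's commands block and logging, so the backtraces accumulate in a file while the query runs. A rough sketch (log path arbitrary, condition copied from earlier in the thread); note it will slow the backend down considerably, just as the interactive conditional breakpoints already did:

# collect a backtrace for every ExecutorState allocation into a log file
set pagination off
set logging file /tmp/executorstate-backtraces.log
set logging redirect on
set logging on
break AllocSetAlloc if (int)strcmp(context->name, "ExecutorState") == 0
commands
silent
bt 8
continue
end
continue

The resulting file can then be grouped by the innermost few frames (for example with sort and uniq -c) to see which call paths account for most of the allocations.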
{
"msg_contents": "On 4/15/2019 21:49, Gunther wrote:\n>\n> I'm going to try without that DISTINCT step, or perhaps by dismantling \n> this query until it works without this excessive memory growth.\n>\nIt also failed. Out of memory. The resident memory size of the backend \nwas 1.5 GB before it crashed.\n\nTopMemoryContext: 4335600 total in 8 blocks; 41208 free (16 chunks); \n4294392 used HandleParallelMessages: 8192 total in 1 blocks; 7936 free \n(0 chunks); 256 used TableSpace cache: 8192 total in 1 blocks; 2096 free \n(0 chunks); 6096 used Type information cache: 24352 total in 2 blocks; \n2624 free (0 chunks); 21728 used Operator lookup cache: 24576 total in 2 \nblocks; 10760 free (3 chunks); 13816 used pgstat TabStatusArray lookup \nhash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used \nTopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); \n2776 used RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 \nchunks); 1296 used MessageContext: 524288 total in 7 blocks; 186848 free \n(7 chunks); 337440 used Operator class cache: 8192 total in 1 blocks; \n560 free (0 chunks); 7632 used smgr relation table: 32768 total in 3 \nblocks; 16832 free (8 chunks); 15936 used TransactionAbortContext: 32768 \ntotal in 1 blocks; 32512 free (0 chunks); 256 used Portal hash: 8192 \ntotal in 1 blocks; 560 free (0 chunks); 7632 used TopPortalContext: 8192 \ntotal in 1 blocks; 7664 free (0 chunks); 528 used PortalHoldContext: \n24632 total in 2 blocks; 7392 free (0 chunks); 17240 used PortalContext: \n1105920 total in 138 blocks; 10368 free (8 chunks); 1095552 used: \nExecutorState: 2238648944 total in 266772 blocks; 3726944 free (16276 \nchunks); 2234922000 used HashTableContext: 16384 total in 2 blocks; 4032 \nfree (5 chunks); 12352 used HashBatchContext: 8192 total in 1 blocks; \n7936 free (0 chunks); 256 used HashTableContext: 8192 total in 1 blocks; \n7320 free (0 chunks); 872 used HashBatchContext: 8192 total in 1 blocks; \n7936 free (0 chunks); 256 used HashTableContext: 8192 total in 1 blocks; \n7320 free (0 chunks); 872 used HashBatchContext: 8192 total in 1 blocks; \n7936 free (0 chunks); 256 used HashTableContext: 8192 total in 1 blocks; \n7752 free (0 chunks); 440 used HashBatchContext: 90288 total in 4 \nblocks; 16072 free (6 chunks); 74216 used HashTableContext: 8192 total \nin 1 blocks; 7624 free (0 chunks); 568 used HashBatchContext: 90288 \ntotal in 4 blocks; 16072 free (6 chunks); 74216 used TupleSort main: \n286912 total in 8 blocks; 246792 free (39 chunks); 40120 used TupleSort \nmain: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nHashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); \n8389408 used HashBatchContext: 66935744 total in 2037 blocks; 7936 free \n(0 chunks); 66927808 used ExprContext: 8192 total in 1 blocks; 7936 free \n(0 chunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 
\nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used Relcache by OID: 16384 total in 2 blocks; 3512 free (2 \nchunks); 12872 used CacheMemoryContext: 1101328 total in 14 blocks; \n383480 free (0 chunks); 717848 used index info: 2048 total in 2 blocks; \n680 free (1 chunks); 1368 used: pg_toast_2619_index index info: 2048 \ntotal in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx index \ninfo: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: \nentity_id_idx index info: 2048 total in 2 blocks; 968 free (1 chunks); \n1080 used: act_id_fkidx index info: 2048 total in 2 blocks; 696 free (1 \nchunks); 1352 used: act_id_idx index info: 2048 total in 2 blocks; 592 \nfree (1 chunks); 1456 used: \npg_constraint_conrelid_contypid_conname_index index info: 2048 total in \n2 blocks; 952 free (1 chunks); 1096 used: actrelationship_pkey index \ninfo: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: \nactrelationship_target_idx index info: 2048 total in 2 blocks; 624 free \n(1 chunks); 1424 used: actrelationship_source_idx index info: 2048 total \nin 2 blocks; 680 free (1 chunks); 1368 used: documentinformation_pk \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_statistic_ext_relid_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: docinfsubj_ndx_seii index info: 2048 total \nin 2 blocks; 952 free (1 chunks); 1096 used: \ndocinfsubj_ndx_sbjentcodeonly index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_index_indrelid_index index info: 2048 \ntotal in 2 blocks; 648 free (2 chunks); 1400 used: \npg_db_role_setting_databaseid_rol_index index info: 2048 total in 2 \nblocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index \nindex info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: \npg_foreign_data_wrapper_name_index index info: 1024 total in 1 blocks; \n48 free (0 chunks); 976 used: pg_enum_oid_index index info: 2048 total \nin 2 blocks; 680 free (2 chunks); 1368 used: pg_class_relname_nsp_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_foreign_server_oid_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_publication_pubname_index index info: 2048 \ntotal in 2 blocks; 592 free (3 chunks); 1456 used: \npg_statistic_relid_att_inh_index index info: 2048 total in 2 blocks; 680 \nfree (2 chunks); 1368 used: pg_cast_source_target_index index info: 1024 \ntotal in 1 blocks; 48 
free (0 chunks); 976 used: pg_language_name_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_transform_oid_index index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: pg_collation_oid_index index info: 3072 total in 2 \nblocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index index \ninfo: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \npg_index_indexrelid_index index info: 2048 total in 2 blocks; 760 free \n(2 chunks); 1288 used: pg_ts_template_tmplname_index index info: 2048 \ntotal in 2 blocks; 704 free (3 chunks); 1344 used: \npg_ts_config_map_index index info: 2048 total in 2 blocks; 952 free (1 \nchunks); 1096 used: pg_opclass_oid_index index info: 1024 total in 1 \nblocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_event_trigger_evtname_index index info: 2048 total in 2 blocks; 760 \nfree (2 chunks); 1288 used: pg_statistic_ext_name_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: \npg_publication_oid_index index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: pg_ts_dict_oid_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index index \ninfo: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: \npg_conversion_default_index index info: 3072 total in 2 blocks; 1136 \nfree (2 chunks); 1936 used: pg_operator_oprname_l_r_n_index index info: \n2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \npg_trigger_tgrelid_tgname_index index info: 2048 total in 2 blocks; 760 \nfree (2 chunks); 1288 used: pg_enum_typid_label_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_user_mapping_oid_index index info: 2048 total in 2 blocks; 704 free \n(3 chunks); 1344 used: pg_opfamily_am_name_nsp_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: \npg_foreign_table_relid_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_type_oid_index index info: 1024 total in \n1 blocks; 48 free (0 chunks); 976 used: pg_aggregate_fnoid_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_constraint_oid_index index info: 2048 total in 2 blocks; 760 free (2 \nchunks); 1288 used: pg_rewrite_rel_rulename_index index info: 2048 total \nin 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index \nindex info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_ts_config_cfgname_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_ts_parser_oid_index index info: 2048 total in 2 \nblocks; 728 free (1 chunks); 1320 used: \npg_publication_rel_prrelid_prpubid_index index info: 2048 total in 2 \nblocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index index \ninfo: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \npg_namespace_nspname_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_ts_template_oid_index index info: 2048 total in \n2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index index \ninfo: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: \npg_default_acl_role_nsp_obj_index index info: 2048 total in 2 blocks; \n704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_publication_rel_oid_index index info: 
2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_range_rngtypid_index index info: 2048 \ntotal in 2 blocks; 760 free (2 chunks); 1288 used: \npg_ts_dict_dictname_index index info: 2048 total in 2 blocks; 760 free \n(2 chunks); 1288 used: pg_type_typname_nsp_index index info: 1024 total \nin 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_statistic_ext_oid_index index info: 2048 total in 2 blocks; 952 free \n(1 chunks); 1096 used: pg_class_oid_index index info: 2048 total in 2 \nblocks; 704 free (3 chunks); 1344 used: pg_proc_proname_args_nsp_index \nindex info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: \npg_partitioned_table_partrelid_index index info: 2048 total in 2 blocks; \n760 free (2 chunks); 1288 used: pg_transform_type_lang_index index info: \n2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \npg_attribute_relid_attnum_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_proc_oid_index index info: 1024 total in \n1 blocks; 48 free (0 chunks); 976 used: pg_language_oid_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_namespace_oid_index index info: 3072 total in 2 blocks; 1136 free (2 \nchunks); 1936 used: pg_amproc_fam_proc_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index index \ninfo: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_attribute_relid_attnam_index index info: 1024 total in 1 blocks; 48 \nfree (0 chunks); 976 used: pg_conversion_oid_index index info: 2048 \ntotal in 2 blocks; 728 free (1 chunks); 1320 used: \npg_user_mapping_user_server_index index info: 2048 total in 2 blocks; \n728 free (1 chunks); 1320 used: \npg_subscription_rel_srrelid_srsubid_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index index \ninfo: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_conversion_name_nsp_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_authid_oid_index index info: 2048 total \nin 2 blocks; 728 free (1 chunks); 1320 used: \npg_auth_members_member_role_index index info: 1024 total in 1 blocks; 48 \nfree (0 chunks); 976 used: pg_subscription_oid_index index info: 2048 \ntotal in 2 blocks; 952 free (1 chunks); 1096 used: \npg_tablespace_oid_index index info: 2048 total in 2 blocks; 704 free (3 \nchunks); 1344 used: pg_shseclabel_object_index index info: 1024 total in \n1 blocks; 16 free (0 chunks); 1008 used: \npg_replication_origin_roname_index index info: 2048 total in 2 blocks; \n952 free (1 chunks); 1096 used: pg_database_datname_index index info: \n2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_subscription_subname_index index info: 1024 total in 1 blocks; 16 \nfree (0 chunks); 1008 used: pg_replication_origin_roiident_index index \ninfo: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: \npg_auth_members_role_member_index index info: 2048 total in 2 blocks; \n952 free (1 chunks); 1096 used: pg_database_oid_index index info: 2048 \ntotal in 2 blocks; 952 free (1 chunks); 1096 used: \npg_authid_rolname_index WAL record construction: 49768 total in 2 \nblocks; 6368 free (0 chunks); 43400 used PrivateRefCount: 8192 total in \n1 blocks; 2624 free (0 chunks); 5568 used MdSmgr: 8192 total in 1 \nblocks; 7208 free (1 chunks); 984 used LOCALLOCK hash: 16384 total in 2 \nblocks; 4600 free (2 chunks); 11784 used Timezones: 104120 total in 
2 \nblocks; 2624 free (0 chunks); 101496 used ErrorContext: 8192 total in 1 \nblocks; 7936 free (5 chunks); 256 used Grand total: 2322733048 bytes in \n269225 blocks; 5406896 free (16556 chunks); 2317326152 used\n\nSo what I am wondering now, is there seems to be an EXPLOSION of memory \nconsumption near the time of the crash. That ExecutorState has \n2,238,648,944 but just until the very last second(s) the RES memory as \nper top was 1.5 GB I swear. I looked at it. It went like this:\n\n1.5 GB for a very long time\n1.1 GB -- and I thought, yeah! it worked! it's shrinking now\nand then it was gone, and there was the memory error.\n\nSo how can top tell me 1.5 GB while here the ExecutorState allocations \nalone have 2 GB???\n\nAnd isn't even 1.5 GB way too big?\n\nIs there a way of dumping that memory map info during normal runtime, by \ncalling a function with the debugger? So one can see how it grows? It's \nlike checking out memory leaks with Java where I keep looking at the \nheap_info summary. Tom Lane said that this ExecutorState should not \ngrow to anything like this size, right?\n\n-Gunther\n\n\n\n\n\n\n\nOn 4/15/2019 21:49, Gunther wrote:\n\n\n\nI'm going to try without that DISTINCT step, or perhaps by\n dismantling this query until it works without this excessive\n memory growth.\n\nIt also failed. Out of memory. The resident memory size of the\n backend was 1.5 GB before it crashed.\nTopMemoryContext: 4335600 total in 8 blocks; 41208 free (16 chunks); 4294392 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); 2776 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 524288 total in 7 blocks; 186848 free (7 chunks); 337440 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1105920 total in 138 blocks; 10368 free (8 chunks); 1095552 used:\n ExecutorState: 2238648944 total in 266772 blocks; 3726944 free (16276 chunks); 2234922000 used\n HashTableContext: 16384 total in 2 blocks; 4032 free (5 chunks); 12352 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free 
(6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); 8389408 used\n HashBatchContext: 66935744 total in 2037 blocks; 7936 free (0 chunks); 66927808 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 383480 free (0 chunks); 717848 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: act_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: act_id_idx\n index info: 2048 total in 2 blocks; 592 free (1 chunks); 1456 used: pg_constraint_conrelid_contypid_conname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: actrelationship_pkey\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_target_idx\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: actrelationship_source_idx\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: documentinformation_pk\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: docinfsubj_ndx_seii\n index info: 2048 total in 2 blocks; 952 
free (1 chunks); 1096 used: docinfsubj_ndx_sbjentcodeonly\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_db_role_setting_databaseid_rol_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_enum_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_class_relname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_pubname_index\n index info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used: pg_statistic_relid_att_inh_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_cast_source_target_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_transform_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_collation_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indexrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_template_tmplname_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_ts_config_map_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_opclass_oid_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_evtname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_statistic_ext_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_dict_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_conversion_default_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_operator_oprname_l_r_n_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_trigger_tgrelid_tgname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_enum_typid_label_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_user_mapping_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_opfamily_am_name_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_table_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_type_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_aggregate_fnoid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_constraint_oid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_rewrite_rel_rulename_index\n index info: 2048 total in 2 
blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_config_cfgname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_parser_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_publication_rel_prrelid_prpubid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_namespace_nspname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_template_oid_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index\n index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: pg_default_acl_role_nsp_obj_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_rel_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_range_rngtypid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_dict_dictname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_type_typname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_proc_proname_args_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_partitioned_table_partrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_transform_type_lang_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_proc_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_namespace_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amproc_fam_proc_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_attribute_relid_attnam_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_conversion_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_user_mapping_user_server_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_subscription_rel_srrelid_srsubid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_conversion_name_nsp_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_member_role_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_subscription_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_tablespace_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: 
pg_shseclabel_object_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roname_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_datname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_subscription_subname_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_replication_origin_roiident_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_role_member_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_database_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_rolname_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 7208 free (1 chunks); 984 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 2322733048 bytes in 269225 blocks; 5406896 free (16556 chunks); 2317326152 used\n\nSo what I am wondering now, is there seems to be an EXPLOSION of\n memory consumption near the time of the crash. That ExecutorState\n has 2,238,648,944 but just until the very last second(s) the RES\n memory as per top was 1.5 GB I swear. I looked at it. It went like\n this:\n1.5 GB for a very long time\n 1.1 GB -- and I thought, yeah! it worked! it's shrinking now\n and then it was gone, and there was the memory error.\nSo how can top tell me 1.5 GB while here the ExecutorState\n allocations alone have 2 GB???\nAnd isn't even 1.5 GB way too big? \n\nIs there a way of dumping that memory map info during normal\n runtime, by calling a function with the debugger? So one can see\n how it grows? It's like checking out memory leaks with Java where\n I keep looking at the heap_info summary. Tom Lane said that this\n ExecutorState should not grow to anything like this size, right?\n-Gunther",
"msg_date": "Mon, 15 Apr 2019 22:39:16 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Gunther <[email protected]> writes:\n> Is there a way of dumping that memory map info during normal runtime, by \n> calling a function with the debugger?\n\nSure, \"call MemoryContextStats(TopMemoryContext)\"\n\n(or actually, since you know which context is the problematic one,\njust print that one context)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:50:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
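One practical detail when doing this: MemoryContextStats() prints with fprintf to the backend's stderr, so the dump normally lands in the server log (or wherever stderr was redirected), not in the gdb session itself. Repeating the call while the query runs gives a crude growth curve; using the portal context seen in the dumps above:

# from the attached debugger; output appears on the backend's stderr, not here
call MemoryContextStats(TopPortalContext)

Comparing the ExecutorState totals between two such dumps is essentially what the later messages in this thread do.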
{
"msg_contents": "Gunther <[email protected]> writes:\n> So what I am wondering now, is there seems to be an EXPLOSION of memory \n> consumption near the time of the crash. That ExecutorState has \n> 2,238,648,944 but just until the very last second(s) the RES memory as \n> per top was 1.5 GB I swear.\n\nThat's not hugely surprising really, especially in a complex query.\nIt could be performing some preliminary join that doesn't leak, and\nthen when it starts to perform the join that does have the leak,\nkaboom. Also, given that you seem to be invoking multi-batch joins,\nmaybe the preliminary phase is fine and there's only a leak when\nreading back a batch.\n\nAnyway, the upshot is that you need to investigate what's happening\nwhile the memory consumption is increasing. The behavior before\nthat starts to happen isn't going to be very interesting. It might\nbe a bit tricky to catch that if it only takes a few seconds to blow\nup, but you could try \"c 10000\" or so to step through a lot of\nAllocSetAlloc calls, repeating till the bad stuff starts to happen,\nand then going back to looking at just where the calls are coming\nfrom.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 23:03:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
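For reference, the "c 10000" trick and the equivalent ignore-count form look like this (breakpoint 2 stands for whichever AllocSetAlloc breakpoint is active). A useful side effect: while an ignore count is pending, gdb does not evaluate the breakpoint's condition, so the skipping is also much faster than stopping and checking each hit.

# resume, skipping the next ~10000 times the breakpoint we just stopped at is reached
continue 10000
# or set the ignore count explicitly on breakpoint 2, then resume
ignore 2 10000
continue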
{
"msg_contents": "I saw your replies, if there was a way of using gdb commands to have a \nconditional breakpoint which will only fire if the n-th caller in the \nchain is not a certain source location, then one could exclude the bulk \nof these allocations and focus better.\n\nBut I decided I try to re-factor this query. And I made an interesting \nobservation.\n\nThere is a left outer join in parenthesis\n\n... LEFT OUTER JOIN (SELECT ....) q ...\n\nthe biggest parenthesis. I turned this into a temporary table, tmp_bulk. \nThen I change the main query to\n\n... LEFT OUTER JOIN tmp_bulk q ...\n\nnow I am running it again. But what I noticed is that the tmp_bulk table \nis tiny! It only has like 250 rows. So this means the vast majority of \nthe right left rows in that join are unmatched. The plan is all \ndifferent now. Heavy CPU% load. Must be merge sorting? No memory growth, \nnot yet.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5394 postgres 20 0 1284448 287880 271528 R 99.3 3.6 9:21.83 postgres: postgres integrator [local] EXPLAIN\n 5425 postgres 20 0 1278556 93184 82296 S 27.6 1.2 0:38.72 postgres: parallel worker for PID 5394\n\nNo, I never trust when a database job has high CPU% and low IO for a \nlong time. So I do\n\nSET ENABLE_MERGEJOIN TO OFF;\n\nand then do it again. Now I have high IO and low CPU%.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5394 postgres 20 0 1280904 282036 273616 D 2.3 3.6 13:01.51 postgres: postgres integrator [local] EXPLAIN\n 5510 postgres 20 0 1278892 87524 80804 D 2.3 1.1 0:05.20 postgres: parallel worker for PID 5394\n 5509 postgres 20 0 1278860 87484 80796 D 2.3 1.1 0:05.30 postgres: parallel worker for PID 5394\n\nStill I slip into the high CPU% situation, I guess I'll have to wait it \nout ...\n\n... and still waiting. No growth whatsoever. The plan is now so totally \ndifferent that it probably won't trigger the problem.\n\nThe original plan that causes the leak involved right joins. This one \nonly left joins. Even after ANALYZE tmp_bulk it still comes up with the \nsame plan. And that plan isn't quick to succeed but also doesn't trigger \nthe memory leak.\n\nSo what I can tell is this: that growth to 1.5 GB is consistently \nhappening. It isn't just happening in the beginning and then the rest is \njust a follow-up problem. Also there seems to be a final spike in growth \nfrom 1.5 GB to 2.2 GB that happens inside a second. That seems very \nstrange.\n\nBack to the debugger and do a better job of conditional breakpoints ... \nI already have an idea how I'll do that. I set a flag when I enter the\n\n> Anyway, the upshot is that you need to investigate what's happening\n> while the memory consumption is increasing. The behavior before\n> that starts to happen isn't going to be very interesting. It might\n> be a bit tricky to catch that if it only takes a few seconds to blow\n> up, but you could try \"c 10000\" or so to step through a lot of\n> AllocSetAlloc calls, repeating till the bad stuff starts to happen,\n> and then going back to looking at just where the calls are coming\n> from.\nIsn't 1.5 GB already way too big? There are 3 phases really.\n\n 1. steady state at less than 500 M\n 2. slow massive growth to 1 G to 1.5 - 1.8 G\n 3. 
explosion within 1 second from whatever the final size of slow\n massive growth to the final 2.2 G\n\nI thought that slow massive growth is already a sign of a leak?\n\nI will now filter the calls that come through ExecHashJoinGetSavedTuple\n\nI figured I can do this:\n\n(gdb) info frame\nStack level 0, frame at 0x7ffcbf92fdd0:\n rip = 0x849030 in AllocSetAlloc (aset.c:718); saved rip = 0x84e7dd\n called by frame at 0x7ffcbf92fdf0\n source language c.\n Arglist at 0x7ffcbf92fdc0, args: context=0x29a6450, size=371\n Locals at 0x7ffcbf92fdc0, Previous frame's sp is 0x7ffcbf92fdd0\n Saved registers:\n rip at 0x7ffcbf92fdc8\n\nso is the saved $rip is 0x84e7dd then we are coming this way. Therefore \nI set my new breakpoint like this:\n\n(gdb) b AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd\nBreakpoint 6 at 0x849030: file aset.c, line 718.\n(gdb) info b\nNum Type Disp Enb Address What\n6 breakpoint keep y 0x0000000000849030 in AllocSetAlloc at aset.c:718\n stop only if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd\n\nAnd there we go:\n\nBreakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n718 {\n(gdb) bt 8\n#0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n#1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n#2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n#3 0x0000000000702e4c in makeBufFile (firstfile=68225) at buffile.c:138\n#4 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n#5 0x000000000061060b in ExecHashJoinSaveTuple (tuple=0x2ba1018, hashvalue=<optimized out>, fileptr=0x6305b00) at nodeHashjoin.c:1220\n#6 0x000000000060d766 in ExecHashTableInsert (hashtable=hashtable@entry=0x2b50ad8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1663\n#7 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x29a6be0) at nodeHashjoin.c:1051\n(More stack frames follow...)\n\nand on\n\n(gdb) info frame\nStack level 0, frame at 0x7ffcbf92fd90:\n rip = 0x849030 in AllocSetAlloc (aset.c:718); saved rip = 0x84e8ad\n called by frame at 0x7ffcbf92fdb0\n source language c.\n Arglist at 0x7ffcbf92fd80, args: context=0x29a6450, size=8\n Locals at 0x7ffcbf92fd80, Previous frame's sp is 0x7ffcbf92fd90\n Saved registers:\n rip at 0x7ffcbf92fd88\n(gdb) b AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd && 0x84e8ad != *(int *)$rsp\nNote: breakpoint 6 also set at pc 0x849030.\nBreakpoint 7 at 0x849030: file aset.c, line 718.\n(gdb) delete 6\n\nNow if I continue I don't seem to be stopping any more.\n\nDoes this help now?\n\n-Gunther\n\n\n\n\n\n\n\nI saw your replies, if there was a way of using gdb commands to\n have a conditional breakpoint which will only fire if the n-th\n caller in the chain is not a certain source location, then one\n could exclude the bulk of these allocations and focus better. \n\nBut I decided I try to re-factor this query. And I made an\n interesting observation. \n\nThere is a left outer join in parenthesis \n\n... LEFT OUTER JOIN (SELECT ....) q ... \n\nthe biggest parenthesis. I turned this into a temporary table,\n tmp_bulk. Then I change the main query to \n\n... LEFT OUTER JOIN tmp_bulk q ...\nnow I am running it again. But what I noticed is that the\n tmp_bulk table is tiny! It only has like 250 rows. So this means\n the vast majority of the right left rows in that join are\n unmatched. The plan is all different now. Heavy CPU% load. 
Must be\n merge sorting? No memory growth, not yet.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5394 postgres 20 0 1284448 287880 271528 R 99.3 3.6 9:21.83 postgres: postgres integrator [local] EXPLAIN\n 5425 postgres 20 0 1278556 93184 82296 S 27.6 1.2 0:38.72 postgres: parallel worker for PID 5394\n\nNo, I never trust when a database job has high CPU% and low IO\n for a long time. So I do\nSET ENABLE_MERGEJOIN TO OFF;\nand then do it again. Now I have high IO and low CPU%. \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5394 postgres 20 0 1280904 282036 273616 D 2.3 3.6 13:01.51 postgres: postgres integrator [local] EXPLAIN\n 5510 postgres 20 0 1278892 87524 80804 D 2.3 1.1 0:05.20 postgres: parallel worker for PID 5394\n 5509 postgres 20 0 1278860 87484 80796 D 2.3 1.1 0:05.30 postgres: parallel worker for PID 5394\n\nStill I slip into the high CPU% situation, I guess I'll have to\n wait it out ...\n... and still waiting. No growth whatsoever. The plan is now so\n totally different that it probably won't trigger the problem.\nThe original plan that causes the leak involved right joins. This\n one only left joins. Even after ANALYZE tmp_bulk it still comes up\n with the same plan. And that plan isn't quick to succeed but also\n doesn't trigger the memory leak.\nSo what I can tell is this: that growth to 1.5 GB is consistently\n happening. It isn't just happening in the beginning and then the\n rest is just a follow-up problem. Also there seems to be a final\n spike in growth from 1.5 GB to 2.2 GB that happens inside a\n second. That seems very strange. \n\nBack to the debugger and do a better job of conditional\n breakpoints ... I already have an idea how I'll do that. I set a\n flag when I enter the \n\n\n\nAnyway, the upshot is that you need to investigate what's happening\nwhile the memory consumption is increasing. The behavior before\nthat starts to happen isn't going to be very interesting. It might\nbe a bit tricky to catch that if it only takes a few seconds to blow\nup, but you could try \"c 10000\" or so to step through a lot of\nAllocSetAlloc calls, repeating till the bad stuff starts to happen,\nand then going back to looking at just where the calls are coming\nfrom.\n\n\n Isn't 1.5 GB already way too big? There are 3 phases really. 
\n \nsteady state at less than 500 M\nslow massive growth to 1 G to 1.5 - 1.8 G\nexplosion within 1 second from whatever the final size of slow\n massive growth to the final 2.2 G \n\n\nI thought that slow massive growth is already a sign of a leak?\nI will now filter the calls that come through\n ExecHashJoinGetSavedTuple \n\nI figured I can do this:\n(gdb) info frame\nStack level 0, frame at 0x7ffcbf92fdd0:\n rip = 0x849030 in AllocSetAlloc (aset.c:718); saved rip = 0x84e7dd\n called by frame at 0x7ffcbf92fdf0\n source language c.\n Arglist at 0x7ffcbf92fdc0, args: context=0x29a6450, size=371\n Locals at 0x7ffcbf92fdc0, Previous frame's sp is 0x7ffcbf92fdd0\n Saved registers:\n rip at 0x7ffcbf92fdc8\n\nso is the saved $rip is 0x84e7dd then we are coming this way.\n Therefore I set my new breakpoint like this:\n\n(gdb) b AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd\nBreakpoint 6 at 0x849030: file aset.c, line 718.\n(gdb) info b\nNum Type Disp Enb Address What\n6 breakpoint keep y 0x0000000000849030 in AllocSetAlloc at aset.c:718\n stop only if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd\nAnd there we go:\nBreakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n718 {\n(gdb) bt 8\n#0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n#1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n#2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n#3 0x0000000000702e4c in makeBufFile (firstfile=68225) at buffile.c:138\n#4 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n#5 0x000000000061060b in ExecHashJoinSaveTuple (tuple=0x2ba1018, hashvalue=<optimized out>, fileptr=0x6305b00) at nodeHashjoin.c:1220\n#6 0x000000000060d766 in ExecHashTableInsert (hashtable=hashtable@entry=0x2b50ad8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1663\n#7 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x29a6be0) at nodeHashjoin.c:1051\n(More stack frames follow...)\n\nand on\n(gdb) info frame\nStack level 0, frame at 0x7ffcbf92fd90:\n rip = 0x849030 in AllocSetAlloc (aset.c:718); saved rip = 0x84e8ad\n called by frame at 0x7ffcbf92fdb0\n source language c.\n Arglist at 0x7ffcbf92fd80, args: context=0x29a6450, size=8\n Locals at 0x7ffcbf92fd80, Previous frame's sp is 0x7ffcbf92fd90\n Saved registers:\n rip at 0x7ffcbf92fd88\n(gdb) b AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd && 0x84e8ad != *(int *)$rsp\nNote: breakpoint 6 also set at pc 0x849030.\nBreakpoint 7 at 0x849030: file aset.c, line 718.\n(gdb) delete 6\n\nNow if I continue I don't seem to be stopping any more.\nDoes this help now?\n-Gunther",
"msg_date": "Tue, 16 Apr 2019 01:23:45 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
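An alternative to comparing the saved return address at *(int *)$rsp by hand is gdb's $_any_caller_is() convenience function, which checks whether a named function appears within the top few stack frames. It needs a gdb built with Python support, it is no faster than the strcmp/$rsp trick, and the frame depth of 6 below is only a guess based on the backtraces above, so treat this as an untested variant of the same filter:

# stop on ExecutorState allocations except those arriving via the two
# hash-join batch-file paths identified in the backtraces above
break AllocSetAlloc if (int)strcmp(context->name, "ExecutorState") == 0 && !$_any_caller_is("ExecHashJoinGetSavedTuple", 6) && !$_any_caller_is("ExecHashJoinSaveTuple", 6)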
{
"msg_contents": "It is confirmed, these two call paths are the only ones. At least \nprobably the only ones to occur with enough of a frequency.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n28576 postgres 20 0 2695304 1.0g 200764 R 11.3 13.8 4:20.13 postgres: postgres integrator [local] EXPLAIN\n28580 postgres 20 0 646616 432784 36968 S 98.7 5.5 8:53.28 gdb -p 28576\n\nthere is a problem with gdb, it also has a memoy leak and is very \nexpensive with the checking of my conditional breakpoint. So I can't run \nit all the way through.\n\nAlso here captured with\n\n(gdb) call MemoryContextStats(TopPortalContext)\n\nTopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 1369337168 total in 163397 blocks; 248840 free (36 chunks); 1369088328 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (10 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 32824 total in 2 blocks; 144 free (0 chunks); 32680 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); 8389408 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n Caller tuples: 4194304 total in 10 blocks; 1496136 free (20 chunks); 2698168 used ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\nGrand total: 1384601904 bytes in 163660 blocks; 2303840 free (145 chunks); 1382298064 used\n\nthere is the developing memory leak. 
Now let's see if we can trace \nindividual increments ...\n\n(gdb) info break\nNum Type Disp Enb Address What\n1 breakpoint keep y 0x0000000000849030 in AllocSetAlloc at aset.c:718\n stop only if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd && 0x84e8ad != *(int *)$rsp\n breakpoint already hit 4 times\n(gdb) delete 1\n(gdb) break AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0 && *(int *)$rsp != 0x84e7dd\nBreakpoint 2 at 0x849030: file aset.c, line 718.\n(gdb) cont\nContinuing.\n^CError in testing breakpoint condition:\nQuit\n\nBreakpoint 2, AllocSetAlloc (context=0x2a1d190, size=381) at aset.c:718\n718 {\n(gdb) bt 4\n#0 AllocSetAlloc (context=0x2a1d190, size=381) at aset.c:718\n#1 0x000000000084e7dd in palloc (size=381) at mcxt.c:938\n#2 0x00000000006101bc in ExecHashJoinGetSavedTuple (file=file@entry=0x4b4a198, hashvalue=hashvalue@entry=0x7ffcbf92fe5c,\n tupleSlot=0x2ae0ab8, hjstate=0x2a1d920) at nodeHashjoin.c:1277\n#3 0x0000000000610ca3 in ExecHashJoinNewBatch (hjstate=0x2a1d920) at nodeHashjoin.c:1042\n(More stack frames follow...)\n(gdb) call MemoryContextStats(TopPortalContext)\n\ndoesn't show an increase of ExecutorState total:\n\nTopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 1369337168 total in 163397 blocks; 248840 free (36 chunks); 1369088328 used\n\nexact same as before:\n\n ExecutorState: 1369337168 total in 163397 blocks; 248840 free (36 chunks); 1369088328 used\n\nbut now we get an increase to:\n\n ExecutorState: 1369345496 total in 163398 blocks; 248840 free (36 chunks); 1369096656 used\n\nafter I did this:\n\n(gdb) cont\nContinuing.\n\nBreakpoint 2, AllocSetAlloc (context=0x2a1d190, size=8) at aset.c:718\n718 {\n(gdb) bt 4\n#0 AllocSetAlloc (context=0x2a1d190, size=8) at aset.c:718\n#1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n#2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n#3 0x0000000000702e4c in makeBufFile (firstfile=163423) at buffile.c:138\n(More stack frames follow...)\n(gdb) call MemoryContextStats(TopPortalContext)\n\nSo now we have it confirmed don't we? No! No we have not! We stop at the \n/entrance /of the allocate method. So when I interrupted, there was no \ncall yet. Then at the next stop the increase was from the previous.\nContinuing ... this now is from a stop at the makeBufFileCommon\n\n ExecutorState: 1369345496 total in 163398 blocks; 248816 free (36 chunks); 1369096680 used\n\nAnd again, now stopped before\n\n ExecutorState: 1369345496 total in 163398 blocks; 248792 free (36 chunks); 1369096704 used\n\n ExecutorState: 1369345496 total in 163398 blocks; 248792 free (36 chunks); 1369096704 used\n\nI don't see a growth between individual invocations. 
Anyway, these are \nthe two ways to get there:\n\n(gdb) bt 4\n#0 AllocSetAlloc (context=0x2a1d190, size=4) at aset.c:718\n#1 0x000000000084e7dd in palloc (size=size@entry=4) at mcxt.c:938\n#2 0x0000000000702e59 in makeBufFile (firstfile=163423) at buffile.c:140\n#3 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n(More stack frames follow...)\n(gdb) cont\nContinuing.\n\nBreakpoint 3, AllocSetAlloc (context=0x2a1d190, size=394) at aset.c:718\n718 {\n(gdb) bt 3\n#0 AllocSetAlloc (context=0x2a1d190, size=394) at aset.c:718\n#1 0x000000000084e7dd in palloc (size=394) at mcxt.c:938\n#2 0x00000000006101bc in ExecHashJoinGetSavedTuple (file=file@entry=0x4b4a198, hashvalue=hashvalue@entry=0x7ffcbf92fe5c,\n tupleSlot=0x2ae0ab8, hjstate=0x2a1d920) at nodeHashjoin.c:1277\n(More stack frames follow...)\n\nBut now it increased\n\n ExecutorState: 1369353824 total in 163399 blocks; 248792 free (36 chunks); 1369105032 used\n\nIt increases every 3 times I stop at the breakpoint.\n\n-Gunther",
"msg_date": "Tue, 16 Apr 2019 02:33:19 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
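Eyeballing successive MemoryContextStats dumps for a few bytes of growth, as in the gdb session above, is easy to get wrong. A small differ makes the comparison mechanical; the following is a minimal sketch (an editorial aid, not something from the thread), assuming the dumps were captured into a plain text file and match the line format quoted above ("ExecutorState: N total in M blocks; ... used"); the file name "stats.txt" and the function name are placeholders.

#!/usr/bin/env python3
# Sketch: report how one memory context grows across successive
# MemoryContextStats dumps captured from gdb.  The file name "stats.txt"
# and the regex are assumptions based on the dump lines quoted above.
import re
import sys

LINE = re.compile(r'^\s*(?P<name>\S.*?): (?P<total>\d+) total in (?P<blocks>\d+) blocks; '
                  r'(?P<free>\d+) free \(\d+ chunks\); (?P<used>\d+) used')

def watch(path, context="ExecutorState"):
    prev = None
    for raw in open(path):
        m = LINE.match(raw)
        if not m or m.group("name") != context:
            continue
        cur = {k: int(m.group(k)) for k in ("total", "blocks", "used")}
        if prev is not None:
            print("delta: total %+d bytes, blocks %+d, used %+d bytes"
                  % (cur["total"] - prev["total"],
                     cur["blocks"] - prev["blocks"],
                     cur["used"] - prev["used"]))
        prev = cur

if __name__ == "__main__":
    watch(sys.argv[1] if len(sys.argv) > 1 else "stats.txt")

Run against a capture of the session, it prints the per-dump growth of ExecutorState instead of leaving the 1369345496-style totals to be compared by hand.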
{
"msg_contents": "Gunther <[email protected]> writes:\n> And there we go:\n\n> Breakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n> 718 {\n> (gdb) bt 8\n> #0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n> #1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n> #2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n> #3 0x0000000000702e4c in makeBufFile (firstfile=68225) at buffile.c:138\n> #4 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n> #5 0x000000000061060b in ExecHashJoinSaveTuple (tuple=0x2ba1018, hashvalue=<optimized out>, fileptr=0x6305b00) at nodeHashjoin.c:1220\n> #6 0x000000000060d766 in ExecHashTableInsert (hashtable=hashtable@entry=0x2b50ad8, slot=<optimized out>, hashvalue=<optimized out>)\n> at nodeHash.c:1663\n> #7 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x29a6be0) at nodeHashjoin.c:1051\n\nHmm ... this matches up with a vague thought I had that for some reason\nthe hash join might be spawning a huge number of separate batches.\nEach batch would have a couple of files with associated in-memory\nstate including an 8K I/O buffer, so you could account for the\n\"slow growth\" behavior you're seeing by periodic decisions to\nincrease the number of batches.\n\nYou might try watching calls to ExecHashIncreaseNumBatches\nand see if that theory holds water.\n\nThis could only happen with a very unfriendly distribution of the\nhash keys, I think. There's a heuristic in there to shut off\ngrowth of nbatch if we observe that we're making no progress at\nall, but perhaps this is a skewed distribution that's not quite\nskewed enough to trigger that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 11:30:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
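A rough back-of-the-envelope check of this theory (an editorial sketch, not a figure from the thread): with the ~8 kB I/O buffer per temporary file mentioned above, a couple of BufFiles per batch, and the 131072 batches that later show up in Gunther's EXPLAIN ANALYZE output, the per-batch file state alone lands in the low gigabytes, the same order of magnitude as the runaway ExecutorState context.

# Hypothetical estimate; the inputs come from this discussion (an ~8 kB buffer
# per BufFile, two temp files per batch) and from the plan quoted later in the
# thread ("Batches: 131072 (originally 2)").
nbatch = 131072
files_per_batch = 2        # one inner and one outer temp file per batch
buf_bytes = 8192           # I/O buffer kept in memory per open BufFile
print(nbatch * files_per_batch * buf_bytes / 2.0**30, "GiB")   # ~2.0 GiB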
{
"msg_contents": "On Mon, Apr 15, 2019 at 9:49 PM Gunther <[email protected]> wrote:\n\n> Jeff Janes had more\n>\n> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715\n>> 715 {\n>> (gdb) p context->name\n>> $8 = 0x96ce5b \"ExecutorState\"\n>>\n>>\n> I think that the above one might have been the one you wanted.\n>\n> Not sure how you could tell that? It's the same place as everything else.\n> If we can find out what you're looking for, may be we can set a break point\n> earlier up the call chain?\n>\n\nIt is just a hunch. That size is similar to one you reported for the\nlast-straw allocation on the case with the nested loops rather than hash\njoins. So it is reasonable (but surely not guaranteed) that they are coming\nfrom the same place in the code. And since that other one occurred as the\nlast straw, then that one must have not be part of the start up\nallocations, but rather the part where we should be at steady state, but\nare not. So maybe this one is too.\n\n\n>\n> I guess I should run this for a little longer. So I disable my breakpoints\n>>\n>>\n> it went up pretty quick from 1.2 GB to 1.5 GB, but then it stopped growing\n>> fast, so now back to gdb and break:\n>>\n> Unfortunately, I think this means you missed your opportunity and are now\n> getting backtraces of the innocent bystanders.\n>\n> But why? If I see the memory still go up insanely fast, isn't that a sign\n> for the leak?\n>\n\nYou said it was no longer going up insanely fast by the time you got your\nbreakpoints re-activated.\n\n> Particularly since you report that the version using nested loops rather\n> than hash joins also leaked, so it is probably not the hash-join specific\n> code that is doing it.\n>\n> How about it's in the DISTINCT? I noticed while peeking up the call chain,\n> that it was already in the UNIQUE sort thing also. I guess it's streaming\n> the results from the hash join right into the unique sort step.\n>\n\nIsn't crashing before any rows are emitted from the hash join? I thought\nit was, but I am not very confident on that.\n\n\n> What I've done before is compile with the comments removed from\n> src/backend/utils/mmgr/aset.c:/* #define HAVE_ALLOCINFO */\n>\n> I have just done that and it creates an insane amount of output from all\n> the processes, I'm afraid there will be no way to keep that stuff\n> separated. If there was a way of turning that one and off for one process\n> only, then we could probably get more info...\n>\n\nAre you doing all this stuff on a production server in use by other people?\n\n> Everything is also extremely slow that way. Like in a half hour the memory\n> didn't even reach 100 MB.\n>\n> and then look for allocations sizes which are getting allocated but not\n> freed, and then you can go back to gdb to look for allocations of those\n> specific sizes.\n>\n> I guess I should look for both, address and size to match it better.\n>\n\nYou can analyze the log for specific addresses which are allocated but not\nfreed, but once you find them you can't do much with them. Those specific\naddresses probably won't repeat on the next run, so, you so can't do\nanything with the knowledge. If you find a request size that is\nsystematically analyzed but not freed, you can condition logging (or gdb)\non that size.\n\n\n> This generates a massive amount of output, and it bypasses the logging\n> configuration and goes directly to stderr--so it might not end up where you\n> expect.\n>\n> Yes, massive, like I said. Impossible to use. File system fills up\n> rapidly. 
I made it so that it can be turned on and off, with the debugger.\n>\n> int _alloc_info = 0;\n> #ifdef HAVE_ALLOCINFO\n> #define AllocFreeInfo(_cxt, _chunk) \\\n> if(_alloc_info) \\\n> fprintf(stderr, \"AllocFree: %s: %p, %zu\\n\", \\\n> (_cxt)->header.name, (_chunk), (_chunk)->size)\n> #define AllocAllocInfo(_cxt, _chunk) \\\n> if(_alloc_info) \\\n> fprintf(stderr, \"AllocAlloc: %s: %p, %zu\\n\", \\\n> (_cxt)->header.name, (_chunk), (_chunk)->size)\n> #else\n> #define AllocFreeInfo(_cxt, _chunk)\n> #define AllocAllocInfo(_cxt, _chunk)\n> #endif\n>\n> so with this I do\n>\n> (gdb) b AllocSetAlloc\n> (gdb) cont\n> (gdb) set _alloc_info=1\n> (gdb) disable\n> (gdb) cont\n>\n>\nThanks for this trick, I'll save this so I can refer back to it if I need\nto do this again some time.\n\n\n> then I wait, ... until it crashes again ... no, it's too much. It fills up\n> my filesystem in no time with the logs. It produced 3 GB in just a minute\n> of run time.\n>\n\nYou don't need to run it until it crashes, only until the preliminaries are\ndone and the leak has started to happen. I would think a minute would be\nmore than enough, unless the leak doesn't start until some other join runs\nto completion or something. You can turn on the logging only after\nExecutorStat has obtained a size which is larger than we think it has any\nreason to be. If you still have log around, you can still analyze it.\n\n\n> And also, I doubt we can find anything specifically by allocation size.\n> It's just going to be 512 or whatever.\n>\n\nFor allocations larger than 4096 bytes, it records the size requested, not\nthe rounded-to-a-power-of-two size. So if the leak is for large\nallocations, you can get considerable specificity here. Even if not, at\nleast you have some more of a clue than you had previously.\n\n\n> Isn't there some other way?\n>\nI wonder of valgrind or something like that could be of use. I don't know\nenough about those tools to know. One problem is that this is not really a\nleak. If the query completely successfully, it would have freed the\nmemory. And when the query completed with an error, it also freed the\nmemory. So it just an inefficiency, not a true leak, and leak-detection\ntools might not work. But as I said, I have not studied them.\n\nAre you sure that you had the memory problem under a plan with no hash\njoins? If so, that would seem to rule out some of the ideas that Tom has\nbeen pondering. Unless there are two bugs.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 16 Apr 2019 15:28:40 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
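The log analysis Jeff describes, looking for request sizes that are allocated but never freed, is straightforward to script. A minimal sketch, assuming the stderr log contains exactly the lines the HAVE_ALLOCINFO macros quoted above produce ("AllocAlloc: <context>: <ptr>, <size>" and "AllocFree: ..."); the log file name is a placeholder.

#!/usr/bin/env python3
# Sketch: tally AllocAlloc/AllocFree lines produced by the HAVE_ALLOCINFO
# macros quoted above and report the (context, size) pairs with the most
# unfreed allocations.  "alloc.log" is an assumed file name.
import sys
from collections import Counter

net = Counter()          # (context, size) -> allocations minus frees

with open(sys.argv[1] if len(sys.argv) > 1 else "alloc.log") as log:
    for line in log:
        parts = line.replace(":", ",").split(",")
        if len(parts) < 4:
            continue
        kind, ctx, _ptr, size = (p.strip() for p in parts[:4])
        if not size.isdigit():
            continue
        key = (ctx, int(size))
        if kind == "AllocAlloc":
            net[key] += 1
        elif kind == "AllocFree":
            net[key] -= 1

for (ctx, size), count in net.most_common(20):
    print("%-20s size=%-8d net=%d" % (ctx, size, count))

Sorting by the net count should make a systematically leaked (context, size) pair stand out even though the individual addresses differ from one run to the next.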
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Mon, Apr 15, 2019 at 9:49 PM Gunther <[email protected]> wrote:\n>> Isn't there some other way?\n\n> I wonder of valgrind or something like that could be of use. I don't know\n> enough about those tools to know. One problem is that this is not really a\n> leak. If the query completely successfully, it would have freed the\n> memory. And when the query completed with an error, it also freed the\n> memory. So it just an inefficiency, not a true leak, and leak-detection\n> tools might not work. But as I said, I have not studied them.\n\nvalgrind is a useful idea, given that Gunther is building his own\npostgres (so he could compile it with -DUSE_VALGRIND + --enable-cassert,\nwhich are needed to get valgrind to understand palloc allocations).\nI don't recall details right now, but it is possible to trigger\na valgrind report intra-session similar to what you get by default\nat process exit. You could wait till the memory has bloated a\ngood deal and then ask for one of those reports that classify\nallocations by call chain (I think you want the memcheck tool for\nthis, not the default valgrind tool).\n\nHowever --- at least for the case involving hash joins, I think we\nhave a decent fix on the problem location already: it seems to be a\nmatter of continually deciding to increase nbatch, and now what we\nneed to investigate is why that's happening.\n\nIf there's a leak that shows up without any hash joins in the plan,\nthen that's a separate matter for investigation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 15:43:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 15/04/2019 08:23, Gunther wrote:\n>\n> For weeks now, I am banging my head at an \"out of memory\" situation. \n> There is only one query I am running on an 8 GB system, whatever I \n> try, I get knocked out on this out of memory. It is extremely \n> impenetrable to understand and fix this error. I guess I could add a \n> swap file, and then I would have to take the penalty of swapping. But \n> how can I actually address an out of memory condition if the system \n> doesn't tell me where it is happening?\n>\n[...]\n\nI strongly suigest having a swap file, I've got 32GB, and I've used \n2.9GB of my swap space after 4 days, but I'm not really pushing my \nsystem. For me, mostly stuff that is only used once, or not at all, is \nswapped out. If you do have a memory leak, then it might be easier to \ndiagnose, if you don't run out on Memory.\n\nI suspect that most things will run a little better with some swap space.\n\n\nCherers,\nGavin\n\n\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 10:39:26 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/16/2019 11:30, Tom Lane wrote:\n>> Breakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n>> 718 {\n>> (gdb) bt 8\n>> #0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n>> #1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n>> #2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n>> #3 0x0000000000702e4c in makeBufFile (firstfile=68225) at buffile.c:138\n>> #4 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n>> #5 0x000000000061060b in ExecHashJoinSaveTuple (tuple=0x2ba1018, hashvalue=<optimized out>, fileptr=0x6305b00) at nodeHashjoin.c:1220\n>> #6 0x000000000060d766 in ExecHashTableInsert (hashtable=hashtable@entry=0x2b50ad8, slot=<optimized out>, hashvalue=<optimized out>)\n>> at nodeHash.c:1663\n>> #7 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x29a6be0) at nodeHashjoin.c:1051\n> Hmm ... this matches up with a vague thought I had that for some reason\n> the hash join might be spawning a huge number of separate batches.\n> Each batch would have a couple of files with associated in-memory\n> state including an 8K I/O buffer, so you could account for the\n> \"slow growth\" behavior you're seeing by periodic decisions to\n> increase the number of batches.\n>\n> You might try watching calls to ExecHashIncreaseNumBatches\n> and see if that theory holds water.\n\nOK, checking that ... well yes, this breaks quickly into that, here is \none backtrace\n\n\n> This could only happen with a very unfriendly distribution of the\n> hash keys, I think. There's a heuristic in there to shut off\n> growth of nbatch if we observe that we're making no progress at\n> all, but perhaps this is a skewed distribution that's not quite\n> skewed enough to trigger that.\n\nYour hunch is pretty right on. 
There is something very weirdly \ndistributed in this particular join situation.\n\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=slot@entry=0x2ae0238,\n hashvalue=<optimized out>) at nodeHash.c:1655\n#2 0x000000000060fd9c in MultiExecPrivateHash (node=<optimized out>) at nodeHash.c:186\n#3 MultiExecHash (node=node@entry=0x2ac6dc8) at nodeHash.c:114\n#4 0x00000000005fe42f in MultiExecProcNode (node=node@entry=0x2ac6dc8) at execProcnode.c:501\n#5 0x000000000061073d in ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:290\n#6 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#7 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#8 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#9 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n#10 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dc30) at execProcnode.c:461\n#11 0x000000000061d2e4 in ExecProcNode (node=0x2a1dc30) at ../../../src/include/executor/executor.h:247\n#12 ExecUnique (pstate=0x2a1d9b0) at nodeUnique.c:73\n#13 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1d9b0) at execProcnode.c:461\n#14 0x00000000005f75da in ExecProcNode (node=0x2a1d9b0) at ../../../src/include/executor/executor.h:247\n#15 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2a1d9b0, estate=0x2a1d6c0)\n at execMain.c:1723\n#16 standard_ExecutorRun (queryDesc=0x2a7a478, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#17 0x000000000059c718 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2a787f8, into=into@entry=0x0, es=es@entry=0x28f1048,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7ffcbf930080) at explain.c:535\n\nBut this is still in the warm-up phase, we don't know if it is at the \nplace where memory grows too much.\n\nLet's see if I can count the occurrences ... I do cont 100. Now resident \nmemory slowly grows, but not too much just 122 kB and CPU is at 88%. I \nthink we haven't hit the problematic part of the plan. There is a sort \nmerge at some leaf, which I believe is innocent. My gut feeling from \nlooking at CPU% high that we are in one of those since NL is disabled.\n\nNext stage is that memory shot up to 264 kB and CPU% down to 8.6. Heavy \nIO (write and read).\n\nYes! And now entering the 3rd stage, where memory shots up to 600 kB. \nThis is where it starts \"breaking out\". And only now the 100 breakpoint \nconts are used up. And within a second another 100. And even 1000 go by \nin a second. cont 10000 goes by in 4 seconds. And during that time \nresident memory increased to over 700 kB. Let's measure:\n\n736096 + cont 10000 --> 740056, that is 3960 bytes for 10000 conts, or \n0.396 bytes per cont. Prediction: cont 10000 will now arrive at 744016? \nAaaand ... BINGO! 744016 exactly! cont 50000 will take about 20 seconds \nand will boost memory to 763816. Bets are on ... drumroll ... 35, 36 , \n... nope. This time didn't pan out. Breakpoint already hit 75727 times \nignore next 5585 hits ... memory now 984052. So it took longer this time \nand memory increment was larger. We are now getting toward the edge of \nthe cliff. 
Before we do here is the backtrace now:\n\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1655\n#2 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x2a1dd40) at nodeHashjoin.c:1051\n#3 ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:539\n#4 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#5 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#6 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#7 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n(More stack frames follow...)\n(gdb) bt 18\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1655\n#2 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x2a1dd40) at nodeHashjoin.c:1051\n#3 ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:539\n#4 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#5 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#6 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#7 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n#8 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dc30) at execProcnode.c:461\n#9 0x000000000061d2e4 in ExecProcNode (node=0x2a1dc30) at ../../../src/include/executor/executor.h:247\n#10 ExecUnique (pstate=0x2a1d9b0) at nodeUnique.c:73\n#11 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1d9b0) at execProcnode.c:461\n#12 0x00000000005f75da in ExecProcNode (node=0x2a1d9b0) at ../../../src/include/executor/executor.h:247\n#13 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2a1d9b0, estate=0x2a1d6c0)\n at execMain.c:1723\n#14 standard_ExecutorRun (queryDesc=0x2a7a478, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#15 0x000000000059c718 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2a787f8, into=into@entry=0x0, es=es@entry=0x28f1048,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7ffcbf930080) at explain.c:535\n\nBy the way, I ran the explain analyze of the plan while removing all the \nfinal result columns from the outer-most select, replacing them with \nsimply SELECT 1 FROM .... And here is that plan. 
I am presenting it to \nyou because you might glean something about the whatever skewed \ndistribution.\n\n Hash Right Join (cost=4203858.53..5475530.71 rows=34619 width=4) (actual time=309603.384..459480.863 rows=113478386 loops=1)\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n -> Hash Right Join (cost=1341053.37..2611158.36 rows=13 width=74) (actual time=109807.980..109808.040 rows=236 loops=1)\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n -> Gather (cost=30803.54..1300908.52 rows=1 width=74) (actual time=58730.915..58737.757 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Left Join (cost=29803.54..1299908.42 rows=1 width=74) (actual time=58723.378..58723.379 rows=0 loops=3)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyid.entityinternalid)::text)\n -> Parallel Hash Left Join (cost=28118.13..1298223.00 rows=1 width=111) (actual time=58713.650..58713.652 rows=0 loops=3)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyname.entityinternalid)::text)\n -> Parallel Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1268800.85 rows=1 width=111) (actual time=58544.391..58544.391 rows=0 loops=3)\n Filter: ((participationtypecode)::text = 'AUT'::text)\n Rows Removed by Filter: 2815562\n -> Parallel Hash (cost=24733.28..24733.28 rows=166628 width=37) (actual time=125.611..125.611 rows=133303 loops=3)\n Buckets: 65536 Batches: 16 Memory Usage: 2336kB\n -> Parallel Seq Scan on bestname agencyname (cost=0.00..24733.28 rows=166628 width=37) (actual time=0.009..60.685 rows=133303 loops=3)\n -> Parallel Hash (cost=1434.07..1434.07 rows=20107 width=37) (actual time=9.329..9.329 rows=11393 loops=3)\n Buckets: 65536 Batches: 1 Memory Usage: 2976kB\n -> Parallel Seq Scan on entity_id agencyid (cost=0.00..1434.07 rows=20107 width=37) (actual time=0.008..5.224 rows=11393 loops=3)\n -> Hash (cost=1310249.63..1310249.63 rows=13 width=111) (actual time=51077.049..51077.049 rows=236 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 41kB\n -> Hash Right Join (cost=829388.20..1310249.63 rows=13 width=111) (actual time=45607.852..51076.967 rows=236 loops=1)\n Hash Cond: ((an.actinternalid)::text = (q.actinternalid)::text)\n -> Seq Scan on act_id an (cost=0.00..425941.04 rows=14645404 width=37) (actual time=1.212..10883.350 rows=14676871 loops=1)\n -> Hash (cost=829388.19..829388.19 rows=1 width=111) (actual time=38246.715..38246.715 rows=236 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 41kB\n -> Gather (cost=381928.46..829388.19 rows=1 width=111) (actual time=31274.733..38246.640 rows=236 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Join (cost=380928.46..828388.09 rows=1 width=111) (actual time=31347.260..38241.812 rows=79 loops=3)\n Hash Cond: ((q.actinternalid)::text = (r.sourceinternalid)::text)\n -> Parallel Seq Scan on documentinformation q (cost=0.00..447271.93 rows=50050 width=74) (actual time=13304.439..20265.733 rows=87921 loops=3)\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n Rows Removed by Filter: 1540625\n -> Parallel Hash (cost=380928.44..380928.44 rows=1 
width=74) (actual time=17954.106..17954.106 rows=79 loops=3)\n Buckets: 1024 Batches: 1 Memory Usage: 104kB\n -> Parallel Seq Scan on actrelationship r (cost=0.00..380928.44 rows=1 width=74) (actual time=7489.704..17953.959 rows=79 loops=3)\n Filter: ((typecode)::text = 'SUBJ'::text)\n Rows Removed by Filter: 3433326\n -> Hash (cost=2861845.87..2861845.87 rows=34619 width=74) (actual time=199792.446..199792.446 rows=113478127 loops=1)\n Buckets: 65536 (originally 65536) Batches: 131072 (originally 2) Memory Usage: 189207kB\n -> Gather Merge (cost=2845073.40..2861845.87 rows=34619 width=74) (actual time=107620.262..156256.432 rows=113478127 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Merge Left Join (cost=2844073.37..2856849.96 rows=14425 width=74) (actual time=107570.719..126113.792 rows=37826042 loops=3)\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n -> Sort (cost=1295969.26..1296005.32 rows=14425 width=111) (actual time=57700.723..58134.751 rows=231207 loops=3)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\n Sort Method: external merge Disk: 26936kB\n Worker 0: Sort Method: external merge Disk: 27152kB\n Worker 1: Sort Method: external merge Disk: 28248kB\n -> Parallel Seq Scan on documentinformationsubject (cost=0.00..1294972.76 rows=14425 width=111) (actual time=24866.656..57424.420 rows=231207 loops=3)\n Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n Rows Removed by Filter: 2584355\n -> Materialize (cost=1548104.12..1553157.04 rows=1010585 width=111) (actual time=49869.984..54191.701 rows=38060250 loops=3)\n -> Sort (cost=1548104.12..1550630.58 rows=1010585 width=111) (actual time=49869.980..50832.205 rows=1031106 loops=3)\n Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\n Sort Method: external merge Disk: 122192kB\n Worker 0: Sort Method: external merge Disk: 122192kB\n Worker 1: Sort Method: external merge Disk: 122192kB\n -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=111) (actual time=20366.166..47751.267 rows=1031106 loops=3)\n Filter: ((participationtypecode)::text = 'PRD'::text)\n Rows Removed by Filter: 7415579\n Planning Time: 2.523 ms\n Execution Time: 464825.391 ms\n(66 rows)\n\nBy the way, let me ask, do you have pretty-print functions I can call \nwith, e.g., node in ExecProcNode, or pstate in ExecHashJoin? Because if \nthere was, then we could determine where exactly in the current plan we \nare? And can I call the plan printer for the entire plan we are \ncurrently executing? Might it even give us preliminary counts of where \nin the process it is? (I ask the latter not only because it would be \nreally useful for our present debugging, but also because it would be an \nawesome tool for monitoring of long running queries! Something I am sure \ntons of people would just love to have!\n\nBTW, I also read the other responses. 
I agree that having a swap space \navailable just in case is better than these annoying out of memory \nerrors. And yes, I can add that memory profiler thing, if you think it \nwould actually work. I've done it with java heap dumps, even upgrading \nthe VM to a 32 GB VM just to crunch the heap dump. But can you tell me \njust a little more as to how I need to configure this thing to get the \ndata you want without blowing up the memory and disk during this huge query?\n\nregards,\n-Gunther",
"msg_date": "Tue, 16 Apr 2019 22:24:53 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
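Counting breakpoint hits by repeatedly issuing cont N, as in the session above, is slow, and interpreted breakpoint conditions slow gdb down even further. If the gdb in use has Python scripting enabled, a counting breakpoint can tally calls without ever stopping the backend; the sketch below is an editorial suggestion (the file and class names are made up), aimed at the ExecHashIncreaseNumBatches calls being counted above.

# count_batches.py -- an editorial sketch, not from the thread; load inside
# gdb with "source count_batches.py".  Counts calls to
# ExecHashIncreaseNumBatches without stopping the backend.
import gdb

class CountingBreakpoint(gdb.Breakpoint):
    def __init__(self, location):
        gdb.Breakpoint.__init__(self, location, internal=True)
        self.hits = 0

    def stop(self):
        self.hits += 1
        if self.hits % 10000 == 0:
            gdb.write("ExecHashIncreaseNumBatches: %d calls so far\n" % self.hits)
        return False          # never stop, just keep counting

bp = CountingBreakpoint("ExecHashIncreaseNumBatches")

Because stop() returns False, the backend keeps running while the hit count accumulates, far faster than stopping and continuing for each hit.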
{
"msg_contents": "I wonder if it'd be useful to compile with \n./configure CFLAGS=-DHJDEBUG=1\n\n\n",
"msg_date": "Tue, 16 Apr 2019 23:46:51 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/16/19 6:39 PM, Gavin Flower wrote:\n> I suspect that most things will run a little better with some swap space.\n\nNot always.\n\n$ free\n total used free shared buffers cached\nMem: 16254616 13120960 3133656 20820 646676 10765380\n-/+ buffers/cache: 1708904 14545712\nSwap: 4095996 17436 4078560\n\n-- \n .~. Jean-David Beyer\n /V\\ PGP-Key:166D840A 0C610C8B\n /( )\\ Shrewsbury, New Jersey\n ^^-^^ 01:50:01 up 5 days, 56 min, 2 users, load average: 4.51, 4.59, 4.90\n\n\n",
"msg_date": "Wed, 17 Apr 2019 02:01:58 -0400",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 17/04/2019 18:01, Jean-David Beyer wrote:\n> On 4/16/19 6:39 PM, Gavin Flower wrote:\n>> I suspect that most things will run a little better with some swap space.\n> Not always.\n>\n> $ free\n> total used free shared buffers cached\n> Mem: 16254616 13120960 3133656 20820 646676 10765380\n> -/+ buffers/cache: 1708904 14545712\n> Swap: 4095996 17436 4078560\n>\nUnclear what is the point you're trying to make, and the stats you quote \ndon't enlighten me.\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 19:53:16 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Hi guys. I don't want to be pushy, but I found it strange that after so \nmuch lively back and forth getting to the bottom of this, suddenly my \nlast nights follow-up remained completely without reply. I wonder if it \neven got received. For those who read their emails with modern readers \n(I know I too am from a time where I wrote everything in plain text) I \nmarked some important questions in bold.\n\nOn 4/16/2019 11:30, Tom Lane wrote:\n>> Breakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n>> 718 {\n>> (gdb) bt 8\n>> #0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718\n>> #1 0x000000000084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969\n>> #2 0x0000000000702b63 in makeBufFileCommon (nfiles=nfiles@entry=1) at buffile.c:119\n>> #3 0x0000000000702e4c in makeBufFile (firstfile=68225) at buffile.c:138\n>> #4 BufFileCreateTemp (interXact=interXact@entry=false) at buffile.c:201\n>> #5 0x000000000061060b in ExecHashJoinSaveTuple (tuple=0x2ba1018, hashvalue=<optimized out>, fileptr=0x6305b00) at nodeHashjoin.c:1220\n>> #6 0x000000000060d766 in ExecHashTableInsert (hashtable=hashtable@entry=0x2b50ad8, slot=<optimized out>, hashvalue=<optimized out>)\n>> at nodeHash.c:1663\n>> #7 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x29a6be0) at nodeHashjoin.c:1051\n> Hmm ... this matches up with a vague thought I had that for some reason\n> the hash join might be spawning a huge number of separate batches.\n> Each batch would have a couple of files with associated in-memory\n> state including an 8K I/O buffer, so you could account for the\n> \"slow growth\" behavior you're seeing by periodic decisions to\n> increase the number of batches.\n>\n> You might try watching calls to ExecHashIncreaseNumBatches\n> and see if that theory holds water.\n\nOK, checking that ... 
well yes, this breaks quickly into that, here is \none backtrace\n\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=slot@entry=0x2ae0238,\n hashvalue=<optimized out>) at nodeHash.c:1655\n#2 0x000000000060fd9c in MultiExecPrivateHash (node=<optimized out>) at nodeHash.c:186\n#3 MultiExecHash (node=node@entry=0x2ac6dc8) at nodeHash.c:114\n#4 0x00000000005fe42f in MultiExecProcNode (node=node@entry=0x2ac6dc8) at execProcnode.c:501\n#5 0x000000000061073d in ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:290\n#6 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#7 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#8 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#9 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n#10 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dc30) at execProcnode.c:461\n#11 0x000000000061d2e4 in ExecProcNode (node=0x2a1dc30) at ../../../src/include/executor/executor.h:247\n#12 ExecUnique (pstate=0x2a1d9b0) at nodeUnique.c:73\n#13 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1d9b0) at execProcnode.c:461\n#14 0x00000000005f75da in ExecProcNode (node=0x2a1d9b0) at ../../../src/include/executor/executor.h:247\n#15 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2a1d9b0, estate=0x2a1d6c0)\n at execMain.c:1723\n#16 standard_ExecutorRun (queryDesc=0x2a7a478, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#17 0x000000000059c718 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2a787f8, into=into@entry=0x0, es=es@entry=0x28f1048,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7ffcbf930080) at explain.c:535\n\nBut this is still in the warm-up phase, we don't know if it is at the \nplace where memory grows too much.\n\n> This could only happen with a very unfriendly distribution of the\n> hash keys, I think. There's a heuristic in there to shut off\n> growth of nbatch if we observe that we're making no progress at\n> all, but perhaps this is a skewed distribution that's not quite\n> skewed enough to trigger that.\n\nYour hunch is pretty right on. There is something very weirdly \ndistributed in this particular join situation.\n\nLet's see if I can count the occurrences ... I do cont 100. Now resident \nmemory slowly grows, but not too much just 122 kB and CPU is at 88%. I \nthink we haven't hit the problematic part of the plan. There is a sort \nmerge at some leaf, which I believe is innocent. My gut feeling from \nlooking at CPU% high that we are in one of those since NL is disabled.\n\nNext stage is that memory shot up to 264 kB and CPU% down to 8.6. Heavy \nIO (write and read).\n\nYes! And now entering the 3rd stage, where memory shots up to 600 kB. \nThis is where it starts \"breaking out\". And only now the 100 breakpoint \nconts are used up. And within a second another 100. And even 1000 go by \nin a second. cont 10000 goes by in 4 seconds. And during that time \nresident memory increased to over 700 kB. Let's measure:\n\n736096 + cont 10000 --> 740056, that is 3960 bytes for 10000 conts, or \n0.396 bytes per cont. Prediction: cont 10000 will now arrive at 744016? \nAaaand ... BINGO! 
744016 exactly! cont 50000 will take about 20 seconds \nand will boost memory to 763816. Bets are on ... drumroll ... 35, 36 , \n... nope. This time didn't pan out. Breakpoint already hit 75727 times \nignore next 5585 hits ... memory now 984052. So it took longer this time \nand memory increment was larger. We are now getting toward the edge of \nthe cliff. Before we do here is the backtrace now:\n\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1655\n#2 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x2a1dd40) at nodeHashjoin.c:1051\n#3 ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:539\n#4 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#5 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#6 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#7 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n(More stack frames follow...)\n(gdb) bt 18\n#0 ExecHashIncreaseNumBatches (hashtable=hashtable@entry=0x2ae8ca8) at nodeHash.c:893\n#1 0x000000000060d84a in ExecHashTableInsert (hashtable=hashtable@entry=0x2ae8ca8, slot=<optimized out>, hashvalue=<optimized out>)\n at nodeHash.c:1655\n#2 0x0000000000610c8f in ExecHashJoinNewBatch (hjstate=0x2a1dd40) at nodeHashjoin.c:1051\n#3 ExecHashJoinImpl (parallel=false, pstate=0x2a1dd40) at nodeHashjoin.c:539\n#4 ExecHashJoin (pstate=0x2a1dd40) at nodeHashjoin.c:565\n#5 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dd40) at execProcnode.c:461\n#6 0x000000000061ce6e in ExecProcNode (node=0x2a1dd40) at ../../../src/include/executor/executor.h:247\n#7 ExecSort (pstate=0x2a1dc30) at nodeSort.c:107\n#8 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1dc30) at execProcnode.c:461\n#9 0x000000000061d2e4 in ExecProcNode (node=0x2a1dc30) at ../../../src/include/executor/executor.h:247\n#10 ExecUnique (pstate=0x2a1d9b0) at nodeUnique.c:73\n#11 0x00000000005fde88 in ExecProcNodeInstr (node=0x2a1d9b0) at execProcnode.c:461\n#12 0x00000000005f75da in ExecProcNode (node=0x2a1d9b0) at ../../../src/include/executor/executor.h:247\n#13 ExecutePlan (execute_once=<optimized out>, dest=0xcc60e0 <donothingDR>, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2a1d9b0, estate=0x2a1d6c0)\n at execMain.c:1723\n#14 standard_ExecutorRun (queryDesc=0x2a7a478, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#15 0x000000000059c718 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2a787f8, into=into@entry=0x0, es=es@entry=0x28f1048,\n queryString=<optimized out>, params=0x0, queryEnv=queryEnv@entry=0x0, planduration=0x7ffcbf930080) at explain.c:535\n\nI ran the explain analyze of the plan while removing all the final \nresult columns from the outer-most select, replacing them with simply \nSELECT 1 FROM .... And here is that plan. 
I am presenting it to you \nbecause you might glean something about the whatever skewed distribution.\n\n Hash Right Join (cost=4203858.53..5475530.71 rows=34619 width=4) (actual time=309603.384..459480.863 rows=113478386 loops=1)\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n -> Hash Right Join (cost=1341053.37..2611158.36 rows=13 width=74) (actual time=109807.980..109808.040 rows=236 loops=1)\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n -> Gather (cost=30803.54..1300908.52 rows=1 width=74) (actual time=58730.915..58737.757 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Left Join (cost=29803.54..1299908.42 rows=1 width=74) (actual time=58723.378..58723.379 rows=0 loops=3)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyid.entityinternalid)::text)\n -> Parallel Hash Left Join (cost=28118.13..1298223.00 rows=1 width=111) (actual time=58713.650..58713.652 rows=0 loops=3)\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyname.entityinternalid)::text)\n -> Parallel Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1268800.85 rows=1 width=111) (actual time=58544.391..58544.391 rows=0 loops=3)\n Filter: ((participationtypecode)::text = 'AUT'::text)\n Rows Removed by Filter: 2815562\n -> Parallel Hash (cost=24733.28..24733.28 rows=166628 width=37) (actual time=125.611..125.611 rows=133303 loops=3)\n Buckets: 65536 Batches: 16 Memory Usage: 2336kB\n -> Parallel Seq Scan on bestname agencyname (cost=0.00..24733.28 rows=166628 width=37) (actual time=0.009..60.685 rows=133303 loops=3)\n -> Parallel Hash (cost=1434.07..1434.07 rows=20107 width=37) (actual time=9.329..9.329 rows=11393 loops=3)\n Buckets: 65536 Batches: 1 Memory Usage: 2976kB\n -> Parallel Seq Scan on entity_id agencyid (cost=0.00..1434.07 rows=20107 width=37) (actual time=0.008..5.224 rows=11393 loops=3)\n -> Hash (cost=1310249.63..1310249.63 rows=13 width=111) (actual time=51077.049..51077.049 rows=236 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 41kB\n -> Hash Right Join (cost=829388.20..1310249.63 rows=13 width=111) (actual time=45607.852..51076.967 rows=236 loops=1)\n Hash Cond: ((an.actinternalid)::text = (q.actinternalid)::text)\n -> Seq Scan on act_id an (cost=0.00..425941.04 rows=14645404 width=37) (actual time=1.212..10883.350 rows=14676871 loops=1)\n -> Hash (cost=829388.19..829388.19 rows=1 width=111) (actual time=38246.715..38246.715 rows=236 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 41kB\n -> Gather (cost=381928.46..829388.19 rows=1 width=111) (actual time=31274.733..38246.640 rows=236 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Join (cost=380928.46..828388.09 rows=1 width=111) (actual time=31347.260..38241.812 rows=79 loops=3)\n Hash Cond: ((q.actinternalid)::text = (r.sourceinternalid)::text)\n -> Parallel Seq Scan on documentinformation q (cost=0.00..447271.93 rows=50050 width=74) (actual time=13304.439..20265.733 rows=87921 loops=3)\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n Rows Removed by Filter: 1540625\n -> Parallel Hash (cost=380928.44..380928.44 rows=1 
width=74) (actual time=17954.106..17954.106 rows=79 loops=3)\n Buckets: 1024 Batches: 1 Memory Usage: 104kB\n -> Parallel Seq Scan on actrelationship r (cost=0.00..380928.44 rows=1 width=74) (actual time=7489.704..17953.959 rows=79 loops=3)\n Filter: ((typecode)::text = 'SUBJ'::text)\n Rows Removed by Filter: 3433326\n -> Hash (cost=2861845.87..2861845.87 rows=34619 width=74) (actual time=199792.446..199792.446 rows=113478127 loops=1)\n Buckets: 65536 (originally 65536) Batches: 131072 (originally 2) Memory Usage: 189207kB\n -> Gather Merge (cost=2845073.40..2861845.87 rows=34619 width=74) (actual time=107620.262..156256.432 rows=113478127 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Merge Left Join (cost=2844073.37..2856849.96 rows=14425 width=74) (actual time=107570.719..126113.792 rows=37826042 loops=3)\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n -> Sort (cost=1295969.26..1296005.32 rows=14425 width=111) (actual time=57700.723..58134.751 rows=231207 loops=3)\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\n Sort Method: external merge Disk: 26936kB\n Worker 0: Sort Method: external merge Disk: 27152kB\n Worker 1: Sort Method: external merge Disk: 28248kB\n -> Parallel Seq Scan on documentinformationsubject (cost=0.00..1294972.76 rows=14425 width=111) (actual time=24866.656..57424.420 rows=231207 loops=3)\n Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\n Rows Removed by Filter: 2584355\n -> Materialize (cost=1548104.12..1553157.04 rows=1010585 width=111) (actual time=49869.984..54191.701 rows=38060250 loops=3)\n -> Sort (cost=1548104.12..1550630.58 rows=1010585 width=111) (actual time=49869.980..50832.205 rows=1031106 loops=3)\n Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\n Sort Method: external merge Disk: 122192kB\n Worker 0: Sort Method: external merge Disk: 122192kB\n Worker 1: Sort Method: external merge Disk: 122192kB\n -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=111) (actual time=20366.166..47751.267 rows=1031106 loops=3)\n Filter: ((participationtypecode)::text = 'PRD'::text)\n Rows Removed by Filter: 7415579\n Planning Time: 2.523 ms\n Execution Time: 464825.391 ms\n(66 rows)\n\n*By the way, let me ask, do you have pretty-print functions I can call \nwith, e.g., node in ExecProcNode, or pstate in ExecHashJoin? Because if \nthere was, then we could determine where exactly in the current plan we \nare? And can I call the plan printer for the entire plan we are \ncurrently executing? Might it even give us preliminary counts of where \nin the process it is? (I ask the latter not only because it would be \nreally useful for our present debugging, but also because it would be an \nawesome tool for monitoring of long running queries! Something I am sure \ntons of people would just love to have!)*\n\nI also read the other responses. 
I agree that having a swap space \navailable just in case is better than these annoying out of memory \nerrors. And yes, I can add that memory profiler thing, if you think it \nwould actually work. I've done it with java heap dumps, even upgrading \nthe VM to a 32 GB VM just to crunch the heap dump. But can you tell me \njust a little more as to how I need to configure this thing to get the \ndata you want without blowing up the memory and disk during this huge query?\n\nregards,\n-Gunther",
"msg_date": "Wed, 17 Apr 2019 23:52:44 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
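On the pretty-printing question raised above: PostgreSQL ships a helper for exactly this in src/backend/nodes/print.c. A minimal gdb sketch, assuming a backend built with debug symbols and a breakpoint somewhere inside the hash-join code; the casts and field names are taken from the 11.x sources and are meant as a starting point, not a verified recipe:

    (gdb) call pprint(((PlanState *) pstate)->plan)
    # pprint() writes the Plan subtree, indented, to the backend's stdout,
    # which normally ends up in the server log.
    (gdb) print ((HashJoinState *) pstate)->hj_HashTable->nbatch
    (gdb) print ((HashJoinState *) pstate)->hj_HashTable->nbuckets
    # nbatch/nbuckets show how far this hash table has grown at this moment.

As far as I know there is no built-in way to pull per-node row counts out of a still-running query from gdb; live progress reporting of that kind is not something the executor exposes.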
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:52:44PM -0400, Gunther wrote:\n>Hi guys. I don't want to be pushy, but I found it strange that after \n>so much lively back and forth getting to the bottom of this, suddenly \n>my last nights follow-up remained completely without reply. I wonder \n>if it even got received. For those who read their emails with modern \n>readers (I know I too am from a time where I wrote everything in plain \n>text) I marked some important questions in bold.\n>\n\nIt was received (and it's visible in the archives). It's right before\neaster, so I guess some people may be already on a vaction.\n\nAs for the issue - I think the current hypothesis is that the data\ndistribution is skewed in some strange way, triggering some unexpected\nbehavior in hash join. That seems plausible, but it's really hard to\ninvestigate without knowing anything about the data distribution :-(\n\nIt would be possible to do at least one of these two things:\n\n(a) export pg_stats info about distribution of the join keys\n\nThe number of tables involved in the query is not that high, and this\nwould allo us to generate a data set approximating your data. The one\nthing this can't do is showing how it's affected by WHERE conditions.\n\n(b) export data for join keys\n\nThis is similar to (a), but it would allow filtering data by the WHERE\nconditions first. The amount of data would be higher, although we only\nneed data from the columns used as join keys.\n\nOf course, if those key values contain sensitive data, it may not be\npossible, but perhaps you could hash it in some way.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:21:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
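A concrete sketch of option (a) above, purely illustrative: the table and column names are just the join keys visible in the posted plan, and the query only dumps the statistics PostgreSQL has already collected, so no raw data needs to leave the system.

    -- Hypothetical example: export the planner's view of the join-key distributions.
    SELECT tablename, attname, null_frac, n_distinct,
           most_common_vals, most_common_freqs, histogram_bounds
      FROM pg_stats
     WHERE (tablename, attname) IN (
             ('documentinformation',        'documentinternalid'),
             ('documentinformation',        'actinternalid'),
             ('documentinformationsubject', 'documentinternalid'),
             ('documentinformationsubject', 'actinternalid'),
             ('actrelationship',            'targetinternalid'),
             ('actrelationship',            'sourceinternalid')
           );

If the key values themselves are sensitive, most_common_vals and histogram_bounds can be dropped from the select list and only the frequencies shared.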
{
"msg_contents": "On Thu, Apr 18, 2019 at 6:01 AM Gunther <[email protected]> wrote:\n\n> Hi guys. I don't want to be pushy, but I found it strange that after so\n> much lively back and forth getting to the bottom of this, suddenly my last\n> nights follow-up remained completely without reply. I wonder if it even got\n> received. For those who read their emails with modern readers (I know I too\n> am from a time where I wrote everything in plain text) I marked some\n> important questions in bold.\n>\n\nGive Tom Lane remote access to your server, around 10 years ago I had the\nserver crashing and the fastest we could do is let him access the server.\n\nG.\n\nOn Thu, Apr 18, 2019 at 6:01 AM Gunther <[email protected]> wrote:\n\nHi guys. I don't want to be pushy, but I found it strange that\n after so much lively back and forth getting to the bottom of this,\n suddenly my last nights follow-up remained completely without\n reply. I wonder if it even got received. For those who read their\n emails with modern readers (I know I too am from a time where I\n wrote everything in plain text) I marked some important questions\n in bold.Give Tom Lane remote access to your server, around 10 years ago I had the server crashing and the fastest we could do is let him access the server.G.",
"msg_date": "Fri, 19 Apr 2019 22:28:38 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:52:44PM -0400, Gunther wrote:\n> Hi guys. I don't want to be pushy, but I found it strange that after so much\n\nWere you able to reproduce the issue in some minimized way ? Like after\njoining fewer tables or changing to join with fewer join conditions ?\n\nOn Thu, Apr 18, 2019 at 05:21:28PM +0200, Tomas Vondra wrote:\n> As for the issue - I think the current hypothesis is that the data\n> distribution is skewed in some strange way, triggering some unexpected\n> behavior in hash join. That seems plausible, but it's really hard to\n> investigate without knowing anything about the data distribution :-(\n> \n> It would be possible to do at least one of these two things:\n> \n> (a) export pg_stats info about distribution of the join keys\n\nFor starts, could you send the MCVs, maybe with some variation on this query ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nJustin\n\n\n",
"msg_date": "Fri, 19 Apr 2019 16:01:17 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/19/2019 17:01, Justin Pryzby wrote:\n> Were you able to reproduce the issue in some minimized way ? Like after\n> joining fewer tables or changing to join with fewer join conditions ?\n>\n> On Thu, Apr 18, 2019 at 05:21:28PM +0200, Tomas Vondra wrote:\n>> It would be possible to do at least one of these two things:\n\nThanks, and sorry for my pushyness. Yes, I have pin pointed the \nHashJoin, and I have created the two tables involved.\n\nThe data distribution of the join keys, they are all essentially UUIDs \nand essentially random.\n\nI am sharing this data with you. However, only someone who can actually \ncontrol the planner can use it to reproduce the problem. I have tried \nbut not succeeded. But I am sure the problem is reproduced by this material.\n\nHere is the part of the plan that generates this massive number of calls to\n\n -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197)\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n -> Hash Right Join (cost=1341541.37..2612134.36 rows=13 width=341)\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n ... let's call this tmp_q ...\n -> Hash (cost=2908913.87..2908913.87 rows=34619 width=930)\n -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 width=930)\n ... let's call this tmp_r ...\n\nThis can be logically reduced to the following query\n\nSELECT *\n FROM tmp_q q\n RIGHT OUTER JOIN tmp_r r\n USING(documentInternalId, actInternalId);\n\nwith the following two tables\n\nCREATE TABLE xtmp_q (\n documentinternalid character varying(255),\n operationqualifiercode character varying(512),\n operationqualifiername character varying(512),\n actinternalid character varying(255),\n approvalinternalid character varying(255),\n approvalnumber character varying(555),\n approvalnumbersystem character varying(555),\n approvalstatecode character varying(512),\n approvalstatecodesystem character varying(512),\n approvaleffectivetimelow character varying(512),\n approvaleffectivetimehigh character varying(512),\n approvalstatuscode character varying(32),\n licensecode character varying(512),\n agencyid character varying(555),\n agencyname text\n);\n\nCREATE TABLE tmp_r (\n documentinternalid character varying(255),\n is_current character(1),\n documentid character varying(555),\n documenttypecode character varying(512),\n subjectroleinternalid character varying(255),\n subjectentityinternalid character varying(255),\n subjectentityid character varying(555),\n subjectentityidroot character varying(555),\n subjectentityname character varying,\n subjectentitytel text,\n subjectentityemail text,\n otherentityinternalid character varying(255),\n confidentialitycode character varying(512),\n actinternalid character varying(255),\n operationcode character varying(512),\n operationname text,\n productitemcode character varying(512),\n productinternalid character varying(255)..\n);\n\nyou can download the data here (URLs just a tiny bit obfuscated):\n\nThe small table http:// gusw dot net/tmp_q.gz\n\nThe big table is in the form of 9 parts of 20 MB each, http:// gusw dot \nnet/tmp_r.gz.00, .01, .02, ..., .09, maybe you need only the first part.\n\nDownload as many as you have patience to grab, and then import the data \nlike this:\n\n\\copy tmp_q 
from program 'zcat tmp_q.gz'\n\\copt tmp_r from program 'cat tmp_r.gz.* |zcat'\n\nThe only problem is that I can't test that this actually would trigger \nthe memory problem, because I can't force the plan to use the right \njoin, it always reverts to the left join hashing the tmp_q:\n\n -> Hash Left Join (cost=10.25..5601401.19 rows=5505039 width=12118)\n Hash Cond: (((r.documentinternalid)::text = (q.documentinternalid)::text) AND ((r.actinternalid)::text = (q.actinternalid)::text))\n -> Seq Scan on tmp_r r (cost=0.00..5560089.39 rows=5505039 width=6844)\n -> Hash (cost=10.10..10.10 rows=10 width=6306)\n -> Seq Scan on tmp_q q (cost=0.00..10.10 rows=10 width=6306)\n\nwhich is of course much better, but when tmp_q and tmp_r are the results \nof complex stuff that the planner can't estimate, then it gets it wrong, \nand then the issue gets triggered because we are hashing on the big \ntmp_r, not tmp_q.\n\nIt would be so nice if there was a way to force a specific plan for \npurposes of the testing. I tried giving false data in pg_class \nreltuples and relpages:\n\nfoo=# analyze tmp_q;\nANALYZE\nfoo=# analyze tmp_r;\nANALYZE\nfoo=# select relname, relpages, reltuples from pg_class where relname in ('tmp_q', 'tmp_r');\n relname | relpages | reltuples\n---------+----------+-------------\n tmp_r | 5505039 | 1.13467e+08\n tmp_q | 7 | 236\n(2 rows)\n\nfoo=# update pg_class set (relpages, reltuples) = (5505039, 1.13467e+08) where relname = 'tmp_q';\nUPDATE 1\nfoo=# update pg_class set (relpages, reltuples) = (7, 236) where relname = 'tmp_r';\nUPDATE 1\n\nbut that didn't help. Somehow the planner outsmarts every such trick, so \nI can't get it to follow my right outer join plan where the big table is \nhashed. I am sure y'all know some way to force it.\n\nregards,\n-Gunther",
"msg_date": "Fri, 19 Apr 2019 23:34:54 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
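For anyone trying to reproduce from the shared dumps, a small psql sketch of the load steps described above. Note two apparent typos in the message: the posted CREATE says xtmp_q where tmp_q was presumably meant, and the second copy command says \copt where \copy (with a 'from program' source) is the psql feature being used. This only covers loading; whether the planner then picks the problematic plan shape still depends on the statistics, as the message explains.

    -- table definitions as posted above (assuming tmp_q, not xtmp_q)
    \copy tmp_q from program 'zcat tmp_q.gz'
    \copy tmp_r from program 'cat tmp_r.gz.* | zcat'
    ANALYZE tmp_q;
    ANALYZE tmp_r;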
{
"msg_contents": "pg_hint_plan extension might be able to force a plan.\n\nAlso, I don’t know if perf probes & perf record/script could be useful for creating a log of all the calls to do memory allocation along with the unwound call stacks? Then analyzing that file? At least this can be done for a single process, and just while the problematic sql is running.\n\n-Jeremy\n\nSent from my TI-83\n\n> On Apr 19, 2019, at 20:34, Gunther <[email protected]> wrote:\n> \n>> On 4/19/2019 17:01, Justin Pryzby wrote:\n>> Were you able to reproduce the issue in some minimized way ? Like after\n>> joining fewer tables or changing to join with fewer join conditions ?\n>> \n>> On Thu, Apr 18, 2019 at 05:21:28PM +0200, Tomas Vondra wrote:\n>>> It would be possible to do at least one of these two things:\n> Thanks, and sorry for my pushyness. Yes, I have pin pointed the HashJoin, and I have created the two tables involved.\n> \n> The data distribution of the join keys, they are all essentially UUIDs and essentially random.\n> \n> I am sharing this data with you. However, only someone who can actually control the planner can use it to reproduce the problem. I have tried but not succeeded. But I am sure the problem is reproduced by this material.\n> \n> Here is the part of the plan that generates this massive number of calls to \n> \n> -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197)\n> Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n> -> Hash Right Join (cost=1341541.37..2612134.36 rows=13 width=341)\n> Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n> ... let's call this tmp_q ...\n> -> Hash (cost=2908913.87..2908913.87 rows=34619 width=930)\n> -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 width=930)\n> ... let's call this tmp_r ... 
\n> This can be logically reduced to the following query\n> \n> SELECT *\n> FROM tmp_q q\n> RIGHT OUTER JOIN tmp_r r\n> USING(documentInternalId, actInternalId);\n> with the following two tables\n> \n> CREATE TABLE xtmp_q (\n> documentinternalid character varying(255),\n> operationqualifiercode character varying(512),\n> operationqualifiername character varying(512),\n> actinternalid character varying(255),\n> approvalinternalid character varying(255),\n> approvalnumber character varying(555),\n> approvalnumbersystem character varying(555),\n> approvalstatecode character varying(512),\n> approvalstatecodesystem character varying(512),\n> approvaleffectivetimelow character varying(512),\n> approvaleffectivetimehigh character varying(512),\n> approvalstatuscode character varying(32),\n> licensecode character varying(512),\n> agencyid character varying(555),\n> agencyname text\n> );\n> \n> CREATE TABLE tmp_r (\n> documentinternalid character varying(255),\n> is_current character(1),\n> documentid character varying(555),\n> documenttypecode character varying(512),\n> subjectroleinternalid character varying(255),\n> subjectentityinternalid character varying(255),\n> subjectentityid character varying(555),\n> subjectentityidroot character varying(555),\n> subjectentityname character varying,\n> subjectentitytel text,\n> subjectentityemail text,\n> otherentityinternalid character varying(255),\n> confidentialitycode character varying(512),\n> actinternalid character varying(255),\n> operationcode character varying(512),\n> operationname text,\n> productitemcode character varying(512),\n> productinternalid character varying(255)..\n> );\n> you can download the data here (URLs just a tiny bit obfuscated):\n> \n> The small table http:// gusw dot net/tmp_q.gz\n> \n> The big table is in the form of 9 parts of 20 MB each, http:// gusw dot net/tmp_r.gz.00, .01, .02, ..., .09, maybe you need only the first part.\n> \n> Download as many as you have patience to grab, and then import the data like this:\n> \n> \\copy tmp_q from program 'zcat tmp_q.gz'\n> \\copt tmp_r from program 'cat tmp_r.gz.* |zcat'\n> The only problem is that I can't test that this actually would trigger the memory problem, because I can't force the plan to use the right join, it always reverts to the left join hashing the tmp_q:\n> \n> -> Hash Left Join (cost=10.25..5601401.19 rows=5505039 width=12118)\n> Hash Cond: (((r.documentinternalid)::text = (q.documentinternalid)::text) AND ((r.actinternalid)::text = (q.actinternalid)::text))\n> -> Seq Scan on tmp_r r (cost=0.00..5560089.39 rows=5505039 width=6844)\n> -> Hash (cost=10.10..10.10 rows=10 width=6306)\n> -> Seq Scan on tmp_q q (cost=0.00..10.10 rows=10 width=6306)\n> which is of course much better, but when tmp_q and tmp_r are the results of complex stuff that the planner can't estimate, then it gets it wrong, and then the issue gets triggered because we are hashing on the big tmp_r, not tmp_q.\n> \n> It would be so nice if there was a way to force a specific plan for purposes of the testing. 
I tried giving false data in pg_class reltuples and relpages:\n> \n> foo=# analyze tmp_q;\n> ANALYZE\n> foo=# analyze tmp_r;\n> ANALYZE\n> foo=# select relname, relpages, reltuples from pg_class where relname in ('tmp_q', 'tmp_r');\n> relname | relpages | reltuples\n> ---------+----------+-------------\n> tmp_r | 5505039 | 1.13467e+08\n> tmp_q | 7 | 236\n> (2 rows)\n> \n> foo=# update pg_class set (relpages, reltuples) = (5505039, 1.13467e+08) where relname = 'tmp_q';\n> UPDATE 1\n> foo=# update pg_class set (relpages, reltuples) = (7, 236) where relname = 'tmp_r';\n> UPDATE 1\n> but that didn't help. Somehow the planner outsmarts every such trick, so I can't get it to follow my right outer join plan where the big table is hashed. I am sure y'all know some way to force it.\n> \n> regards,\n> -Gunther",
"msg_date": "Fri, 19 Apr 2019 22:54:13 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
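A rough sketch of the pg_hint_plan suggestion, with the caveat that it assumes the extension is installed and that its Leading()/HashJoin() hints behave as documented for this right join; it is untested against the tmp_q/tmp_r data.

    -- pg_hint_plan reads hints from a leading /*+ ... */ comment.
    -- Leading((q r)) requests q as the outer and r as the inner (hashed) input,
    -- which is the shape that triggered the batch explosion in the full query.
    LOAD 'pg_hint_plan';
    /*+ Leading((q r)) HashJoin(q r) */
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
      FROM tmp_q q
      RIGHT OUTER JOIN tmp_r r USING (documentinternalid, actinternalid);

If the hint is honored, the EXPLAIN output should show the Hash node sitting on tmp_r, which is exactly the condition the pg_class trick could not reproduce.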
{
"msg_contents": "On Fri, Apr 19, 2019 at 11:34:54PM -0400, Gunther wrote:\n> On 4/19/2019 17:01, Justin Pryzby wrote:\n> >Were you able to reproduce the issue in some minimized way ? Like after\n> >joining fewer tables or changing to join with fewer join conditions ?\n> >\n> >On Thu, Apr 18, 2019 at 05:21:28PM +0200, Tomas Vondra wrote:\n> >>It would be possible to do at least one of these two things:\n> \n> Thanks, and sorry for my pushyness. Yes, I have pin pointed the HashJoin,\n> and I have created the two tables involved.\n> \n> The data distribution of the join keys, they are all essentially UUIDs and\n> essentially random.\n> \n> I am sharing this data with you. However, only someone who can actually\n> control the planner can use it to reproduce the problem. I have tried but\n> not succeeded. But I am sure the problem is reproduced by this material.\n> \n> Here is the part of the plan that generates this massive number of calls to\n> \n> -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197)\n> Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n> -> Hash Right Join (cost=1341541.37..2612134.36 rows=13 width=341)\n> Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n> ... let's call this tmp_q ...\n> -> Hash (cost=2908913.87..2908913.87 rows=34619 width=930)\n> -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 width=930)\n> ... let's call this tmp_r ...\n\nWould you send basic stats for these ?\nq.documentinternalid, documentinformationsubject.documentinternalid, r.targetinternalid, documentinformationsubject.actinternalid\n\nLike from this query\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nJustin\n\n\n",
"msg_date": "Sat, 20 Apr 2019 02:52:57 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "> The only problem is that I can't test that this actually would trigger the\n> memory problem, because I can't force the plan to use the right join, it\n> always reverts to the left join hashing the tmp_q:\n\nI think the table on the \"OUTER\" side is the one which needs to be iterated\nover (not hashed), in order to return each of its rows even if there are no\njoin partners in the other table. In your original query, the small table was\nbeing hashed and the large table iterated; maybe that's whats important.\n\n> which is of course much better, but when tmp_q and tmp_r are the results of\n> complex stuff that the planner can't estimate, then it gets it wrong, and\n> then the issue gets triggered because we are hashing on the big tmp_r, not\n> tmp_q.\n\nI was able to get something maybe promising ? \"Batches: 65536 (originally 1)\"\n\nI didn't get \"Out of memory\" error yet, but did crash the server with this one:\npostgres=# explain analyze WITH v AS( SELECT * FROM generate_series(1,99999999)i WHERE i%10<10 AND i%11<11 AND i%12<12 AND i%13<13 AND i%14<14 AND i%15<15 AND i%16<16 AND i%17<17 AND i%18<18 AND i%19<19 AND i%20<20 AND i%21<21 ) SELECT * FROM generate_series(1,99)k JOIN v ON k=i ;\n\nNote, on pg12dev this needs to be \"with v AS MATERIALIZED\".\n\npostgres=# SET work_mem='128kB';SET client_min_messages =log;SET log_statement_stats=on;explain(analyze,timing off) WITH v AS( SELECT * FROM generate_series(1,999999)i WHERE i%10<10 AND i%11<11 AND i%12<12 AND i%13<13 AND i%14<14 AND i%15<15 AND i%16<16 AND i%17<17 AND i%18<18 AND i%19<19 AND i%20<20 AND i%21<21 ) SELECT * FROM generate_series(1,99)k JOIN v ON k=i ;\n Hash Join (cost=70.04..83.84 rows=5 width=8) (actual rows=99 loops=1)\n Hash Cond: (k.k = v.i)\n CTE v\n -> Function Scan on generate_series i (cost=0.00..70.00 rows=1 width=4) (actual rows=999999 loops=1)\n Filter: (((i % 10) < 10) AND ((i % 11) < 11) AND ((i % 12) < 12) AND ((i % 13) < 13) AND ((i % 14) < 14) AND ((i % 15) < 15) AND ((i % 16) < 16) AND ((i % 17) < 17) AND ((i % 18) < 18) AND ((i % 19) < 19) AND ((i % 20) < 20) AND ((i % 21) < 21))\n -> Function Scan on generate_series k (cost=0.00..10.00 rows=1000 width=4) (actual rows=99 loops=1)\n -> Hash (cost=0.02..0.02 rows=1 width=4) (actual rows=999999 loops=1)\n Buckets: 4096 (originally 1024) Batches: 512 (originally 1) Memory Usage: 101kB\n -> CTE Scan on v (cost=0.00..0.02 rows=1 width=4) (actual rows=999999 loops=1)\n\nJustin\n\n\n",
"msg_date": "Sat, 20 Apr 2019 05:53:36 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
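The jump from 1 batch to 512 in the EXPLAIN output above is the hash code reacting to work_mem: every time the tuples held in memory for the current batch outgrow the budget, it doubles the number of batches and repartitions. The following is only a rough sketch of that trigger with made-up names, not the actual nodeHash.c code:

    #include <stdio.h>
    #include <stddef.h>

    /*
     * Hypothetical illustration of the batch-doubling trigger in a hash join:
     * when the in-memory tuples for the current batch exceed the budget,
     * double nbatch and repartition, expecting roughly half of the tuples
     * to move out to the new batches' temp files.
     */
    static void
    consider_doubling_nbatch(size_t space_used, size_t space_allowed, int *nbatch)
    {
        if (space_used > space_allowed)
        {
            *nbatch *= 2;
            /* With skewed or low-cardinality join keys, repartitioning may free
             * almost nothing, so the very next insertion trips this check again
             * and nbatch keeps climbing. */
        }
    }

    int
    main(void)
    {
        int    nbatch = 1;
        size_t work_mem = 128 * 1024;           /* the 128kB setting from the repro */

        consider_doubling_nbatch(4u * 1024 * 1024, work_mem, &nbatch);
        printf("nbatch is now %d\n", nbatch);   /* prints "nbatch is now 2" */
        return 0;
    }

This is also why it matters which input ends up on the hashed side: only the hashed input's size drives this doubling.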
{
"msg_contents": "On Sun, Apr 14, 2019 at 11:24:59PM -0400, Tom Lane wrote:\n> Gunther <[email protected]> writes:\n> > ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n> \n> Oooh, that looks like a memory leak right enough. The ExecutorState\n> should not get that big for any reasonable query.\n\nOn Tue, Apr 16, 2019 at 11:30:19AM -0400, Tom Lane wrote:\n> Hmm ... this matches up with a vague thought I had that for some reason\n> the hash join might be spawning a huge number of separate batches.\n> Each batch would have a couple of files with associated in-memory\n> state including an 8K I/O buffer, so you could account for the\n\nOn Tue, Apr 16, 2019 at 10:24:53PM -0400, Gunther wrote:\n> -> Hash (cost=2861845.87..2861845.87 rows=34619 width=74) (actual time=199792.446..199792.446 rows=113478127 loops=1)\n> Buckets: 65536 (originally 65536) Batches: 131072 (originally 2) Memory Usage: 189207kB\n\nIs it significant that there are ~2x as many ExecutorState blocks as there are\nbatches ? 266261/131072 => 2.03...\n\nIf there was 1 blocks leaked when batch=2, and 2 blocks leaked when batch=4,\nand 4 blocks leaked when batch=131072, then when batch=16, there'd be 64k\nleaked blocks, and 131072 total blocks.\n\nI'm guessing Tom probably already thought of this, but:\n2230341056/266261 => ~8376\nwhich is pretty close to the 8kB I/O buffer you were talking about (if the\nnumber of same-sized buffers much greater than other allocations).\n\nIf Tom thinks (as I understand) that the issue is *not* a memory leak, but out\nof control increasing of nbatches, and failure to account for their size...then\nthis patch might help.\n\nThe number of batches is increased to avoid exceeding work_mem. With very low\nwork_mem (or very larger number of tuples hashed), it'll try to use a large\nnumber of batches. At some point the memory used by BatchFiles structure\n(increasing by powers of two) itself exceeds work_mem.\n\nWith larger work_mem, there's less need for more batches. So the number of\nbatches used for small work_mem needs to be constrained, either based on\nwork_mem, or at all.\n\nWith my patch, the number of batches is nonlinear WRT work_mem, and reaches a\nmaximum for moderately small work_mem. The goal is to choose the optimal\nnumber of batches to minimize the degree to which work_mem is exceeded.\n\nJustin",
"msg_date": "Sat, 20 Apr 2019 14:30:09 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
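To put numbers on the 8kB-per-file observation above: with two spill files per batch (one for each side of the join) and roughly 8 kB of buffer per file, the bookkeeping grows linearly with the batch count and swamps any realistic work_mem. A small back-of-the-envelope sketch; the 8272-byte figure is the sizeof(BufFile) quoted later in this thread and is an assumption here, not something read from the server:

    #include <stdio.h>

    /* Estimated memory tied up in spill-file bookkeeping for a hash join,
     * assuming two files per batch and ~8272 bytes per BufFile. */
    static long long
    buffile_overhead(long long nbatch)
    {
        const long long per_buffile = 8272;

        return 2 * nbatch * per_buffile;
    }

    int
    main(void)
    {
        /* 131072 batches comes out near 2.2 GB, the same ballpark as the
         * ~2.2 GB ExecutorState context in the reported OOM dumps. */
        printf("%lld bytes\n", buffile_overhead(131072));
        return 0;
    }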
{
"msg_contents": "On Tue, Apr 16, 2019 at 11:46:51PM -0500, Justin Pryzby wrote:\n\n>> I wonder if it'd be useful to compile with\n>> ./configure CFLAGS=-DHJDEBUG=1\n> Could you try this, too ?\n\nOK, doing it now, here is what I'm getting in the log file now. I am \nsurprised I get so few rows here when there\n\n2019-04-20 17:12:16.077 UTC [7093] LOG: database system was shut down at 2019-04-20 17:12:15 UTC\n2019-04-20 17:12:16.085 UTC [7091] LOG: database system is ready to accept connections\nHashjoin 0x118e0c8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x118e0f8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1194e78: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x119b518: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x1194e78: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x119bb38: initial nbatch = 16, nbuckets = 65536\nTopMemoryContext: 4347672 total in 9 blocks; 41784 free (19 chunks); 4305888 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1456 free (0 chunks); 6736 used\n TopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); 2776 used\n Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (1 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1474560 total in 183 blocks; 6152 free (8 chunks); 1468408 used:\n ExecutorState: 2234501600 total in 266274 blocks; 3696112 free (17274 chunks); 2230805488 used\n HashTableContext: 32768 total in 3 blocks; 17272 free (8 chunks); 15496 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8454256 total in 6 blocks; 64848 free (32 chunks); 8389408 used\n HashBatchContext: 177003344 total in 5387 blocks; 7936 free (0 chunks); 176995408 used\n TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n Caller tuples: 1048576 total in 8 blocks; 21608 free (14 chunks); 
1026968 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 236384 free (1 chunks); 864944 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: act_id_fkidx\n ...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6152 free (1 chunks); 2040 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 2430295856 bytes in 272166 blocks; 5223104 free (17571 chunks); 2425072752 used\n2019-04-20 17:28:56.887 UTC [7100] ERROR: out of memory\n2019-04-20 17:28:56.887 UTC [7100] DETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\n2019-04-20 17:28:56.887 UTC [7100] STATEMENT: explain analyze SELECT * FROM reports.v_BusinessOperation;\n\nThere are amazingly few entries in the log.\n\n(gdb) break ExecHashIncreaseNumBatches\nBreakpoint 1 at 0x6b7bd6: file 
nodeHash.c, line 884.\n(gdb) cont\nContinuing.\n\nBreakpoint 1, ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:884\n884 int oldnbatch = hashtable->nbatch;\n(gdb) cont\nContinuing.\n\nBreakpoint 1, ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:884\n884 int oldnbatch = hashtable->nbatch;\n(gdb) cont\nContinuing.\n\nBreakpoint 1, ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:884\n884 int oldnbatch = hashtable->nbatch;\n(gdb) cont 100\nWill ignore next 99 crossings of breakpoint 1. Continuing.\n\nBut I am surprised now to find that the behavior has changed or what?\n\nFirst, weirdly enough, now I am not getting any of the HJDEBUG messages \nany more. And yet my resident memory has already increased to 1.1 GB.\n\n 7100 postgres 20 0 2797036 1.1g 188064 R 17.9 14.9 14:46.00 postgres: postgres integrator [local] EXPLAIN\n 7664 postgres 20 0 1271788 16228 14408 D 17.6 0.2 0:01.96 postgres: parallel worker for PID 7100\n 7665 postgres 20 0 1271788 16224 14404 R 17.6 0.2 0:01.95 postgres: parallel worker for PID 7100\n\nso why is it all different now? Are we chasing a red herring?\n\nFinally I had another stop now at the HJDEBUG line\n\n#0 ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:904\n#1 0x00000000006b93b1 in ExecHashTableInsert (hashtable=0x12d0818, slot=0x12bcf98, hashvalue=234960700) at nodeHash.c:1655\n#2 0x00000000006bd600 in ExecHashJoinNewBatch (hjstate=0x12a4340) at nodeHashjoin.c:1051\n#3 0x00000000006bc999 in ExecHashJoinImpl (pstate=0x12a4340, parallel=false) at nodeHashjoin.c:539\n#4 0x00000000006bca23 in ExecHashJoin (pstate=0x12a4340) at nodeHashjoin.c:565\n#5 0x00000000006a191f in ExecProcNodeInstr (node=0x12a4340) at execProcnode.c:461\n#6 0x00000000006ceaad in ExecProcNode (node=0x12a4340) at ../../../src/include/executor/executor.h:247\n#7 0x00000000006cebe7 in ExecSort (pstate=0x12a4230) at nodeSort.c:107\n#8 0x00000000006a191f in ExecProcNodeInstr (node=0x12a4230) at execProcnode.c:461\n#9 0x00000000006a18f0 in ExecProcNodeFirst (node=0x12a4230) at execProcnode.c:445\n#10 0x00000000006cf25c in ExecProcNode (node=0x12a4230) at ../../../src/include/executor/executor.h:247\n#11 0x00000000006cf388 in ExecUnique (pstate=0x12a4040) at nodeUnique.c:73\n#12 0x00000000006a191f in ExecProcNodeInstr (node=0x12a4040) at execProcnode.c:461\n#13 0x00000000006a18f0 in ExecProcNodeFirst (node=0x12a4040) at execProcnode.c:445\n#14 0x000000000069728b in ExecProcNode (node=0x12a4040) at ../../../src/include/executor/executor.h:247\n#15 0x0000000000699790 in ExecutePlan (estate=0x12a3da0, planstate=0x12a4040, use_parallel_mode=true, operation=CMD_SELECT,\n sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xe811c0 <donothingDR>, execute_once=true)\n at execMain.c:1723\n#16 0x0000000000697757 in standard_ExecutorRun (queryDesc=0x1404168, direction=ForwardScanDirection, count=0, execute_once=true)\n at execMain.c:364\n#17 0x00000000006975f4 in ExecutorRun (queryDesc=0x1404168, direction=ForwardScanDirection, count=0, execute_once=true)\n at execMain.c:307\n#18 0x000000000060d227 in ExplainOnePlan (plannedstmt=0x1402588, into=0x0, es=0x10b03a8,\n queryString=0x10866c0 \"explain analyze SELECT * FROM reports.v_BusinessOperation;\", params=0x0, queryEnv=0x0,\n planduration=0x7fff56a0df00) at explain.c:535\n\nSo now I am back with checking who calls AllocSetAlloc?\n\n(gdb) break AllocSetAlloc if (int)strcmp(context->name, \"ExecutorState\") == 0\nBreakpoint 4 at 0x9eab11: file aset.c, line 719.\n(gdb) 
cont\nContinuing.\n\nBreakpoint 4, AllocSetAlloc (context=0x12a3c90, size=8272) at aset.c:719\n719 AllocSet set = (AllocSet) context;\n(gdb) bt 5\n#0 AllocSetAlloc (context=0x12a3c90, size=8272) at aset.c:719\n#1 0x00000000009f2e47 in palloc (size=8272) at mcxt.c:938\n#2 0x000000000082ae84 in makeBufFileCommon (nfiles=1) at buffile.c:116\n#3 0x000000000082af14 in makeBufFile (firstfile=34029) at buffile.c:138\n#4 0x000000000082b09f in BufFileCreateTemp (interXact=false) at buffile.c:201\n(More stack frames follow...)\n(gdb) bt 7\n#0 AllocSetAlloc (context=0x12a3c90, size=8272) at aset.c:719\n#1 0x00000000009f2e47 in palloc (size=8272) at mcxt.c:938\n#2 0x000000000082ae84 in makeBufFileCommon (nfiles=1) at buffile.c:116\n#3 0x000000000082af14 in makeBufFile (firstfile=34029) at buffile.c:138\n#4 0x000000000082b09f in BufFileCreateTemp (interXact=false) at buffile.c:201\n#5 0x00000000006bda31 in ExecHashJoinSaveTuple (tuple=0x86069b8, hashvalue=234960700, fileptr=0x8616a30) at nodeHashjoin.c:1220\n#6 0x00000000006b80c0 in ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:1004\n(More stack frames follow...)\n(gdb) cont\nContinuing.\n\nBreakpoint 4, AllocSetAlloc (context=0x12a3c90, size=8) at aset.c:719\n719 AllocSet set = (AllocSet) context;\n(gdb) bt 7\n#0 AllocSetAlloc (context=0x12a3c90, size=8) at aset.c:719\n#1 0x00000000009f2f5d in palloc0 (size=8) at mcxt.c:969\n#2 0x000000000082aea2 in makeBufFileCommon (nfiles=1) at buffile.c:119\n#3 0x000000000082af14 in makeBufFile (firstfile=34029) at buffile.c:138\n#4 0x000000000082b09f in BufFileCreateTemp (interXact=false) at buffile.c:201\n#5 0x00000000006bda31 in ExecHashJoinSaveTuple (tuple=0x86069b8, hashvalue=234960700, fileptr=0x8616a30) at nodeHashjoin.c:1220\n#6 0x00000000006b80c0 in ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:1004\n(More stack frames follow...)\n(gdb) cont\nContinuing.\n\nBreakpoint 4, AllocSetAlloc (context=0x12a3c90, size=4) at aset.c:719\n719 AllocSet set = (AllocSet) context;\n(gdb) bt 7\n#0 AllocSetAlloc (context=0x12a3c90, size=4) at aset.c:719\n#1 0x00000000009f2e47 in palloc (size=4) at mcxt.c:938\n#2 0x000000000082af22 in makeBufFile (firstfile=34029) at buffile.c:140\n#3 0x000000000082b09f in BufFileCreateTemp (interXact=false) at buffile.c:201\n#4 0x00000000006bda31 in ExecHashJoinSaveTuple (tuple=0x86069b8, hashvalue=234960700, fileptr=0x8616a30) at nodeHashjoin.c:1220\n#5 0x00000000006b80c0 in ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:1004\n#6 0x00000000006b93b1 in ExecHashTableInsert (hashtable=0x12d0818, slot=0x12bcf98, hashvalue=234960700) at nodeHash.c:1655\n(More stack frames follow...)\n(gdb) cont\nContinuing.\n\nBreakpoint 4, AllocSetAlloc (context=0x12a3c90, size=375) at aset.c:719\n719 AllocSet set = (AllocSet) context;\n(gdb) bt 7\n#0 AllocSetAlloc (context=0x12a3c90, size=375) at aset.c:719\n#1 0x00000000009f2e47 in palloc (size=375) at mcxt.c:938\n#2 0x00000000006bdbec in ExecHashJoinGetSavedTuple (hjstate=0x12a4340, file=0x13ca418, hashvalue=0x7fff56a0da54, tupleSlot=0x12bcf98)\n at nodeHashjoin.c:1277\n#3 0x00000000006bd61f in ExecHashJoinNewBatch (hjstate=0x12a4340) at nodeHashjoin.c:1042\n#4 0x00000000006bc999 in ExecHashJoinImpl (pstate=0x12a4340, parallel=false) at nodeHashjoin.c:539\n#5 0x00000000006bca23 in ExecHashJoin (pstate=0x12a4340) at nodeHashjoin.c:565\n#6 0x00000000006a191f in ExecProcNodeInstr (node=0x12a4340) at execProcnode.c:461\n(More stack frames follow...)\n\nNow I am adding a breakpoint in AllocSetFree and I see that 
this gets \ncalled right after AllocSetAlloc with the same context, and presumably \nthe pointer previously allocated.\n\n(gdb) finish\nRun till exit from #0 AllocSetAlloc (context=0x12a3c90, size=375) at aset.c:719\n0x00000000009f2e47 in palloc (size=375) at mcxt.c:938\n938 ret = context->methods->alloc(context, size);\nValue returned is $1 = (void *) 0x11bd858\n(gdb) cont\nContinuing.\n\nBreakpoint 5, AllocSetFree (context=0x12a3c90, pointer=0x11bf748) at aset.c:992\n992 AllocSet set = (AllocSet) context;\n(gdb) cont\nContinuing.\n\nBreakpoint 4, AllocSetAlloc (context=0x12a3c90, size=375) at aset.c:719\n719 AllocSet set = (AllocSet) context;\n(gdb) finish\nRun till exit from #0 AllocSetAlloc (context=0x12a3c90, size=375) at aset.c:719\n0x00000000009f2e47 in palloc (size=375) at mcxt.c:938\n938 ret = context->methods->alloc(context, size);\nValue returned is $2 = (void *) 0x11bf748\n(gdb) cont\nContinuing.\n\nBreakpoint 5, AllocSetFree (context=0x12a3c90, pointer=0x11bd858) at aset.c:992\n992 AllocSet set = (AllocSet) context;\n\nSee, now that pointer allocated before is being freed. I can already see \nhow I would write a memory leak detection tool. I would keep a cache of \na fixed size, say one page, of recently allocated pointers, and when \nfree is called, it would remove the pointer from the cache. Then I would \nonly log the allocated pointer once it has to be evicted from the cache \nbecause another one needs to be added, so I don't fill my log file with \na bunch of memory allocations that get freed relatively quickly.\n\nThis is very confused. I just stepped over\n\nBreakpoint 2, ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:904\n904 printf(\"Hashjoin %p: increasing nbatch to %d because space = %zu\\n\",\n(gdb) next\n908 oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n(gdb) call MemoryContextStats(TopPortalContext)\n\nand checked my log file and there was nothing before the call \nMemoryContextStats(TopPortalContext) so I don't understand where this \nprintf stuff is ending up. Clearly *some* of it is in the log, but none \nof that \"increasing nbatch\" stuff is. I think there is something wrong \nwith that HJDEBUG stuff. 
Oops, my calling that MemoryContextStats really \nseems to help force this buffer to be flushed, because now I got more:\n\nHashjoin 0x118e4c8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x118e4f8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1195278: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x119b918: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x1195278: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x119b918: initial nbatch = 16, nbuckets = 65536\n...\nHashjoin 0x13a8e68: initial nbatch = 16, nbuckets = 8192\nHashjoin 0x13a8e68: increasing nbatch to 32 because space = 4128933\nHashjoin 0x13a8e68: freed 148 of 10584 tuples, space now 4071106\nHashjoin 0x13a8e68: increasing nbatch to 64 because space = 4128826\nHashjoin 0x13a8e68: freed 544 of 10584 tuples, space now 3916296\nHashjoin 0x13a8e68: increasing nbatch to 128 because space = 4128846\nHashjoin 0x13a8e68: freed 10419 of 10585 tuples, space now 65570\nHashjoin 0x13a8e68: increasing nbatch to 256 because space = 4128829\nHashjoin 0x13a8e68: freed 10308 of 10734 tuples, space now 161815\nHashjoin 0x13a8e68: increasing nbatch to 512 because space = 4128908\nHashjoin 0x13a8e68: freed 398 of 10379 tuples, space now 3977787\nHashjoin 0x13a8e68: increasing nbatch to 1024 because space = 4129008\nHashjoin 0x13a8e68: freed 296 of 10360 tuples, space now 4013423\nHashjoin 0x13a8e68: increasing nbatch to 2048 because space = 4129133\nHashjoin 0x13a8e68: freed 154 of 10354 tuples, space now 4068786\nHashjoin 0x13a8e68: increasing nbatch to 4096 because space = 4129035\nHashjoin 0x13a8e68: freed 10167 of 10351 tuples, space now 72849\nHashjoin 0x242c9b0: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x2443ee0: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x2443aa0: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x2443440: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x2443330: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x13a8e68: increasing nbatch to 8192 because space = 4128997\nHashjoin 0x12d0818: freed 10555 of 10560 tuples, space now 1983\nHashjoin 0x12d0818: increasing nbatch to 16384 because space = 4128957\nHashjoin 0x12d0818: freed 10697 of 10764 tuples, space now 25956TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n...\n\nAnd you see here clearly that there is a problem with this printf stdout \nbuffer.\n\nNow finally I am at this stage of the breakpoint firing rapidly\n\n(gdb) cont 100\nWill ignore next 99 crossings of breakpoint 1. Continuing.\n\nBreakpoint 1, ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:884\n884 int oldnbatch = hashtable->nbatch;\n(gdb) bt 7\n#0 ExecHashIncreaseNumBatches (hashtable=0x12d0818) at nodeHash.c:884\n#1 0x00000000006b93b1 in ExecHashTableInsert (hashtable=0x12d0818, slot=0x12bcf98, hashvalue=2161368) at nodeHash.c:1655\n#2 0x00000000006bd600 in ExecHashJoinNewBatch (hjstate=0x12a4340) at nodeHashjoin.c:1051\n#3 0x00000000006bc999 in ExecHashJoinImpl (pstate=0x12a4340, parallel=false) at nodeHashjoin.c:539\n#4 0x00000000006bca23 in ExecHashJoin (pstate=0x12a4340) at nodeHashjoin.c:565\n#5 0x00000000006a191f in ExecProcNodeInstr (node=0x12a4340) at execProcnode.c:461\n#6 0x00000000006ceaad in ExecProcNode (node=0x12a4340) at ../../../src/include/executor/executor.h:247\n(More stack frames follow...)\n\nso it's not a red herring after all. But nothing new in the logfile from \nthat HJDEBUG. 
Trying to flush stdout with this call \nMemoryContextStats(TopPortalContext)\n\n...\n ExecutorState: 782712360 total in 92954 blocks; 3626888 free (3126 chunks); 779085472 used\n\nand nothing heard from HJDEBUG. So you really can't rely on that HJDEBUG \nstuff. It's not working.\n\nI will summarize that the memory problem with the rapid firing on \nExecHashIncreaseNumBatches is still occurring, confirmed as I reported \nearlier, it wasn't some red herring but that the HJDEBUG stuff doesn't \nwork. Also, that there might be benefit in creating a little resident \nmemory leak detector that keeps track of a single page of AllocSetAlloc \npointers cancelled out by matching AllocSetFree and reporting the \noverflow with a quick little stack trace.\n\nOn 4/20/2019 6:53, Justin Pryzby wrote:\n>> The only problem is that I can't test that this actually would trigger the\n>> memory problem, because I can't force the plan to use the right join, it\n>> always reverts to the left join hashing the tmp_q:\n> I think the table on the \"OUTER\" side is the one which needs to be iterated\n> over (not hashed), in order to return each of its rows even if there are no\n> join partners in the other table. In your original query, the small table was\n> being hashed and the large table iterated; maybe that's whats important.\n\nMay be so. Trying to wrap my head around the RIGHT vs. LEFT outer join \nand why there even is a difference though.\n\n -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197)\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\n -> Hash Right Join (cost=1341541.37..2612134.36 rows=13 width=341)\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\n ... from the TINY table tmp_q, the estimate of 13 rows ain't bad\n -> Hash (cost=2908913.87..2908913.87 rows=34619 width=930)\n -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 width=930)\n .... from the HUGE tmp_r table, with 100 million rows (estimate of 34619 is grossly wrong, but how could it know? ...\n\nNow in a right join, we include all rows from the right table, and only \nthose from the left table that match the join key. I wonder why not \ntransform all of those to left joins then?\n\nWhy are we not hashing only the \"optional\" side?\n\nThe plan here seems to tell me without a doubt that it is hashing the \nbig table, 100 million rows get hashed into the hash join and then we \niterate over the tiny table and then ... now my mind boggles and I just \ndon't know why there are right joins at all.\n\nBut I see the plan is running the Hash index of 100 million rows.\n\n>> which is of course much better, but when tmp_q and tmp_r are the results of\n>> complex stuff that the planner can't estimate, then it gets it wrong, and\n>> then the issue gets triggered because we are hashing on the big tmp_r, not\n>> tmp_q.\n> I was able to get something maybe promising ? 
\"Batches: 65536 (originally 1)\"\n>\n> I didn't get \"Out of memory\" error yet, but did crash the server with this one:\n> postgres=# explain analyze WITH v AS( SELECT * FROM generate_series(1,99999999)i WHERE i%10<10 AND i%11<11 AND i%12<12 AND i%13<13 AND i%14<14 AND i%15<15 AND i%16<16 AND i%17<17 AND i%18<18 AND i%19<19 AND i%20<20 AND i%21<21 ) SELECT * FROM generate_series(1,99)k JOIN v ON k=i ;\n>\n> Note, on pg12dev this needs to be \"with v AS MATERIALIZED\".\n>\n> postgres=# SET work_mem='128kB';SET client_min_messages =log;SET log_statement_stats=on;explain(analyze,timing off) WITH v AS( SELECT * FROM generate_series(1,999999)i WHERE i%10<10 AND i%11<11 AND i%12<12 AND i%13<13 AND i%14<14 AND i%15<15 AND i%16<16 AND i%17<17 AND i%18<18 AND i%19<19 AND i%20<20 AND i%21<21 ) SELECT * FROM generate_series(1,99)k JOIN v ON k=i ;\n> Hash Join (cost=70.04..83.84 rows=5 width=8) (actual rows=99 loops=1)\n> Hash Cond: (k.k = v.i)\n> CTE v\n> -> Function Scan on generate_series i (cost=0.00..70.00 rows=1 width=4) (actual rows=999999 loops=1)\n> Filter: (((i % 10) < 10) AND ((i % 11) < 11) AND ((i % 12) < 12) AND ((i % 13) < 13) AND ((i % 14) < 14) AND ((i % 15) < 15) AND ((i % 16) < 16) AND ((i % 17) < 17) AND ((i % 18) < 18) AND ((i % 19) < 19) AND ((i % 20) < 20) AND ((i % 21) < 21))\n> -> Function Scan on generate_series k (cost=0.00..10.00 rows=1000 width=4) (actual rows=99 loops=1)\n> -> Hash (cost=0.02..0.02 rows=1 width=4) (actual rows=999999 loops=1)\n> Buckets: 4096 (originally 1024) Batches: 512 (originally 1) Memory Usage: 101kB\n> -> CTE Scan on v (cost=0.00..0.02 rows=1 width=4) (actual rows=999999 loops=1)\n\nYes I thought that with CTEs and functions one might be able to generate \na test case, but still not seing how you can trick the planner into this \npeculiar jointype JOIN_RIGHT and whether that is requisite for \ntriggering the problem.\n\nFinally, I have tried to make a pstate pretty printer in explain.c:\n\nvoid\nDumpPlanState(PlanState *pstate) {\n    ExplainState *es = NewExplainState();\n    ExplainNode(pstate, NIL, NULL, NULL, es);\n    puts(es->str->data);\n    pfree(es->str->data);\n}\n\nbut that didn't work, because unfortunately that ExplainNode function is \ndestructive. It would be so nice to refactor this explain code such that \nthere would be a completely conservative function that simply dumps the \npresent pstate with all the information about its estimate and actual \nsituation, how many iterations it has already accomplished, how many it \nestimates to still have to do, whether its original estimate panned out \nor not, etc. This would be so tremendously useful for runtime debugging \nof queries. I think the utility of this can hardly be overstated. I mean \neven for end-user applications of some data warehouse, where you could \nprobe a long running query every 5 seconds as to where the execution is. \nMan, I could not think of any more low hanging fruit useful feature. I \nam sure that if PostgreSQL was originally written in Java, this feature \nwould naturally exist already.\n\nregards and Happy Easter,\n-Gunther",
"msg_date": "Sat, 20 Apr 2019 16:00:18 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
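Two footnotes to the message above. The missing HJDEBUG lines are consistent with the fact that those messages are emitted with plain printf(): once the server's stdout is redirected to a log file it becomes block-buffered, so the lines only appear when something forces a flush, which matches the output suddenly showing up after the MemoryContextStats call. And the allocation-tracking cache Gunther sketches in prose can be written down in a few lines; the names below are made up for illustration, this is not an existing PostgreSQL facility:

    #include <stdio.h>

    #define TRACK_SLOTS 512                 /* roughly "one page" of pointers */

    static void *tracked[TRACK_SLOTS];
    static int   next_victim;

    /* Remember a freshly allocated pointer. If the slot being recycled still
     * holds an older pointer, that allocation was not freed while it sat in
     * the cache, so report it as a leak suspect. */
    static void
    track_alloc(void *ptr)
    {
        if (tracked[next_victim] != NULL)
            fprintf(stderr, "leak suspect: %p\n", tracked[next_victim]);
        tracked[next_victim] = ptr;
        next_victim = (next_victim + 1) % TRACK_SLOTS;
    }

    /* Forget a pointer again when it is freed before being evicted. */
    static void
    track_free(void *ptr)
    {
        for (int i = 0; i < TRACK_SLOTS; i++)
        {
            if (tracked[i] == ptr)
            {
                tracked[i] = NULL;
                return;
            }
        }
    }

Hooked into AllocSetAlloc and AllocSetFree under a debugging #ifdef, this would log only the allocations that stay live long enough to be evicted, which is the filtering Gunther is after.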
{
"msg_contents": "On Fri, Apr 19, 2019 at 11:34:54PM -0400, Gunther wrote:\n>\n> ...\n>\n>It would be so nice if there was a way to force a specific plan for \n>purposes of the testing.� I tried giving false data in pg_class \n>reltuples and relpages:\n>\n>foo=# analyze tmp_q;\n>ANALYZE\n>foo=# analyze tmp_r;\n>ANALYZE\n>foo=# select relname, relpages, reltuples from pg_class where relname in ('tmp_q', 'tmp_r');\n> relname | relpages | reltuples\n>---------+----------+-------------\n> tmp_r | 5505039 | 1.13467e+08\n> tmp_q | 7 | 236\n>(2 rows)\n>\n>foo=# update pg_class set (relpages, reltuples) = (5505039, 1.13467e+08) where relname = 'tmp_q';\n>UPDATE 1\n>foo=# update pg_class set (relpages, reltuples) = (7, 236) where relname = 'tmp_r';\n>UPDATE 1\n>\n>but that didn't help. Somehow the planner outsmarts every such trick, \n>so I can't get it to follow my right outer join plan where the big \n>table is hashed.� I am sure y'all know some way to force it.\n>\n\nThat does not work, because the planner does not actually use these values\ndirectly - it only computes the density from them, and then multiplies\nthat to the current number of pages in the file. That behaves much nicer\nwhen the table grows/shrinks between refreshes of the pg_class values.\n\nSo what you need to do is tweak these values to skew the density in a way\nthat then results in the desired esimate when multiplied with the actual\nnumber of pages. For me, this did the trick:\n\n update pg_class set (relpages, reltuples) = (1000000, 1)\n where relname = 'tmp_r';\n\n update pg_class set (relpages, reltuples) = (1, 1000000)\n where relname = 'tmp_q';\n\nafter which I get a plan like this:\n\n Hash Right Join\n Hash Cond: (...)\n -> Seq Scan on tmp_q q\n -> Hash\n -> Seq Scan on tmp_r r\n\nAs for the issue, I have a theory that I think would explain the issues.\nIt is related to the number of batches, as others speculated over here.\nIt's not a memory leak, though, it's just that each batch requires a lot\nof extra memory and we don't account for that.\n\nThe trouble is - each batch is represented by BufFile, which is a whopping\n8272 bytes, because it includes PGAlignedBlock. Now, with 131072 batches,\nthat's a nice 1GB of memory right there. And we don't actually account for\nthis memory in hashjoin code, so it's not counted against work_mem and we\njust increase the number of batches.\n\nAttached are two patches, that should help us to confirm that's actually\nwhat's happening when running the query on actual data. The first patch\nmoves the BufFile stuff into a separate memory context, to make it more\nobvious where the memory went. It also adds a buch of logging into the\nExecHashIncreaseNumBatches() function.\n\nThe second patch makes sure all the BufFiles are allocated right when\nincreasing the number of batches - otherwise we allocate them only when we\nactually find a row for that batch, and I suspect the sample data shared\non this thread are somewhat correlated (I see long runs of the same UUID\nvalue). That might slow down the memory growth. Of course, the real data\nprobably don't have such correlation, resulting in faster failures.\n\nWith the patch, I see stuff like this with 256k batches:\n\n ExecutorState: 65536 total in 4 blocks; 28136 free (4 chunks); 37400 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n hash batch files: 4404002656 total in 524302 blocks; 8387928 free (20 chunks); 4395614728 used\n\nso it's conceivable it's the root cause.\n\nAs for a fix, I'm not sure. 
I'm pretty sure we need to consider the amount\nof memory for BufFile(s) when increasing the number of batches. But we\ncan't just stop incrementing the batches, because that would mean the\ncurrent batch may easily get bigger than work_mem :-(\n\nI think we might cap the number of batches kept in memory, and at some\npoint start spilling data into an \"overflow batch.\" So for example we'd\nallow 32k batches, and then instead of increasing nbatch to 64k, we'd\ncreate a single \"overflow batch\" representing batches 32k - 64k. After\nprocessing the first 32k batches, we'd close the files and reuse the\nmemory for the next 32k batches. We'd read the overflow batch, split it\ninto the 32k batches, and just process them as usual. Of course, there\nmight be multiple rounds of this, for example we might end up with 32k\nconcurrent batches but 128k virtual ones, which means we'd do 4 rounds of\nthis dance.\n\nIt's a bit inefficient, but situations like this should be rather rare,\nand it's more graceful than just crashing with OOM.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 20 Apr 2019 22:01:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
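The batch arithmetic described above is easy to check with a small standalone C program. The 8272-byte BufFile size and the two-files-per-batch factor (inner plus outer side) are assumptions taken from figures quoted in this thread, not values read from PostgreSQL headers, so treat the sketch as illustrative only.

```c
/*
 * Rough, standalone sketch of the arithmetic discussed above: how much
 * memory the per-batch BufFile structs consume as the number of hash-join
 * batches doubles.  The 8272-byte struct size and the "two files per
 * batch" factor are taken from figures quoted in this thread, not from
 * PostgreSQL headers, so they are assumptions for illustration.
 */
#include <stdio.h>

int main(void)
{
    const double buffile_bytes = 8272.0;              /* assumed sizeof(BufFile) */
    const double files_per_batch = 2.0;               /* inner + outer side */
    const double work_mem_bytes = 4.0 * 1024 * 1024;  /* work_mem = 4MB */

    for (long nbatch = 2; nbatch <= 131072; nbatch *= 2)
    {
        double overhead = nbatch * files_per_batch * buffile_bytes;

        printf("nbatch = %6ld  BufFile overhead = %8.1f MB  (%.0fx work_mem)\n",
               nbatch, overhead / (1024 * 1024), overhead / work_mem_bytes);
    }
    return 0;
}
```

Doubling the batch count doubles this overhead, so once the per-batch files cost more than the hash table they are supposed to shrink, further doubling only makes the memory situation worse.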
{
"msg_contents": "On Sat, Apr 20, 2019 at 02:30:09PM -0500, Justin Pryzby wrote:\n>On Sun, Apr 14, 2019 at 11:24:59PM -0400, Tom Lane wrote:\n>> Gunther <[email protected]> writes:\n>> > ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244 chunks); 2230341056 used\n>>\n>> Oooh, that looks like a memory leak right enough. The ExecutorState\n>> should not get that big for any reasonable query.\n>\n>On Tue, Apr 16, 2019 at 11:30:19AM -0400, Tom Lane wrote:\n>> Hmm ... this matches up with a vague thought I had that for some reason\n>> the hash join might be spawning a huge number of separate batches.\n>> Each batch would have a couple of files with associated in-memory\n>> state including an 8K I/O buffer, so you could account for the\n>\n>On Tue, Apr 16, 2019 at 10:24:53PM -0400, Gunther wrote:\n>> -> Hash (cost=2861845.87..2861845.87 rows=34619 width=74) (actual time=199792.446..199792.446 rows=113478127 loops=1)\n>> Buckets: 65536 (originally 65536) Batches: 131072 (originally 2) Memory Usage: 189207kB\n>\n>Is it significant that there are ~2x as many ExecutorState blocks as there are\n>batches ? 266261/131072 => 2.03...\n>\n\nIMO that confirms this is the issue with BufFile I just described, because \nthe struct is >8K, so it's allocated as a separate block (it exceeds the \nthreshold in AllocSet). And we have two BufFile(s) for each batch, because\nwe need to batch both the inner and outer relations.\n\n>If there was 1 blocks leaked when batch=2, and 2 blocks leaked when batch=4,\n>and 4 blocks leaked when batch=131072, then when batch=16, there'd be 64k\n>leaked blocks, and 131072 total blocks.\n>\n>I'm guessing Tom probably already thought of this, but:\n>2230341056/266261 => ~8376\n\nWell, the BufFile is 8272 on my system, so that's pretty close ;-)\n\n>which is pretty close to the 8kB I/O buffer you were talking about (if the\n>number of same-sized buffers much greater than other allocations).\n>\n>If Tom thinks (as I understand) that the issue is *not* a memory leak, but out\n>of control increasing of nbatches, and failure to account for their size...then\n>this patch might help.\n>\n>The number of batches is increased to avoid exceeding work_mem. With very low\n>work_mem (or very larger number of tuples hashed), it'll try to use a large\n>number of batches. At some point the memory used by BatchFiles structure\n>(increasing by powers of two) itself exceeds work_mem.\n>\n>With larger work_mem, there's less need for more batches. So the number of\n>batches used for small work_mem needs to be constrained, either based on\n>work_mem, or at all.\n>\n>With my patch, the number of batches is nonlinear WRT work_mem, and reaches a\n>maximum for moderately small work_mem. The goal is to choose the optimal\n>number of batches to minimize the degree to which work_mem is exceeded.\n>\n\nYeah. The patch might be enough for debugging, but it's clearly not\nsomething we could adopt as is, because we increase the number of batches\nfor a reason - we need to do that to keep the amount of memory needed for\nthe hash table contents (i.e. rows) below work_mem. If you just cap the\nnumber of batches, you'll keep the amount of memory for BufFile under\ncontrol, but the hash table may exceed work_mem.\n\nConsidering how rare this issue likely is, we need to be looking for a\nsolution that does not break the common case.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 20 Apr 2019 22:11:57 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
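A quick way to sanity-check the two ratios computed above; the constants are copied from the memory context dump and EXPLAIN output quoted in this thread, and the expected values (about two blocks per batch, about 8272 bytes per block) are the ones Tomas confirms in the message above.

```c
/* Quick check of the two ratios discussed above, using the figures
 * quoted from the memory context dump and the EXPLAIN output. */
#include <stdio.h>

int main(void)
{
    const double blocks = 266261;                 /* ExecutorState blocks from the dump */
    const double batches = 131072;                /* Batches reported by EXPLAIN */
    const double executor_bytes = 2230341056.0;   /* "used" bytes in ExecutorState */

    printf("blocks per batch : %.2f (expect ~2: one inner + one outer BufFile)\n",
           blocks / batches);
    printf("bytes per block  : %.0f (close to the ~8272-byte BufFile struct)\n",
           executor_bytes / blocks);
    return 0;
}
```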
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Considering how rare this issue likely is, we need to be looking for a\n> solution that does not break the common case.\n\nAgreed. What I think we need to focus on next is why the code keeps\nincreasing the number of batches. It seems like there must be an undue\namount of data all falling into the same bucket ... but if it were simply\na matter of a lot of duplicate hash keys, the growEnabled shutoff\nheuristic ought to trigger.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:26:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:\n>Tomas Vondra <[email protected]> writes:\n>> Considering how rare this issue likely is, we need to be looking for a\n>> solution that does not break the common case.\n>\n>Agreed. What I think we need to focus on next is why the code keeps\n>increasing the number of batches. It seems like there must be an undue\n>amount of data all falling into the same bucket ... but if it were simply\n>a matter of a lot of duplicate hash keys, the growEnabled shutoff\n>heuristic ought to trigger.\n>\n\nI think it's really a matter of underestimate, which convinces the planner\nto hash the larger table. In this case, the table is 42GB, so it's\npossible it actually works as expected. With work_mem = 4MB I've seen 32k\nbatches, and that's not that far off, I'd day. Maybe there are more common\nvalues, but it does not seem like a very contrived data set.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 20 Apr 2019 22:36:50 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
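A back-of-the-envelope version of the batch estimate being discussed, as a standalone sketch: divide the expected size of the build side by work_mem and round up to a power of two. This is a deliberate simplification (the real ExecChooseHashTableSize also accounts for per-tuple and bucket overhead), which is one reason the thread reports roughly 32k batches where the naive formula gives 16k.

```c
/*
 * Naive model of the batch estimate: build-side bytes divided by work_mem,
 * rounded up to a power of two.  Simplified for illustration only; the
 * figures are the ~42GB table and work_mem = 4MB mentioned above.
 */
#include <stdio.h>

int main(void)
{
    double inner_bytes = 42.0 * 1024 * 1024 * 1024;   /* ~42GB build side */
    double work_mem    = 4.0 * 1024 * 1024;           /* work_mem = 4MB  */

    long nbatch = 1;
    while (nbatch * work_mem < inner_bytes)
        nbatch *= 2;                                  /* round up to 2^n */

    printf("estimated batches: %ld\n", nbatch);       /* prints 16384 */
    return 0;
}
```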
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> I think it's really a matter of underestimate, which convinces the planner\n> to hash the larger table. In this case, the table is 42GB, so it's\n> possible it actually works as expected. With work_mem = 4MB I've seen 32k\n> batches, and that's not that far off, I'd day. Maybe there are more common\n> values, but it does not seem like a very contrived data set.\n\nMaybe we just need to account for the per-batch buffers while estimating\nthe amount of memory used during planning. That would force this case\ninto a mergejoin instead, given that work_mem is set so small.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:46:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Gunther <[email protected]> writes:\n> and checked my log file and there was nothing before the call \n> MemoryContextStats(TopPortalContext) so I don't understand where this \n> printf stuff is ending up.\n\nIt's going to stdout, which is likely block-buffered whereas stderr\nis line-buffered, so data from the latter will show up in your log\nfile much sooner. You might consider adding something to startup\nto switch stdout to line buffering.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:47:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
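Tom's buffering point can be reproduced with a few lines of plain C. This is a generic illustration of setvbuf(), not a patch to PostgreSQL's startup code.

```c
/*
 * Minimal illustration of the buffering point above: stdout is typically
 * block-buffered when redirected to a file, so printf() output can lag far
 * behind fprintf(stderr, ...).  Forcing line buffering with setvbuf() makes
 * both streams appear promptly and in order.
 */
#include <stdio.h>

int main(void)
{
    /* Must be called before any other output on the stream. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);   /* line-buffered stdout */

    printf("debug: chunk allocated\n");              /* now flushed per line */
    fprintf(stderr, "error: stderr is line-buffered already\n");
    return 0;
}
```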
{
"msg_contents": "On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:\n>Tomas Vondra <[email protected]> writes:\n>> I think it's really a matter of underestimate, which convinces the planner\n>> to hash the larger table. In this case, the table is 42GB, so it's\n>> possible it actually works as expected. With work_mem = 4MB I've seen 32k\n>> batches, and that's not that far off, I'd day. Maybe there are more common\n>> values, but it does not seem like a very contrived data set.\n>\n>Maybe we just need to account for the per-batch buffers while estimating\n>the amount of memory used during planning. That would force this case\n>into a mergejoin instead, given that work_mem is set so small.\n>\n\nHow would that solve the issue of underestimates like this one?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 20 Apr 2019 22:53:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 10:36:50PM +0200, Tomas Vondra wrote:\n>On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:\n>>Tomas Vondra <[email protected]> writes:\n>>>Considering how rare this issue likely is, we need to be looking for a\n>>>solution that does not break the common case.\n>>\n>>Agreed. What I think we need to focus on next is why the code keeps\n>>increasing the number of batches. It seems like there must be an undue\n>>amount of data all falling into the same bucket ... but if it were simply\n>>a matter of a lot of duplicate hash keys, the growEnabled shutoff\n>>heuristic ought to trigger.\n>>\n>\n>I think it's really a matter of underestimate, which convinces the planner\n>to hash the larger table. In this case, the table is 42GB, so it's\n>possible it actually works as expected. With work_mem = 4MB I've seen 32k\n>batches, and that's not that far off, I'd day. Maybe there are more common\n>values, but it does not seem like a very contrived data set.\n>\n\nActually, I might have spoken too soon. I've dne some stats on the sample\ndata. There are 113478127 rows in total, and while most UUIDs are unique,\nthere are UUIDs that represent ~10% of the data. So maybe there really is\nsomething broken in disabling the growth.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 20 Apr 2019 23:13:20 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n> > I think it's really a matter of underestimate, which convinces the planner\n> > to hash the larger table. In this case, the table is 42GB, so it's\n> > possible it actually works as expected. With work_mem = 4MB I've seen 32k\n> > batches, and that's not that far off, I'd day. Maybe there are more common\n> > values, but it does not seem like a very contrived data set.\n> \n> Maybe we just need to account for the per-batch buffers while estimating\n> the amount of memory used during planning. That would force this case\n> into a mergejoin instead, given that work_mem is set so small.\n\nDo you mean by adding disable_cost if work_mem is so small that it's estimated\nto be exceeded ?\n\nJustin\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:45:41 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:\n>> Maybe we just need to account for the per-batch buffers while estimating\n>> the amount of memory used during planning. That would force this case\n>> into a mergejoin instead, given that work_mem is set so small.\n\n> Do you mean by adding disable_cost if work_mem is so small that it's estimated\n> to be exceeded ?\n\nNo, my point is that ExecChooseHashTableSize fails to consider the\nI/O buffers at all while estimating hash table size. It's not\nimmediately obvious how to factor that in, but we should.\n\nIf Tomas is right that there's also an underestimate of the number\nof rows here, that might not solve Gunther's immediate problem; but\nit seems like a clear costing oversight.\n\nThere's also the angle that the runtime code acts as though increasing\nthe number of batches is free, while it clearly isn't when you think\nabout the I/O buffers. So at some point we should probably stop\nincreasing the number of batches on the grounds of needing too many\nbuffers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2019 18:20:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
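A toy model of the accounting change Tom is suggesting can be sketched in isolation: stop doubling the batch count once the added BufFile buffers would cost more than the halving of the in-memory hash table saves. The function and constant names below are made up for illustration; this is not the real ExecChooseHashTableSize() or ExecHashIncreaseNumBatches() logic.

```c
/*
 * Simplified, standalone model of the accounting being discussed: keep
 * doubling batches only while the expected saving on the in-memory hash
 * table exceeds the extra per-batch file overhead.  Names and the
 * 8272-byte figure are assumptions taken from this thread.
 */
#include <stdbool.h>
#include <stdio.h>

#define BUFFILE_OVERHEAD 8272   /* assumed size of one BufFile struct */

static bool
worth_doubling_batches(long nbatch, long hashtable_bytes, long work_mem_bytes)
{
    /* Doubling batches roughly halves the in-memory hash table ... */
    long saved = hashtable_bytes / 2;
    /* ... but adds one inner and one outer BufFile per new batch. */
    long added = nbatch * 2 * BUFFILE_OVERHEAD;

    return saved > added && hashtable_bytes > work_mem_bytes;
}

int main(void)
{
    long work_mem  = 4L * 1024 * 1024;     /* work_mem = 4MB */
    long hashtable = 64L * 1024 * 1024;    /* pretend the build side is 64MB */

    for (long nbatch = 1; nbatch <= 1 << 20; nbatch *= 2)
    {
        if (!worth_doubling_batches(nbatch, hashtable, work_mem))
        {
            printf("stop at nbatch = %ld (hash table ~%ld kB per batch)\n",
                   nbatch, hashtable / 1024);
            break;
        }
        hashtable /= 2;   /* assume the split actually halves the batch */
    }
    return 0;
}
```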
{
"msg_contents": "On Sat, Apr 20, 2019 at 06:20:15PM -0400, Tom Lane wrote:\n>Justin Pryzby <[email protected]> writes:\n>> On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:\n>>> Maybe we just need to account for the per-batch buffers while estimating\n>>> the amount of memory used during planning. That would force this case\n>>> into a mergejoin instead, given that work_mem is set so small.\n>\n>> Do you mean by adding disable_cost if work_mem is so small that it's estimated\n>> to be exceeded ?\n>\n>No, my point is that ExecChooseHashTableSize fails to consider the\n>I/O buffers at all while estimating hash table size. It's not\n>immediately obvious how to factor that in, but we should.\n>\n>If Tomas is right that there's also an underestimate of the number\n>of rows here, that might not solve Gunther's immediate problem; but\n>it seems like a clear costing oversight.\n>\n>There's also the angle that the runtime code acts as though increasing\n>the number of batches is free, while it clearly isn't when you think\n>about the I/O buffers. So at some point we should probably stop\n>increasing the number of batches on the grounds of needing too many\n>buffers.\n\nYes. I think it might be partially due to the cost being hidden elsewhere.\nThe hashjoin code only really deals with array of pointers to BufFile, not\nwith the BufFiles. And might have looked insignificant for common cases,\nbut clearly for these corner cases it matters quite a bit.\n\nSo yes, ExecChooseHashTableSize() needs to consider this memory and check\nif doubling the number of batches has any chance of actually improving\nthings, because at some point the BufFile memory starts to dominate and\nwould just force us to do more and more batches.\n\nBut I think we should also consider this before even creating the hash\njoin path - see if the expected number of batches has any chance of\nfitting into work_mem, and if not then just not create the path at all.\nJust like we do for hash aggregate, for example. It's not going to solve\ncases like this (with underestimates), but it seems reasonable. Although,\nmaybe we won't actually use such paths, because merge join will win thanks\nto being automatically cheaper? Not sure.\n\nAlso, I wonder if we really need 8kB buffers here. Would it make sense to\nallow smaller buffers in some cases? Say, 1kB. It's not going to save us,\nbut it's still 8x better than now.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 21 Apr 2019 01:46:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/20/2019 16:01, Tomas Vondra wrote:\n> For me, this did the trick:\n> �update pg_class set (relpages, reltuples) = (1000000, 1) where \n> relname = 'tmp_r';\n> �update pg_class set (relpages, reltuples) = (1, 1000000) where \n> relname = 'tmp_q';\n>\nYES! For me too. My EXPLAIN ANALYZE actually succeeded.\n\n Hash Right Join (cost=11009552.27..11377073.28 rows=11 width=4271) (actual time=511580.110..1058354.140 rows=113478386 loops=1)\n Hash Cond: (((q.documentinternalid)::text = (r.documentinternalid)::text) AND ((q.actinternalid)::text = (r.actinternalid)::text))\n -> Seq Scan on tmp_q q (cost=0.00..210021.00 rows=21000000 width=3417) (actual time=1.148..1.387 rows=236 loops=1)\n -> Hash (cost=11009552.11..11009552.11 rows=11 width=928) (actual time=511577.002..511577.002 rows=113478127 loops=1)\n Buckets: 16384 (originally 1024) Batches: 131072 (originally 1) Memory Usage: 679961kB\n -> Seq Scan on tmp_r r (cost=0.00..11009552.11 rows=11 width=928) (actual time=4.077..344558.954 rows=113478127 loops=1)\n Planning Time: 0.725 ms\n Execution Time: 1064128.721 ms\n\nBut it used a lot of resident memory, and now it seems like I actually \nhave a leak! Because after the command returned as shown above, the \nmemory is still allocated:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 7100 postgres 20 0 2164012 1.1g 251364 S 0.0 14.5 23:27.23 postgres: postgres integrator [local] idle\n\nand let's do the memory map dump:\n\n2019-04-20 22:09:52.522 UTC [7100] LOG: duration: 1064132.171 ms statement: explain analyze\n SELECT *\n FROM tmp_q q\n RIGHT OUTER JOIN tmp_r r\n USING(documentInternalId, actInternalId);\nTopMemoryContext: 153312 total in 8 blocks; 48168 free (70 chunks); 105144 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 8192 total in 1 blocks; 6896 free (1 chunks); 1296 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7936 free (1 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1154080 total in 20 blocks; 151784 free (1 chunks); 1002296 used\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_class_tblspc_relfilenode_index\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n ...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 4992 free (6 
chunks); 3200 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (3 chunks); 256 used\nGrand total: 2082624 bytes in 240 blocks; 382760 free (175 chunks); 1699864 used\n\nstrange, it shows no leak here. Now I run this test again, to see if the \nmemory grows further in top? This time I also leave the DISTINCT step in \nthe query. I am trying to hit the out of memory situation. Well, I \nclearly saw memory growing now:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 7100 postgres 20 0 2601900 1.5g 251976 R 97.7 19.9 36:32.23 postgres: postgres integrator [local] EXPLAIN\n\nTopMemoryContext: 2250520 total in 9 blocks; 45384 free (56 chunks); 2205136 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1456 free (0 chunks); 6736 used\n TopTransactionContext: 8192 total in 1 blocks; 7528 free (1 chunks); 664 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 65536 total in 4 blocks; 28664 free (9 chunks); 36872 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 147456 total in 21 blocks; 10400 free (7 chunks); 137056 used:\n ExecutorState: 489605432 total in 57794 blocks; 5522192 free (129776 chunks); 484083240 used\n HashTableContext: 2162800 total in 6 blocks; 64848 free (35 chunks); 2097952 used\n HashBatchContext: 706576176 total in 21503 blocks; 7936 free (0 chunks); 706568240 used\n TupleSort main: 452880 total in 8 blocks; 125880 free (29 chunks); 327000 used\n Caller tuples: 4194304 total in 10 blocks; 452280 free (20 chunks); 3742024 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1154080 total in 20 blocks; 149992 free (1 chunks); 1004088 used\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_class_tblspc_relfilenode_index\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total 
in 1 blocks; 4992 free (6 chunks); 3200 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\nGrand total: 1207458200 bytes in 79595 blocks; 6639272 free (130033 chunks); 1200818928 used\n\nbut Executor state is only 489 MB, so it is in the area of slow but massive growth.\n\n ExecutorState: 489605432 total in 57794 blocks; 5522192 free (129776 chunks); 484083240 used\n HashTableContext: 2162800 total in 6 blocks; 64848 free (35 chunks); 2097952 used\n HashBatchContext: 706576176 total in 21503 blocks; 7936 free (0 chunks); 706568240 used\n TupleSort main: 452880 total in 8 blocks; 125880 free (29 chunks); 327000 used\n Caller tuples: 4194304 total in 10 blocks; 452280 free (20 chunks); 3742024 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n\nnow I see if I can run it to completion anyway, and if there will then \nbe a new bottom of memory. Now the newly allocated memory seems to have \nbeen released, so we stick to the 1.1G baseline we started out with.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 7100 postgres 20 0 2164012 1.1g 251976 D 6.3 14.5 55:26.82 postgres: postgres integrator [local] EXPLAIN\n\non the other hand, the sort step is not yet finished.\n\nAlso, I think while we might have focused in on a peculiar planning \nsituation where a very unfortunate plan is chosen which stresses the \nmemory situation, the real reason for the final out of memory situation \nhas not yet been determined. Remember, I have seen 3 stages in my \noriginal query:\n\n 1. Steady state, sort-merge join active high CPU% memory at or below 100 kB\n 2. Slow massive growth from over 200 kB to 1.5 GB or 1.8 GB\n 3. Explosive growth within a second to over 2.2 GB\n\nIt might well be that my initial query would have succeeded just fine \ndespite the unfortunate plan with the big memory consumption on the \noddly planned hash join, were it not for that third phase of explosive \ngrowth. And we haven't been able to catch this, because it happens too \nquickly.\n\nIt seems to me that some better memory tracing would be necessary. \nLooking at src/backend/utils/memdebug.c, mentions Valgrind. But to me, \nValgrind would be a huge toolbox to just look after one thing. I wonder \nif we could not make a much simpler memory leak debugger tool.� One that \nis fast,� yet doesn't provide too much output to overwhelm the log \ndestination file system (and waste too much time). There are already \nDebug macros there which, if enabled, just create an absolutely crazy \namount of undecipherable log file content, because ALL backend processes \nwould spit out this blabber. 
So I already stopped that by adding a \nvariable that must be set to 1 (using the debugger on exactly one \nprocess for exactly the time desired):\n\nint _alloc_info = 0;\n#ifdef HAVE_ALLOCINFO\n#define AllocFreeInfo(_cxt, _chunk) \\\n if(_alloc_info) \\\n fprintf(stderr, \"AllocFree: %s: %p, %zu\\n\", \\\n (_cxt)->header.name, (_chunk), (_chunk)->size)\n#define AllocAllocInfo(_cxt, _chunk) \\\n if(_alloc_info) \\\n fprintf(stderr, \"AllocAlloc: %s: %p, %zu\\n\", \\\n (_cxt)->header.name, (_chunk), (_chunk)->size)\n#else\n#define AllocFreeInfo(_cxt, _chunk)\n#define AllocAllocInfo(_cxt, _chunk)\n#endif\n\nBut now I am thinking that should be the hook to use a limited cache \nwhere we can cancel out AllocSetAlloc with their AllocSetFree calls that \nfollow relatively rapidly, which apparently is the majority of the log \nchatter created.\n\nThe memory debugger would allocate a single fixed memory chunk like 8 or \n16 kB as a cache per each memory context that is actually traced. In \neach we would record the memory allocation in the shortest possible way. \nWith everything compressed. Instead of pointer, references to the memory \nwe would store whatever memory chunk index, a very brief backtrace would \nbe stored in a compressed form, by instruction pointer ($rip) of the \ncaller and then variable length differences to the $rip of the caller \nnext up. These could even be stored in an index of the 100 most common \ncaller chains to compress this data some more, while minimizing the cost \nin searching. Now each allocated chunk would be kept in this table and \nwhen it fills up, the oldest allocated chunk removed and written to the \nlog file. When freed before being evicted from the cache, the chunk gets \nsilently removed. When a chunk is freed that is no longer in the cache, \nthe free event is recorded in the log. That way only those chunks get \nwritten to the log files that have endured beyond the capacity of the \ncache. Hopefully that would be the chunks most likely involved in the \nmemory leak. Once the log has been created, it can be loaded into \nPostgreSQL table itself, and analyzed to find the chunks that never get \nfreed and from the compressed backtrace figure out where they have been \nallocated.\n\nBTW, my explain analyze is still running. That Sort - Unique step is \ntaking forever on this data.\n\nOK, now I will try the various patches that people sent.\n\n-Gunther",
"msg_date": "Sat, 20 Apr 2019 20:33:46 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
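The bounded tracing cache Gunther sketches in prose above can be illustrated with a minimal standalone program: remember only the most recent allocations, silently cancel an entry when its free arrives quickly, and log only chunks that survive eviction. Backtrace capture and compression are left out; the names and the tiny cache size are invented for illustration and this is not proposed PostgreSQL code.

```c
/*
 * Very small sketch of the "limited cache" allocation tracer described
 * above: keep the last N allocations, drop an entry silently when its
 * free() arrives while it is still cached, and only log chunks that get
 * evicted (i.e. lived longer than the cache).
 */
#include <stdio.h>

#define CACHE_SLOTS 4   /* tiny on purpose, so evictions are visible */

typedef struct
{
    void   *ptr;
    size_t  size;
} TraceSlot;

static TraceSlot cache[CACHE_SLOTS];
static int next_slot = 0;           /* round-robin eviction = oldest first */

static void trace_alloc(void *ptr, size_t size)
{
    TraceSlot *slot = &cache[next_slot];

    if (slot->ptr != NULL)          /* oldest entry falls out of the cache */
        printf("LOG: long-lived chunk %p (%zu bytes)\n", slot->ptr, slot->size);

    slot->ptr = ptr;
    slot->size = size;
    next_slot = (next_slot + 1) % CACHE_SLOTS;
}

static void trace_free(void *ptr)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
    {
        if (cache[i].ptr == ptr)
        {
            cache[i].ptr = NULL;    /* alloc/free pair cancels out silently */
            return;
        }
    }
    printf("LOG: freed chunk %p that had already been evicted\n", ptr);
}

int main(void)
{
    char dummy[16];

    /* Simulate chatter: most chunks are freed right away ... */
    for (int i = 0; i < 8; i++)
    {
        trace_alloc(&dummy[i], 64);
        trace_free(&dummy[i]);
    }
    /* ... while a few survive long enough to be evicted and logged. */
    for (int i = 8; i < 16; i++)
        trace_alloc(&dummy[i], 8272);

    return 0;
}
```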
{
"msg_contents": "On Sat, Apr 20, 2019 at 08:33:46PM -0400, Gunther wrote:\n>On 4/20/2019 16:01, Tomas Vondra wrote:\n>>For me, this did the trick:\n>>�update pg_class set (relpages, reltuples) = (1000000, 1) where \n>>relname = 'tmp_r';\n>>�update pg_class set (relpages, reltuples) = (1, 1000000) where \n>>relname = 'tmp_q';\n>>\n>YES! For me too. My EXPLAIN ANALYZE actually succeeded.\n>\n> Hash Right Join (cost=11009552.27..11377073.28 rows=11 width=4271) (actual time=511580.110..1058354.140 rows=113478386 loops=1)\n> Hash Cond: (((q.documentinternalid)::text = (r.documentinternalid)::text) AND ((q.actinternalid)::text = (r.actinternalid)::text))\n> -> Seq Scan on tmp_q q (cost=0.00..210021.00 rows=21000000 width=3417) (actual time=1.148..1.387 rows=236 loops=1)\n> -> Hash (cost=11009552.11..11009552.11 rows=11 width=928) (actual time=511577.002..511577.002 rows=113478127 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 131072 (originally 1) Memory Usage: 679961kB\n> -> Seq Scan on tmp_r r (cost=0.00..11009552.11 rows=11 width=928) (actual time=4.077..344558.954 rows=113478127 loops=1)\n> Planning Time: 0.725 ms\n> Execution Time: 1064128.721 ms\n>\n>But it used a lot of resident memory, and now it seems like I actually \n>have a leak! Because after the command returned as shown above, the \n>memory is still allocated:\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 7100 postgres 20 0 2164012 1.1g 251364 S 0.0 14.5 23:27.23 postgres: postgres integrator [local] idle\n>\n>and let's do the memory map dump:\n>\n>2019-04-20 22:09:52.522 UTC [7100] LOG: duration: 1064132.171 ms statement: explain analyze\n> SELECT *\n> FROM tmp_q q\n> RIGHT OUTER JOIN tmp_r r\n> USING(documentInternalId, actInternalId);\n>TopMemoryContext: 153312 total in 8 blocks; 48168 free (70 chunks); 105144 used\n> HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n> Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n> TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n> Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n> RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n> MessageContext: 8192 total in 1 blocks; 6896 free (1 chunks); 1296 used\n> Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n> smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n> TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n> Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n> TopPortalContext: 8192 total in 1 blocks; 7936 free (1 chunks); 256 used\n> Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n> CacheMemoryContext: 1154080 total in 20 blocks; 151784 free (1 chunks); 1002296 used\n> index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_class_tblspc_relfilenode_index\n> index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n> index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n> index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n> ...\n> index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n> index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n> WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n> PrivateRefCount: 
8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n> MdSmgr: 8192 total in 1 blocks; 4992 free (6 chunks); 3200 used\n> LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n> Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n> ErrorContext: 8192 total in 1 blocks; 7936 free (3 chunks); 256 used\n>Grand total: 2082624 bytes in 240 blocks; 382760 free (175 chunks); 1699864 used\n>\n>strange, it shows no leak here. Now I run this test again, to see if \n>the memory grows further in top? This time I also leave the DISTINCT \n>step in the query. I am trying to hit the out of memory situation. \n>Well, I clearly saw memory growing now:\n>\n\nUnfortunately, interpreting RES is way more complicated. The trouble is\nPostgreSQL does not get memory from kernel directly, it gets it through\nglibc. So when we do free(), it's not guaranteed kernel gets it.\n\nAlso, I think glibc has multiple ways of getting memory from the kernel.\nIt can either to mmap or sbrk, and AFAIK it's easy to cause \"islands\" that\nmake it impossible to undo sbrk after freeing memory.\n\nMemory leaks in PostgreSQL are usually about allocating memory in the\nwrong context, and so are visible in MemoryContextStats. Permanent leaks\nthat don't show there are quite rare.\n\n>\n>Also, I think while we might have focused in on a peculiar planning \n>situation where a very unfortunate plan is chosen which stresses the \n>memory situation, the real reason for the final out of memory \n>situation has not yet been determined. Remember, I have seen 3 stages \n>in my original query:\n>\n>1. Steady state, sort-merge join active high CPU% memory at or below 100 kB\n>2. Slow massive growth from over 200 kB to 1.5 GB or 1.8 GB\n>3. Explosive growth within a second to over 2.2 GB\n>\n>It might well be that my initial query would have succeeded just fine \n>despite the unfortunate plan with the big memory consumption on the \n>oddly planned hash join, were it not for that third phase of explosive \n>growth. And we haven't been able to catch this, because it happens too \n>quickly.\n>\n>It seems to me that some better memory tracing would be necessary. \n>Looking at src/backend/utils/memdebug.c, mentions Valgrind. But to me, \n>Valgrind would be a huge toolbox to just look after one thing. I \n>wonder if we could not make a much simpler memory leak debugger tool.� \n>One that is fast,� yet doesn't provide too much output to overwhelm \n>the log destination file system (and waste too much time). There are \n>already Debug macros there which, if enabled, just create an \n>absolutely crazy amount of undecipherable log file content, because \n>ALL backend processes would spit out this blabber. 
So I already \n>stopped that by adding a variable that must be set to 1 (using the \n>debugger on exactly one process for exactly the time desired):\n>\n>int _alloc_info = 0;\n>#ifdef HAVE_ALLOCINFO\n>#define AllocFreeInfo(_cxt, _chunk) \\\n> if(_alloc_info) \\\n> fprintf(stderr, \"AllocFree: %s: %p, %zu\\n\", \\\n> (_cxt)->header.name, (_chunk), (_chunk)->size)\n>#define AllocAllocInfo(_cxt, _chunk) \\\n> if(_alloc_info) \\\n> fprintf(stderr, \"AllocAlloc: %s: %p, %zu\\n\", \\\n> (_cxt)->header.name, (_chunk), (_chunk)->size)\n>#else\n>#define AllocFreeInfo(_cxt, _chunk)\n>#define AllocAllocInfo(_cxt, _chunk)\n>#endif\n>\n>But now I am thinking that should be the hook to use a limited cache \n>where we can cancel out AllocSetAlloc with their AllocSetFree calls \n>that follow relatively rapidly, which apparently is the majority of \n>the log chatter created.\n>\n>The memory debugger would allocate a single fixed memory chunk like 8 \n>or 16 kB as a cache per each memory context that is actually traced. \n>In each we would record the memory allocation in the shortest possible \n>way. With everything compressed. Instead of pointer, references to the \n>memory we would store whatever memory chunk index, a very brief \n>backtrace would be stored in a compressed form, by instruction pointer \n>($rip) of the caller and then variable length differences to the $rip \n>of the caller next up. These could even be stored in an index of the \n>100 most common caller chains to compress this data some more, while \n>minimizing the cost in searching. Now each allocated chunk would be \n>kept in this table and when it fills up, the oldest allocated chunk \n>removed and written to the log file. When freed before being evicted \n>from the cache, the chunk gets silently removed. When a chunk is freed \n>that is no longer in the cache, the free event is recorded in the log. \n>That way only those chunks get written to the log files that have \n>endured beyond the capacity of the cache. Hopefully that would be the \n>chunks most likely involved in the memory leak. Once the log has been \n>created, it can be loaded into PostgreSQL table itself, and analyzed \n>to find the chunks that never get freed and from the compressed \n>backtrace figure out where they have been allocated.\n>\n>BTW, my explain analyze is still running. That Sort - Unique step is \n>taking forever on this data.\n>\n>OK, now I will try the various patches that people sent.\n>\n\nMaybe. But before wasting any more time on the memory leak investigation,\nI suggest you first try the patch moving the BufFile allocations to a\nseparate context. That'll either confirm or disprove the theory.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 21 Apr 2019 03:14:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
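The point about RES not shrinking can be demonstrated outside PostgreSQL. The sketch below uses glibc-specific helpers (malloc_stats() and malloc_trim()) purely to illustrate why memory freed back to the allocator may not show up as a drop in top; it is not a suggestion for backend code.

```c
/*
 * Standalone illustration of the point above: memory free()d back to glibc
 * does not necessarily go back to the kernel, so a process's RES can stay
 * high after the allocations are gone.  malloc_trim() asks glibc to hand
 * unused arena memory back.  glibc-specific behaviour, shown only to
 * explain the "top" readings.
 */
#include <stdlib.h>
#include <malloc.h>
#include <stdio.h>

int main(void)
{
    enum { N = 100000 };
    static char *chunks[N];

    for (int i = 0; i < N; i++)
        chunks[i] = malloc(1024);          /* ~100MB of small arena chunks */

    for (int i = 0; i < N; i++)
        free(chunks[i]);                   /* freed, but RES may not shrink */

    malloc_stats();                        /* glibc: shows retained memory */
    malloc_trim(0);                        /* explicitly release to kernel */
    malloc_stats();

    return 0;
}
```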
{
"msg_contents": "On 4/20/2019 21:14, Tomas Vondra wrote:\n> Maybe. But before wasting any more time on the memory leak investigation,\n> I suggest you first try the patch moving the BufFile allocations to a\n> separate context. That'll either confirm or disprove the theory.\n\nOK, fair enough. So, first patch 0001-* applied, recompiled and\n\n2019-04-21 04:08:04.364 UTC [11304] LOG: server process (PID 11313) was terminated by signal 11: Segmentation fault\n2019-04-21 04:08:04.364 UTC [11304] DETAIL: Failed process was running: explain analyze select * from reports.v_BusinessOperation;\n2019-04-21 04:08:04.364 UTC [11304] LOG: terminating any other active server processes\n2019-04-21 04:08:04.368 UTC [11319] FATAL: the database system is in recovery mode\n2019-04-21 04:08:04.368 UTC [11315] WARNING: terminating connection because of crash of another server process\n\nSIGSEGV ... and with the core dump that I have I can tell you where:\n\nCore was generated by `postgres: postgres integrator'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000009f300c in palloc (size=8272) at mcxt.c:936\n936 context->isReset = false;\n(gdb) bt\n#0 0x00000000009f300c in palloc (size=8272) at mcxt.c:936\n#1 0x000000000082b068 in makeBufFileCommon (nfiles=1) at buffile.c:116\n#2 0x000000000082b0f8 in makeBufFile (firstfile=73) at buffile.c:138\n#3 0x000000000082b283 in BufFileCreateTemp (interXact=false) at buffile.c:201\n#4 0x00000000006bdc15 in ExecHashJoinSaveTuple (tuple=0x1c5a468, hashvalue=3834163156, fileptr=0x18a3730) at nodeHashjoin.c:1227\n#5 0x00000000006b9568 in ExecHashTableInsert (hashtable=0x188fb88, slot=0x1877a18, hashvalue=3834163156) at nodeHash.c:1701\n#6 0x00000000006b6c39 in MultiExecPrivateHash (node=0x1862168) at nodeHash.c:186\n#7 0x00000000006b6aec in MultiExecHash (node=0x1862168) at nodeHash.c:114\n#8 0x00000000006a19cc in MultiExecProcNode (node=0x1862168) at execProcnode.c:501\n#9 0x00000000006bc5d2 in ExecHashJoinImpl (pstate=0x17b90e0, parallel=false) at nodeHashjoin.c:290\n...\n(gdb) info frame\nStack level 0, frame at 0x7fffd5d4dc80:\n rip = 0x9f300c in palloc (mcxt.c:936); saved rip = 0x82b068\n called by frame at 0x7fffd5d4dcb0\n source language c.\n Arglist at 0x7fffd5d4dc70, args: size=8272\n Locals at 0x7fffd5d4dc70, Previous frame's sp is 0x7fffd5d4dc80\n Saved registers:\n rbx at 0x7fffd5d4dc60, rbp at 0x7fffd5d4dc70, r12 at 0x7fffd5d4dc68, rip at 0x7fffd5d4dc78\n\nand I have confirmed that this is while working the main T_HashJoin with \njointype JOIN_RIGHT.\n\nSo now I am assuming that perhaps you want both of these patches \napplied. So applied it, and retried and boom, same thing same place.\n\nturns out the MemoryContext is NULL:\n\n(gdb) p context\n$1 = (MemoryContext) 0x0\n\nall patches applied cleanly (with the -p1 option). I see no .rej file, \nbut also no .orig file, not sure why that version of patch didn't create \nthem. But I paid attention and know that there was no error.\n\n-Gunther\n\n\n\n\n\n\n\nOn 4/20/2019 21:14, Tomas Vondra wrote:\n\nMaybe. But\n before wasting any more time on the memory leak investigation,\n \n I suggest you first try the patch moving the BufFile allocations\n to a\n \n separate context. That'll either confirm or disprove the theory.\n \n\nOK, fair enough. 
So, first patch 0001-* applied, recompiled and \n\n2019-04-21 04:08:04.364 UTC [11304] LOG: server process (PID 11313) was terminated by signal 11: Segmentation fault\n2019-04-21 04:08:04.364 UTC [11304] DETAIL: Failed process was running: explain analyze select * from reports.v_BusinessOperation;\n2019-04-21 04:08:04.364 UTC [11304] LOG: terminating any other active server processes\n2019-04-21 04:08:04.368 UTC [11319] FATAL: the database system is in recovery mode\n2019-04-21 04:08:04.368 UTC [11315] WARNING: terminating connection because of crash of another server process\n\nSIGSEGV ... and with the core dump that I have I can tell you\n where:\nCore was generated by `postgres: postgres integrator'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000009f300c in palloc (size=8272) at mcxt.c:936\n936 context->isReset = false;\n(gdb) bt\n#0 0x00000000009f300c in palloc (size=8272) at mcxt.c:936\n#1 0x000000000082b068 in makeBufFileCommon (nfiles=1) at buffile.c:116\n#2 0x000000000082b0f8 in makeBufFile (firstfile=73) at buffile.c:138\n#3 0x000000000082b283 in BufFileCreateTemp (interXact=false) at buffile.c:201\n#4 0x00000000006bdc15 in ExecHashJoinSaveTuple (tuple=0x1c5a468, hashvalue=3834163156, fileptr=0x18a3730) at nodeHashjoin.c:1227\n#5 0x00000000006b9568 in ExecHashTableInsert (hashtable=0x188fb88, slot=0x1877a18, hashvalue=3834163156) at nodeHash.c:1701\n#6 0x00000000006b6c39 in MultiExecPrivateHash (node=0x1862168) at nodeHash.c:186\n#7 0x00000000006b6aec in MultiExecHash (node=0x1862168) at nodeHash.c:114\n#8 0x00000000006a19cc in MultiExecProcNode (node=0x1862168) at execProcnode.c:501\n#9 0x00000000006bc5d2 in ExecHashJoinImpl (pstate=0x17b90e0, parallel=false) at nodeHashjoin.c:290\n...\n(gdb) info frame\nStack level 0, frame at 0x7fffd5d4dc80:\n rip = 0x9f300c in palloc (mcxt.c:936); saved rip = 0x82b068\n called by frame at 0x7fffd5d4dcb0\n source language c.\n Arglist at 0x7fffd5d4dc70, args: size=8272\n Locals at 0x7fffd5d4dc70, Previous frame's sp is 0x7fffd5d4dc80\n Saved registers:\n rbx at 0x7fffd5d4dc60, rbp at 0x7fffd5d4dc70, r12 at 0x7fffd5d4dc68, rip at 0x7fffd5d4dc78\n\n\n and I have confirmed that this is while working the main T_HashJoin\n with jointype JOIN_RIGHT.\nSo now I am assuming that perhaps you want both of these patches\n applied. So applied it, and retried and boom, same thing same\n place.\nturns out the MemoryContext is NULL:\n(gdb) p context\n$1 = (MemoryContext) 0x0\n\nall patches applied cleanly (with the -p1 option). I see no .rej\n file, but also no .orig file, not sure why that version of patch\n didn't create them. But I paid attention and know that there was\n no error. \n\n-Gunther",
"msg_date": "Sun, 21 Apr 2019 01:03:50 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "I� am now running Justin's patch after undoing Tomas' patches and any of \nmy own hacks (which would not have interfered with Tomas' patch)\n\nOn 4/20/2019 15:30, Justin Pryzby wrote:\n> With my patch, the number of batches is nonlinear WRT work_mem, and reaches a\n> maximum for moderately small work_mem. The goal is to choose the optimal\n> number of batches to minimize the degree to which work_mem is exceeded.\n\nNow I seem to be in that slow massive growth phase or maybe still in an \nearlier step, but I can see the top RES behavior different already.� The \nsize lingers around 400 MB.� But then it's growing too, at high CPU%, \ngoes past 700, 800, 900 MB now 1.5 GB, 1.7 GB, 1.8 GB, 1.9 GB, 2.0 GB, \n2.1, and still 98% CPU. 2.4 GB, wow, it has never been that big ... and \nBOOM!\n\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (9 chunks); 1471536 used:\n ExecutorState: 2361536 total in 27 blocks; 1827536 free (3163 chunks); 534000 used\n TupleSort main: 3957712 total in 22 blocks; 246792 free (39 chunks); 3710920 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7336 free (6 chunks); 856 used\n HashBatchContext: 2523874568 total in 76816 blocks; 7936 free (0 chunks); 2523866632 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 
chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6176 free (1 chunks); 2016 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 2538218304 bytes in 77339 blocks; 3075256 free (3372 chunks); 2535143048 used\n2019-04-21 05:27:07.731 UTC [968] ERROR: out of memory\n2019-04-21 05:27:07.731 UTC [968] DETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\n2019-04-21 05:27:07.731 UTC [968] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\n\nso we're ending up with the same problem.\n\nNo cigar. But lots of admiration and gratitude for all your attempts to \npinpoint this.\n\nAlso, again, if anyone (of the trusted people) wants access to hack \ndirectly, I can provide.\n\nregards,\n-Gunther\n\n\n\n\n\n\n\n\nI� am now running Justin's patch after undoing Tomas' patches and\n any of my own hacks (which would not have interfered with Tomas'\n patch)\n\nOn 4/20/2019 15:30, Justin Pryzby\n wrote:\n\n\nWith my patch, the number of batches is nonlinear WRT work_mem, and reaches a\nmaximum for moderately small work_mem. The goal is to choose the optimal\nnumber of batches to minimize the degree to which work_mem is exceeded.\n\n\nNow I seem to be in that slow massive growth phase or maybe still\n in an earlier step, but I can see the top RES behavior different\n already.� The size lingers around 400 MB.� But then it's growing\n too, at high CPU%, goes past 700, 800, 900 MB now 1.5 GB, 1.7 GB,\n 1.8 GB, 1.9 GB, 2.0 GB, 2.1, and still 98% CPU. 2.4 GB, wow, it\n has never been that big ... 
and BOOM!\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (9 chunks); 1471536 used:\n ExecutorState: 2361536 total in 27 blocks; 1827536 free (3163 chunks); 534000 used\n TupleSort main: 3957712 total in 22 blocks; 246792 free (39 chunks); 3710920 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7336 free (6 chunks); 856 used\n HashBatchContext: 2523874568 total in 76816 blocks; 7936 free (0 chunks); 2523866632 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 
8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6176 free (1 chunks); 2016 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 2538218304 bytes in 77339 blocks; 3075256 free (3372 chunks); 2535143048 used\n2019-04-21 05:27:07.731 UTC [968] ERROR: out of memory\n2019-04-21 05:27:07.731 UTC [968] DETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\n2019-04-21 05:27:07.731 UTC [968] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\n\nso we're ending up with the same problem.\nNo cigar. But lots of admiration and gratitude for all your\n attempts to pinpoint this.\nAlso, again, if anyone (of the trusted people) wants access to\n hack directly, I can provide.\nregards,\n -Gunther",
"msg_date": "Sun, 21 Apr 2019 01:31:06 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:\n> On 4/20/2019 21:14, Tomas Vondra wrote:\n> >Maybe. But before wasting any more time on the memory leak investigation,\n> >I suggest you first try the patch moving the BufFile allocations to a\n> >separate context. That'll either confirm or disprove the theory.\n> \n> OK, fair enough. So, first patch 0001-* applied, recompiled and\n> \n> 2019-04-21 04:08:04.364 UTC [11304] LOG: server process (PID 11313) was terminated by signal 11: Segmentation fault\n...\n> turns out the MemoryContext is NULL:\n> \n> (gdb) p context\n> $1 = (MemoryContext) 0x0\n\nI updated Tomas' patch to unconditionally set the context.\n(Note, oldctx vs oldcxt is fairly subtle but I think deliberate?)\n\nJustin",
"msg_date": "Sun, 21 Apr 2019 03:08:22 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "I was able to reproduce in a somewhat contrived way:\n\nsh -c 'ulimit -v 1024000 ; /usr/local/pgsql/bin/postgres -D ./pg12dev5 -cport=1234' &\n\npostgres=# SET work_mem='64kB';SET client_min_messages =debug1;SET log_statement_stats=on;explain(analyze) WITH v AS MATERIALIZED (SELECT * FROM generate_series(1,9999999)i WHERE i%10<10 AND i%11<11 AND i%12<12 AND i%13<13 AND i%14<14 AND i%15<15 AND i%16<16 AND i%17<17 AND i%18<18 AND i%19<19 AND i%20<20 AND i%21<21 ) SELECT * FROM generate_series(1,99)k JOIN v ON k=i;\n\n HashTableContext: 8192 total in 1 blocks; 7696 free (7 chunks); 496 used\n hash batch files: 852065104 total in 101318 blocks; 951896 free (20 chunks); 851113208 used\n HashBatchContext: 73888 total in 4 blocks; 24280 free (6 chunks); 49608 used\n\n2019-04-21 04:11:02.521 CDT [4156] ERROR: out of memory\n2019-04-21 04:11:02.521 CDT [4156] DETAIL: Failed on request of size 8264 in memory context \"hash batch files\".\n\n\n",
"msg_date": "Sun, 21 Apr 2019 05:19:09 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote:\n>On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:\n>> On 4/20/2019 21:14, Tomas Vondra wrote:\n>> >Maybe. But before wasting any more time on the memory leak investigation,\n>> >I suggest you first try the patch moving the BufFile allocations to a\n>> >separate context. That'll either confirm or disprove the theory.\n>>\n>> OK, fair enough. So, first patch 0001-* applied, recompiled and\n>>\n>> 2019-04-21 04:08:04.364 UTC [11304] LOG: server process (PID 11313) was terminated by signal 11: Segmentation fault\n>...\n>> turns out the MemoryContext is NULL:\n>>\n>> (gdb) p context\n>> $1 = (MemoryContext) 0x0\n>\n>I updated Tomas' patch to unconditionally set the context.\n>(Note, oldctx vs oldcxt is fairly subtle but I think deliberate?)\n>\n\nI don't follow - is there a typo confusing oldctx vs. oldcxt? I don't\nthink so, but I might have missed something. (I always struggle with which\nspelling is the right one).\n\nI think the bug is actually such simpler - the memory context was created\nonly in ExecuteIncreaseNumBatches() when starting with nbatch=1. But when\nthe initial nbatch value was higher (i.e. when starting with 2 or more\nbatches), it was left NULL. That was OK for testing with the contrived\ndata set, but it may easily break on other examples.\n\nSo here is an updated patch - hopefully this version works. I don't have\ntime to do much more testing now, though. But it compiles.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 21 Apr 2019 13:46:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
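For readers following the thread, the shape of the fix being described here is roughly the following. This is an illustrative fragment only, not the actual patch; the field name fileCtx is a placeholder, while "HashBatchFiles" is the context name that appears in the memory dumps later in the thread:

    /* In ExecHashTableCreate(): create the BufFile context up front for
     * every hash join, so it can never be left NULL when the join starts
     * out with more than one batch. */
    hashtable->fileCtx = AllocSetContextCreate(CurrentMemoryContext,
                                               "HashBatchFiles",
                                               ALLOCSET_DEFAULT_SIZES);

    /* Wherever batch files are created or the batch-file arrays resized: */
    MemoryContext oldcxt = MemoryContextSwitchTo(hashtable->fileCtx);
    hashtable->innerBatchFile = repalloc(hashtable->innerBatchFile,
                                         nbatch * sizeof(BufFile *));
    hashtable->outerBatchFile = repalloc(hashtable->outerBatchFile,
                                         nbatch * sizeof(BufFile *));
    MemoryContextSwitchTo(oldcxt);

With the context created unconditionally, the NULL-context SIGSEGV reported above cannot occur regardless of the initial nbatch value.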
{
"msg_contents": "On Sat, Apr 20, 2019 at 4:26 PM Tom Lane <[email protected]> wrote:\n\n> Tomas Vondra <[email protected]> writes:\n> > Considering how rare this issue likely is, we need to be looking for a\n> > solution that does not break the common case.\n>\n> Agreed. What I think we need to focus on next is why the code keeps\n> increasing the number of batches. It seems like there must be an undue\n> amount of data all falling into the same bucket ... but if it were simply\n> a matter of a lot of duplicate hash keys, the growEnabled shutoff\n> heuristic ought to trigger.\n>\n\nThe growEnabled stuff only prevents infinite loops. It doesn't prevent\nextreme silliness.\n\nIf a single 32 bit hash value has enough tuples by itself to not fit in\nwork_mem, then it will keep splitting until that value is in a batch by\nitself before shutting off (or at least until the split-point bit of\nwhatever else is in the that bucket happens to be the same value as the\nsplit-point-bit of the degenerate one, so by luck nothing or everything\ngets moved)\n\nProbabilistically we keep splitting until every batch, other than the one\ncontaining the degenerate value, has about one tuple in it.\n\nCheers,\n\nJeff\n\nOn Sat, Apr 20, 2019 at 4:26 PM Tom Lane <[email protected]> wrote:Tomas Vondra <[email protected]> writes:\n> Considering how rare this issue likely is, we need to be looking for a\n> solution that does not break the common case.\n\nAgreed. What I think we need to focus on next is why the code keeps\nincreasing the number of batches. It seems like there must be an undue\namount of data all falling into the same bucket ... but if it were simply\na matter of a lot of duplicate hash keys, the growEnabled shutoff\nheuristic ought to trigger.The growEnabled stuff only prevents infinite loops. It doesn't prevent extreme silliness.If a single 32 bit hash value has enough tuples by itself to not fit in work_mem, then it will keep splitting until that value is in a batch by itself before shutting off (or at least until the split-point bit of whatever else is in the that bucket happens to be the same value as the split-point-bit of the degenerate one, so by luck nothing or everything gets moved)Probabilistically we keep splitting until every batch, other than the one containing the degenerate value, has about one tuple in it.Cheers,Jeff",
"msg_date": "Sun, 21 Apr 2019 09:05:16 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
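Jeff's point is easy to demonstrate outside the server. The standalone sketch below (not PostgreSQL code; the duplicate count is a made-up figure, and the hash value is the one visible in the SIGSEGV backtrace earlier in the thread) shows that no amount of batch doubling shrinks a batch whose tuples all share one hash value, because identical hashes always map to the same batch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t skewed_hash   = 3834163156u; /* hashvalue from the backtrace above */
        long     skewed_tuples = 10000000L;   /* hypothetical number of duplicates  */

        /* Simplified batch assignment; the server derives the batch number
         * from bits of the hash value, so equal hashes land together too. */
        for (long nbatch = 16; nbatch <= 524288; nbatch *= 2)
            printf("nbatch = %6ld -> all %ld duplicates still in batch %lu\n",
                   nbatch, skewed_tuples, (unsigned long) (skewed_hash % nbatch));

        return 0;
    }

Each doubling halves only the well-distributed part of the data; the degenerate key's batch never shrinks, which is why growEnabled eventually has to give up.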
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> The growEnabled stuff only prevents infinite loops. It doesn't prevent\n> extreme silliness.\n\n> If a single 32 bit hash value has enough tuples by itself to not fit in\n> work_mem, then it will keep splitting until that value is in a batch by\n> itself before shutting off\n\nRight, that's the code's intention. If that's not good enough for this\ncase, we'll need to understand the details a bit better before we can\ndesign a better(?) heuristic.\n\nI suspect, however, that we might be better off just taking the existence\nof the I/O buffers into account somehow while deciding whether it's worth\ngrowing further. That is, I'm imagining adding a second independent\nreason for shutting off growEnabled, along the lines of \"increasing\nnbatch any further will require an unreasonable amount of buffer memory\".\nThe question then becomes how to define \"unreasonable\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 10:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:\n>Jeff Janes <[email protected]> writes:\n>> The growEnabled stuff only prevents infinite loops. It doesn't prevent\n>> extreme silliness.\n>\n>> If a single 32 bit hash value has enough tuples by itself to not fit in\n>> work_mem, then it will keep splitting until that value is in a batch by\n>> itself before shutting off\n>\n>Right, that's the code's intention. If that's not good enough for this\n>case, we'll need to understand the details a bit better before we can\n>design a better(?) heuristic.\n>\n\nI think we only disable growing when there are no other values in the\nbatch, but that seems rather easy to defeat - all you need is a single\ntuple with a hash that falls into the same batch, and it's over. Maybe\nwe should make this a bit less accurate - say, if less than 5% memory\ngets freed, don't add more batches.\n\n>I suspect, however, that we might be better off just taking the existence\n>of the I/O buffers into account somehow while deciding whether it's worth\n>growing further. That is, I'm imagining adding a second independent\n>reason for shutting off growEnabled, along the lines of \"increasing\n>nbatch any further will require an unreasonable amount of buffer memory\".\n>The question then becomes how to define \"unreasonable\".\n>\n\nI think the question the code needs to be asking is \"If we double the\nnumber of batches, does the amount of memory we need drop?\" And the\nmemory needs to account both for the buffers and per-batch data.\n\nI don't think we can just stop increasing the number of batches when the\nmemory for BufFile exceeds work_mem, because that entirely ignores the\nfact that by doing that we force the system to keep the per-batch stuff\nin memory (and that can be almost arbitrary amount).\n\nWhat I think we should be doing instead is instead make the threshold\ndynamic - instead of just checking at work_mem, we need to increment the\nnumber of batches when the total amount of memory exceeds\n\n Max(work_mem, 3 * memory_for_buffiles)\n\nThis is based on the observation that by increasing the number of\nbatches, we double memory_for_buffiles and split the per-batch data in\nhalf. By adding more batches, we'd actually increase the amount of\nmemory used.\n\nOf course, this just stops enforcing work_mem at some point, but it at\nleast attempts to minimize the amount of memory used.\n\nAn alternative would be spilling the extra tuples into a special\noverflow file, as I explained earlier. That would actually enforce\nwork_mem I think.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 21 Apr 2019 18:15:25 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
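To put rough numbers on that trade-off, here is a standalone sketch (not server code; the 1 GB inner-relation size and the 8 kB per-BufFile buffer are illustrative assumptions) of how total memory behaves as nbatch doubles, assuming the data actually splits evenly:

    #include <stdio.h>

    int main(void)
    {
        const double inner_bytes   = 1024.0 * 1024 * 1024; /* assumed inner relation size */
        const double buffile_bytes = 8192.0;                /* approx. buffer per BufFile  */

        for (long nbatch = 16; nbatch <= 1048576; nbatch *= 2)
        {
            double per_batch = inner_bytes / nbatch;         /* hash table for one batch   */
            double overhead  = 2.0 * nbatch * buffile_bytes; /* inner + outer batch files  */

            printf("nbatch=%7ld  per-batch=%11.0f  buffiles=%11.0f  total=%11.0f\n",
                   nbatch, per_batch, overhead, per_batch + overhead);
        }
        return 0;
    }

With these numbers the total bottoms out around nbatch = 256 (about 4 MB of per-batch data plus 4 MB of BufFile buffers) and then grows again; by nbatch = 262144 the BufFile buffers alone are roughly 4 GB. That runaway tail is what a dynamic threshold along the lines of Max(work_mem, 3 * memory_for_buffiles) is meant to cap.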
{
"msg_contents": "On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:\n> Jeff Janes <[email protected]> writes:\n> > The growEnabled stuff only prevents infinite loops. It doesn't prevent\n> > extreme silliness.\n> \n> > If a single 32 bit hash value has enough tuples by itself to not fit in\n> > work_mem, then it will keep splitting until that value is in a batch by\n> > itself before shutting off\n> \n> I suspect, however, that we might be better off just taking the existence\n> of the I/O buffers into account somehow while deciding whether it's worth\n> growing further. That is, I'm imagining adding a second independent\n> reason for shutting off growEnabled, along the lines of \"increasing\n> nbatch any further will require an unreasonable amount of buffer memory\".\n> The question then becomes how to define \"unreasonable\".\n\nOn Sun, Apr 21, 2019 at 06:15:25PM +0200, Tomas Vondra wrote:\n> I think the question the code needs to be asking is \"If we double the\n> number of batches, does the amount of memory we need drop?\" And the\n> memory needs to account both for the buffers and per-batch data.\n> \n> I don't think we can just stop increasing the number of batches when the\n> memory for BufFile exceeds work_mem, because that entirely ignores the\n> fact that by doing that we force the system to keep the per-batch stuff\n> in memory (and that can be almost arbitrary amount).\n...\n> Of course, this just stops enforcing work_mem at some point, but it at\n> least attempts to minimize the amount of memory used.\n\nThis patch defines reasonable as \"additional BatchFiles will not themselves\nexceed work_mem; OR, exceeded work_mem already but additional BatchFiles are\ngoing to save us RAM\"...\n\nI think the first condition is insensitive and not too important to get right,\nit only allows work_mem to be exceeded by 2x, which maybe already happens for\nmultiple reasons, related to this thread and otherwise. It'd be fine to slap\non a factor of /2 or /4 or /8 there too. \n\nThe current patch doesn't unset growEnabled, since there's no point at which\nthe hashtable should grow without bound: if hash tables are *already* exceeding\nwork_mem by 2x as big, nbatches should be doubled.\n\nJustin",
"msg_date": "Sun, 21 Apr 2019 11:40:22 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
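In pseudo-C, the growth test being described is roughly the following (an illustrative condition only, with placeholder names such as BUFFILE_BUFSZ and a work_mem expressed in bytes; it is not the actual patch, and the real decision is made around ExecHashTableInsert / ExecHashIncreaseNumBatches):

    /* BufFile buffer memory now, and after doubling nbatch
     * (one file per batch for each of the inner and outer sides). */
    size_t buffile_now     = 2 * (size_t) nbatch * BUFFILE_BUFSZ;   /* ~8 kB per file */
    size_t buffile_doubled = 2 * buffile_now;

    /* Memory we expect to push out of the in-memory batch by splitting it. */
    size_t expected_savings = hashtable->spaceUsed / 2;

    bool grow_is_reasonable =
        buffile_doubled <= work_mem ||                      /* files still cheap vs work_mem */
        expected_savings > buffile_doubled - buffile_now;   /* or splitting saves net RAM    */

This never needs to clear growEnabled outright: whenever the in-memory batch is already large relative to the BufFile overhead, doubling nbatch remains a net win, and otherwise the test above simply stops the doubling.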
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:\n>> Jeff Janes <[email protected]> writes:\n>>> If a single 32 bit hash value has enough tuples by itself to not fit in\n>>> work_mem, then it will keep splitting until that value is in a batch by\n>>> itself before shutting off\n\n>> Right, that's the code's intention. If that's not good enough for this\n>> case, we'll need to understand the details a bit better before we can\n>> design a better(?) heuristic.\n\n> I think we only disable growing when there are no other values in the\n> batch, but that seems rather easy to defeat - all you need is a single\n> tuple with a hash that falls into the same batch, and it's over. Maybe\n> we should make this a bit less accurate - say, if less than 5% memory\n> gets freed, don't add more batches.\n\nYeah, something like that, but it's hard to design it without seeing some\nconcrete misbehaving examples.\n\nI think though that this is somewhat independent of the problem that we're\nnot including the I/O buffers in our reasoning about memory consumption.\n\n> An alternative would be spilling the extra tuples into a special\n> overflow file, as I explained earlier. That would actually enforce\n> work_mem I think.\n\nWell, no, it won't. If you have umpteen gigabytes of RHS tuples with the\nexact same hash code, nothing we can do here is going to prevent you from\nhaving to process those in a single table load. (If it's a plain inner\njoin, maybe you could break that into subsections anyway ... but that\nwon't work for left or full joins where you need per-tuple match status.)\n\nI think our ambition here should just be to not have the code go crazy\ntrying to keep its memory consumption under work_mem when it's ultimately\ngoing to fail to do so anyhow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 19:07:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
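A minimal sketch of the "less than 5% freed" idea, placed where the server already decides to give up on splitting (illustrative only; in the real ExecHashIncreaseNumBatches the existing check is based on how many tuples actually moved, and growEnabled is the existing flag):

    /* after redistributing tuples into the new batches ... */
    if (spaceFreed < hashtable->spaceUsed * 0.05)
    {
        /* Splitting recovered almost nothing: the batch is dominated by a
         * single hash value, so more batches would only add BufFile
         * overhead without getting us under work_mem. */
        hashtable->growEnabled = false;
    }

As Tom notes, this only stops the runaway growth; the oversized batch itself still has to be processed in a single load.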
{
"msg_contents": "After applying Tomas' corrected patch 0001, and routing HJDEBUG messages \nto stderr:\n\nintegrator=# set enable_nestloop to off;\nSET\nintegrator=# explain analyze select * from reports.v_BusinessOperation;\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 128 to 256\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 256 to 512\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 512 to 1024\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 1024 to 2048\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 2048 to 4096\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 4096 to 8192\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 8192 to 16384\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 16384 to 32768\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 32768 to 65536\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 65536 to 131072\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\nWARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 262144 to 524288\nERROR:� out of memory\nDETAIL:� Failed on request of size 32800 in memory context \"HashBatchContext\".\n\nNow\n\nTopMemoryContext: 4347672 total in 9 blocks; 41688 free (18 chunks); 4305984 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); 2776 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 2449896 total in 16 blocks; 1795000 free (3158 chunks); 654896 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 16384 total in 2 blocks; 3008 free (6 chunks); 13376 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 
total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 1073512 total in 11 blocks; 246792 free (39 chunks); 826720 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 2242545904 total in 266270 blocks; 3996232 free (14164 chunks); 2238549672 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 168165080 total in 5118 blocks; 7936 free (0 chunks); 168157144 used\n TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n Caller tuples: 1048576 total in 8 blocks; 21608 free (14 chunks); 1026968 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 
chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 2424300520 bytes in 271910 blocks; 7332360 free (17596 chunks); 2416968160 used\n2019-04-21 19:50:21.338 UTC [6974] ERROR: out of memory\n2019-04-21 19:50:21.338 UTC [6974] DETAIL: Failed on request of size 32800 in memory context \"HashBatchContext\".\n2019-04-21 19:50:21.338 UTC [6974] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\n\nNext I'll apply Tomas' corrected 0002 patch on top of this and see ...\n\n-Gunther",
"msg_date": "Sun, 21 Apr 2019 19:25:15 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 07:25:15PM -0400, Gunther wrote:\n> After applying Tomas' corrected patch 0001, and routing HJDEBUG messages\n> to stderr:\n>\n> integrator=# set enable_nestloop to off;\n> SET\n> integrator=# explain analyze select * from reports.v_BusinessOperation;\n>\n> ...\n> WARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\n> WARNING:� ExecHashIncreaseNumBatches: increasing number of batches from 262144 to 524288\n> ERROR:� out of memory\n> DETAIL:� Failed on request of size 32800 in memory context \"HashBatchContext\".\n>\n> Now�\n>\n> TopMemoryContext: 4347672 total in 9 blocks; 41688 free (18 chunks); 4305984 used\n> ...\n> Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n> TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n> PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n> PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n> ExecutorState: 2449896 total in 16 blocks; 1795000 free (3158 chunks); 654896 used\n> TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n> ...\n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n> HashBatchFiles: 2242545904 total in 266270 blocks; 3996232 free (14164 chunks); 2238549672 used\n> HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n> HashBatchContext: 168165080 total in 5118 blocks; 7936 free (0 chunks); 168157144 used\n> TupleSort main: 452880 total in 8 blocks; 126248 free (27 chunks); 326632 used\n> Caller tuples: 1048576 total in 8 blocks; 21608 free (14 chunks); 1026968 used\n> ...\n> Grand total: 2424300520 bytes in 271910 blocks; 7332360 free (17596 chunks); 2416968160 used\n>\n\nIMO this pretty much proves that the memory allocated for BufFile really\nis the root cause of the issues with this query. 524288 batches means\nup to 1048576 BufFiles, which is a bit more than ~8GB of RAM. However\nthose for the inner relation were not allycated yet, so at this point\nonly about 4GB might be allocated. And it seems ~1/2 of them did not\nreceive any tuples, so only about 2GB got allocated so far.\n\nThe second batch will probably make it fail much sooner, because it\nallocates the BufFile stuff eagerly (both for inner and outer side).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 22 Apr 2019 01:51:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
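Spelling out the arithmetic behind those numbers (assuming roughly 8 kB of buffer per BufFile, which matches the 8272-byte palloc requests seen in the backtraces):

    524288 batches x 2 sides (inner + outer)   = 1,048,576 BufFiles
    1,048,576 BufFiles x 8 kB                  = 8,589,934,592 bytes  (~8 GB)
    only one side allocated at this point      -> ~4 GB
    about half of those batches got no tuples  -> ~2 GB

which lines up with the 2,242,545,904-byte "HashBatchFiles" context in the dump quoted above.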
{
"msg_contents": "After applying Tomas' patch 0002 as corrected, over 0001, same thing:\n\nintegrator=# set enable_nestloop to off;\nSET\nintegrator=# explain analyze select * from reports.v_BusinessOperation;\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 128 to 256\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 256 to 512\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 512 to 1024\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 1024 to 2048\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 2048 to 4096\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 4096 to 8192\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 8192 to 16384\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16384 to 32768\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32768 to 65536\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 65536 to 131072\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\nERROR: out of memory\nDETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n\nAnd from the log:\n\n2019-04-21 23:29:33.497 UTC [8890] LOG: database system was shut down at 2019-04-21 23:29:33 UTC\n2019-04-21 23:29:33.507 UTC [8888] LOG: database system is ready to accept connections\nHashjoin 0x1732b88: initial nbatch = 16, nbuckets = 8192\n2019-04-21 23:31:54.447 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\n2019-04-21 23:31:54.447 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.447 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 10016 free (6 chunks); 110528 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n 
ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 258032 total in 31 blocks; 6208 free (0 chunks); 251824 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17429152 bytes in 668 blocks; 1452392 free (220 chunks); 15976760 used\nHashjoin 0x1732b88: increasing nbatch to 32 because space = 4128933\nHashjoin 0x1732b88: freed 148 of 10584 tuples, space now 4071106\n2019-04-21 23:31:54.450 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.450 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total 
in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17629936 bytes in 698 blocks; 1450344 free (223 chunks); 16179592 used\n2019-04-21 23:31:54.452 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\n2019-04-21 23:31:54.452 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.452 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 
used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n\n...\n �ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17695648 bytes in 700 blocks; 1450344 free (223 chunks); 16245304 used\nHashjoin 0x1732b88: increasing nbatch to 64 because space = 4128826\nHashjoin 0x1732b88: freed 544 of 10584 tuples, space now 3916296\n2019-04-21 23:31:54.456 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.456 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 8224 free (7 chunks); 112320 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 
4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4049360 total in 123 blocks; 7936 free (0 chunks); 4041424 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n �ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17998648 bytes in 757 blocks; 1445224 free (225 chunks); 16553424 used\n2019-04-21 23:31:54.459 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\n2019-04-21 23:31:54.459 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.459 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 8224 free (7 chunks); 112320 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free 
(13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 
total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 18228640 bytes in 764 blocks; 1445224 free (225 chunks); 16783416 used\nHashjoin 0x1732b88: increasing nbatch to 128 because space = 4128846\nHashjoin 0x1732b88: freed 10419 of 10585 tuples, space now 65570\n2019-04-21 23:31:54.466 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.466 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 6176 free (8 chunks); 114368 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n...\nI notice now you have tons of these memory map dumps, tell me what you want to see and I will grep it out for you.\nI guess the Hashjoin related things .... 
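Before that last dump, a rough back-of-the-envelope reading of the numbers above (this is only my own estimate from the log, not anything confirmed against the PostgreSQL source): the HashBatchFiles context grows in lock-step with the batch count, at roughly two buffered temp files per batch (assuming one inner-side and one outer-side file each holding an ~8 kB buffer; the dumps show about 8.4 kB per block in that context, and the failed request of 8272 bytes below matches one such buffer). A small Python sketch of that estimate:

    # Rough estimate of HashBatchFiles size as nbatch doubles.
    # Assumptions (inferred from the log, not from PostgreSQL internals):
    #   - one temp file per batch for each side of the join (2 files/batch)
    #   - each file costs about 8272 bytes, the allocation size that failed
    PER_FILE_BYTES = 8272
    FILES_PER_BATCH = 2

    for nbatch in (16384, 32768, 65536, 131072, 262144):
        est = nbatch * FILES_PER_BATCH * PER_FILE_BYTES
        print("nbatch=%7d  ~%.2f GiB just for batch files" % (nbatch, est / 1024.0 ** 3))

By that estimate 131072 batches already needs about 2 GiB of batch-file bookkeeping, which lines up with the ~2.2 GB HashBatchFiles totals in the dumps below, and doubling again to 262144 would want roughly 4 GiB, which is where it falls over.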
OK, last one before the out of memory now, and then I give you some grepped stuff...\n...\nTopMemoryContext: 4347672 total in 9 blocks; 41608 free (18 chunks); 4306064 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); 2776 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 2449896 total in 16 blocks; 1794968 free (3158 chunks); 654928 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 2468537520 total in 293910 blocks; 2669512 free (14 chunks); 2465868008 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 256272 total in 6 blocks; 36424 free (15 chunks); 219848 used\n Caller tuples: 2097152 total in 9 blocks; 929696 free (17 chunks); 1167456 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 
total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 2486488160 bytes in 294559 blocks; 6838088 free (3440 chunks); 2479650072 used\n2019-04-21 23:40:23.118 UTC [8896] ERROR: out of memory\n2019-04-21 23:40:23.118 UTC [8896] DETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n2019-04-21 23:40:23.118 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\n2019-04-21 23:40:23.119 UTC [8896] LOG: could not open directory \"base/pgsql_tmp/pgsql_tmp8896.2.sharedfileset\": Cannot allocate memory\n\nok now here comes a summary grepped out:\n\ngrep 'Hash'\n\nHashjoin 0x1732b88: initial nbatch = 16, nbuckets = 8192\n2019-04-21 23:31:54.447 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\n2019-04-21 23:31:54.447 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 258032 total in 31 blocks; 6208 free (0 chunks); 251824 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 32 because space = 4128933\nHashjoin 0x1732b88: freed 148 of 10584 tuples, space now 4071106\n2019-04-21 23:31:54.450 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n2019-04-21 23:31:54.452 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\n2019-04-21 23:31:54.452 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 64 because space = 4128826\nHashjoin 0x1732b88: freed 544 of 10584 tuples, space now 3916296\n2019-04-21 23:31:54.456 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4049360 total in 123 blocks; 7936 free (0 chunks); 4041424 used\n2019-04-21 23:31:54.459 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\n2019-04-21 23:31:54.459 UTC [8896] LOG: ExecHashIncreaseNumBatches 
======= context stats start =======\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 128 because space = 4128846\nHashjoin 0x1732b88: freed 10419 of 10585 tuples, space now 65570\n2019-04-21 23:31:54.466 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 2148080 total in 257 blocks; 18160 free (6 chunks); 2129920 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 172352 total in 5 blocks; 7936 free (0 chunks); 164416 used\n2019-04-21 23:32:07.174 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 128 to 256\n2019-04-21 23:32:07.174 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 2148080 total in 257 blocks; 18160 free (6 chunks); 2129920 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4312208 total in 131 blocks; 7936 free (0 chunks); 4304272 used\nHashjoin 0x1732b88: increasing nbatch to 256 because space = 4128829\nHashjoin 0x1732b88: freed 10308 of 10734 tuples, space now 161815\n2019-04-21 23:32:07.183 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 4312816 total in 514 blocks; 36552 free (8 chunks); 4276264 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 270920 total in 8 blocks; 7936 free (0 chunks); 262984 used\n2019-04-21 23:32:18.865 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 256 to 512\n2019-04-21 23:32:18.865 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 4312816 total in 514 blocks; 36552 free (8 chunks); 4276264 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 512 because space = 4128908\nHashjoin 0x1732b88: freed 398 of 10379 tuples, space now 3977787\n2019-04-21 23:32:18.877 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8642288 total in 1027 blocks; 73376 free (10 chunks); 8568912 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4147928 total in 126 blocks; 7936 free (0 chunks); 4139992 used\n2019-04-21 23:32:18.880 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 512 to 1024\n2019-04-21 23:32:18.880 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8642288 total in 1027 blocks; 73376 free (10 chunks); 8568912 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 1024 because space = 4129008\nHashjoin 0x1732b88: freed 296 of 10360 tuples, space now 4013423\n2019-04-21 23:32:18.903 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 17301232 total in 2052 blocks; 147064 free (12 chunks); 17154168 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4180784 total in 127 
blocks; 7936 free (0 chunks); 4172848 used\n2019-04-21 23:32:18.906 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 1024 to 2048\n2019-04-21 23:32:18.906 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 17301232 total in 2052 blocks; 147064 free (12 chunks); 17154168 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 2048 because space = 4129133\nHashjoin 0x1732b88: freed 154 of 10354 tuples, space now 4068786\n2019-04-21 23:32:18.946 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 34389856 total in 4102 blocks; 65176 free (14 chunks); 34324680 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n2019-04-21 23:32:18.949 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 2048 to 4096\n2019-04-21 23:32:18.949 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 34389856 total in 4102 blocks; 65176 free (14 chunks); 34324680 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 4096 because space = 4129035\nHashjoin 0x1732b88: freed 10167 of 10351 tuples, space now 72849\n2019-04-21 23:32:19.032 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 68796256 total in 8199 blocks; 130672 free (14 chunks); 68665584 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 172352 total in 5 blocks; 7936 free (0 chunks); 164416 used\nHashjoin 0x1b3d4c8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3bc08: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3b5d8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x15499a8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1549978: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3cc88: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553638: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553638: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553d38: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x1b3ad98: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x15538d8: initial nbatch = 16, nbuckets = 65536\n2019-04-21 23:40:06.495 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 4096 to 8192\n2019-04-21 23:40:06.495 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 
used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 68663008 total in 8183 blocks; 131440 free (46 chunks); 68531568 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 8192 because space = 4128997\nHashjoin 0x1732b88: freed 10555 of 10560 tuples, space now 1983\n2019-04-21 23:40:06.680 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 137609056 total in 16392 blocks; 261704 free (14 chunks); 137347352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n2019-04-21 23:40:06.883 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 8192 to 16384\n2019-04-21 23:40:06.883 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 
blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 137592400 total in 16390 blocks; 261800 free (18 chunks); 137330600 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4246496 total in 129 blocks; 7936 free (0 chunks); 4238560 used\nHashjoin 0x1732b88: increasing nbatch to 16384 because space = 4128957\nHashjoin 0x1732b88: freed 10697 of 10764 tuples, space now 25956\n2019-04-21 23:40:07.268 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 275234656 total in 32777 blocks; 523808 free (14 chunks); 274710848 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n2019-04-21 23:40:09.096 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16384 to 32768\n2019-04-21 23:40:09.096 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 275018128 total in 32751 blocks; 525056 free (66 chunks); 274493072 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 32768 
because space = 4128890\nHashjoin 0x1732b88: freed 8 of 10809 tuples, space now 4125769\n2019-04-21 23:40:10.050 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 550485856 total in 65546 blocks; 1048056 free (14 chunks); 549437800 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:10.060 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32768 to 65536\n2019-04-21 23:40:10.060 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 550485856 total in 65546 blocks; 1048056 free (14 chunks); 549437800 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 65536 because space = 4128825\nHashjoin 0x1732b88: freed 20 of 10809 tuples, space now 4121380\n2019-04-21 23:40:12.686 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 
15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 1100988256 total in 131083 blocks; 2096592 free (14 chunks); 1098891664 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4246496 total in 129 blocks; 7936 free (0 chunks); 4238560 used\n2019-04-21 23:40:12.703 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 65536 to 131072\n2019-04-21 23:40:12.703 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 1100988256 total in 131083 blocks; 2096592 free (14 chunks); 1098891664 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 131072 because space = 4129020\nHashjoin 0x1732b88: freed 2 of 10809 tuples, space now 4128291\n2019-04-21 23:40:20.571 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 
8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2201993056 total in 262156 blocks; 4193704 free (14 chunks); 2197799352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:20.602 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\n2019-04-21 23:40:20.602 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2201993056 total in 262156 blocks; 4193704 free (14 chunks); 2197799352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 262144 because space = 4129055\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 
chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2468537520 total in 293910 blocks; 2669512 free (14 chunks); 2465868008 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:23.118 UTC [8896] DETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n(END)\n\n-Gunther\n\n\n\n\n\n\n\n\n\nAfter applying Tomas' patch 0002 as corrected, over 0001, same\n thing:\nintegrator=# set enable_nestloop to off;\nSET\nintegrator=# explain analyze select * from reports.v_BusinessOperation;\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 128 to 256\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 256 to 512\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 512 to 1024\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 1024 to 2048\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 2048 to 4096\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 4096 to 8192\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 8192 to 16384\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16384 to 32768\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32768 to 65536\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 65536 to 131072\nWARNING: ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\nERROR: out of memory\nDETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n\nAnd from the log:\n2019-04-21 23:29:33.497 UTC [8890] LOG: database system was shut down at 2019-04-21 23:29:33 UTC\n2019-04-21 23:29:33.507 UTC [8888] LOG: database system is ready to accept connections\nHashjoin 0x1732b88: initial nbatch = 16, nbuckets = 8192\n2019-04-21 23:31:54.447 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\n2019-04-21 23:31:54.447 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.447 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 10016 free (6 chunks); 110528 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n 
Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 258032 total in 31 blocks; 6208 free (0 chunks); 251824 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17429152 bytes in 668 blocks; 1452392 free (220 chunks); 15976760 used\nHashjoin 0x1732b88: increasing nbatch to 32 because space = 4128933\nHashjoin 0x1732b88: freed 148 of 10584 tuples, space now 4071106\n2019-04-21 23:31:54.450 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.450 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 
32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17629936 bytes in 698 blocks; 1450344 free (223 chunks); 16179592 used\n2019-04-21 23:31:54.452 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\n2019-04-21 23:31:54.452 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.452 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 9760 free (7 chunks); 110784 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 
15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n\n...\n �ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17695648 bytes in 700 blocks; 1450344 free (223 chunks); 16245304 used\nHashjoin 0x1732b88: increasing nbatch to 64 because space = 4128826\nHashjoin 0x1732b88: freed 544 of 10584 tuples, space now 3916296\n2019-04-21 23:31:54.456 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.456 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 8224 free (7 chunks); 112320 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr 
relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4049360 total in 123 blocks; 7936 free (0 chunks); 4041424 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n �ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 17998648 bytes in 757 blocks; 1445224 free (225 chunks); 16553424 used\n2019-04-21 23:31:54.459 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\n2019-04-21 23:31:54.459 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n2019-04-21 23:31:54.459 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 8224 free (7 chunks); 112320 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 
blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 647368 total in 10 blocks; 197536 free (13 chunks); 449832 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n TupleSort main: 4219912 total in 23 blocks; 246792 free (39 chunks); 3973120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 41016 total in 3 blocks; 6504 free (6 chunks); 34512 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 total in 14 blocks; 288672 free (1 
chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 18228640 bytes in 764 blocks; 1445224 free (225 chunks); 16783416 used\nHashjoin 0x1732b88: increasing nbatch to 128 because space = 4128846\nHashjoin 0x1732b88: freed 10419 of 10585 tuples, space now 65570\n2019-04-21 23:31:54.466 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n2019-04-21 23:31:54.466 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\nTopMemoryContext: 120544 total in 7 blocks; 6176 free (8 chunks); 114368 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 6680 free (0 chunks); 1512 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n...\nI notice now you have tons of these memory map dumps, tell me what you want to see and I will grep it out for you.\nI guess the Hashjoin related things .... 
OK, last one before the out of memory now, and then I give you some grepped stuff...\n...\nTopMemoryContext: 4347672 total in 9 blocks; 41608 free (18 chunks); 4306064 used\n HandleParallelMessages: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 5416 free (2 chunks); 2776 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 32768 total in 3 blocks; 13488 free (10 chunks); 19280 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 16832 free (8 chunks); 15936 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalHoldContext: 24632 total in 2 blocks; 7392 free (0 chunks); 17240 used\n PortalContext: 1482752 total in 184 blocks; 11216 free (8 chunks); 1471536 used:\n ExecutorState: 2449896 total in 16 blocks; 1794968 free (3158 chunks); 654928 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n TupleSort main: 286912 total in 8 blocks; 246792 free (39 chunks); 40120 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 2468537520 total in 293910 blocks; 2669512 free (14 chunks); 2465868008 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n TupleSort main: 256272 total in 6 blocks; 36424 free (15 chunks); 219848 used\n Caller tuples: 2097152 total in 9 blocks; 929696 free (17 chunks); 1167456 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n...\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872 used\n CacheMemoryContext: 1101328 
total in 14 blocks; 288672 free (1 chunks); 812656 used\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n...\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6560 free (1 chunks); 1632 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (5 chunks); 256 used\nGrand total: 2486488160 bytes in 294559 blocks; 6838088 free (3440 chunks); 2479650072 used\n2019-04-21 23:40:23.118 UTC [8896] ERROR: out of memory\n2019-04-21 23:40:23.118 UTC [8896] DETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n2019-04-21 23:40:23.118 UTC [8896] STATEMENT: explain analyze select * from reports.v_BusinessOperation;\n2019-04-21 23:40:23.119 UTC [8896] LOG: could not open directory \"base/pgsql_tmp/pgsql_tmp8896.2.sharedfileset\": Cannot allocate memory\n\nok now here comes a summary grepped out:\n\ngrep 'Hash'\n\nHashjoin 0x1732b88: initial nbatch = 16, nbuckets = 8192\n2019-04-21 23:31:54.447 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32\n2019-04-21 23:31:54.447 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 258032 total in 31 blocks; 6208 free (0 chunks); 251824 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 32 because space = 4128933\nHashjoin 0x1732b88: freed 148 of 10584 tuples, space now 4071106\n2019-04-21 23:31:54.450 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n2019-04-21 23:31:54.452 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32 to 64\n2019-04-21 23:31:54.452 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 524528 total in 63 blocks; 4416 free (2 chunks); 520112 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 64 because space = 4128826\nHashjoin 0x1732b88: freed 544 of 10584 tuples, space now 3916296\n2019-04-21 23:31:54.456 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4049360 total in 123 blocks; 7936 free (0 chunks); 4041424 used\n2019-04-21 23:31:54.459 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 64 to 128\n2019-04-21 23:31:54.459 UTC [8896] LOG: ExecHashIncreaseNumBatches 
======= context stats start =======\n HashBatchFiles: 1057520 total in 127 blocks; 832 free (4 chunks); 1056688 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 128 because space = 4128846\nHashjoin 0x1732b88: freed 10419 of 10585 tuples, space now 65570\n2019-04-21 23:31:54.466 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 2148080 total in 257 blocks; 18160 free (6 chunks); 2129920 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 172352 total in 5 blocks; 7936 free (0 chunks); 164416 used\n2019-04-21 23:32:07.174 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 128 to 256\n2019-04-21 23:32:07.174 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 2148080 total in 257 blocks; 18160 free (6 chunks); 2129920 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4312208 total in 131 blocks; 7936 free (0 chunks); 4304272 used\nHashjoin 0x1732b88: increasing nbatch to 256 because space = 4128829\nHashjoin 0x1732b88: freed 10308 of 10734 tuples, space now 161815\n2019-04-21 23:32:07.183 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 4312816 total in 514 blocks; 36552 free (8 chunks); 4276264 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 270920 total in 8 blocks; 7936 free (0 chunks); 262984 used\n2019-04-21 23:32:18.865 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 256 to 512\n2019-04-21 23:32:18.865 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 4312816 total in 514 blocks; 36552 free (8 chunks); 4276264 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 512 because space = 4128908\nHashjoin 0x1732b88: freed 398 of 10379 tuples, space now 3977787\n2019-04-21 23:32:18.877 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8642288 total in 1027 blocks; 73376 free (10 chunks); 8568912 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4147928 total in 126 blocks; 7936 free (0 chunks); 4139992 used\n2019-04-21 23:32:18.880 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 512 to 1024\n2019-04-21 23:32:18.880 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8642288 total in 1027 blocks; 73376 free (10 chunks); 8568912 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 1024 because space = 4129008\nHashjoin 0x1732b88: freed 296 of 10360 tuples, space now 4013423\n2019-04-21 23:32:18.903 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 17301232 total in 2052 blocks; 147064 free (12 chunks); 17154168 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4180784 total in 127 
blocks; 7936 free (0 chunks); 4172848 used\n2019-04-21 23:32:18.906 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 1024 to 2048\n2019-04-21 23:32:18.906 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 17301232 total in 2052 blocks; 147064 free (12 chunks); 17154168 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 2048 because space = 4129133\nHashjoin 0x1732b88: freed 154 of 10354 tuples, space now 4068786\n2019-04-21 23:32:18.946 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 34389856 total in 4102 blocks; 65176 free (14 chunks); 34324680 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4213640 total in 128 blocks; 7936 free (0 chunks); 4205704 used\n2019-04-21 23:32:18.949 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 2048 to 4096\n2019-04-21 23:32:18.949 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 34389856 total in 4102 blocks; 65176 free (14 chunks); 34324680 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 4096 because space = 4129035\nHashjoin 0x1732b88: freed 10167 of 10351 tuples, space now 72849\n2019-04-21 23:32:19.032 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 68796256 total in 8199 blocks; 130672 free (14 chunks); 68665584 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 172352 total in 5 blocks; 7936 free (0 chunks); 164416 used\nHashjoin 0x1b3d4c8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3bc08: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3b5d8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x15499a8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1549978: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1b3cc88: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553638: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553638: initial nbatch = 1, nbuckets = 65536\nHashjoin 0x1553d38: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x1b3ad98: initial nbatch = 16, nbuckets = 65536\nHashjoin 0x15538d8: initial nbatch = 16, nbuckets = 65536\n2019-04-21 23:40:06.495 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 4096 to 8192\n2019-04-21 23:40:06.495 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 
used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 68663008 total in 8183 blocks; 131440 free (46 chunks); 68531568 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 8192 because space = 4128997\nHashjoin 0x1732b88: freed 10555 of 10560 tuples, space now 1983\n2019-04-21 23:40:06.680 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 137609056 total in 16392 blocks; 261704 free (14 chunks); 137347352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n2019-04-21 23:40:06.883 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 8192 to 16384\n2019-04-21 23:40:06.883 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 
blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 137592400 total in 16390 blocks; 261800 free (18 chunks); 137330600 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4246496 total in 129 blocks; 7936 free (0 chunks); 4238560 used\nHashjoin 0x1732b88: increasing nbatch to 16384 because space = 4128957\nHashjoin 0x1732b88: freed 10697 of 10764 tuples, space now 25956\n2019-04-21 23:40:07.268 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 275234656 total in 32777 blocks; 523808 free (14 chunks); 274710848 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 106640 total in 3 blocks; 7936 free (0 chunks); 98704 used\n2019-04-21 23:40:09.096 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16384 to 32768\n2019-04-21 23:40:09.096 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 275018128 total in 32751 blocks; 525056 free (66 chunks); 274493072 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 32768 
because space = 4128890\nHashjoin 0x1732b88: freed 8 of 10809 tuples, space now 4125769\n2019-04-21 23:40:10.050 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 550485856 total in 65546 blocks; 1048056 free (14 chunks); 549437800 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:10.060 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 32768 to 65536\n2019-04-21 23:40:10.060 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 550485856 total in 65546 blocks; 1048056 free (14 chunks); 549437800 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 65536 because space = 4128825\nHashjoin 0x1732b88: freed 20 of 10809 tuples, space now 4121380\n2019-04-21 23:40:12.686 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 
15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 1100988256 total in 131083 blocks; 2096592 free (14 chunks); 1098891664 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4246496 total in 129 blocks; 7936 free (0 chunks); 4238560 used\n2019-04-21 23:40:12.703 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 65536 to 131072\n2019-04-21 23:40:12.703 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 1100988256 total in 131083 blocks; 2096592 free (14 chunks); 1098891664 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 131072 because space = 4129020\nHashjoin 0x1732b88: freed 2 of 10809 tuples, space now 4128291\n2019-04-21 23:40:20.571 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats end =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 
8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2201993056 total in 262156 blocks; 4193704 free (14 chunks); 2197799352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:20.602 UTC [8896] WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 131072 to 262144\n2019-04-21 23:40:20.602 UTC [8896] LOG: ExecHashIncreaseNumBatches ======= context stats start =======\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2201993056 total in 262156 blocks; 4193704 free (14 chunks); 2197799352 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\nHashjoin 0x1732b88: increasing nbatch to 262144 because space = 4129055\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 32768 total in 3 blocks; 17304 free (9 chunks); 15464 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7320 free (0 chunks); 872 used\n HashBatchContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7752 free (0 chunks); 440 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 8192 total in 1 blocks; 7936 free (0 
chunks); 256 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (0 chunks); 568 used\n HashBatchContext: 90288 total in 4 blocks; 16072 free (6 chunks); 74216 used\n HashBatchFiles: 2468537520 total in 293910 blocks; 2669512 free (14 chunks); 2465868008 used\n HashTableContext: 8192 total in 1 blocks; 7624 free (5 chunks); 568 used\n HashBatchContext: 4279352 total in 130 blocks; 7936 free (0 chunks); 4271416 used\n2019-04-21 23:40:23.118 UTC [8896] DETAIL: Failed on request of size 8272 in memory context \"HashBatchFiles\".\n(END)\n\n-Gunther",
"msg_date": "Sun, 21 Apr 2019 19:58:49 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
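[Editorial aside -- not part of the archived thread. The grepped "Hash" lines above show the HashBatchFiles memory context growing roughly in step with the batch count, reaching about 2.4 GB once nbatch hits 262144. A back-of-the-envelope sketch of that growth can be run in psql; it assumes each batch may need one temporary file per side of the join and that each open BufFile keeps roughly a BLCKSZ-sized (8 kB) buffer -- an approximation of the per-file overhead, not an exact figure:

  -- rough estimate of batch-file buffer memory as the hash join doubles nbatch
  SELECT (2^e)::bigint                            AS nbatch,
         pg_size_pretty((2^e)::bigint * 2 * 8192) AS approx_batch_file_buffers
    FROM generate_series(4, 18) AS s(e);

At nbatch = 262144 the estimate is on the order of 4 GB of buffer space alone, the same ballpark as the ~2.4 GB HashBatchFiles context reported just before the out-of-memory error (not every batch file is open at that point), which illustrates why batch explosion by itself can exhaust memory regardless of work_mem.]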
{
"msg_contents": "Now to Justin's patch. First undo Tomas' patch and apply:\n\n$ mv src/include/executor/hashjoin.h.orig src/include/executor/hashjoin.h\n$ mv src/backend/executor/nodeHash.c.orig src/backend/executor/nodeHash.c\n$ mv src/backend/executor/nodeHashjoin.c.orig src/backend/executor/nodeHashjoin.c\n$ patch -p1 <../limit-hash-nbatches-v2.patch\npatching file src/backend/executor/nodeHash.c\nHunk #1 succeeded at 570 (offset -3 lines).\nHunk #2 succeeded at 917 (offset -3 lines).\nHunk #3 succeeded at 930 (offset -3 lines).\nHunk #4 succeeded at 1037 (offset -3 lines).\nHunk #5 succeeded at 1658 (offset -4 lines).\n\n$ make\n$ make install\n$ pg_ctl -c restart\n\nand go ...\n\nlots of CPU% again and very limited memory use as of yet.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11054 postgres 20 0 1302772 90316 58004 R 94.4 1.1 4:38.05 postgres: postgres integrator [local] EXPLAIN\n11055 postgres 20 0 1280532 68076 57168 R 97.7 0.9 2:03.54 postgres: parallel worker for PID 11054\n11056 postgres 20 0 1280532 67964 57124 S 0.0 0.9 2:08.28 postgres: parallel worker for PID 11054\n\nthat's a pretty decent sign so far. Slight increase ... but still \nrelatively steady ...\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11054 postgres 20 0 1379704 167140 58004 R 95.0 2.1 5:56.28 postgres: postgres integrator [local] EXPLAIN\n11055 postgres 20 0 1280532 68076 57168 S 25.6 0.9 2:36.59 postgres: parallel worker for PID 11054\n11056 postgres 20 0 1280532 67964 57124 R 61.8 0.9 2:29.65 postgres: parallel worker for PID 11054\n\naaand break out ...\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11119 postgres 20 0 1271660 1.0g 1.0g D 0.0 13.4 0:03.10 postgres: parallel worker for PID 11054\n11054 postgres 20 0 1380940 1.0g 950508 D 0.0 13.4 6:56.09 postgres: postgres integrator [local] EXPLAIN\n11118 postgres 20 0 1271660 884540 882724 D 0.0 11.2 0:02.84 postgres: parallel worker for PID 11054\n\nand crash:\n\nfoo=# explain analyze select * from reports.v_BusinessOperation;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!>\n\nwhat happened? ouch, no space left on root device, too much logging? \nMaybe the core dump ... 
Log file content is simple:\n\n2019-04-22 00:07:56.104 UTC [11048] LOG: database system was shut down at 2019-04-22 00:07:55 UTC\n2019-04-22 00:07:56.108 UTC [11046] LOG: database system is ready to accept connections\nHashjoin 0x2122458: initial nbatch = 16, nbuckets = 8192\nHashjoin 0x2122458: increasing nbatch to 32 because space = 4128933\nHashjoin 0x2122458: freed 148 of 10584 tuples, space now 4071106\nHashjoin 0x2122458: increasing nbatch to 64 because space = 4128826\nHashjoin 0x2122458: freed 544 of 10584 tuples, space now 3916296\nHashjoin 0x2122458: increasing nbatch to 128 because space = 4128846\nHashjoin 0x2122458: freed 10419 of 10585 tuples, space now 65570\nHashjoin 0x2122458: increasing nbatch to 256 because space = 4128829\nHashjoin 0x2122458: freed 10308 of 10734 tuples, space now 161815\nHashjoin 0x2122458: increasing nbatch to 512 because space = 4128908\nHashjoin 0x2122458: freed 398 of 10379 tuples, space now 3977787\nHashjoin 0x3ac9918: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x3ac91a8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x3ac93c8: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1f41018: initial nbatch = 1, nbuckets = 1024\nHashjoin 0x1f41048: initial nbatch = 1, nbuckets = 1024\n2019-04-22 00:16:55.273 UTC [11046] LOG: background worker \"parallel worker\" (PID 11119) was terminated by signal 11: Segmentation fault\n2019-04-22 00:16:55.273 UTC [11046] DETAIL: Failed process was running: explain analyze select * from reports.v_BusinessOperation;\n2019-04-22 00:16:55.273 UTC [11046] LOG: terminating any other active server processes\n2019-04-22 00:16:55.274 UTC [11058] WARNING: terminating connection because of crash of another server process\n2019-04-22 00:16:55.274 UTC [11058] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2019-04-22 00:16:55.274 UTC [11058] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2019-04-22 00:16:55.277 UTC [11052] LOG: could not write temporary statistics file \"pg_stat/db_16903.tmp\": No space left on device\n2019-04-22 00:16:55.278 UTC [11052] LOG: could not close temporary statistics file \"pg_stat/db_0.tmp\": No space left on device\n2019-04-22 00:16:55.278 UTC [11052] LOG: could not close temporary statistics file \"pg_stat/global.tmp\": No space left on device\n2019-04-22 00:16:55.315 UTC [11046] LOG: all server processes terminated; reinitializing\n2019-04-22 00:16:55.425 UTC [11123] LOG: database system was interrupted; last known up at 2019-04-22 00:12:56 UTC\n2019-04-22 00:16:55.426 UTC [11124] FATAL: the database system is in recovery mode\n2019-04-22 00:16:55.545 UTC [11123] LOG: database system was not properly shut down; automatic recovery in progress\n2019-04-22 00:16:55.549 UTC [11123] LOG: redo starts at 3D2/C44FDCF8\n\nok it is all because it dumped 3 core dumps, glad that I captured the \ntop lines of the backend and its 2 workers\n\n-rw------- 1 postgres postgres 1075843072 Apr 22 00:16 core.11054 -- backend\n-rw------- 1 postgres postgres� 894640128 Apr 22 00:16 core.11118 -- \nworker 1\n-rw------- 1 postgres postgres 1079103488 Apr 22 00:16 core.11119 -- \nworker 2\n\nAnd the melt down starts with \"parallel worker\" (PID 11119) receiving \nSIGSEGV.\n\nSo let's get gdb to the task to see what's up:\n\n$ gdb -c data/core.11119 postgresql-11.2/src/backend/postgres\n...\nReading symbols from 
postgresql-11.2/src/backend/postgres...done.\nBFD: Warning: /var/lib/pgsql/data/core.11119 is truncated: expected core file size >= 1127112704, found: 1079103488.\n[New LWP 11119]\nCannot access memory at address 0x7ff8d25dc108\nCannot access memory at address 0x7ff8d25dc100\nFailed to read a valid object file image from memory.\nCore was generated by `postgres: parallel worker for'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000006bd792 in ExecParallelHashJoinNewBatch (\n hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9c38>) at nodeHashjoin.c:1127\n1127 {\n(gdb) bt 8\n#0 0x00000000006bd792 in ExecParallelHashJoinNewBatch (\n hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9c38>) at nodeHashjoin.c:1127\nBacktrace stopped: Cannot access memory at address 0x7ffd45fa9c88\n(gdb) info frame\nStack level 0, frame at 0x7ffd45fa9c90:\n rip = 0x6bd792 in ExecParallelHashJoinNewBatch (nodeHashjoin.c:1127); saved rip = <not saved>\n Outermost frame: Cannot access memory at address 0x7ffd45fa9c88\n source language c.\n Arglist at 0x7ffd45fa9c80, args: hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9c38>\n Locals at 0x7ffd45fa9c80, Previous frame's sp is 0x7ffd45fa9c90\nCannot access memory at address 0x7ffd45fa9c80\n(gdb) list\n1122 SharedTuplestoreAccessor *inner_tuples;\n1123 Barrier *batch_barrier =\n1124 &hashtable->batches[batchno].shared->batch_barrier;\n1125\n1126 switch (BarrierAttach(batch_barrier))\n1127 {\n1128 case PHJ_BATCH_ELECTING:\n1129\n1130 /* One backend allocates the hash table. */\n1131 if (BarrierArriveAndWait(batch_barrier,\n\nunfortunately this core file is truncated because of the file system \nrunning out of space. Let's see the others.\n\n$ gdb -c data/core.11118 postgresql-11.2/src/backend/postgres\n...\nReading symbols from postgresql-11.2/src/backend/postgres...done.\nBFD: Warning: /var/lib/pgsql/data/core.11118 is truncated: expected core file size >= 1127112704, found: 894640128.\n[New LWP 11118]\nCannot access memory at address 0x7ff8d25dc108\nCannot access memory at address 0x7ff8d25dc100\nFailed to read a valid object file image from memory.\nCore was generated by `postgres: parallel worker for'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000006bd792 in ExecParallelHashJoinNewBatch (\n hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9c38>) at nodeHashjoin.c:1127\n1127 {\n(gdb) bt 5\n#0 0x00000000006bd792 in ExecParallelHashJoinNewBatch (\n hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9c38>) at nodeHashjoin.c:1127\nBacktrace stopped: Cannot access memory at address 0x7ffd45fa9c88\n(gdb) list\n1122 SharedTuplestoreAccessor *inner_tuples;\n1123 Barrier *batch_barrier =\n1124 &hashtable->batches[batchno].shared->batch_barrier;\n1125\n1126 switch (BarrierAttach(batch_barrier))\n1127 {\n1128 case PHJ_BATCH_ELECTING:\n1129\n1130 /* One backend allocates the hash table. 
*/\n1131 if (BarrierArriveAndWait(batch_barrier,\n\nstrange, that one must have died very similar, same place, also truncated.\n\n$ gdb -c data/core.11054 postgresql-11.2/src/backend/postgres\n...\nReading symbols from postgresql-11.2/src/backend/postgres...done.\nBFD: Warning: /var/lib/pgsql/data/core.11054 is truncated: expected core file size >= 1238786048, found: 1075843072.\n[New LWP 11054]\nCannot access memory at address 0x7ff8d25dc108\nCannot access memory at address 0x7ff8d25dc100\nFailed to read a valid object file image from memory.\nCore was generated by `postgres: postgres integrator'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000006bd792 in ExecParallelHashJoinNewBatch (\n hjstate=<error reading variable: Cannot access memory at address 0x7ffd45fa9498>) at nodeHashjoin.c:1127\n1127 {\n(\n\nI don't understand why all of them are at the same location. Doesn't \nmake any sense to me.\n\nBut I'll leave it at that right now.\n\n-Gunther",
"msg_date": "Sun, 21 Apr 2019 20:55:03 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 11:40:22AM -0500, Justin Pryzby wrote:\n>On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:\n>> Jeff Janes <[email protected]> writes:\n>> > The growEnabled stuff only prevents infinite loops. It doesn't prevent\n>> > extreme silliness.\n>>\n>> > If a single 32 bit hash value has enough tuples by itself to not fit in\n>> > work_mem, then it will keep splitting until that value is in a batch by\n>> > itself before shutting off\n>>\n>> I suspect, however, that we might be better off just taking the existence\n>> of the I/O buffers into account somehow while deciding whether it's worth\n>> growing further. That is, I'm imagining adding a second independent\n>> reason for shutting off growEnabled, along the lines of \"increasing\n>> nbatch any further will require an unreasonable amount of buffer memory\".\n>> The question then becomes how to define \"unreasonable\".\n>\n>On Sun, Apr 21, 2019 at 06:15:25PM +0200, Tomas Vondra wrote:\n>> I think the question the code needs to be asking is \"If we double the\n>> number of batches, does the amount of memory we need drop?\" And the\n>> memory needs to account both for the buffers and per-batch data.\n>>\n>> I don't think we can just stop increasing the number of batches when the\n>> memory for BufFile exceeds work_mem, because that entirely ignores the\n>> fact that by doing that we force the system to keep the per-batch stuff\n>> in memory (and that can be almost arbitrary amount).\n>...\n>> Of course, this just stops enforcing work_mem at some point, but it at\n>> least attempts to minimize the amount of memory used.\n>\n>This patch defines reasonable as \"additional BatchFiles will not themselves\n>exceed work_mem; OR, exceeded work_mem already but additional BatchFiles are\n>going to save us RAM\"...\n>\n\nOK.\n\n>I think the first condition is insensitive and not too important to get right,\n>it only allows work_mem to be exceeded by 2x, which maybe already happens for\n>multiple reasons, related to this thread and otherwise. It'd be fine to slap\n>on a factor of /2 or /4 or /8 there too.\n>\n\nTBH I'm not quite sure I understand all the conditions in the patch - it\nseems unnecessarily complicated. And I don't think it actually minimizes\nthe amount of memory used for hash table + buffers, because it keeps the\nsame spaceAllowed (which pushes nbatches up). At some point it actually\nmakes to bump spaceAllowed and make larger batches instead of adding\nmore batches, and the patch does not seem to do that.\n\nAlso, the patch does this:\n\n if (hashtable->nbatch*sizeof(PGAlignedBlock) < hashtable->spaceAllowed)\n {\n ExecHashIncreaseNumBatches(hashtable);\n }\n else if (hashtable->spaceUsed/2 >= hashtable->spaceAllowed)\n {\n /* Exceeded spaceAllowed by 2x, so we'll save RAM by allowing nbatches to increase */\n /* I think this branch would be hit almost same as below branch */\n ExecHashIncreaseNumBatches(hashtable);\n }\n ...\n\nbut the reasoning for the second branch seems wrong, because\n\n (spaceUsed/2 >= spaceAllowed)\n\nis not enough to guarantee that we actually save memory by doubling the\nnumber of batches. 
To do that, we need to make sure that\n\n (spaceUsed/2 >= hashtable->nbatch * sizeof(PGAlignedBlock))\n\nBut that may not be true - it certainly is not guaranteed by not getting\ninto the first branch.\n\nConsider an ideal example with uniform distribution:\n\n create table small (id bigint, val text);\n create table large (id bigint, val text);\n\n insert into large select 1000000000 * random(), md5(i::text)\n from generate_series(1, 700000000) s(i);\n\n insert into small select 1000000000 * random(), md5(i::text)\n from generate_series(1, 10000) s(i);\n\n vacuum analyze large;\n vacuum analyze small;\n\n update pg_class set (relpages, reltuples) = (1000000, 1)\n where relname = 'large';\n\n update pg_class set (relpages, reltuples) = (1, 1000000)\n where relname = 'small';\n\n set work_mem = '1MB';\n\n explain analyze select * from small join large using (id);\n\nA log after each call to ExecHashIncreaseNumBatches says this:\n\n nbatch=2 spaceUsed=463200 spaceAllowed=1048576 BufFile=16384\n nbatch=4 spaceUsed=463120 spaceAllowed=1048576 BufFile=32768\n nbatch=8 spaceUsed=457120 spaceAllowed=1048576 BufFile=65536\n nbatch=16 spaceUsed=458320 spaceAllowed=1048576 BufFile=131072\n nbatch=32 spaceUsed=457120 spaceAllowed=1048576 BufFile=262144\n nbatch=64 spaceUsed=459200 spaceAllowed=1048576 BufFile=524288\n nbatch=128 spaceUsed=455600 spaceAllowed=1048576 BufFile=1048576\n nbatch=256 spaceUsed=525120 spaceAllowed=1048576 BufFile=2097152\n nbatch=256 spaceUsed=2097200 spaceAllowed=1048576 BufFile=2097152\n nbatch=512 spaceUsed=2097200 spaceAllowed=1048576 BufFile=4194304\n nbatch=1024 spaceUsed=2097200 spaceAllowed=1048576 BufFile=8388608\n nbatch=2048 spaceUsed=2097200 spaceAllowed=1048576 BufFile=16777216\n nbatch=4096 spaceUsed=2097200 spaceAllowed=1048576 BufFile=33554432\n nbatch=8192 spaceUsed=2097200 spaceAllowed=1048576 BufFile=67108864\n nbatch=16384 spaceUsed=2097200 spaceAllowed=1048576 BufFile=134217728\n\nSo we've succeeded in keeping spaceUsed below 2*spaceAllowed (which\nseems rather confusing, BTW), but we've allocated 128MB for BufFile. So\nabout 130MB in total. With 16k batches.\n\nWhat I think might work better is the attached v2 of the patch, with a\nsingle top-level condition, comparing the combined memory usage\n(spaceUsed + BufFile) against spaceAllowed. 
But it also tweaks\nspaceAllowed once the size needed for BufFile gets over work_mem/3.\n\nAnd it behaves like this:\n\n nbatch=2 spaceUsed=458640 spaceAllowed=1048576 BufFile=16384\n nbatch=4 spaceUsed=455040 spaceAllowed=1048576 BufFile=32768\n nbatch=8 spaceUsed=440160 spaceAllowed=1048576 BufFile=65536\n nbatch=16 spaceUsed=426560 spaceAllowed=1048576 BufFile=131072\n nbatch=32 spaceUsed=393200 spaceAllowed=1048576 BufFile=262144\n nbatch=64 spaceUsed=329120 spaceAllowed=1572864 BufFile=524288\n nbatch=128 spaceUsed=455600 spaceAllowed=3145728 BufFile=1048576\n nbatch=256 spaceUsed=987440 spaceAllowed=6291456 BufFile=2097152\n nbatch=512 spaceUsed=2040560 spaceAllowed=12582912 BufFile=4194304\n nbatch=1024 spaceUsed=4114640 spaceAllowed=25165824 BufFile=8388608\n nbatch=2048 spaceUsed=8302880 spaceAllowed=50331648 BufFile=16777216\n\nSo we end up with just 2k batches, using ~24MB of memory in total.\nThat's because the spaceAllowed limit was bumped up instead of adding\nmore and more batches.\n\n>The current patch doesn't unset growEnabled, since there's no point at which\n>the hashtable should grow without bound: if hash tables are *already* exceeding\n>work_mem by 2x as big, nbatches should be doubled.\n>\n\nNot sure. I guess it might be useful to re-evaluate the flag after a\nwhile - not necessarily by actually enabling it right away, but just\nchecking if it'd move any tuples. Just disabling it once may be an issue\nwhen the input data is somehow correlated, which seems to be one of the\nissues with the data set discussed in this thread.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
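As an illustration of the rule just described, here is a rough C sketch of the combined accounting -- a sketch under stated assumptions, not the actual v2 patch. The wrapper function name is made up; spaceUsed, spaceAllowed, nbatch, PGAlignedBlock and ExecHashIncreaseNumBatches are the fields and helpers already named in this thread, and a HashJoinTable pointer is assumed to be in scope.

    /*
     * Sketch only: decide whether to double nbatch, counting both the
     * in-memory tuples and one 8kB BufFile buffer per batch.
     */
    static void
    ConsiderAddingBatches(HashJoinTable hashtable)  /* hypothetical helper */
    {
        Size        bufFiles = hashtable->nbatch * sizeof(PGAlignedBlock);

        /* single top-level condition: tuples + per-batch buffers vs. limit */
        if (hashtable->spaceUsed + bufFiles > hashtable->spaceAllowed)
            ExecHashIncreaseNumBatches(hashtable);

        /*
         * Once the batch buffers alone need more than a third of the
         * budget, grow the budget along with them (the trace above shows
         * spaceAllowed tracking 3 * BufFile), so the check keeps comparing
         * against a sensible total instead of doubling nbatch forever.
         */
        if (bufFiles > hashtable->spaceAllowed / 3)
            hashtable->spaceAllowed = bufFiles * 3;
    }

With work_mem = 1MB and 8kB blocks, the buffer term crosses work_mem/3 between nbatch = 32 and 64, which matches the trace above: spaceAllowed first moves at nbatch=64, to 3 * 512kB = 1.5MB.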
"msg_date": "Mon, 22 Apr 2019 05:09:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Batch splitting shouldn't be followed by a hash function change?\n\nOn Mon, Apr 22, 2019, 05:09 Tomas Vondra <[email protected]>\nwrote:\n\n> On Sun, Apr 21, 2019 at 11:40:22AM -0500, Justin Pryzby wrote:\n> >On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:\n> >> Jeff Janes <[email protected]> writes:\n> >> > The growEnabled stuff only prevents infinite loops. It doesn't\n> prevent\n> >> > extreme silliness.\n> >>\n> >> > If a single 32 bit hash value has enough tuples by itself to not fit\n> in\n> >> > work_mem, then it will keep splitting until that value is in a batch\n> by\n> >> > itself before shutting off\n> >>\n> >> I suspect, however, that we might be better off just taking the\n> existence\n> >> of the I/O buffers into account somehow while deciding whether it's\n> worth\n> >> growing further. That is, I'm imagining adding a second independent\n> >> reason for shutting off growEnabled, along the lines of \"increasing\n> >> nbatch any further will require an unreasonable amount of buffer\n> memory\".\n> >> The question then becomes how to define \"unreasonable\".\n> >\n> >On Sun, Apr 21, 2019 at 06:15:25PM +0200, Tomas Vondra wrote:\n> >> I think the question the code needs to be asking is \"If we double the\n> >> number of batches, does the amount of memory we need drop?\" And the\n> >> memory needs to account both for the buffers and per-batch data.\n> >>\n> >> I don't think we can just stop increasing the number of batches when the\n> >> memory for BufFile exceeds work_mem, because that entirely ignores the\n> >> fact that by doing that we force the system to keep the per-batch stuff\n> >> in memory (and that can be almost arbitrary amount).\n> >...\n> >> Of course, this just stops enforcing work_mem at some point, but it at\n> >> least attempts to minimize the amount of memory used.\n> >\n> >This patch defines reasonable as \"additional BatchFiles will not\n> themselves\n> >exceed work_mem; OR, exceeded work_mem already but additional BatchFiles\n> are\n> >going to save us RAM\"...\n> >\n>\n> OK.\n>\n> >I think the first condition is insensitive and not too important to get\n> right,\n> >it only allows work_mem to be exceeded by 2x, which maybe already happens\n> for\n> >multiple reasons, related to this thread and otherwise. It'd be fine to\n> slap\n> >on a factor of /2 or /4 or /8 there too.\n> >\n>\n> TBH I'm not quite sure I understand all the conditions in the patch - it\n> seems unnecessarily complicated. And I don't think it actually minimizes\n> the amount of memory used for hash table + buffers, because it keeps the\n> same spaceAllowed (which pushes nbatches up). At some point it actually\n> makes to bump spaceAllowed and make larger batches instead of adding\n> more batches, and the patch does not seem to do that.\n>\n> Also, the patch does this:\n>\n> if (hashtable->nbatch*sizeof(PGAlignedBlock) < hashtable->spaceAllowed)\n> {\n> ExecHashIncreaseNumBatches(hashtable);\n> }\n> else if (hashtable->spaceUsed/2 >= hashtable->spaceAllowed)\n> {\n> /* Exceeded spaceAllowed by 2x, so we'll save RAM by allowing\n> nbatches to increase */\n> /* I think this branch would be hit almost same as below branch */\n> ExecHashIncreaseNumBatches(hashtable);\n> }\n> ...\n>\n> but the reasoning for the second branch seems wrong, because\n>\n> (spaceUsed/2 >= spaceAllowed)\n>\n> is not enough to guarantee that we actually save memory by doubling the\n> number of batches. 
To do that, we need to make sure that\n>\n> (spaceUsed/2 >= hashtable->nbatch * sizeof(PGAlignedBlock))\n>\n> But that may not be true - it certainly is not guaranteed by not getting\n> into the first branch.\n>\n> Consider an ideal example with uniform distribution:\n>\n> create table small (id bigint, val text);\n> create table large (id bigint, val text);\n>\n> insert into large select 1000000000 * random(), md5(i::text)\n> from generate_series(1, 700000000) s(i);\n>\n> insert into small select 1000000000 * random(), md5(i::text)\n> from generate_series(1, 10000) s(i);\n>\n> vacuum analyze large;\n> vacuum analyze small;\n>\n> update pg_class set (relpages, reltuples) = (1000000, 1)\n> where relname = 'large';\n>\n> update pg_class set (relpages, reltuples) = (1, 1000000)\n> where relname = 'small';\n>\n> set work_mem = '1MB';\n>\n> explain analyze select * from small join large using (id);\n>\n> A log after each call to ExecHashIncreaseNumBatches says this:\n>\n> nbatch=2 spaceUsed=463200 spaceAllowed=1048576 BufFile=16384\n> nbatch=4 spaceUsed=463120 spaceAllowed=1048576 BufFile=32768\n> nbatch=8 spaceUsed=457120 spaceAllowed=1048576 BufFile=65536\n> nbatch=16 spaceUsed=458320 spaceAllowed=1048576 BufFile=131072\n> nbatch=32 spaceUsed=457120 spaceAllowed=1048576 BufFile=262144\n> nbatch=64 spaceUsed=459200 spaceAllowed=1048576 BufFile=524288\n> nbatch=128 spaceUsed=455600 spaceAllowed=1048576 BufFile=1048576\n> nbatch=256 spaceUsed=525120 spaceAllowed=1048576 BufFile=2097152\n> nbatch=256 spaceUsed=2097200 spaceAllowed=1048576 BufFile=2097152\n> nbatch=512 spaceUsed=2097200 spaceAllowed=1048576 BufFile=4194304\n> nbatch=1024 spaceUsed=2097200 spaceAllowed=1048576 BufFile=8388608\n> nbatch=2048 spaceUsed=2097200 spaceAllowed=1048576 BufFile=16777216\n> nbatch=4096 spaceUsed=2097200 spaceAllowed=1048576 BufFile=33554432\n> nbatch=8192 spaceUsed=2097200 spaceAllowed=1048576 BufFile=67108864\n> nbatch=16384 spaceUsed=2097200 spaceAllowed=1048576 BufFile=134217728\n>\n> So we've succeeded in keeping spaceUsed below 2*spaceAllowed (which\n> seems rather confusing, BTW), but we've allocated 128MB for BufFile. So\n> about 130MB in total. With 16k batches.\n>\n> What I think might work better is the attached v2 of the patch, with a\n> single top-level condition, comparing the combined memory usage\n> (spaceUsed + BufFile) against spaceAllowed. 
But it also tweaks\n> spaceAllowed once the size needed for BufFile gets over work_mem/3.\n>\n> And it behaves like this:\n>\n> nbatch=2 spaceUsed=458640 spaceAllowed=1048576 BufFile=16384\n> nbatch=4 spaceUsed=455040 spaceAllowed=1048576 BufFile=32768\n> nbatch=8 spaceUsed=440160 spaceAllowed=1048576 BufFile=65536\n> nbatch=16 spaceUsed=426560 spaceAllowed=1048576 BufFile=131072\n> nbatch=32 spaceUsed=393200 spaceAllowed=1048576 BufFile=262144\n> nbatch=64 spaceUsed=329120 spaceAllowed=1572864 BufFile=524288\n> nbatch=128 spaceUsed=455600 spaceAllowed=3145728 BufFile=1048576\n> nbatch=256 spaceUsed=987440 spaceAllowed=6291456 BufFile=2097152\n> nbatch=512 spaceUsed=2040560 spaceAllowed=12582912 BufFile=4194304\n> nbatch=1024 spaceUsed=4114640 spaceAllowed=25165824 BufFile=8388608\n> nbatch=2048 spaceUsed=8302880 spaceAllowed=50331648 BufFile=16777216\n>\n> So we end up with just 2k batches, using ~24MB of memory in total.\n> That's because the spaceAllowed limit was bumped up instead of adding\n> more and more batches.\n>\n> >The current patch doesn't unset growEnabled, since there's no point at\n> which\n> >the hashtable should grow without bound: if hash tables are *already*\n> exceeding\n> >work_mem by 2x as big, nbatches should be doubled.\n> >\n>\n> Not sure. I guess it might be useful to re-evaluate the flag after a\n> while - not necessarily by actually enabling it right away, but just\n> checking if it'd move any tuples. Just disabling it once may be an issue\n> when the input data is somehow correlated, which seems to be one of the\n> issues with the data set discussed in this thread.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Mon, 22 Apr 2019 10:07:52 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 10:07:52AM +0200, Gaetano Mendola wrote:\n> Batch splitting shouldn't be followed by a hash function change?�\n\nWhat would be the value? That can help with hash collisions, but that's\nnot the issue with the data sets discussed in this thread. The issue\nreported originally is about underestimates, and the sample data set has\na large number of duplicate values (a single value representing ~10% of\nthe data set). Neither of those issues is about hash collisions.\n\nThe data set I used to demonstrate how the algorithms work is pretty\nperfect, with uniform distribution and no hash collisions.\n\nFurthermore, I don't think we can just change the hash function, for a\ncouple of technical reasons.\n\nFirstly, it's not like we totally redistribute the whole dataset from N\nold batches to (2*N) new ones. By using the same 32-bit hash value and\ncconsidering one extra bit, the tuples either stay in the same batch\n(when the new bit is 0) or move to a single new batch (when it's 1). So\neach batch is split in 1/2. By changing the hash function this would no\nlonger be true, and we'd redistribute pretty much the whole data set.\n\nThe other issue is even more significant - we don't redistribute the\ntuples immediately. We only redistribute the current batch, but leave\nthe other batches alone and handle them when we actually get to them.\nThis is possible, because the tuples never move backwards - when\nsplitting batch K, the tuples either stay in K or move to 2K. Or\nsomething like that, I'm too lazy to recall the exact formula now.\n\nAnd if I recall correctly, I think we can increment the number of\nbatches while already performing the join, after some rows were already\nprocessed. That would probably be no longer true if we just switched the\nhash function, because it might move rows backwards (to the already\nprocessed region).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 22 Apr 2019 15:04:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 4:48 PM Tom Lane <[email protected]> wrote:\n\n> Gunther <[email protected]> writes:\n> > and checked my log file and there was nothing before the call\n> > MemoryContextStats(TopPortalContext) so I don't understand where this\n> > printf stuff is ending up.\n>\n> It's going to stdout, which is likely block-buffered whereas stderr\n> is line-buffered, so data from the latter will show up in your log\n> file much sooner. You might consider adding something to startup\n> to switch stdout to line buffering.\n>\n\nIs there a reason to not just elog the HJDEBUG stuff? With some of the\nother DEBUG defines, we will probably be using them before the logging\nsystem is set up, but surely we won't be doing Hash Joins that early?\n\nI think we could just get rid of the conditional compilation and elog this\nat DEBUG1 or DEBUG2. Or keep the conditional compilation and elog it at\nLOG.\n\nCheers,\n\nJeff\n\nOn Sat, Apr 20, 2019 at 4:48 PM Tom Lane <[email protected]> wrote:Gunther <[email protected]> writes:\n> and checked my log file and there was nothing before the call \n> MemoryContextStats(TopPortalContext) so I don't understand where this \n> printf stuff is ending up.\n\nIt's going to stdout, which is likely block-buffered whereas stderr\nis line-buffered, so data from the latter will show up in your log\nfile much sooner. You might consider adding something to startup\nto switch stdout to line buffering.Is there a reason to not just elog the HJDEBUG stuff? With some of the other DEBUG defines, we will probably be using them before the logging system is set up, but surely we won't be doing Hash Joins that early?I think we could just get rid of the conditional compilation and elog this at DEBUG1 or DEBUG2. Or keep the conditional compilation and elog it at LOG.Cheers,Jeff",
"msg_date": "Mon, 22 Apr 2019 13:15:10 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> Is there a reason to not just elog the HJDEBUG stuff?\n\nYes --- it'd be expensive (a \"no op\" elog is far from free) and\nuseless to ~ 99.999% of users.\n\nAlmost all the conditionally-compiled debug support in the PG executor\nis legacy leftovers from Berkeley days. If it were useful more often\nthan once in a blue moon, we probably would have made it more easily\nreachable long ago. I'm a bit surprised we haven't just ripped it\nout, TBH. When I find myself needing extra debug output, it's almost\nnever the case that any of that old code does what I need.\n\nThere might be a case for changing it all to print to stderr not\nstdout, so that it plays more nicely with elog/ereport output\nwhen you do have it turned on, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:57:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On 4/21/2019 23:09, Tomas Vondra wrote:\n> What I think might work better is the attached v2 of the patch, with a\n> single top-level condition, comparing the combined memory usage\n> (spaceUsed + BufFile) against spaceAllowed. But it also tweaks\n> spaceAllowed once the size needed for BufFile gets over work_mem/3.\n\nThanks for this, and I am trying this now.\n\nSo far it is promising.\n\nI see the memory footprint contained under 1 GB. I see it go up, but \nalso down again. CPU, IO, all being live.\n\nfoo=# set enable_nestloop to off; SET foo=# explain analyze select * \nfrom reports.v_BusinessOperation; WARNING: ExecHashIncreaseNumBatches: \nnbatch=32 spaceAllowed=4194304 WARNING: ExecHashIncreaseNumBatches: \nnbatch=64 spaceAllowed=4194304 WARNING: ExecHashIncreaseNumBatches: \nnbatch=128 spaceAllowed=4194304 WARNING: ExecHashIncreaseNumBatches: \nnbatch=256 spaceAllowed=6291456 WARNING: ExecHashIncreaseNumBatches: \nnbatch=512 spaceAllowed=12582912 WARNING: ExecHashIncreaseNumBatches: \nnbatch=1024 spaceAllowed=25165824 WARNING: ExecHashIncreaseNumBatches: \nnbatch=2048 spaceAllowed=50331648 WARNING: ExecHashIncreaseNumBatches: \nnbatch=4096 spaceAllowed=100663296 WARNING: ExecHashIncreaseNumBatches: \nnbatch=8192 spaceAllowed=201326592 WARNING: ExecHashIncreaseNumBatches: \nnbatch=16384 spaceAllowed=402653184 WARNING: ExecHashIncreaseNumBatches: \nnbatch=32768 spaceAllowed=805306368 WARNING: ExecHashIncreaseNumBatches: \nnbatch=65536 spaceAllowed=1610612736\n\nAaaaaand, it's a winner!\n\nUnique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual \ntime=6150303.060..6895451.210 rows=435274 loops=1) -> Sort \n(cost=5551524.36..5551610.91 rows=34619 width=1197) (actual \ntime=6150303.058..6801372.192 rows=113478386 loops=1) Sort Key: \ndocumentinformationsubject.documentinternalid, \ndocumentinformationsubject.is_current, \ndocumentinformationsubject.documentid, \ndocumentinformationsubject.documenttypecode, \ndocumentinformationsubject.subjectroleinternalid, \ndocumentinformationsubject.subjectentityinternalid, \ndocumentinformationsubject.subjectentityid, \ndocumentinformationsubject.subjectentityidroot, \ndocumentinformationsubject.subjectentityname, \ndocumentinformationsubject.subjectentitytel, \ndocumentinformationsubject.subjectentityemail, \ndocumentinformationsubject.otherentityinternalid, \ndocumentinformationsubject.confidentialitycode, \ndocumentinformationsubject.actinternalid, \ndocumentinformationsubject.code_code, \ndocumentinformationsubject.code_displayname, q.code_code, \nq.code_displayname, an.extension, an.root, \ndocumentinformationsubject_2.subjectentitycode, \ndocumentinformationsubject_2.subjectentitycodesystem, \ndocumentinformationsubject_2.effectivetime_low, \ndocumentinformationsubject_2.effectivetime_high, \ndocumentinformationsubject_2.statuscode, \ndocumentinformationsubject_2.code_code, agencyid.extension, \nagencyname.trivialname, documentinformationsubject_1.subjectentitycode, \ndocumentinformationsubject_1.subjectentityinternalid Sort Method: \nexternal merge Disk: 40726720kB -> Hash Right Join \n(cost=4255031.53..5530808.71 rows=34619 width=1197) (actual \ntime=325240.679..1044194.775 rows=113478386 loops=1) Hash Cond: \n(((q.documentinternalid)::text = \n(documentinformationsubject.documentinternalid)::text) AND \n((r.targetinternalid)::text = \n(documentinformationsubject.actinternalid)::text)) -> Hash Right Join \n(cost=1341541.37..2612134.36 rows=13 width=341) (actual \ntime=81093.327..81093.446 rows=236 loops=1) Hash 
Cond: \n(((documentinformationsubject_2.documentinternalid)::text = \n(q.documentinternalid)::text) AND \n((documentinformationsubject_2.actinternalid)::text = \n(q.actinternalid)::text)) -> Gather (cost=31291.54..1301884.52 rows=1 \nwidth=219) (actual time=41920.563..41929.780 rows=0 loops=1) Workers \nPlanned: 2 Workers Launched: 2 -> Parallel Hash Left Join \n(cost=30291.54..1300884.42 rows=1 width=219) (actual \ntime=41915.960..41915.960 rows=0 loops=3) Hash Cond: \n((documentinformationsubject_2.otherentityinternalid)::text = \n(agencyid.entityinternalid)::text) -> Parallel Hash Left Join \n(cost=28606.13..1299199.00 rows=1 width=204) (actual \ntime=41862.767..41862.769 rows=0 loops=3) Hash Cond: \n((documentinformationsubject_2.otherentityinternalid)::text = \n(agencyname.entityinternalid)::text) -> Parallel Seq Scan on \ndocumentinformationsubject documentinformationsubject_2 \n(cost=0.00..1268800.85 rows=1 width=177) (actual \ntime=40805.337..40805.337 rows=0 loops=3) Filter: \n((participationtypecode)::text = 'AUT'::text) Rows Removed by Filter: \n2815562 -> Parallel Hash (cost=24733.28..24733.28 rows=166628 width=64) \n(actual time=981.000..981.001 rows=133303 loops=3) Buckets: 65536 \nBatches: 16 Memory Usage: 3136kB -> Parallel Seq Scan on bestname \nagencyname (cost=0.00..24733.28 rows=166628 width=64) (actual \ntime=0.506..916.816 rows=133303 loops=3) -> Parallel Hash \n(cost=1434.07..1434.07 rows=20107 width=89) (actual time=52.350..52.350 \nrows=11393 loops=3) Buckets: 65536 Batches: 1 Memory Usage: 4680kB -> \nParallel Seq Scan on entity_id agencyid (cost=0.00..1434.07 rows=20107 \nwidth=89) (actual time=0.376..46.875 rows=11393 loops=3) -> Hash \n(cost=1310249.63..1310249.63 rows=13 width=233) (actual \ntime=39172.682..39172.682 rows=236 loops=1) Buckets: 1024 Batches: 1 \nMemory Usage: 70kB -> Hash Right Join (cost=829388.20..1310249.63 \nrows=13 width=233) (actual time=35084.850..39172.545 rows=236 loops=1) \nHash Cond: ((an.actinternalid)::text = (q.actinternalid)::text) -> Seq \nScan on act_id an (cost=0.00..425941.04 rows=14645404 width=134) (actual \ntime=0.908..7583.123 rows=14676871 loops=1) -> Hash \n(cost=829388.19..829388.19 rows=1 width=136) (actual \ntime=29347.614..29347.614 rows=236 loops=1) Buckets: 1024 Batches: 1 \nMemory Usage: 63kB -> Gather (cost=381928.46..829388.19 rows=1 \nwidth=136) (actual time=23902.428..29347.481 rows=236 loops=1) Workers \nPlanned: 2 Workers Launched: 2 -> Parallel Hash Join \n(cost=380928.46..828388.09 rows=1 width=136) (actual \ntime=23915.790..29336.452 rows=79 loops=3) Hash Cond: \n((q.actinternalid)::text = (r.sourceinternalid)::text) -> Parallel Seq \nScan on documentinformation q (cost=0.00..447271.93 rows=50050 width=99) \n(actual time=10055.238..15484.478 rows=87921 loops=3) Filter: \n(((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = \n'EVN'::text) AND ((code_codesystem)::text = \n'2.16.840.1.113883.3.26.1.1'::text)) Rows Removed by Filter: 1540625 -> \nParallel Hash (cost=380928.44..380928.44 rows=1 width=74) (actual \ntime=13825.726..13825.726 rows=79 loops=3) Buckets: 1024 Batches: 1 \nMemory Usage: 112kB -> Parallel Seq Scan on actrelationship r \n(cost=0.00..380928.44 rows=1 width=74) (actual time=5289.948..13825.576 \nrows=79 loops=3) Filter: ((typecode)::text = 'SUBJ'::text) Rows Removed \nby Filter: 3433326 -> Hash (cost=2908913.87..2908913.87 rows=34619 \nwidth=930) (actual time=244145.322..244145.322 rows=113478127 loops=1) \nBuckets: 8192 (originally 8192) Batches: 65536 (originally 16) Memory 
\nUsage: 1204250kB -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 \nwidth=930) (actual time=75215.333..145622.427 rows=113478127 loops=1) \nWorkers Planned: 2 Workers Launched: 2 -> Merge Left Join \n(cost=2891141.37..2903917.96 rows=14425 width=930) (actual \ntime=75132.988..99411.448 rows=37826042 loops=3) Merge Cond: \n(((documentinformationsubject.documentinternalid)::text = \n(documentinformationsubject_1.documentinternalid)::text) AND \n((documentinformationsubject.documentid)::text = \n(documentinformationsubject_1.documentid)::text) AND \n((documentinformationsubject.actinternalid)::text = \n(documentinformationsubject_1.actinternalid)::text)) -> Sort \n(cost=1301590.26..1301626.32 rows=14425 width=882) (actual \ntime=39801.337..40975.780 rows=231207 loops=3) Sort Key: \ndocumentinformationsubject.documentinternalid, \ndocumentinformationsubject.documentid, \ndocumentinformationsubject.actinternalidct_1.documentid, \ndocumentinformationsubject_1.actinternalid Sort Method: external merge \nDisk: 169768kB Worker 0: Sort Method: external merge Disk: 169768kB \nWorker 1: Sort Method: external merge Disk: 169768kB -> Seq Scan on \ndocumentinformationsubject documentinformationsubject_1 \n(cost=0.00..1329868.64 rows=1010585 width=159) (actual \ntime=23401.537..31758.042 rows=1031106 loops=3) Filter: \n((participationtypecode)::text = 'PRD'::text) Rows Removed by Filter: \n7415579 Planning Time: 40.559 ms Execution Time: 6896581.566 ms (70 rows)\n\nFor the first time this query has succeeded now. Memory was bounded. The \ntime of nearly hours is crazy, but things sometimes take that long. The \nimportant thing was not to get an out of memory error.\n\nThank you. Anything else you want to try, I can do it.\n\nregards,\n-Gunther\n\n\n\n\n\n\n\nOn 4/21/2019 23:09, Tomas Vondra wrote:\n\nWhat I\r\n think might work better is the attached v2 of the patch, with a\r\n \r\n single top-level condition, comparing the combined memory usage\r\n \r\n (spaceUsed + BufFile) against spaceAllowed. But it also tweaks\r\n \r\n spaceAllowed once the size needed for BufFile gets over\r\n work_mem/3.\r\n \n\nThanks for this, and I am trying this now.\nSo far it is promising.\nI see the memory footprint contained under 1 GB. I see it go up,\r\n but also down again. 
CPU, IO, all being live.\nfoo=# set enable_nestloop to off;\r\nSET\r\nfoo=# explain analyze select * from reports.v_BusinessOperation;\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=32 spaceAllowed=4194304\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=64 spaceAllowed=4194304\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=128 spaceAllowed=4194304\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=256 spaceAllowed=6291456\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=512 spaceAllowed=12582912\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=1024 spaceAllowed=25165824\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=2048 spaceAllowed=50331648\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=4096 spaceAllowed=100663296\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=8192 spaceAllowed=201326592\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=16384 spaceAllowed=402653184\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=32768 spaceAllowed=805306368\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=65536 spaceAllowed=1610612736\r\n\nAaaaaand, it's a winner!\n Unique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual time=6150303.060..6895451.210 rows=435274 loops=1)\r\n -> Sort (cost=5551524.36..5551610.91 rows=34619 width=1197) (actual time=6150303.058..6801372.192 rows=113478386 loops=1)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformationsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectentityemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_displayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubject_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\r\n Sort Method: external merge Disk: 40726720kB\r\n -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197) (actual time=325240.679..1044194.775 rows=113478386 loops=1)\r\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\r\n -> Hash Right Join (cost=1341541.37..2612134.36 rows=13 width=341) (actual time=81093.327..81093.446 rows=236 loops=1)\r\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\r\n -> Gather (cost=31291.54..1301884.52 rows=1 width=219) (actual time=41920.563..41929.780 rows=0 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel Hash Left Join (cost=30291.54..1300884.42 rows=1 width=219) (actual time=41915.960..41915.960 rows=0 loops=3)\r\n Hash Cond: 
((documentinformationsubject_2.otherentityinternalid)::text = (agencyid.entityinternalid)::text)\r\n -> Parallel Hash Left Join (cost=28606.13..1299199.00 rows=1 width=204) (actual time=41862.767..41862.769 rows=0 loops=3)\r\n Hash Cond: ((documentinformationsubject_2.otherentityinternalid)::text = (agencyname.entityinternalid)::text)\r\n -> Parallel Seq Scan on documentinformationsubject documentinformationsubject_2 (cost=0.00..1268800.85 rows=1 width=177) (actual time=40805.337..40805.337 rows=0 loops=3)\r\n Filter: ((participationtypecode)::text = 'AUT'::text)\r\n Rows Removed by Filter: 2815562\r\n -> Parallel Hash (cost=24733.28..24733.28 rows=166628 width=64) (actual time=981.000..981.001 rows=133303 loops=3)\r\n Buckets: 65536 Batches: 16 Memory Usage: 3136kB\r\n -> Parallel Seq Scan on bestname agencyname (cost=0.00..24733.28 rows=166628 width=64) (actual time=0.506..916.816 rows=133303 loops=3)\r\n -> Parallel Hash (cost=1434.07..1434.07 rows=20107 width=89) (actual time=52.350..52.350 rows=11393 loops=3)\r\n Buckets: 65536 Batches: 1 Memory Usage: 4680kB\r\n -> Parallel Seq Scan on entity_id agencyid (cost=0.00..1434.07 rows=20107 width=89) (actual time=0.376..46.875 rows=11393 loops=3)\r\n -> Hash (cost=1310249.63..1310249.63 rows=13 width=233) (actual time=39172.682..39172.682 rows=236 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 70kB\r\n -> Hash Right Join (cost=829388.20..1310249.63 rows=13 width=233) (actual time=35084.850..39172.545 rows=236 loops=1)\r\n Hash Cond: ((an.actinternalid)::text = (q.actinternalid)::text)\r\n -> Seq Scan on act_id an (cost=0.00..425941.04 rows=14645404 width=134) (actual time=0.908..7583.123 rows=14676871 loops=1)\r\n -> Hash (cost=829388.19..829388.19 rows=1 width=136) (actual time=29347.614..29347.614 rows=236 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 63kB\r\n -> Gather (cost=381928.46..829388.19 rows=1 width=136) (actual time=23902.428..29347.481 rows=236 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel Hash Join (cost=380928.46..828388.09 rows=1 width=136) (actual time=23915.790..29336.452 rows=79 loops=3)\r\n Hash Cond: ((q.actinternalid)::text = (r.sourceinternalid)::text)\r\n -> Parallel Seq Scan on documentinformation q (cost=0.00..447271.93 rows=50050 width=99) (actual time=10055.238..15484.478 rows=87921 loops=3)\r\n Filter: (((classcode)::text = 'CNTRCT'::text) AND ((moodcode)::text = 'EVN'::text) AND ((code_codesystem)::text = '2.16.840.1.113883.3.26.1.1'::text))\r\n Rows Removed by Filter: 1540625\r\n -> Parallel Hash (cost=380928.44..380928.44 rows=1 width=74) (actual time=13825.726..13825.726 rows=79 loops=3)\r\n Buckets: 1024 Batches: 1 Memory Usage: 112kB\r\n -> Parallel Seq Scan on actrelationship r (cost=0.00..380928.44 rows=1 width=74) (actual time=5289.948..13825.576 rows=79 loops=3)\r\n Filter: ((typecode)::text = 'SUBJ'::text)\r\n Rows Removed by Filter: 3433326\r\n -> Hash (cost=2908913.87..2908913.87 rows=34619 width=930) (actual time=244145.322..244145.322 rows=113478127 loops=1)\r\n Buckets: 8192 (originally 8192) Batches: 65536 (originally 16) Memory Usage: 1204250kB\r\n -> Gather Merge (cost=2892141.40..2908913.87 rows=34619 width=930) (actual time=75215.333..145622.427 rows=113478127 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Merge Left Join (cost=2891141.37..2903917.96 rows=14425 width=930) (actual time=75132.988..99411.448 rows=37826042 loops=3)\r\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = 
(documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\r\n -> Sort (cost=1301590.26..1301626.32 rows=14425 width=882) (actual time=39801.337..40975.780 rows=231207 loops=3)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalidct_1.documentid, documentinformationsubject_1.actinternalid\r\n Sort Method: external merge Disk: 169768kB\r\n Worker 0: Sort Method: external merge Disk: 169768kB\r\n Worker 1: Sort Method: external merge Disk: 169768kB\r\n -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=159) (actual time=23401.537..31758.042 rows=1031106 loops=3)\r\n Filter: ((participationtypecode)::text = 'PRD'::text)\r\n Rows Removed by Filter: 7415579\r\n Planning Time: 40.559 ms\r\n Execution Time: 6896581.566 ms\r\n(70 rows)\r\n\r\n\nFor the first time this query has succeeded now. Memory was\r\n bounded. The time of nearly hours is crazy, but things sometimes\r\n take that long. The important thing was not to get an out of\r\n memory error.\nThank you. Anything else you want to try, I can do it.\nregards,\r\n -Gunther",
"msg_date": "Tue, 23 Apr 2019 16:37:50 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:\n> On 4/21/2019 23:09, Tomas Vondra wrote:\n> >What I think might work better is the attached v2 of the patch, with a\n> Thanks for this, and I am trying this now.\n...\n> Aaaaaand, it's a winner!\n> \n> Unique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual time=6150303.060..6895451.210 rows=435274 loops=1)\n> -> Sort (cost=5551524.36..5551610.91 rows=34619 width=1197) (actual time=6150303.058..6801372.192 rows=113478386 loops=1) \n> Sort Method: external merge Disk: 40726720kB\n> \n> For the first time this query has succeeded now. Memory was bounded. The\n> time of nearly hours is crazy, but things sometimes take that long\n\nIt wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the\nquery time.\n\nWe didn't address it yet, but your issue was partially caused by a misestimate.\nIt's almost certainly because these conditions are correlated, or maybe\nredundant.\n\n> Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n\nIf they're completely redundant and you can get the same result after dropping\none or two of those conditions, then you should.\n\nAlternately, if they're correlated but not redundant, you can use PG10\n\"dependency\" statistics (CREATE STATISTICS) on the correlated columns (and\nANALYZE).\n\nOn Tue, Apr 16, 2019 at 10:24:53PM -0400, Gunther wrote:\n> Hash Right Join (cost=4203858.53..5475530.71 rows=34619 width=4) (actual time=309603.384..459480.863 rows=113478386 loops=1)\n...\n> -> Hash (cost=1310249.63..1310249.63 rows=13 width=111) (actual time=51077.049..51077.049 rows=236 loops=1)\n...\n> -> Hash (cost=2861845.87..2861845.87 rows=34619 width=74) (actual time=199792.446..199792.446 rows=113478127 loops=1)\n> Buckets: 65536 (originally 65536) Batches: 131072 (originally 2) Memory Usage: 189207kB\n> -> Gather Merge (cost=2845073.40..2861845.87 rows=34619 width=74) (actual time=107620.262..156256.432 rows=113478127 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Merge Left Join (cost=2844073.37..2856849.96 rows=14425 width=74) (actual time=107570.719..126113.792 rows=37826042 loops=3)\n> Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n> -> Sort (cost=1295969.26..1296005.32 rows=14425 width=111) (actual time=57700.723..58134.751 rows=231207 loops=3)\n> Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.documentid, documentinformationsubject.actinternalid\n> Sort Method: external merge Disk: 26936kB\n> Worker 0: Sort Method: external merge Disk: 27152kB\n> Worker 1: Sort Method: external merge Disk: 28248kB\n> -> Parallel Seq Scan on documentinformationsubject (cost=0.00..1294972.76 rows=14425 width=111) (actual time=24866.656..57424.420 rows=231207 loops=3)\n> Filter: (((participationtypecode)::text = ANY ('{PPRF,PRF}'::text[])) AND ((classcode)::text = 'ACT'::text) AND ((moodcode)::text = 'DEF'::text) AND ((code_codesystem)::text = 
'2.16.840.1.113883.3.26.1.1'::text))\n> Rows Removed by Filter: 2584355\n> -> Materialize (cost=1548104.12..1553157.04 rows=1010585 width=111) (actual time=49869.984..54191.701 rows=38060250 loops=3)\n> -> Sort (cost=1548104.12..1550630.58 rows=1010585 width=111) (actual time=49869.980..50832.205 rows=1031106 loops=3)\n> Sort Key: documentinformationsubject_1.documentinternalid, documentinformationsubject_1.documentid, documentinformationsubject_1.actinternalid\n> Sort Method: external merge Disk: 122192kB\n> Worker 0: Sort Method: external merge Disk: 122192kB\n> Worker 1: Sort Method: external merge Disk: 122192kB\n> -> Seq Scan on documentinformationsubject documentinformationsubject_1 (cost=0.00..1329868.64 rows=1010585 width=111) (actual time=20366.166..47751.267 rows=1031106 loops=3)\n> Filter: ((participationtypecode)::text = 'PRD'::text)\n> Rows Removed by Filter: 7415579\n\n\n",
"msg_date": "Tue, 23 Apr 2019 15:43:48 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 03:43:48PM -0500, Justin Pryzby wrote:\n>On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:\n>> On 4/21/2019 23:09, Tomas Vondra wrote:\n>> >What I think might work better is the attached v2 of the patch, with a\n>> Thanks for this, and I am trying this now.\n>...\n>> Aaaaaand, it's a winner!\n>>\n>> Unique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual time=6150303.060..6895451.210 rows=435274 loops=1)\n>> -> Sort (cost=5551524.36..5551610.91 rows=34619 width=1197) (actual time=6150303.058..6801372.192 rows=113478386 loops=1)\n>> Sort Method: external merge Disk: 40726720kB\n>>\n>> For the first time this query has succeeded now. Memory was bounded. The\n>> time of nearly hours is crazy, but things sometimes take that long\n>\n>It wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the\n>query time.\n>\n\nThat's unlikely to reduce the amount of data written to temporary files,\nit just means there will be fewer larger files - in total it's still\ngoing to be ~40GB. And it's not guaranteed it'll improve performance,\nbecause work_mem=4MB might fit into CPU caches and larger values almost\ncertainly won't. I don't think there's much to gain, really.\n\n>We didn't address it yet, but your issue was partially caused by a misestimate.\n>It's almost certainly because these conditions are correlated, or maybe\n>redundant.\n>\n\nRight. Chances are that with a bettwe estimate the optimizer would pick\nmerge join instead. I wonder if that would be significantly faster.\n\n>> Merge Cond: (((documentinformationsubject.documentinternalid)::text =\n>> (documentinformationsubject_1.documentinternalid)::text) AND\n>> ((documentinformationsubject.documentid)::text =\n>> (documentinformationsubject_1.documentid)::text) AND\n>> ((documentinformationsubject.actinternalid)::text =\n>> (documentinformationsubject_1.actinternalid)::text))\n>\n>If they're completely redundant and you can get the same result after\n>dropping one or two of those conditions, then you should.\n>\n>Alternately, if they're correlated but not redundant, you can use PG10\n>\"dependency\" statistics (CREATE STATISTICS) on the correlated columns\n>(and ANALYZE).\n>\n\nThat's not going to help, because we don't use functional dependencies\nin join estimation yet.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 23:46:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
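For reference, the extended-statistics approach Justin suggested in the message above would look roughly like the sketch below. The statistics name is made up for illustration; the columns are the ones appearing in the merge condition quoted in the thread. As Tomas points out, dependency statistics are not applied to join clauses in this PostgreSQL version, so a sketch like this would only help correlated filter conditions on a single table:

    -- hypothetical example, assumes PostgreSQL 10 or later
    CREATE STATISTICS docsubj_dep_stats (dependencies)
        ON documentinternalid, documentid, actinternalid
        FROM documentinformationsubject;
    ANALYZE documentinformationsubject;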
{
"msg_contents": "On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:\n> On 4/21/2019 23:09, Tomas Vondra wrote:\n>\n> What I think might work better is the attached v2 of the patch, with a\n> single top-level condition, comparing the combined memory usage\n> (spaceUsed + BufFile) against spaceAllowed. But it also tweaks\n> spaceAllowed once the size needed for BufFile gets over work_mem/3.\n>\n> Thanks for this, and I am trying this now.\n>\n> So far it is promising.\n>\n> I see the memory footprint contained under 1 GB. I see it go up, but also\n> down again. CPU, IO, all being live.\n>\n> foo=# set enable_nestloop to off;\n> SET\n> foo=# explain analyze select * from reports.v_BusinessOperation;\n> WARNING: ExecHashIncreaseNumBatches: nbatch=32 spaceAllowed=4194304\n> WARNING: ExecHashIncreaseNumBatches: nbatch=64 spaceAllowed=4194304\n> WARNING: ExecHashIncreaseNumBatches: nbatch=128 spaceAllowed=4194304\n> WARNING: ExecHashIncreaseNumBatches: nbatch=256 spaceAllowed=6291456\n> WARNING: ExecHashIncreaseNumBatches: nbatch=512 spaceAllowed=12582912\n> WARNING: ExecHashIncreaseNumBatches: nbatch=1024 spaceAllowed=25165824\n> WARNING: ExecHashIncreaseNumBatches: nbatch=2048 spaceAllowed=50331648\n> WARNING: ExecHashIncreaseNumBatches: nbatch=4096 spaceAllowed=100663296\n> WARNING: ExecHashIncreaseNumBatches: nbatch=8192 spaceAllowed=201326592\n> WARNING: ExecHashIncreaseNumBatches: nbatch=16384 spaceAllowed=402653184\n> WARNING: ExecHashIncreaseNumBatches: nbatch=32768 spaceAllowed=805306368\n> WARNING: ExecHashIncreaseNumBatches: nbatch=65536 spaceAllowed=1610612736\n>\n> Aaaaaand, it's a winner!\n>\n\nGood ;-)\n\n> Unique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual time=6150303.060..6895451.210 rows=435274 loops=1)\n> -> Sort (cost=5551524.36..5551610.91 rows=34619 width=1197) (actual time=6150303.058..6801372.192 rows=113478386 loops=1)\n> Sort Key: ...\n> Sort Method: external merge Disk: 40726720kB\n> -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197) (actual time=325240.679..1044194.775 rows=113478386 loops=1)\n> Hash Cond: ...\n> ...\n> Planning Time: 40.559 ms\n> Execution Time: 6896581.566 ms\n> (70 rows)\n>\n>\n> For the first time this query has succeeded now. Memory was bounded. The\n> time of nearly hours is crazy, but things sometimes take that long. The\n> important thing was not to get an out of memory error.\n>\n\nTBH I don't think there's much we can do to improve this further - it's\na rather desperate effort to keep the memory usage as low as possible,\nwithout any real guarantees.\n\nAlso, the hash join only takes about 1000 seconds out of the 6900 total.\nSo even if we got it much faster, the query would still take almost two\nhours, give or take.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 23:59:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
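For readers skimming the plan quoted above, the roughly two-hour total breaks down approximately as follows (figures read off the EXPLAIN ANALYZE output, rounded):

    Hash Right Join finishes at   ~1,044 s   (actual time ..1044194.775 ms)
    Sort + Unique account for     ~5,850 s   (6,895,451 ms total minus the join)
    Execution Time                ~6,897 s   (about 1 h 55 min)

So even a much faster join would shave only about 15 percent off the runtime, which is the point Tomas makes.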
{
"msg_contents": "On 4/23/2019 16:43, Justin Pryzby wrote:\n> It wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the\n> query time.\n\nI now upped my shared_buffers back from 1 to 2GB and work_mem from 4 to \n16MB. Need to set vm.overcommit_ratio from 50 to 75 (percent, with \nvm.overcommit_memory = 2 as it is.)\n\n> We didn't address it yet, but your issue was partially caused by a misestimate.\n> It's almost certainly because these conditions are correlated, or maybe\n> redundant.\n\nThat may be so, but mis-estimates happen. And I can still massively \nimprove this query logically I am sure. In fact it sticks out like a \nsore thumb, logically it makes no sense to churn over 100 million rows \nhere, but the point is that hopefully PostgreSQL runs stable in such \noutlier situations, comes back and presents you with 2 hours of work \ntime, 40 GB temp space, or whatever, and then we users can figure out \nhow to make it work better. The frustrating thing it to get out of \nmemory and we not knowing what we can possibly do about it.\n\n From my previous attempt with this tmp_r and tmp_q table, I also know \nthat the Sort/Uniqe step is taking a lot of extra time. I can cut that \nout too by addressing the causes of the \"repeated result\" rows. But \nagain, that is all secondary optimizations.\n\n>> Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n> If they're completely redundant and you can get the same result after dropping\n> one or two of those conditions, then you should.\nI understand. You are saying by reducing the amount of columns in the \njoin condition, somehow you might be able to reduce the size of the hash \ntemporary table?\n> Alternately, if they're correlated but not redundant, you can use PG10\n> \"dependency\" statistics (CREATE STATISTICS) on the correlated columns (and\n> ANALYZE).\n\nI think documentId and documentInternalId is 1:1 they are both primary / \nalternate keys. 
So I could go with only one of them, but since I end up \nneeding both elsewhere inside the query I like to throw them all into \nthe natural join key, so that I don't have to deal with the duplicate \nresult columns.\n\nNow running:\n\nintegrator=# set enable_nestloop to off; SET integrator=# explain \nanalyze select * from reports.v_BusinessOperation; WARNING: \nExecHashIncreaseNumBatches: nbatch=8 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=16 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=32 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=64 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=128 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=256 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=512 spaceAllowed=16777216 WARNING: \nExecHashIncreaseNumBatches: nbatch=1024 spaceAllowed=25165824 WARNING: \nExecHashIncreaseNumBatches: nbatch=2048 spaceAllowed=50331648 WARNING: \nExecHashIncreaseNumBatches: nbatch=4096 spaceAllowed=100663296 WARNING: \nExecHashIncreaseNumBatches: nbatch=8192 spaceAllowed=201326592 WARNING: \nExecHashIncreaseNumBatches: nbatch=16384 spaceAllowed=402653184 WARNING: \nExecHashIncreaseNumBatches: nbatch=32768 spaceAllowed=805306368 WARNING: \nExecHashIncreaseNumBatches: nbatch=65536 spaceAllowed=1610612736\n\nI am waiting now, probably for that Sort/Unique to finish I think that \nthe vast majority of the time spent is in this sort\n\nUnique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual \ntime=6150303.060..6895451.210 rows=435274 loops=1) -> Sort \n(cost=5551524.36..5551610.91 rows=34619 width=1197) (actual \ntime=6150303.058..6801372.192 rows=113478386 loops=1) Sort Key: \ndocumentinformationsubject.documentinternalid, \ndocumentinformationsubject.is_current, \ndocumentinformationsubject.documentid, \ndocumentinformationsubject.documenttypecode, \ndocumentinformationsubject.subjectroleinternalid, \ndocumentinformationsubject.subjectentityinternalid, \ndocumentinformationsubject.subjectentityid, \ndocumentinformationsubject.subjectentityidroot, \ndocumentinformationsubject.subjectentityname, \ndocumentinformationsubject.subjectentitytel, \ndocumentinformationsubject.subjectentityemail, \ndocumentinformationsubject.otherentityinternalid, \ndocumentinformationsubject.confidentialitycode, \ndocumentinformationsubject.actinternalid, \ndocumentinformationsubject.code_code, \ndocumentinformationsubject.code_displayname, q.code_code, \nq.code_displayname, an.extension, an.root, \ndocumentinformationsubject_2.subjectentitycode, \ndocumentinformationsubject_2.subjectentitycodesystem, \ndocumentinformationsubject_2.effectivetime_low, \ndocumentinformationsubject_2.effectivetime_high, \ndocumentinformationsubject_2.statuscode, \ndocumentinformationsubject_2.code_code, agencyid.extension, \nagencyname.trivialname, documentinformationsubject_1.subjectentitycode, \ndocumentinformationsubject_1.subjectentityinternalid Sort Method: \nexternal merge Disk: 40726720kB -> Hash Right Join \n(cost=4255031.53..5530808.71 rows=34619 width=1197) (actual \ntime=325240.679..1044194.775 rows=113478386 loops=1)\n\nisn't it?\n\nUnique/Sort actual time 6,150,303.060 ms = 6,150 s <~ 2 h.\nHash Right Join actual time 325,240.679 ms.\n\nSo really all time is wasted in that sort, no need for you guys to worry \nabout anything else with these 2 hours. Tomas just stated the same thing.\n\n> Right. 
Chances are that with a bettwe estimate the optimizer would pick\n> merge join instead. I wonder if that would be significantly faster. \nThe prospect of a merge join is interesting here to consider: with the \nSort/Unique step taking so long, it seems the Merge Join might also take \na lot of time? I see my disks are churning for the most time in this way:\n\navg-cpu: %user %nice %system %iowait %steal %idle 7.50 0.00 2.50 89.50 \n0.00 0.50 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz \nawait r_await w_await svctm %util nvme1n1 0.00 0.00 253.00 131.00 30.15 \n32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60 nvme1n1p24 0.00 0.00 253.00 \n131.00 30.15 32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60\n\nI.e. 400 IOPS at 60 MB/s half of it read, half of it write. During the \nprevious steps, the hash join presumably, throughput was a lot higher, \nlike 2000 IOPS with 120 MB/s read or write.\n\nBut even if the Merge Join would have taken about the same or a little \nmore time than the Hash Join, I wonder, if one could not use that to \ncollapse the Sort/Unique step into that? Like it seems that after the \nSort/Merge has completed, one should be able to read Uniqe records \nwithout any further sorting? In that case the Merge would be a great \nadvantage.\n\nWhat I like about the situation now is that with that 4x bigger \nwork_mem, the overall memory situation remains the same. I.e., we are \nscraping just below 1GB for this process and we see oscillation, growth \nand shrinkage occurring. So I consider this case closed for me. That \ndoesn't mean I wouldn't be available if you guys want to try anything \nelse about it.\n\nOK, now here is the result with the 16 MB work_mem:\n\nUnique (cost=5462874.86..5465557.83 rows=34619 width=1197) (actual \ntime=6283539.282..7003311.451 rows=435274 loops=1) -> Sort \n(cost=5462874.86..5462961.41 rows=34619 width=1197) (actual \ntime=6283539.280..6908879.456 rows=113478386 loops=1) Sort Key: \ndocumentinformationsubject.documentinternalid, \ndocumentinformationsubject.is_current, \ndocumentinformationsubject.documentid, \ndocumentinformationsubject.documenttypecode, \ndocumentinformationsubject.subjectroleinternalid, documentinformati \nonsubject.subjectentityinternalid, \ndocumentinformationsubject.subjectentityid, \ndocumentinformationsubject.subjectentityidroot, \ndocumentinformationsubject.subjectentityname, \ndocumentinformationsubject.subjectentitytel, \ndocumentinformationsubject.subjectenti tyemail, \ndocumentinformationsubject.otherentityinternalid, \ndocumentinformationsubject.confidentialitycode, \ndocumentinformationsubject.actinternalid, \ndocumentinformationsubject.code_code, \ndocumentinformationsubject.code_displayname, q.code_code, q.code_disp \nlayname, an.extension, an.root, \ndocumentinformationsubject_2.subjectentitycode, \ndocumentinformationsubject_2.subjectentitycodesystem, \ndocumentinformationsubject_2.effectivetime_low, \ndocumentinformationsubject_2.effectivetime_high, \ndocumentinformationsubjec t_2.statuscode, \ndocumentinformationsubject_2.code_code, agencyid.extension, \nagencyname.trivialname, documentinformationsubject_1.subjectentitycode, \ndocumentinformationsubject_1.subjectentityinternalid Sort Method: \nexternal merge Disk: 40726872kB -> Hash Right Join \n(cost=4168174.03..5442159.21 rows=34619 width=1197) (actual \ntime=337057.290..1695675.896 rows=113478386 loops=1) Hash Cond: \n(((q.documentinternalid)::text = \n(documentinformationsubject.documentinternalid)::text) AND \n((r.targetinternalid)::text = 
\n(documentinformationsubject.actinternalid)::text)) -> Hash Right Join \n(cost=1339751.37..2608552.36 rows=13 width=341) (actual \ntime=84109.143..84109.238 rows=236 loops=1) Hash Cond: \n(((documentinformationsubject_2.documentinternalid)::text = \n(q.documentinternalid)::text) AND \n((documentinformationsubject_2.actinternalid)::text = \n(q.actinternalid)::text)) -> Gather (cost=29501.54..1298302.52 rows=1 \nwidth=219) (actual time=43932.534..43936.888 rows=0 loops=1) Workers \nPlanned: 2 Workers Launched: 2 -> Parallel Hash Left Join \n(cost=28501.54..1297302.42 rows=1 width=219) (actual \ntime=43925.304..43925.304 rows=0 loops=3) ... -> Hash \n(cost=1310249.63..1310249.63 rows=13 width=233) (actual \ntime=40176.581..40176.581 rows=236 loops=1) Buckets: 1024 Batches: 1 \nMemory Usage: 70kB -> Hash Right Join (cost=829388.20..1310249.63 \nrows=13 width=233) (actual time=35925.031..40176.447 rows=236 loops=1) \nHash Cond: ((an.actinternalid)::text = (q.actinternalid)::text) -> Seq \nScan on act_id an (cost=0.00..425941.04 rows=14645404 width=134) (actual \ntime=1.609..7687.986 rows=14676871 loops=1) -> Hash \n(cost=829388.19..829388.19 rows=1 width=136) (actual \ntime=30106.123..30106.123 rows=236 loops=1) Buckets: 1024 Batches: 1 \nMemory Usage: 63kB -> Gather (cost=381928.46..829388.19 rows=1 \nwidth=136) (actual time=24786.510..30105.983 rows=236 loops=1) ... -> \nHash (cost=2823846.37..2823846.37 rows=34619 width=930) (actual \ntime=252946.367..252946.367 rows=113478127 loops=1) Buckets: 32768 \n(originally 32768) Batches: 65536 (originally 4) Memory Usage: 1204250kB \n-> Gather Merge (cost=2807073.90..2823846.37 rows=34619 width=930) \n(actual time=83891.069..153380.040 rows=113478127 loops=1) Workers \nPlanned: 2 Workers Launched: 2 -> Merge Left Join \n(cost=2806073.87..2818850.46 rows=14425 width=930) (actual \ntime=83861.921..108022.671 rows=37826042 loops=3) Merge Cond: \n(((documentinformationsubject.documentinternalid)::text = \n(documentinformationsubject_1.documentinternalid)::text) AND \n((documentinformationsubject.documentid)::text = \n(documentinformationsubject_1.documentid):: text) AND \n((documentinformationsubject.actinternalid)::text = \n(documentinformationsubject_1.actinternalid)::text)) -> Sort \n(cost=1295969.26..1296005.32 rows=14425 width=882) (actual \ntime=44814.114..45535.398 rows=231207 loops=3) Sort Key: \ndocumentinformationsubject.documentinternalid, \ndocumentinformationsubject.docum... Workers Planned: 2 Workers Launched: \n2 -> Merge Left Join (cost=2806073.87..2818850.46 rows=14425 width=930) \n(actual time=83861.921..108022.671 rows=37826042 loops=3) Merge Cond: \n(((documentinformationsubject.documentinternalid)::text = \n(documentinformationsubject_1.documentinternalid)::text) AND \n((documentinformationsubject.documentid)::text = \n(documentinformationsubject_1.documentid):: text) AND \n((documentinformationsubject.actinternalid)::text = \n(documentinformationsubject_1.actinternalid)::text)) -> Sort \n(cost=1295969.26..1296005.32 rows=14425 width=882) (actual \ntime=44814.114..45535.398rows=231207 loops=3) ... Planning Time: 2.953 \nms Execution Time: 7004340.091 ms (70 rows)\n\nThere isn't really any big news here. But what matters is that it works.\n\nthanks & regards,\n-Gunther Schadow\n\n\n\n\n\n\n\n\n\n\nOn 4/23/2019 16:43, Justin Pryzby\r\n wrote:\r\n \n\nIt wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the\r\nquery time.\n\nI now upped my shared_buffers back from 1 to 2GB and work_mem\r\n from 4 to 16MB. 
Need to set vm.overcommit_ratio from 50 to 75\r\n (percent, with vm.overcommit_memory = 2 as it is.)\r\n \n\nWe didn't address it yet, but your issue was partially caused by a misestimate.\r\nIt's almost certainly because these conditions are correlated, or maybe\r\nredundant.\n\nThat may be so, but mis-estimates happen. And I can still\r\n massively improve this query logically I am sure. In fact it\r\n sticks out like a sore thumb, logically it makes no sense to churn\r\n over 100 million rows here, but the point is that hopefully\r\n PostgreSQL runs stable in such outlier situations, comes back and\r\n presents you with 2 hours of work time, 40 GB temp space, or\r\n whatever, and then we users can figure out how to make it work\r\n better. The frustrating thing it to get out of memory and we not\r\n knowing what we can possibly do about it.\nFrom my previous attempt with this tmp_r and tmp_q table, I also\r\n know that the Sort/Uniqe step is taking a lot of extra time. I\r\n can cut that out too by addressing the causes of the \"repeated\r\n result\" rows. But again, that is all secondary optimizations.\n\n\n\nMerge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\r\n\n\n\r\nIf they're completely redundant and you can get the same result after dropping\r\none or two of those conditions, then you should.\n\r\n I understand. You are saying by reducing the amount of columns in\r\n the join condition, somehow you might be able to reduce the size of\r\n the hash temporary table?\r\n \nAlternately, if they're correlated but not redundant, you can use PG10\r\n\"dependency\" statistics (CREATE STATISTICS) on the correlated columns (and\r\nANALYZE).\n\nI think documentId and documentInternalId is 1:1 they are both\r\n primary / alternate keys. So I could go with only one of them, but\r\n since I end up needing both elsewhere inside the query I like to\r\n throw them all into the natural join key, so that I don't have to\r\n deal with the duplicate result columns. 
\n\nNow running:\nintegrator=# set enable_nestloop to off;\r\nSET\r\nintegrator=# explain analyze select * from reports.v_BusinessOperation;\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=8 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=16 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=32 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=64 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=128 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=256 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=512 spaceAllowed=16777216\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=1024 spaceAllowed=25165824\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=2048 spaceAllowed=50331648\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=4096 spaceAllowed=100663296\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=8192 spaceAllowed=201326592\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=16384 spaceAllowed=402653184\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=32768 spaceAllowed=805306368\r\nWARNING: ExecHashIncreaseNumBatches: nbatch=65536 spaceAllowed=1610612736\r\n\nI am waiting now, probably for that Sort/Unique to finish I think\r\n that the vast majority of the time spent is in this sort\n Unique (cost=5551524.36..5554207.33 rows=34619 width=1197) (actual time=6150303.060..6895451.210 rows=435274 loops=1)\r\n -> Sort (cost=5551524.36..5551610.91 rows=34619 width=1197) (actual time=6150303.058..6801372.192 rows=113478386 loops=1)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformationsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectentityemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_displayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubject_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\r\n Sort Method: external merge Disk: 40726720kB\r\n -> Hash Right Join (cost=4255031.53..5530808.71 rows=34619 width=1197) (actual time=325240.679..1044194.775 rows=113478386 loops=1)\r\n\nisn't it?\nUnique/Sort actual time 6,150,303.060 ms = 6,150 s <~ 2 h.\r\n Hash Right Join actual time 325,240.679 ms. \n\nSo really all time is wasted in that sort, no need for you guys\r\n to worry about anything else with these 2 hours. Tomas just\r\n stated the same thing.\n\nRight. Chances are that with a bettwe\r\n estimate the optimizer would pick\r\n \r\n merge join instead. 
I wonder if that would be significantly\r\n faster.\r\n \r\n The prospect of a merge join is interesting here to consider: with\r\n the Sort/Unique step taking so long, it seems the Merge Join might\r\n also take a lot of time? I see my disks are churning for the most\r\n time in this way:\r\n avg-cpu: %user %nice %system %iowait %steal %idle\r\n 7.50 0.00 2.50 89.50 0.00 0.50\r\n\r\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\r\nnvme1n1 0.00 0.00 253.00 131.00 30.15 32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60\r\nnvme1n1p24 0.00 0.00 253.00 131.00 30.15 32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60\r\n\nI.e. 400 IOPS at 60 MB/s half of it read, half of it write.\r\n During the previous steps, the hash join presumably, throughput\r\n was a lot higher, like 2000 IOPS with 120 MB/s read or write. \n\nBut even if the Merge Join would have taken about the same or a\r\n little more time than the Hash Join, I wonder, if one could not\r\n use that to collapse the Sort/Unique step into that? Like it seems\r\n that after the Sort/Merge has completed, one should be able to\r\n read Uniqe records without any further sorting? In that case the\r\n Merge would be a great advantage.\n\nWhat I like about the situation now is that with that 4x bigger\r\n work_mem, the overall memory situation remains the same. I.e., we\r\n are scraping just below 1GB for this process and we see\r\n oscillation, growth and shrinkage occurring. So I consider this\r\n case closed for me. That doesn't mean I wouldn't be available if\r\n you guys want to try anything else about it.\nOK, now here is the result with the 16 MB work_mem:\n Unique (cost=5462874.86..5465557.83 rows=34619 width=1197) (actual time=6283539.282..7003311.451 rows=435274 loops=1)\r\n -> Sort (cost=5462874.86..5462961.41 rows=34619 width=1197) (actual time=6283539.280..6908879.456 rows=113478386 loops=1)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.is_current, documentinformationsubject.documentid, documentinformationsubject.documenttypecode, documentinformationsubject.subjectroleinternalid, documentinformati\r\nonsubject.subjectentityinternalid, documentinformationsubject.subjectentityid, documentinformationsubject.subjectentityidroot, documentinformationsubject.subjectentityname, documentinformationsubject.subjectentitytel, documentinformationsubject.subjectenti\r\ntyemail, documentinformationsubject.otherentityinternalid, documentinformationsubject.confidentialitycode, documentinformationsubject.actinternalid, documentinformationsubject.code_code, documentinformationsubject.code_displayname, q.code_code, q.code_disp\r\nlayname, an.extension, an.root, documentinformationsubject_2.subjectentitycode, documentinformationsubject_2.subjectentitycodesystem, documentinformationsubject_2.effectivetime_low, documentinformationsubject_2.effectivetime_high, documentinformationsubjec\r\nt_2.statuscode, documentinformationsubject_2.code_code, agencyid.extension, agencyname.trivialname, documentinformationsubject_1.subjectentitycode, documentinformationsubject_1.subjectentityinternalid\r\n Sort Method: external merge Disk: 40726872kB\r\n -> Hash Right Join (cost=4168174.03..5442159.21 rows=34619 width=1197) (actual time=337057.290..1695675.896 rows=113478386 loops=1)\r\n Hash Cond: (((q.documentinternalid)::text = (documentinformationsubject.documentinternalid)::text) AND ((r.targetinternalid)::text = (documentinformationsubject.actinternalid)::text))\r\n -> Hash Right Join 
(cost=1339751.37..2608552.36 rows=13 width=341) (actual time=84109.143..84109.238 rows=236 loops=1)\r\n Hash Cond: (((documentinformationsubject_2.documentinternalid)::text = (q.documentinternalid)::text) AND ((documentinformationsubject_2.actinternalid)::text = (q.actinternalid)::text))\r\n -> Gather (cost=29501.54..1298302.52 rows=1 width=219) (actual time=43932.534..43936.888 rows=0 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel Hash Left Join (cost=28501.54..1297302.42 rows=1 width=219) (actual time=43925.304..43925.304 rows=0 loops=3)\r\n ...\r\n -> Hash (cost=1310249.63..1310249.63 rows=13 width=233) (actual time=40176.581..40176.581 rows=236 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 70kB\r\n -> Hash Right Join (cost=829388.20..1310249.63 rows=13 width=233) (actual time=35925.031..40176.447 rows=236 loops=1)\r\n Hash Cond: ((an.actinternalid)::text = (q.actinternalid)::text)\r\n -> Seq Scan on act_id an (cost=0.00..425941.04 rows=14645404 width=134) (actual time=1.609..7687.986 rows=14676871 loops=1)\r\n -> Hash (cost=829388.19..829388.19 rows=1 width=136) (actual time=30106.123..30106.123 rows=236 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 63kB\r\n -> Gather (cost=381928.46..829388.19 rows=1 width=136) (actual time=24786.510..30105.983 rows=236 loops=1)\r\n ...\r\n -> Hash (cost=2823846.37..2823846.37 rows=34619 width=930) (actual time=252946.367..252946.367 rows=113478127 loops=1)\r\n Buckets: 32768 (originally 32768) Batches: 65536 (originally 4) Memory Usage: 1204250kB\r\n -> Gather Merge (cost=2807073.90..2823846.37 rows=34619 width=930) (actual time=83891.069..153380.040 rows=113478127 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Merge Left Join (cost=2806073.87..2818850.46 rows=14425 width=930) (actual time=83861.921..108022.671 rows=37826042 loops=3)\r\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::\r\ntext) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\r\n -> Sort (cost=1295969.26..1296005.32 rows=14425 width=882) (actual time=44814.114..45535.398 rows=231207 loops=3)\r\n Sort Key: documentinformationsubject.documentinternalid, documentinformationsubject.docum...\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Merge Left Join (cost=2806073.87..2818850.46 rows=14425 width=930) (actual time=83861.921..108022.671 rows=37826042 loops=3)\r\n Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::\r\ntext) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\r\n -> Sort (cost=1295969.26..1296005.32 rows=14425 width=882) (actual time=44814.114..45535.398rows=231207 loops=3)\r\n ...\r\n Planning Time: 2.953 ms\r\n Execution Time: 7004340.091 ms\r\n(70 rows)\r\n\nThere isn't really any big news here. But what matters is that it\r\n works.\n\nthanks & regards,\r\n -Gunther Schadow",
"msg_date": "Tue, 23 Apr 2019 19:09:00 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
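The configuration changes Gunther describes (work_mem 4MB to 16MB, shared_buffers 1GB to 2GB) can be applied in several ways; the thread does not say which was used, but one possible sketch is below. The vm.overcommit_memory and vm.overcommit_ratio settings he mentions are Linux kernel sysctls and are set outside the database.

    -- one possible way to apply the settings; a direct edit of
    -- postgresql.conf works equally well
    ALTER SYSTEM SET work_mem = '16MB';
    ALTER SYSTEM SET shared_buffers = '2GB';  -- needs a server restart to take effect
    SELECT pg_reload_conf();                  -- sufficient for work_mem
    SHOW work_mem;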
{
"msg_contents": "On Tue, Apr 23, 2019 at 07:09:00PM -0400, Gunther wrote:\n> On 4/23/2019 16:43, Justin Pryzby wrote:\n>\n> It wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the\n> query time.\n>\n> I now upped my shared_buffers back from 1 to 2GB and work_mem from 4 to\n> 16MB. Need to set vm.overcommit_ratio from 50 to 75 (percent, with\n> vm.overcommit_memory = 2 as it is.)\n>\n> We didn't address it yet, but your issue was partially caused by a misestimate.\n> It's almost certainly because these conditions are correlated, or maybe\n> redundant.\n>\n> That may be so, but mis-estimates happen. And I can still massively\n> improve this query logically I am sure.� In fact it sticks out like a sore\n> thumb, logically it makes no sense to churn over 100 million rows here,\n> but the point is that hopefully PostgreSQL runs stable in such outlier\n> situations, comes back and presents you with 2 hours of work time, 40 GB\n> temp space, or whatever, and then we users can figure out how to make it\n> work better. The frustrating thing it to get out of memory and we not\n> knowing what we can possibly do about it.\n>\n\nSure. And I think the memory balancing algorithm implemented in the v2\npatch is a step in that direction. I think we can do better in terms of\nmemory consumption (essentially keeping it closer to work_mem) but it's\nunlikely to be any faster.\n\nIn a way this is similar to underestimates in hash aggregate, except\nthat in that case we don't have any spill-to-disk fallback at all.\n\n> From my previous attempt with this tmp_r and tmp_q table, I also know that\n> the Sort/Uniqe step is taking� a lot of extra time. I can cut that out too\n> by addressing the causes of the \"repeated result\" rows. But again, that is\n> all secondary optimizations.\n>\n> Merge Cond: (((documentinformationsubject.documentinternalid)::text = (documentinformationsubject_1.documentinternalid)::text) AND ((documentinformationsubject.documentid)::text = (documentinformationsubject_1.documentid)::text) AND ((documentinformationsubject.actinternalid)::text = (documentinformationsubject_1.actinternalid)::text))\n>\n> If they're completely redundant and you can get the same result after dropping\n> one or two of those conditions, then you should.\n>\n> I understand. You are saying by reducing the amount of columns in the join\n> condition, somehow you might be able to reduce the size of the hash\n> temporary table?\n>\n\nNo. When estimating the join result size with multiple join clauses, the\noptimizer essentially has to compute \n\n1: P((x1 = y1) && (x2 = y2) && (x3 = y3))\n\nso it assumes statistical independence of those conditions and splits\nthat into\n\n2: P(x1 = y1) * P(x2 = y2) * P(x3 = y3)\n\nBut when those conditions are dependent - for example when (x1=y1) means\nthat ((x2=y2) && (x3=y3)) - this results into significant underestimate.\nE.g. let's assume that each of those conditions matches 1/100 rows, but\nthat essentially x1=x2=x3 and y1=y2=y3. 
Then (1) is 1/100 but (2) ends\nup being 1/1000000, so 10000x off.\n\nChances are this is what's happenning with the inner side of the hash\njoin, which is estimated to return 14425 but ends up returning 37826042.\n\nThere's one trick you might try, though - using indexes on composite types:\n\n create table t1 (a int, b int);\n create table t2 (a int, b int);\n \n \n insert into t1 select mod(i,1000), mod(i,1000)\n from generate_series(1,100000) s(i);\n \n insert into t2 select mod(i,1000), mod(i,1000)\n from generate_series(1,100000) s(i);\n \n analyze t1;\n analyze t2;\n \n explain analyze select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b);\n \n QUERY PLAN\n --------------------------------------------------------------------\n Merge Join (cost=19495.72..21095.56 rows=9999 width=16)\n (actual time=183.043..10360.276 rows=10000000 loops=1)\n Merge Cond: ((t1.a = t2.a) AND (t1.b = t2.b))\n ...\n \n create type composite_id as (a int, b int);\n \n create index on t1 (((a,b)::composite_id));\n create index on t2 (((a,b)::composite_id));\n \n analyze t1;\n analyze t2;\n \n explain analyze select * from t1 join t2\n on ((t1.a,t1.b)::composite_id = (t2.a,t2.b)::composite_id);\n QUERY PLAN\n --------------------------------------------------------------------------\n Merge Join (cost=0.83..161674.40 rows=9999460 width=16)\n (actual time=0.020..12726.767 rows=10000000 loops=1)\n Merge Cond: (ROW(t1.a, t1.b)::composite_id = ROW(t2.a, t2.b)::composite_id)\n\nObviously, that's not exactly free - you have to pay price for the index\ncreation, maintenance and storage.\n\n> ...\n>\n> Unique/Sort actual time�� 6,150,303.060 ms = 6,150 s <~ 2 h.\n> Hash Right Join actual time 325,240.679 ms.\n>\n> So really all time is wasted in that sort, no need for you guys to worry\n> about anything else with these 2 hours.� Tomas just stated the same thing.\n>\n> Right. Chances are that with a bettwe estimate the optimizer would pick\n> merge join instead. I wonder if that would be significantly faster.\n>\n> The prospect of a merge join is interesting here to consider: with the\n> Sort/Unique step taking so long, it seems the Merge Join might also take a\n> lot of time? I see my disks are churning for the most time in this way:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 7.50 0.00 2.50 89.50 0.00 0.50\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\n> nvme1n1 0.00 0.00 253.00 131.00 30.15 32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60\n> nvme1n1p24 0.00 0.00 253.00 131.00 30.15 32.20 332.50 2.01 8.40 8.41 8.37 2.59 99.60\n>\n> I.e. 400 IOPS at 60 MB/s half of it read, half of it write. During the\n> previous steps, the hash join presumably, throughput was a lot higher,\n> like 2000 IOPS with 120 MB/s read or write.\n>\n> But even if the Merge Join would have taken about the same or a little\n> more time than the Hash Join, I wonder, if one could not use that to\n> collapse the Sort/Unique step into that? Like it seems that after the\n> Sort/Merge has completed, one should be able to read Uniqe records without\n> any further sorting? In that case the Merge would be a great advantage.\n>\n\nProbably not, because there are far more columns in the Unique step. We\nmight have done something with \"incremental sort\" but we don't have that\ncapability yet.\n\n> What I like about the situation now is that with that 4x bigger work_mem,\n> the overall memory situation remains the same. 
I.e., we are scraping just\n> below 1GB for this process and we see oscillation, growth and shrinkage\n> occurring. So I consider this case closed for me. That doesn't mean I\n> wouldn't be available if you guys want to try anything else about it.\n>\n> OK, now here is the result with the 16 MB work_mem:\n>\n> Unique (cost=5462874.86..5465557.83 rows=34619 width=1197) (actual time=6283539.282..7003311.451 rows=435274 loops=1)\n> -> Sort (cost=5462874.86..5462961.41 rows=34619 width=1197) (actual time=6283539.280..6908879.456 rows=113478386 loops=1)\n> ...\n> Planning Time: 2.953 ms\n> Execution Time: 7004340.091 ms\n> (70 rows)\n>\n> There isn't really any big news here. But what matters is that it works.\n>\n\nYeah. Once the hash join outgrows the work_mem, the fallback logick\nstarts ignoring that in the effort to keep the memory usage minimal.\n\nI still think the idea with an \"overflow batch\" is worth considering,\nbecause it'd allow us to keep the memory usage within work_mem. And\nafter getting familiar with the hash join code again (haven't messed\nwith it since 9.5 or so) I think it should not be all that difficult.\nI'll give it a try over the weekend if I get bored for a while.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 24 Apr 2019 02:36:33 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>I still think the idea with an \"overflow batch\" is worth considering,\n>because it'd allow us to keep the memory usage within work_mem. And\n>after getting familiar with the hash join code again (haven't messed\n>with it since 9.5 or so) I think it should not be all that difficult.\n>I'll give it a try over the weekend if I get bored for a while.\n>\n\nOK, so I took a stab at this, and overall it seems to be workable. The\npatches I have are nowhere near committable, but I think the approach\nworks fairly well - the memory is kept in check, and the performance is\ncomparable to the \"ballancing\" approach tested before.\n\nTo explain it a bit, the idea is that we can compute how many BufFile\nstructures we can keep in memory - we can't use more than work_mem/2 for\nthat, because then we'd mostly eliminate space for the actual data. For\nexample with 4MB, we know we can keep 128 batches - we need 128 for\nouter and inner side, so 256 in total, and 256*8kB = 2MB.\n\nAnd then, we just increase the number of batches but instead of adding\nthe BufFile entries, we split batches into slices that we can keep in\nmemory (say, the 128 batches). And we keep BufFiles for the current one\nand an \"overflow file\" for the other slices. After processing a slice,\nwe simply switch to the next one, and use the overflow file as a temp\nfile for the first batch - we redistribute it into the other batches in\nthe slice and another overflow file.\n\nThat's what the v3 patch (named 'single overflow file') does. I does\nwork, but unfortunately it significantly inflates the amount of data\nwritten to temporary files. Assume we need e.g. 1024 batches, but only\n128 fit into memory. That means we'll need 8 slices, and during the\nfirst pass we'll handle 1/8 of the data and write 7/8 to the overflow\nfile. Then after processing the slice and switching to the next one, we\nrepeat this dance - 1/8 gets processed, 6/8 written to another overflow\nfile. So essentially we \"forward\" about\n\n 7/8 + 6/8 + 5/8 + ... + 1/8 = 28/8 = 3.5\n\nof data between slices, and we need to re-shuffle data in each slice,\nwhich amounts to additional 1x data. That's pretty significant overhead,\nas will be clear from the measurements I'll present shortly.\n\nBut luckily, there's a simple solution to this - instead of writing the\ndata into a single overflow file, we can create one overflow file for\neach slice. That will leave us with the ~1x of additional writes when\ndistributing data into batches in the current slice, but it eliminates\nthe main source of write amplification - awalanche-like forwarding of\ndata between slices.\n\nThis relaxes the memory limit a bit again, because we can't really keep\nthe number of overflow files constrained by work_mem, but we should only\nneed few of them (much less than when adding one file per batch right\naway). For example with 128 in-memory batches, this reduces the amount\nof necessary memory 128x.\n\nAnd this is what v4 (per-slice overflow file) does, pretty much.\n\n\nTwo more comments, regarding memory accounting in previous patches. It\nwas a bit broken, because we actually need 2x the number of BufFiles. We\nneeded nbatch files for outer side and nbatch files for inner side, but\nwe only considered one of those - both when deciding when to increase\nthe number of batches / increase spaceAllowed, and when reporting the\nmemory usage. 
So with large number of batches the reported amount of\nused memory was roughly 1/2 of the actual value :-/\n\nThe memory accounting was a bit bogus for another reason - spaceUsed\nsimply tracks the amount of memory for hash table contents. But at the\nend we were simply adding the current space for BufFile stuff, ignoring\nthe fact that that's likely much larger than when the spacePeak value\ngot stored. For example we might have kept early spaceUsed when it was\nalmost work_mem, and then added the final large BufFile allocation.\n\nI've fixed both issues in the patches attached to this message. It does\nnot make a huge difference in practice, but it makes it easier to\ncompare values between patches.\n\n\nNow, some test results - I've repeated the simple test with uniform data\nset, which is pretty much ideal for hash joins (no unexlectedly large\nbatches that can't be split, etc.). I've done this with 1M, 5M, 10M, 25M\nand 50M rows in the large table (which gets picked for the \"hash\" side),\nand measured how much memory gets used, how many batches, how long it\ntakes and how much data gets written to temp files.\n\nSee the hashjoin-test.sh script for more details.\n\nSo, here are the results with work_mem = 4MB (so the number of in-memory\nbatches for the last two entries is 128). The columns are:\n\n* nbatch - the final number of batches\n* memory - memory usage, as reported by explain analyze\n* time - duration of the query (without explain analyze) in seconds\n* size - size of the large table\n* temp - amount of data written to temp files\n* amplif - write amplification (temp / size)\n\n\n 1M rows\n ===================================================================\n nbatch memory time size (MB) temp (MB) amplif\n -------------------------------------------------------------------\n master 256 7681 3.3 730 899 1.23\n rebalance 256 7711 3.3 730 884 1.21\n single file 1024 4161 7.2 730 3168 4.34\n per-slice file 1024 4161 4.7 730 1653 2.26\n\n\n 5M rows\n ===================================================================\n nbatch memory time size (MB) temp (MB) amplif\n -------------------------------------------------------------------\n master 2048 36353 22 3652 5276 1.44\n rebalance 512 16515 18 3652 4169 1.14\n single file 4096 4353 156 3652 53897 14.76\n per-slice file 4096 4353 28 3652 8106 2.21\n\n\n 10M rows\n ===================================================================\n nbatch memory time size (MB) temp (MB) amplif\n -------------------------------------------------------------------\n master 4096 69121 61 7303 10556 1.45\n rebalance 512 24326 46 7303 7405 1.01\n single file 8192 4636 762 7303 211234 28.92\n per-slice file 8192 4636 65 7303 16278 2.23\n\n\n 25M rows\n ===================================================================\n nbatch memory time size (MB) temp (MB) amplif\n -------------------------------------------------------------------\n master 8192 134657 190 7303 24279 1.33\n rebalance 1024 36611 158 7303 20024 1.10\n single file 16384 6011 4054 7303 1046174 57.32\n per-slice file 16384 6011 207 7303 39073 2.14\n\n\n 50M rows\n ===================================================================\n nbatch memory time size (MB) temp (MB) amplif\n -------------------------------------------------------------------\n master 16384 265729 531 36500 48519 1.33\n rebalance 2048 53241 447 36500 48077 1.32\n single file - - - 36500 - -\n per-slice file 32768 8125 451 36500 78662 2.16\n\n\n From those numbers it's pretty clear that per-slice overflow file 
does\nby far the best job in enforcing work_mem and minimizing the amount of\ndata spilled to temp files. It does write a bit more data than both\nmaster and the simple rebalancing, but that's the cost for enforcing\nwork_mem more strictly. It's generally a bit slower than those two\napproaches, although on the largest scale it's actually a bit faster\nthan master. I think that's pretty acceptable, considering this is meant\nto address extreme underestimates where we currently just eat memory.\n\nThe case with single overflow file performs rather poorly - I haven't\neven collected data from the largest scale, but considering it spilled\n1TB of temp files with a dataset half the size, that's not an issue.\n(Note that this does not mean it needs 1TB of temp space, those writes\nare spread over time and the files are created/closed as we go. The\nsystem only has ~100GB of free disk space.)\n\n\nGunther, could you try the v2 and v4 patches on your data set? That\nwould be an interesting data point, I think.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 28 Apr 2019 16:19:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
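To make the slice sizing in Tomas's description concrete, the numbers he gives work out as follows for work_mem = 4MB:

    budget for BufFiles            = work_mem / 2          = 2 MB
    per batch (inner + outer side) = 2 files x 8 kB        = 16 kB
    in-memory batches              = 2 MB / 16 kB          = 128
    slices when nbatch = 1024      = 1024 / 128            = 8
    data forwarded between slices
      (single overflow file)       = (7+6+...+1)/8         = 3.5x the input

which is why the per-slice overflow file variant, with one overflow file per slice instead of one shared file, cuts the write amplification back to roughly 2x in his measurements.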
{
"msg_contents": "Hi all, I am connecting to a discussion back from April this year. My \ndata has grown and now I am running into new out of memory situations. \nMeanwhile the world turned from 11.2 to 11.5 which I just installed only \nto find the same out of memory error.\n\nHave any of the things discussed and proposed, especially this last one \nby Tomas Vondra, been applied to the 11 releases? Should I try these \nolder patches from April?\n\nregards,\n-Gunther\n\nFor what it is worth, this is what I am getting:\n\nTopMemoryContext: 67424 total in 5 blocks; 7184 free (7 chunks); 60240 \nused pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; \n416 free (0 chunks); 7776 used TopTransactionContext: 8192 total in 1 \nblocks; 7720 free (1 chunks); 472 used Operator lookup cache: 24576 \ntotal in 2 blocks; 10760 free (3 chunks); 13816 used TableSpace cache: \n8192 total in 1 blocks; 2096 free (0 chunks); 6096 used Type information \ncache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used \nRowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); \n1296 used MessageContext: 8388608 total in 11 blocks; 3094872 free (4 \nchunks); 5293736 used JoinRelHashTable: 16384 total in 2 blocks; 5576 \nfree (1 chunks); 10808 used Operator class cache: 8192 total in 1 \nblocks; 560 free (0 chunks); 7632 used smgr relation table: 32768 total \nin 3 blocks; 12720 free (8 chunks); 20048 used TransactionAbortContext: \n32768 total in 1 blocks; 32512 free (0 chunks); 256 used Portal hash: \n8192 total in 1 blocks; 560 free (0 chunks); 7632 used TopPortalContext: \n8192 total in 1 blocks; 7664 free (0 chunks); 528 used PortalContext: \n1024 total in 1 blocks; 624 free (0 chunks); 400 used: ExecutorState: \n202528536 total in 19 blocks; 433464 free (12 chunks); 202095072 used \nHashTableContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used \nHashBatchContext: 10615104 total in 261 blocks; 7936 free (0 chunks); \n10607168 used HashTableContext: 8192 total in 1 blocks; 7688 free (1 \nchunks); 504 used HashBatchContext: 13079304 total in 336 blocks; 7936 \nfree (0 chunks); 13071368 used TupleSort main: 49208 total in 3 blocks; \n8552 free (7 chunks); 40656 used Caller tuples: 8192 total in 1 blocks; \n7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 \nfree (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free \n(0 chunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used 
ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used Subplan HashTable Temp Context: 1024 total in 1 \nblocks; 768 free (0 chunks); 256 used Subplan HashTable Context: 8192 \ntotal in 1 blocks; 7936 free (0 chunks); 256 used ExprContext: 8192 \ntotal in 1 blocks; 7936 free (0 chunks); 256 used Subplan HashTable Temp \nContext: 1024 total in 1 blocks; 768 free (0 chunks); 256 used Subplan \nHashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 \nused ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nSubplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 \nchunks); 256 used Subplan HashTable Context: 8192 total in 1 blocks; \n7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; 7936 \nfree (0 chunks); 256 used Subplan HashTable Temp Context: 1024 total in \n1 blocks; 768 free (0 chunks); 256 used Subplan HashTable Context: 8192 \ntotal in 1 blocks; 7936 free (0 chunks); 256 used ExprContext: 8192 \ntotal in 1 blocks; 7936 free (0 chunks); 256 used Subplan HashTable Temp \nContext: 1024 total in 1 blocks; 768 free (0 chunks); 256 used Subplan \nHashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 \nused ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 
blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7360 free (0 chunks); 832 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \nExprContext: 1107296256 total in 142 blocks; 6328 free (101 chunks); \n1107289928 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \nchunks); 256 used 1 more child contexts containing 8192 total in 1 \nblocks; 7936 free (0 chunks); 256 used Relcache by OID: 16384 total in 2 \nblocks; 2472 free (2 chunks); 13912 used CacheMemoryContext: 1113488 \ntotal in 14 blocks; 16776 free (0 chunks); 1096712 used index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: docsubjh_sjrcode_ndx \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \ndocsubjh_sjrclass_ndx index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: docsubjh_scopeiid_ndx index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: docsubjh_dociid_ndx index info: \n4096 total in 3 blocks; 2064 free (2 chunks); 2032 used: \nrole_telecom_idx index info: 2048 total in 2 blocks; 968 free (1 \nchunks); 1080 used: role_addr_fkidx index info: 2048 total in 2 blocks; \n968 free (1 chunks); 1080 used: role_id_fkidx index info: 2048 total in \n2 blocks; 696 free (1 chunks); 1352 used: role_id_idx index info: 2048 \ntotal in 2 blocks; 968 free (1 chunks); 1080 used: role_name_fkidx index \ninfo: 4096 total in 3 blocks; 2064 free (2 chunks); 2032 used: \nentity_telecom_idx index info: 2048 total in 2 blocks; 968 free (1 \nchunks); 1080 used: entity_id_fkidx index info: 2048 total in 2 blocks; \n696 free (1 chunks); 1352 used: entity_id_idx index info: 2048 total in \n2 blocks; 624 free (1 chunks); 1424 used: entity_det_code_idx 
index \ninfo: 4096 total in 3 blocks; 2016 free (2 chunks); 2080 used: \nentity_code_nodash_idx index info: 2048 total in 2 blocks; 968 free (1 \nchunks); 1080 used: entity_pkey index info: 2048 total in 2 blocks; 680 \nfree (1 chunks); 1368 used: connect_rule_pkey index info: 2048 total in \n2 blocks; 952 free (1 chunks); 1096 used: role_context_idx index info: \n2048 total in 2 blocks; 640 free (2 chunks); 1408 used: role_partitions \nindex info: 2048 total in 2 blocks; 640 free (2 chunks); 1408 used: \nrole_scoper_idx index info: 2048 total in 2 blocks; 640 free (2 chunks); \n1408 used: role_player_idx index info: 2048 total in 2 blocks; 968 free \n(1 chunks); 1080 used: role__pkey index info: 2048 total in 2 blocks; \n680 free (1 chunks); 1368 used: pg_toast_2619_index index info: 2048 \ntotal in 2 blocks; 592 free (1 chunks); 1456 used: \npg_constraint_conrelid_contypid_conname_index index info: 2048 total in \n2 blocks; 624 free (1 chunks); 1424 used: participation_act_idx index \ninfo: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: \nparticipation_role_idx index info: 2048 total in 2 blocks; 952 free (1 \nchunks); 1096 used: participation_pkey index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index index \ninfo: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: \ndoc_ndx_internaiddoctype index info: 2048 total in 2 blocks; 680 free (1 \nchunks); 1368 used: pg_toast_2618_index index info: 2048 total in 2 \nblocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index relation \nrules: 827392 total in 104 blocks; 2400 free (1 chunks); 824992 used: \nv_documentsubjecthistory index info: 2048 total in 2 blocks; 648 free (2 \nchunks); 1400 used: pg_db_role_setting_databaseid_rol_index index info: \n2048 total in 2 blocks; 624 free (2 chunks); 1424 used: \npg_opclass_am_name_nsp_index index info: 1024 total in 1 blocks; 16 free \n(0 chunks); 1008 used: pg_foreign_data_wrapper_name_index index info: \n1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_enum_oid_index \nindex info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \npg_class_relname_nsp_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_foreign_server_oid_index index info: 1024 total \nin 1 blocks; 48 free (0 chunks); 976 used: pg_publication_pubname_index \nindex info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used: \npg_statistic_relid_att_inh_index index info: 2048 total in 2 blocks; 680 \nfree (2 chunks); 1368 used: pg_cast_source_target_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: pg_language_name_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_transform_oid_index index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: pg_collation_oid_index index info: 3072 total in 2 \nblocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index index \ninfo: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \npg_index_indexrelid_index index info: 2048 total in 2 blocks; 760 free \n(2 chunks); 1288 used: pg_ts_template_tmplname_index index info: 2048 \ntotal in 2 blocks; 704 free (3 chunks); 1344 used: \npg_ts_config_map_index index info: 2048 total in 2 blocks; 952 free (1 \nchunks); 1096 used: pg_opclass_oid_index index info: 1024 total in 1 \nblocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_event_trigger_evtname_index index info: 2048 total in 2 blocks; 
760 \nfree (2 chunks); 1288 used: pg_statistic_ext_name_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: \npg_publication_oid_index index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: pg_ts_dict_oid_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index index \ninfo: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: \npg_conversion_default_index index info: 3072 total in 2 blocks; 1136 \nfree (2 chunks); 1936 used: pg_operator_oprname_l_r_n_index index info: \n2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \npg_trigger_tgrelid_tgname_index index info: 2048 total in 2 blocks; 760 \nfree (2 chunks); 1288 used: pg_enum_typid_label_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index \nindex info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_user_mapping_oid_index index info: 2048 total in 2 blocks; 704 free \n(3 chunks); 1344 used: pg_opfamily_am_name_nsp_index index info: 1024 \ntotal in 1 blocks; 48 free (0 chunks); 976 used: \npg_foreign_table_relid_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_type_oid_index index info: 2048 total in \n2 blocks; 952 free (1 chunks); 1096 used: pg_aggregate_fnoid_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_constraint_oid_index index info: 2048 total in 2 blocks; 680 free (2 \nchunks); 1368 used: pg_rewrite_rel_rulename_index index info: 2048 total \nin 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index \nindex info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_ts_config_cfgname_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_ts_parser_oid_index index info: 2048 total in 2 \nblocks; 728 free (1 chunks); 1320 used: \npg_publication_rel_prrelid_prpubid_index index info: 2048 total in 2 \nblocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index index \ninfo: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \npg_namespace_nspname_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_ts_template_oid_index index info: 2048 total in \n2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index index \ninfo: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: \npg_default_acl_role_nsp_obj_index index info: 2048 total in 2 blocks; \n704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_publication_rel_oid_index index info: 1024 total in 1 blocks; 48 free \n(0 chunks); 976 used: pg_range_rngtypid_index index info: 2048 total in \n2 blocks; 760 free (2 chunks); 1288 used: pg_ts_dict_dictname_index \nindex info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \npg_type_typname_nsp_index index info: 1024 total in 1 blocks; 48 free (0 \nchunks); 976 used: pg_opfamily_oid_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_statistic_ext_oid_index index \ninfo: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \npg_class_oid_index index info: 2048 total in 2 blocks; 624 free (2 \nchunks); 1424 used: pg_proc_proname_args_nsp_index index info: 1024 \ntotal in 1 blocks; 16 free (0 chunks); 1008 used: \npg_partitioned_table_partrelid_index index info: 2048 total in 2 blocks; \n760 free (2 chunks); 1288 used: pg_transform_type_lang_index index info: \n2048 total in 2 blocks; 680 free (2 chunks); 1368 used: 
\npg_attribute_relid_attnum_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_proc_oid_index index info: 1024 total in \n1 blocks; 48 free (0 chunks); 976 used: pg_language_oid_index index \ninfo: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \npg_namespace_oid_index index info: 3072 total in 2 blocks; 1136 free (2 \nchunks); 1936 used: pg_amproc_fam_proc_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index index \ninfo: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_attribute_relid_attnam_index index info: 1024 total in 1 blocks; 48 \nfree (0 chunks); 976 used: pg_conversion_oid_index index info: 2048 \ntotal in 2 blocks; 728 free (1 chunks); 1320 used: \npg_user_mapping_user_server_index index info: 2048 total in 2 blocks; \n728 free (1 chunks); 1320 used: \npg_subscription_rel_srrelid_srsubid_index index info: 1024 total in 1 \nblocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index index \ninfo: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \npg_conversion_name_nsp_index index info: 2048 total in 2 blocks; 952 \nfree (1 chunks); 1096 used: pg_authid_oid_index index info: 2048 total \nin 2 blocks; 728 free (1 chunks); 1320 used: \npg_auth_members_member_role_index 10 more child contexts containing \n17408 total in 17 blocks; 6080 free (10 chunks); 11328 used WAL record \nconstruction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used \nPrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used \nMdSmgr: 8192 total in 1 blocks; 6408 free (0 chunks); 1784 used \nLOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 \nused Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 \nused ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 \nused Grand total: 1345345736 bytes in 1209 blocks; 4529600 free (270 \nchunks); 1340816136 used\n\nOn 4/28/2019 10:19, Tomas Vondra wrote:\n> On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:\n>>\n>> ...\n>>\n>> I still think the idea with an \"overflow batch\" is worth considering,\n>> because it'd allow us to keep the memory usage within work_mem. And\n>> after getting familiar with the hash join code again (haven't messed\n>> with it since 9.5 or so) I think it should not be all that difficult.\n>> I'll give it a try over the weekend if I get bored for a while.\n>>\n>\n> OK, so I took a stab at this, and overall it seems to be workable. The\n> patches I have are nowhere near committable, but I think the approach\n> works fairly well - the memory is kept in check, and the performance is\n> comparable to the \"ballancing\" approach tested before.\n>\n> To explain it a bit, the idea is that we can compute how many BufFile\n> structures we can keep in memory - we can't use more than work_mem/2 for\n> that, because then we'd mostly eliminate space for the actual data. For\n> example with 4MB, we know we can keep 128 batches - we need 128 for\n> outer and inner side, so 256 in total, and 256*8kB = 2MB.\n>\n> And then, we just increase the number of batches but instead of adding\n> the BufFile entries, we split batches into slices that we can keep in\n> memory (say, the 128 batches). And we keep BufFiles for the current one\n> and an \"overflow file\" for the other slices. 
After processing a slice,\n> we simply switch to the next one, and use the overflow file as a temp\n> file for the first batch - we redistribute it into the other batches in\n> the slice and another overflow file.\n>\n> That's what the v3 patch (named 'single overflow file') does. It does\n> work, but unfortunately it significantly inflates the amount of data\n> written to temporary files. Assume we need e.g. 1024 batches, but only\n> 128 fit into memory. That means we'll need 8 slices, and during the\n> first pass we'll handle 1/8 of the data and write 7/8 to the overflow\n> file. Then after processing the slice and switching to the next one, we\n> repeat this dance - 1/8 gets processed, 6/8 written to another overflow\n> file. So essentially we \"forward\" about\n>\n>    7/8 + 6/8 + 5/8 + ... + 1/8 = 28/8 = 3.5\n>\n> of data between slices, and we need to re-shuffle data in each slice,\n> which amounts to additional 1x data. That's pretty significant overhead,\n> as will be clear from the measurements I'll present shortly.\n>\n> But luckily, there's a simple solution to this - instead of writing the\n> data into a single overflow file, we can create one overflow file for\n> each slice. That will leave us with the ~1x of additional writes when\n> distributing data into batches in the current slice, but it eliminates\n> the main source of write amplification - avalanche-like forwarding of\n> data between slices.\n>\n> This relaxes the memory limit a bit again, because we can't really keep\n> the number of overflow files constrained by work_mem, but we should only\n> need few of them (much less than when adding one file per batch right\n> away). For example with 128 in-memory batches, this reduces the amount\n> of necessary memory 128x.\n>\n> And this is what v4 (per-slice overflow file) does, pretty much.\n>\n>\n> Two more comments, regarding memory accounting in previous patches. It\n> was a bit broken, because we actually need 2x the number of BufFiles. We\n> needed nbatch files for outer side and nbatch files for inner side, but\n> we only considered one of those - both when deciding when to increase\n> the number of batches / increase spaceAllowed, and when reporting the\n> memory usage. So with large number of batches the reported amount of\n> used memory was roughly 1/2 of the actual value :-/\n>\n> The memory accounting was a bit bogus for another reason - spaceUsed\n> simply tracks the amount of memory for hash table contents. But at the\n> end we were simply adding the current space for BufFile stuff, ignoring\n> the fact that that's likely much larger than when the spacePeak value\n> got stored. For example we might have kept early spaceUsed when it was\n> almost work_mem, and then added the final large BufFile allocation.\n>\n> I've fixed both issues in the patches attached to this message. It does\n> not make a huge difference in practice, but it makes it easier to\n> compare values between patches.\n>\n>\n> Now, some test results - I've repeated the simple test with uniform data\n> set, which is pretty much ideal for hash joins (no unexpectedly large\n> batches that can't be split, etc.). 
I've done this with 1M, 5M, 10M, 25M\n> and 50M rows in the large table (which gets picked for the \"hash\" side),\n> and measured how much memory gets used, how many batches, how long it\n> takes and how much data gets written to temp files.\n>\n> See the hashjoin-test.sh script for more details.\n>\n> So, here are the results with work_mem = 4MB (so the number of in-memory\n> batches for the last two entries is 128). The columns are:\n>\n> * nbatch - the final number of batches\n> * memory - memory usage, as reported by explain analyze\n> * time - duration of the query (without explain analyze) in seconds\n> * size - size of the large table\n> * temp - amount of data written to temp files\n> * amplif - write amplification (temp / size)\n>\n>\n> 1M rows\n> ===================================================================\n>                  nbatch  memory   time  size (MB)  temp (MB)  amplif\n> -------------------------------------------------------------------\n> master              256    7681    3.3        730        899    1.23\n> rebalance           256    7711    3.3        730        884    1.21\n> single file        1024    4161    7.2        730       3168    4.34\n> per-slice file     1024    4161    4.7        730       1653    2.26\n>\n>\n> 5M rows\n> ===================================================================\n>                  nbatch  memory   time  size (MB)  temp (MB)  amplif\n> -------------------------------------------------------------------\n> master             2048   36353     22       3652       5276    1.44\n> rebalance           512   16515     18       3652       4169    1.14\n> single file        4096    4353    156       3652      53897   14.76\n> per-slice file     4096    4353     28       3652       8106    2.21\n>\n>\n> 10M rows\n> ===================================================================\n>                  nbatch  memory   time  size (MB)  temp (MB)  amplif\n> -------------------------------------------------------------------\n> master             4096   69121     61       7303      10556    1.45\n> rebalance           512   24326     46       7303       7405    1.01\n> single file        8192    4636    762       7303     211234   28.92\n> per-slice file     8192    4636     65       7303      16278    2.23\n>\n>\n> 25M rows\n> ===================================================================\n>                  nbatch  memory   time  size (MB)  temp (MB)  amplif\n> -------------------------------------------------------------------\n> master             8192  134657    190       7303      24279    1.33\n> rebalance          1024   36611    158       7303      20024    1.10\n> single file       16384    6011   4054       7303    1046174   57.32\n> per-slice file    16384    6011    207       7303      39073    2.14\n>\n>\n> 50M rows\n> ===================================================================\n>                  nbatch  memory   time  size (MB)  temp (MB)  amplif\n> -------------------------------------------------------------------\n> master            16384  265729    531      36500      48519    1.33\n> rebalance          2048   53241    447      36500      48077    1.32\n> single file           -       -      -      36500          -       -\n> per-slice file    32768    8125    451      36500      78662    2.16\n>\n>\n> From those numbers it's pretty clear that per-slice overflow file does\n> by far the best job in enforcing work_mem and minimizing the amount of\n> data spilled to temp files. 
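A rough sketch of the arithmetic described in the quoted explanation above (this is not code from the actual patches; the 8kB-per-BufFile buffer size, the work_mem/2 budget for BufFiles, and the helper names are assumptions taken from the mail):\n\n    # Sketch only: how many batches fit in memory, and how many slices are\n    # then needed for a given total batch count.\n    def slices_needed(work_mem_kb, nbatch, buffile_kb=8):\n        buffile_budget_kb = work_mem_kb / 2        # at most half of work_mem for BufFiles\n        per_batch_kb = 2 * buffile_kb              # one BufFile for inner + one for outer side\n        in_memory_batches = int(buffile_budget_kb // per_batch_kb)\n        slices = -(-nbatch // in_memory_batches)   # ceiling division\n        return in_memory_batches, slices\n\n    # work_mem = 4MB and 1024 batches -> 128 batches per slice, 8 slices,\n    # matching the example in the quoted mail.\n    print(slices_needed(4096, 1024))               # (128, 8)\n\n    # Fraction of the input that the single-overflow-file variant pushes\n    # ahead between slices, with s slices:\n    s = 8\n    print(sum((s - k) / s for k in range(1, s)))   # 3.5, i.e. the 28/8 above\n\nThe 3.5x forwarding plus the ~1x re-shuffle per slice is what shows up as the large amplif values for the single-file variant in the tables above.\n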
It does write a bit more data than both\n> master and the simple rebalancing, but that's the cost for enforcing\n> work_mem more strictly. It's generally a bit slower than those two\n> approaches, although on the largest scale it's actually a bit faster\n> than master. I think that's pretty acceptable, considering this is meant\n> to address extreme underestimates where we currently just eat memory.\n>\n> The case with single overflow file performs rather poorly - I haven't\n> even collected data from the largest scale, but considering it spilled\n> 1TB of temp files with a dataset half the size, that's not an issue.\n> (Note that this does not mean it needs 1TB of temp space, those writes\n> are spread over time and the files are created/closed as we go. The\n> system only has ~100GB of free disk space.)\n>\n>\n> Gunther, could you try the v2 and v4 patches on your data set? That\n> would be an interesting data point, I think.\n>\n>\n> regards\n>\n",
"msg_date": "Fri, 23 Aug 2019 09:17:38 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "OK, I went back through that old thread, and I noticed an early opinion \nby a certain Peter <pmc at citylink> who said that I should provision \nsome swap space. Since I had plenty of disk and no other option I tried \nthat. And it did some magic. Here this is a steady state now:\n\ntop - 14:07:32 up 103 days, 9:57, 5 users, load average: 1.33, 1.05, 0.54\nTasks: 329 total, 2 running, 117 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 31.0 us, 11.4 sy, 0.0 ni, 35.3 id, 22.3 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 7910376 total, 120524 free, 2174940 used, 5614912 buff/cache\nKiB Swap: 16777212 total, 16777212 free, 0 used. 3239724 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5068 postgres 20 0 4352496 4.0g 2.0g R 76.4 52.6 3:01.39 postgres: postgres integrator [local] INSERT\n 435 root 20 0 0 0 0 S 4.0 0.0 10:52.38 [kswapd0]\n\nand the nice thing is, the backend server process appears to be bounded \nat 4GB, so there isn't really a \"memory leak\". And also, the swap space \nisn't really being used. This may have to do with these vm. sysctl \nsettings, overcommit, etc.\n\n * vm.overcommit_memory = 2 -- values are\n o 0 -- estimate free memory\n o 1 -- always assume there is enough memory\n o 2 -- no over-commit allocate only inside the following two\n parameters\n * vm.overcommit_kbytes = 0 -- how many kB above swap can be\n over-committed, EITHER this OR\n * vm.overcommit_ratio = 50 -- percent of main memory that can be\n committed over swap,\n o with 0 swap, that percent can be committed\n o i.e., this of 8 GB, 4 GB are reserved for buffer cache\n o not a good idea probably\n o at least we should allow 75% committed, i.e., 6 GB of 8 GB, leaving\n + 2 GB of buffer cache\n + 2 GB of shared buffers\n + 4 GB of all other memory\n\nI have vm.overcommit_memory = 2, _kbytes = 0, _ratio = 50. So this means \nwith _ratio = 50 I can commit 50% of memory, 4GB and this is exactly \nwhat the server process wants. So with little impact on the available \nbuffer cache I am in a fairly good position now. The swap (that in my \ncase I set at 2 x main memory = 16G) serves as a buffer to smooth out \nthis peak usage without ever actually paging.\n\nI suppose even without swap I could have set vm.overcommit_ratio = 75, \nand I notice now that I already commented this much (the above bullet \npoints are my own notes.)\n\nAnyway, for now, I am good. Thank you very much.\n\nregards,\n-Gunther\n\n\n\nOn 8/23/2019 9:17, Gunther wrote:\n>\n> Hi all, I am connecting to a discussion back from April this year. My \n> data has grown and now I am running into new out of memory situations. \n> Meanwhile the world turned from 11.2 to 11.5 which I just installed \n> only to find the same out of memory error.\n>\n> Have any of the things discussed and proposed, especially this last \n> one by Tomas Vondra, been applied to the 11 releases? 
Should I try \n> these older patches from April?\n>\n> regards,\n> -Gunther\n>\n> For what it is worth, this is what I am getting:\n>\n> TopMemoryContext: 67424 total in 5 blocks; 7184 free (7 chunks); 60240 \n> used pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; \n> 416 free (0 chunks); 7776 used TopTransactionContext: 8192 total in 1 \n> blocks; 7720 free (1 chunks); 472 used Operator lookup cache: 24576 \n> total in 2 blocks; 10760 free (3 chunks); 13816 used TableSpace cache: \n> 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used Type \n> information cache: 24352 total in 2 blocks; 2624 free (0 chunks); \n> 21728 used RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 \n> chunks); 1296 used MessageContext: 8388608 total in 11 blocks; 3094872 \n> free (4 chunks); 5293736 used JoinRelHashTable: 16384 total in 2 \n> blocks; 5576 free (1 chunks); 10808 used Operator class cache: 8192 \n> total in 1 blocks; 560 free (0 chunks); 7632 used smgr relation table: \n> 32768 total in 3 blocks; 12720 free (8 chunks); 20048 used \n> TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 \n> chunks); 256 used Portal hash: 8192 total in 1 blocks; 560 free (0 \n> chunks); 7632 used TopPortalContext: 8192 total in 1 blocks; 7664 free \n> (0 chunks); 528 used PortalContext: 1024 total in 1 blocks; 624 free \n> (0 chunks); 400 used: ExecutorState: 202528536 total in 19 blocks; \n> 433464 free (12 chunks); 202095072 used HashTableContext: 8192 total \n> in 1 blocks; 7656 free (0 chunks); 536 used HashBatchContext: 10615104 \n> total in 261 blocks; 7936 free (0 chunks); 10607168 used \n> HashTableContext: 8192 total in 1 blocks; 7688 free (1 chunks); 504 \n> used HashBatchContext: 13079304 total in 336 blocks; 7936 free (0 \n> chunks); 13071368 used TupleSort main: 49208 total in 3 blocks; 8552 \n> free (7 chunks); 40656 used Caller tuples: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 
8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used Subplan HashTable Temp Context: 1024 \n> total in 1 blocks; 768 free (0 chunks); 256 used Subplan HashTable \n> Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 \n> chunks); 256 used Subplan HashTable Context: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used Subplan HashTable Temp Context: 1024 \n> total in 1 blocks; 768 free (0 chunks); 256 used Subplan HashTable \n> Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 \n> chunks); 256 used Subplan HashTable Context: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used ExprContext: 8192 total in 1 blocks; \n> 7936 free (0 chunks); 256 used Subplan HashTable Temp Context: 1024 \n> total in 1 blocks; 768 free (0 chunks); 256 used Subplan HashTable \n> Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> 
ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7360 free (0 chunks); 832 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used \n> ExprContext: 1107296256 total in 142 blocks; 6328 free (101 chunks); \n> 1107289928 used ExprContext: 8192 total in 1 blocks; 7936 free (0 \n> chunks); 256 used 1 more child contexts containing 8192 total in 1 \n> blocks; 7936 free (0 chunks); 256 used Relcache by OID: 16384 total in \n> 2 blocks; 2472 free (2 chunks); 13912 used CacheMemoryContext: 1113488 \n> total in 14 blocks; 16776 free (0 chunks); 1096712 used index info: \n> 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \n> docsubjh_sjrcode_ndx index info: 1024 total in 1 blocks; 48 free (0 \n> chunks); 976 used: docsubjh_sjrclass_ndx index info: 1024 total in 1 \n> blocks; 48 free (0 chunks); 976 used: docsubjh_scopeiid_ndx index \n> info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \n> docsubjh_dociid_ndx index info: 4096 total in 3 blocks; 2064 free (2 \n> chunks); 2032 used: role_telecom_idx index info: 2048 total in 2 \n> blocks; 968 free (1 chunks); 1080 used: role_addr_fkidx index info: \n> 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: role_id_fkidx \n> index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: \n> role_id_idx index info: 2048 total in 2 blocks; 968 free (1 chunks); \n> 1080 used: role_name_fkidx index info: 4096 total in 3 blocks; 2064 \n> free (2 chunks); 2032 used: entity_telecom_idx index info: 2048 total \n> in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx index \n> info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: \n> entity_id_idx index info: 2048 total in 2 blocks; 624 free (1 chunks); \n> 1424 used: entity_det_code_idx index info: 4096 total in 3 blocks; \n> 2016 free (2 chunks); 2080 used: 
entity_code_nodash_idx index info: \n> 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_pkey \n> index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: \n> connect_rule_pkey index info: 2048 total in 2 blocks; 952 free (1 \n> chunks); 1096 used: role_context_idx index info: 2048 total in 2 \n> blocks; 640 free (2 chunks); 1408 used: role_partitions index info: \n> 2048 total in 2 blocks; 640 free (2 chunks); 1408 used: \n> role_scoper_idx index info: 2048 total in 2 blocks; 640 free (2 \n> chunks); 1408 used: role_player_idx index info: 2048 total in 2 \n> blocks; 968 free (1 chunks); 1080 used: role__pkey index info: 2048 \n> total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index \n> index info: 2048 total in 2 blocks; 592 free (1 chunks); 1456 used: \n> pg_constraint_conrelid_contypid_conname_index index info: 2048 total \n> in 2 blocks; 624 free (1 chunks); 1424 used: participation_act_idx \n> index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: \n> participation_role_idx index info: 2048 total in 2 blocks; 952 free (1 \n> chunks); 1096 used: participation_pkey index info: 1024 total in 1 \n> blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index \n> index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: \n> doc_ndx_internaiddoctype index info: 2048 total in 2 blocks; 680 free \n> (1 chunks); 1368 used: pg_toast_2618_index index info: 2048 total in 2 \n> blocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index \n> relation rules: 827392 total in 104 blocks; 2400 free (1 chunks); \n> 824992 used: v_documentsubjecthistory index info: 2048 total in 2 \n> blocks; 648 free (2 chunks); 1400 used: \n> pg_db_role_setting_databaseid_rol_index index info: 2048 total in 2 \n> blocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index \n> index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: \n> pg_foreign_data_wrapper_name_index index info: 1024 total in 1 blocks; \n> 48 free (0 chunks); 976 used: pg_enum_oid_index index info: 2048 total \n> in 2 blocks; 680 free (2 chunks); 1368 used: \n> pg_class_relname_nsp_index index info: 1024 total in 1 blocks; 48 free \n> (0 chunks); 976 used: pg_foreign_server_oid_index index info: 1024 \n> total in 1 blocks; 48 free (0 chunks); 976 used: \n> pg_publication_pubname_index index info: 2048 total in 2 blocks; 592 \n> free (3 chunks); 1456 used: pg_statistic_relid_att_inh_index index \n> info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \n> pg_cast_source_target_index index info: 1024 total in 1 blocks; 48 \n> free (0 chunks); 976 used: pg_language_name_index index info: 1024 \n> total in 1 blocks; 48 free (0 chunks); 976 used: \n> pg_transform_oid_index index info: 1024 total in 1 blocks; 48 free (0 \n> chunks); 976 used: pg_collation_oid_index index info: 3072 total in 2 \n> blocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index index \n> info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \n> pg_index_indexrelid_index index info: 2048 total in 2 blocks; 760 free \n> (2 chunks); 1288 used: pg_ts_template_tmplname_index index info: 2048 \n> total in 2 blocks; 704 free (3 chunks); 1344 used: \n> pg_ts_config_map_index index info: 2048 total in 2 blocks; 952 free (1 \n> chunks); 1096 used: pg_opclass_oid_index index info: 1024 total in 1 \n> blocks; 16 free (0 chunks); 1008 used: \n> pg_foreign_data_wrapper_oid_index index info: 1024 total in 1 blocks; \n> 48 free (0 chunks); 976 used: pg_event_trigger_evtname_index 
index \n> info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_statistic_ext_name_index index info: 1024 total in 1 blocks; 48 \n> free (0 chunks); 976 used: pg_publication_oid_index index info: 1024 \n> total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_dict_oid_index \n> index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \n> pg_event_trigger_oid_index index info: 3072 total in 2 blocks; 1216 \n> free (3 chunks); 1856 used: pg_conversion_default_index index info: \n> 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: \n> pg_operator_oprname_l_r_n_index index info: 2048 total in 2 blocks; \n> 680 free (2 chunks); 1368 used: pg_trigger_tgrelid_tgname_index index \n> info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_enum_typid_label_index index info: 1024 total in 1 blocks; 48 free \n> (0 chunks); 976 used: pg_ts_config_oid_index index info: 1024 total in \n> 1 blocks; 48 free (0 chunks); 976 used: pg_user_mapping_oid_index \n> index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: \n> pg_opfamily_am_name_nsp_index index info: 1024 total in 1 blocks; 48 \n> free (0 chunks); 976 used: pg_foreign_table_relid_index index info: \n> 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \n> pg_type_oid_index index info: 2048 total in 2 blocks; 952 free (1 \n> chunks); 1096 used: pg_aggregate_fnoid_index index info: 1024 total in \n> 1 blocks; 48 free (0 chunks); 976 used: pg_constraint_oid_index index \n> info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: \n> pg_rewrite_rel_rulename_index index info: 2048 total in 2 blocks; 760 \n> free (2 chunks); 1288 used: pg_ts_parser_prsname_index index info: \n> 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_ts_config_cfgname_index index info: 1024 total in 1 blocks; 48 free \n> (0 chunks); 976 used: pg_ts_parser_oid_index index info: 2048 total in \n> 2 blocks; 728 free (1 chunks); 1320 used: \n> pg_publication_rel_prrelid_prpubid_index index info: 2048 total in 2 \n> blocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index index \n> info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \n> pg_namespace_nspname_index index info: 1024 total in 1 blocks; 48 free \n> (0 chunks); 976 used: pg_ts_template_oid_index index info: 2048 total \n> in 2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index \n> index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: \n> pg_default_acl_role_nsp_obj_index index info: 2048 total in 2 blocks; \n> 704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index index \n> info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \n> pg_publication_rel_oid_index index info: 1024 total in 1 blocks; 48 \n> free (0 chunks); 976 used: pg_range_rngtypid_index index info: 2048 \n> total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_ts_dict_dictname_index index info: 2048 total in 2 blocks; 680 free \n> (2 chunks); 1368 used: pg_type_typname_nsp_index index info: 1024 \n> total in 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index \n> index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: \n> pg_statistic_ext_oid_index index info: 2048 total in 2 blocks; 952 \n> free (1 chunks); 1096 used: pg_class_oid_index index info: 2048 total \n> in 2 blocks; 624 free (2 chunks); 1424 used: \n> pg_proc_proname_args_nsp_index index info: 1024 total in 1 blocks; 16 \n> free (0 chunks); 1008 used: pg_partitioned_table_partrelid_index index \n> info: 2048 total in 2 blocks; 760 free (2 
chunks); 1288 used: \n> pg_transform_type_lang_index index info: 2048 total in 2 blocks; 680 \n> free (2 chunks); 1368 used: pg_attribute_relid_attnum_index index \n> info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: \n> pg_proc_oid_index index info: 1024 total in 1 blocks; 48 free (0 \n> chunks); 976 used: pg_language_oid_index index info: 1024 total in 1 \n> blocks; 48 free (0 chunks); 976 used: pg_namespace_oid_index index \n> info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: \n> pg_amproc_fam_proc_index index info: 1024 total in 1 blocks; 48 free \n> (0 chunks); 976 used: pg_foreign_server_name_index index info: 2048 \n> total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_attribute_relid_attnam_index index info: 1024 total in 1 blocks; 48 \n> free (0 chunks); 976 used: pg_conversion_oid_index index info: 2048 \n> total in 2 blocks; 728 free (1 chunks); 1320 used: \n> pg_user_mapping_user_server_index index info: 2048 total in 2 blocks; \n> 728 free (1 chunks); 1320 used: \n> pg_subscription_rel_srrelid_srsubid_index index info: 1024 total in 1 \n> blocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index index \n> info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: \n> pg_conversion_name_nsp_index index info: 2048 total in 2 blocks; 952 \n> free (1 chunks); 1096 used: pg_authid_oid_index index info: 2048 total \n> in 2 blocks; 728 free (1 chunks); 1320 used: \n> pg_auth_members_member_role_index 10 more child contexts containing \n> 17408 total in 17 blocks; 6080 free (10 chunks); 11328 used WAL record \n> construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 \n> used PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); \n> 5568 used MdSmgr: 8192 total in 1 blocks; 6408 free (0 chunks); 1784 \n> used LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); \n> 11784 used Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); \n> 101496 used ErrorContext: 8192 total in 1 blocks; 7936 free (4 \n> chunks); 256 used Grand total: 1345345736 bytes in 1209 blocks; \n> 4529600 free (270 chunks); 1340816136 used\n> On 4/28/2019 10:19, Tomas Vondra wrote:\n>> On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:\n>>>\n>>> ...\n>>>\n>>> I still think the idea with an \"overflow batch\" is worth considering,\n>>> because it'd allow us to keep the memory usage within work_mem. And\n>>> after getting familiar with the hash join code again (haven't messed\n>>> with it since 9.5 or so) I think it should not be all that difficult.\n>>> I'll give it a try over the weekend if I get bored for a while.\n>>>\n>>\n>> OK, so I took a stab at this, and overall it seems to be workable. The\n>> patches I have are nowhere near committable, but I think the approach\n>> works fairly well - the memory is kept in check, and the performance is\n>> comparable to the \"ballancing\" approach tested before.\n>>\n>> To explain it a bit, the idea is that we can compute how many BufFile\n>> structures we can keep in memory - we can't use more than work_mem/2 for\n>> that, because then we'd mostly eliminate space for the actual data. For\n>> example with 4MB, we know we can keep 128 batches - we need 128 for\n>> outer and inner side, so 256 in total, and 256*8kB = 2MB.\n>>\n>> And then, we just increase the number of batches but instead of adding\n>> the BufFile entries, we split batches into slices that we can keep in\n>> memory (say, the 128 batches). 
And we keep BufFiles for the current one\n>> and an \"overflow file\" for the other slices. After processing a slice,\n>> we simply switch to the next one, and use the overflow file as a temp\n>> file for the first batch - we redistribute it into the other batches in\n>> the slice and another overflow file.\n>>\n>> That's what the v3 patch (named 'single overflow file') does. I does\n>> work, but unfortunately it significantly inflates the amount of data\n>> written to temporary files. Assume we need e.g. 1024 batches, but only\n>> 128 fit into memory. That means we'll need 8 slices, and during the\n>> first pass we'll handle 1/8 of the data and write 7/8 to the overflow\n>> file.� Then after processing the slice and switching to the next one, we\n>> repeat this dance - 1/8 gets processed, 6/8 written to another overflow\n>> file. So essentially we \"forward\" about\n>>\n>> �� 7/8 + 6/8 + 5/8 + ... + 1/8 = 28/8 = 3.5\n>>\n>> of data between slices, and we need to re-shuffle data in each slice,\n>> which amounts to additional 1x data. That's pretty significant overhead,\n>> as will be clear from the measurements I'll present shortly.\n>>\n>> But luckily, there's a simple solution to this - instead of writing the\n>> data into a single overflow file, we can create one overflow file for\n>> each slice. That will leave us with the ~1x of additional writes when\n>> distributing data into batches in the current slice, but it eliminates\n>> the main source of write amplification - awalanche-like forwarding of\n>> data between slices.\n>>\n>> This relaxes the memory limit a bit again, because we can't really keep\n>> the number of overflow files constrained by work_mem, but we should only\n>> need few of them (much less than when adding one file per batch right\n>> away). For example with 128 in-memory batches, this reduces the amount\n>> of necessary memory 128x.\n>>\n>> And this is what v4 (per-slice overflow file) does, pretty much.\n>>\n>>\n>> Two more comments, regarding memory accounting in previous patches. It\n>> was a bit broken, because we actually need 2x the number of BufFiles. We\n>> needed nbatch files for outer side and nbatch files for inner side, but\n>> we only considered one of those - both when deciding when to increase\n>> the number of batches / increase spaceAllowed, and when reporting the\n>> memory usage. So with large number of batches the reported amount of\n>> used memory was roughly 1/2 of the actual value :-/\n>>\n>> The memory accounting was a bit bogus for another reason - spaceUsed\n>> simply tracks the amount of memory for hash table contents. But at the\n>> end we were simply adding the current space for BufFile stuff, ignoring\n>> the fact that that's likely much larger than when the spacePeak value\n>> got stored. For example we might have kept early spaceUsed when it was\n>> almost work_mem, and then added the final large BufFile allocation.\n>>\n>> I've fixed both issues in the patches attached to this message. It does\n>> not make a huge difference in practice, but it makes it easier to\n>> compare values between patches.\n>>\n>>\n>> Now, some test results - I've repeated the simple test with uniform data\n>> set, which is pretty much ideal for hash joins (no unexlectedly large\n>> batches that can't be split, etc.). 
I've done this with 1M, 5M, 10M, 25M\n>> and 50M rows in the large table (which gets picked for the \"hash\" side),\n>> and measured how much memory gets used, how many batches, how long it\n>> takes and how much data gets written to temp files.\n>>\n>> See the hashjoin-test.sh script for more details.\n>>\n>> So, here are the results with work_mem = 4MB (so the number of in-memory\n>> batches for the last two entries is 128). The columns are:\n>>\n>> * nbatch - the final number of batches\n>> * memory - memory usage, as reported by explain analyze\n>> * time - duration of the query (without explain analyze) in seconds\n>> * size - size of the large table\n>> * temp - amount of data written to temp files\n>> * amplif - write amplification (temp / size)\n>>\n>>\n>> �1M rows\n>> �===================================================================\n>> ���������������� nbatch� memory�� time� size (MB)� temp (MB) amplif\n>> �-------------------------------------------------------------------\n>> �master������������ 256��� 7681��� 3.3������� 730������� 899 1.23\n>> �rebalance��������� 256��� 7711��� 3.3������� 730������� 884 1.21\n>> �single file������ 1024��� 4161��� 7.2������� 730������ 3168 4.34\n>> �per-slice file��� 1024��� 4161��� 4.7������� 730������ 1653 2.26\n>>\n>>\n>> �5M rows\n>> �===================================================================\n>> ���������������� nbatch� memory�� time� size (MB)� temp (MB) amplif\n>> �-------------------------------------------------------------------\n>> �master����������� 2048�� 36353���� 22������ 3652������ 5276 1.44\n>> �rebalance��������� 512�� 16515���� 18������ 3652������ 4169 1.14\n>> �single file������ 4096��� 4353��� 156������ 3652����� 53897 14.76\n>> �per-slice file��� 4096��� 4353���� 28������ 3652������ 8106 2.21\n>>\n>>\n>> �10M rows\n>> �===================================================================\n>> ���������������� nbatch� memory�� time� size (MB)� temp (MB) amplif\n>> �-------------------------------------------------------------------\n>> �master����������� 4096�� 69121���� 61������ 7303����� 10556 1.45\n>> �rebalance��������� 512�� 24326���� 46������ 7303������ 7405 1.01\n>> �single file������ 8192��� 4636��� 762������ 7303���� 211234 28.92\n>> �per-slice file��� 8192��� 4636���� 65������ 7303����� 16278 2.23\n>>\n>>\n>> �25M rows\n>> �===================================================================\n>> ���������������� nbatch� memory�� time� size (MB)� temp (MB) amplif\n>> �-------------------------------------------------------------------\n>> �master����������� 8192� 134657��� 190������ 7303����� 24279 1.33\n>> �rebalance�������� 1024�� 36611��� 158������ 7303����� 20024 1.10\n>> �single file����� 16384��� 6011�� 4054������ 7303��� 1046174 57.32\n>> �per-slice file�� 16384��� 6011��� 207������ 7303����� 39073 2.14\n>>\n>>\n>> �50M rows\n>> �===================================================================\n>> ���������������� nbatch� memory�� time� size (MB)� temp (MB) amplif\n>> �-------------------------------------------------------------------\n>> �master���������� 16384� 265729��� 531����� 36500����� 48519 1.33\n>> �rebalance�������� 2048�� 53241��� 447����� 36500����� 48077 1.32\n>> �single file��������� -������ -����� -����� 36500 -����� -\n>> �per-slice file�� 32768��� 8125��� 451����� 36500����� 78662 2.16\n>>\n>>\n>> From those numbers it's pretty clear that per-slice overflow file does\n>> by far the best job in enforcing work_mem and minimizing the amount of\n>> data spilled to temp files. 
It does write a bit more data than both\n>> master and the simple rebalancing, but that's the cost for enforcing\n>> work_mem more strictly. It's generally a bit slower than those two\n>> approaches, although on the largest scale it's actually a bit faster\n>> than master. I think that's pretty acceptable, considering this is meant\n>> to address extreme underestimates where we currently just eat memory.\n>>\n>> The case with single overflow file performs rather poorly - I haven't\n>> even collected data from the largest scale, but considering it spilled\n>> 1TB of temp files with a dataset half the size, that's not an issue.\n>> (Note that this does not mean it needs 1TB of temp space, those writes\n>> are spread over time and the files are created/closed as we go. The\n>> system only has ~100GB of free disk space.)\n>>\n>>\n>> Gunther, could you try the v2 and v4 patches on your data set? That\n>> would be an interesting data point, I think.\n>>\n>>\n>> regards\n>>\n\n\n\n\n\n\nOK, I went back through that old thread, and I noticed an early\n opinion by a certain Peter <pmc at citylink> who said that I\n should provision some swap space. Since I had plenty of disk and\n no other option I tried that. And it did some magic. Here this is\n a steady state now:\ntop - 14:07:32 up 103 days, 9:57, 5 users, load average: 1.33, 1.05, 0.54\nTasks: 329 total, 2 running, 117 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 31.0 us, 11.4 sy, 0.0 ni, 35.3 id, 22.3 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 7910376 total, 120524 free, 2174940 used, 5614912 buff/cache\nKiB Swap: 16777212 total, 16777212 free, 0 used. 3239724 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5068 postgres 20 0 4352496 4.0g 2.0g R 76.4 52.6 3:01.39 postgres: postgres integrator [local] INSERT\n 435 root 20 0 0 0 0 S 4.0 0.0 10:52.38 [kswapd0]\n\nand the nice thing is, the backend server process appears to be\n bounded at 4GB, so there isn't really a \"memory leak\". And also,\n the swap space isn't really being used. This may have to do with\n these vm. sysctl settings, overcommit, etc.\n\nvm.overcommit_memory = 2 -- values are\n \n0 -- estimate free memory\n1 -- always assume there is enough memory\n2 -- no over-commit allocate only inside the following two\n parameters\n\n\nvm.overcommit_kbytes = 0 -- how many kB above swap can be\n over-committed, EITHER this OR\nvm.overcommit_ratio = 50 -- percent of main memory that can be\n committed over swap,\n \nwith 0 swap, that percent can be committed\ni.e., this of 8 GB, 4 GB are reserved for buffer cache\nnot a good idea probably\nat least we should allow 75% committed, i.e., 6 GB of 8\n GB, leaving\n \n2 GB of buffer cache\n2 GB of shared buffers\n4 GB of all other memory\n\n\n\n\n\nI have vm.overcommit_memory = 2, _kbytes = 0, _ratio = 50. So\n this means with _ratio = 50 I can commit 50% of memory, 4GB and\n this is exactly what the server process wants. So with little\n impact on the available buffer cache I am in a fairly good\n position now. The swap (that in my case I set at 2 x main memory =\n 16G) serves as a buffer to smooth out this peak usage without ever\n actually paging.\nI suppose even without swap I could have set vm.overcommit_ratio\n = 75, and I notice now that I already commented this much (the\n above bullet points are my own notes.) \n\nAnyway, for now, I am good. Thank you very much.\nregards,\n -Gunther\n\n\n\n\n\nOn 8/23/2019 9:17, Gunther wrote:\n\n\n\nHi all, I am connecting to a discussion back from April this\n year. 
My data has grown and now I am running into new out of\n memory situations. Meanwhile the world turned from 11.2 to 11.5\n which I just installed only to find the same out of memory\n error.\nHave any of the things discussed and proposed, especially this\n last one by Tomas Vondra, been applied to the 11 releases?\n Should I try these older patches from April? \n\nregards,\n -Gunther\n\nFor what it is worth, this is what I am getting:\n\nTopMemoryContext: 67424 total in 5 blocks; 7184 free (7 chunks); 60240 used\n pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 416 free (0 chunks); 7776 used\n TopTransactionContext: 8192 total in 1 blocks; 7720 free (1 chunks); 472 used\n Operator lookup cache: 24576 total in 2 blocks; 10760 free (3 chunks); 13816 used\n TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096 used\n Type information cache: 24352 total in 2 blocks; 2624 free (0 chunks); 21728 used\n RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used\n MessageContext: 8388608 total in 11 blocks; 3094872 free (4 chunks); 5293736 used\n JoinRelHashTable: 16384 total in 2 blocks; 5576 free (1 chunks); 10808 used\n Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n smgr relation table: 32768 total in 3 blocks; 12720 free (8 chunks); 20048 used\n TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks); 256 used\n Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used\n TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used\n PortalContext: 1024 total in 1 blocks; 624 free (0 chunks); 400 used:\n ExecutorState: 202528536 total in 19 blocks; 433464 free (12 chunks); 202095072 used\n HashTableContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used\n HashBatchContext: 10615104 total in 261 blocks; 7936 free (0 chunks); 10607168 used\n HashTableContext: 8192 total in 1 blocks; 7688 free (1 chunks); 504 used\n HashBatchContext: 13079304 total in 336 blocks; 7936 free (0 chunks); 13071368 used\n TupleSort main: 49208 total in 3 blocks; 8552 free (7 chunks); 40656 used\n Caller tuples: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 
total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 chunks); 256 used\n Subplan HashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 chunks); 256 used\n Subplan HashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 chunks); 256 used\n Subplan HashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 chunks); 256 used\n Subplan HashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Subplan HashTable Temp Context: 1024 total in 1 blocks; 768 free (0 chunks); 256 used\n Subplan HashTable Context: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 
chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7360 free (0 chunks); 832 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n ExprContext: 1107296256 total in 142 blocks; 6328 free (101 chunks); 1107289928 used\n ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n 1 more child contexts containing 8192 total in 1 blocks; 7936 free (0 chunks); 256 used\n Relcache by OID: 16384 total in 2 blocks; 2472 free (2 chunks); 13912 used\n CacheMemoryContext: 1113488 total in 14 blocks; 16776 free (0 chunks); 1096712 used\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: docsubjh_sjrcode_ndx\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: docsubjh_sjrclass_ndx\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: docsubjh_scopeiid_ndx\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: docsubjh_dociid_ndx\n index info: 4096 total in 3 blocks; 2064 free (2 chunks); 2032 used: role_telecom_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: role_addr_fkidx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: role_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: role_id_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: role_name_fkidx\n index info: 4096 total in 3 blocks; 2064 free (2 chunks); 2032 used: entity_telecom_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_id_fkidx\n index info: 2048 total in 2 blocks; 696 free (1 chunks); 1352 used: entity_id_idx\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: entity_det_code_idx\n index info: 4096 total in 3 blocks; 2016 free (2 chunks); 2080 used: entity_code_nodash_idx\n index 
info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: entity_pkey\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: connect_rule_pkey\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: role_context_idx\n index info: 2048 total in 2 blocks; 640 free (2 chunks); 1408 used: role_partitions\n index info: 2048 total in 2 blocks; 640 free (2 chunks); 1408 used: role_scoper_idx\n index info: 2048 total in 2 blocks; 640 free (2 chunks); 1408 used: role_player_idx\n index info: 2048 total in 2 blocks; 968 free (1 chunks); 1080 used: role__pkey\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2619_index\n index info: 2048 total in 2 blocks; 592 free (1 chunks); 1456 used: pg_constraint_conrelid_contypid_conname_index\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: participation_act_idx\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: participation_role_idx\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: participation_pkey\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_relid_index\n index info: 2048 total in 2 blocks; 624 free (1 chunks); 1424 used: doc_ndx_internaiddoctype\n index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used: pg_toast_2618_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indrelid_index\n relation rules: 827392 total in 104 blocks; 2400 free (1 chunks); 824992 used: v_documentsubjecthistory\n index info: 2048 total in 2 blocks; 648 free (2 chunks); 1400 used: pg_db_role_setting_databaseid_rol_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_opclass_am_name_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_enum_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_class_relname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_pubname_index\n index info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used: pg_statistic_relid_att_inh_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_cast_source_target_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_language_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_transform_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_collation_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amop_fam_strat_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_index_indexrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_template_tmplname_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_ts_config_map_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_opclass_oid_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_foreign_data_wrapper_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_evtname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_statistic_ext_name_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 
used: pg_publication_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_dict_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_event_trigger_oid_index\n index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used: pg_conversion_default_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_operator_oprname_l_r_n_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_trigger_tgrelid_tgname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_enum_typid_label_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_config_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_user_mapping_oid_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_opfamily_am_name_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_table_relid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_type_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_aggregate_fnoid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_constraint_oid_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_rewrite_rel_rulename_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_parser_prsname_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_config_cfgname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_parser_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_publication_rel_prrelid_prpubid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_operator_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_namespace_nspname_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_ts_template_oid_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_amop_opr_fam_index\n index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used: pg_default_acl_role_nsp_obj_index\n index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used: pg_collation_name_enc_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_publication_rel_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_range_rngtypid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_ts_dict_dictname_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_type_typname_nsp_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_opfamily_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_statistic_ext_oid_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_class_oid_index\n index info: 2048 total in 2 blocks; 624 free (2 chunks); 1424 used: pg_proc_proname_args_nsp_index\n index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_partitioned_table_partrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_transform_type_lang_index\n index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used: pg_attribute_relid_attnum_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_proc_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 
chunks); 976 used: pg_language_oid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_namespace_oid_index\n index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used: pg_amproc_fam_proc_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_foreign_server_name_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_attribute_relid_attnam_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_conversion_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_user_mapping_user_server_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_subscription_rel_srrelid_srsubid_index\n index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used: pg_sequence_seqrelid_index\n index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used: pg_conversion_name_nsp_index\n index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used: pg_authid_oid_index\n index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used: pg_auth_members_member_role_index\n 10 more child contexts containing 17408 total in 17 blocks; 6080 free (10 chunks); 11328 used\n WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks); 43400 used\n PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used\n MdSmgr: 8192 total in 1 blocks; 6408 free (0 chunks); 1784 used\n LOCALLOCK hash: 16384 total in 2 blocks; 4600 free (2 chunks); 11784 used\n Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used\n ErrorContext: 8192 total in 1 blocks; 7936 free (4 chunks); 256 used\nGrand total: 1345345736 bytes in 1209 blocks; 4529600 free (270 chunks); 1340816136 used\n\n\n\nOn 4/28/2019 10:19, Tomas Vondra\n wrote:\n\nOn Wed,\n Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote: \n \n ... \n\n I still think the idea with an \"overflow batch\" is worth\n considering, \n because it'd allow us to keep the memory usage within\n work_mem. And \n after getting familiar with the hash join code again (haven't\n messed \n with it since 9.5 or so) I think it should not be all that\n difficult. \n I'll give it a try over the weekend if I get bored for a\n while. \n\n\n\n OK, so I took a stab at this, and overall it seems to be\n workable. The \n patches I have are nowhere near committable, but I think the\n approach \n works fairly well - the memory is kept in check, and the\n performance is \n comparable to the \"ballancing\" approach tested before. \n\n To explain it a bit, the idea is that we can compute how many\n BufFile \n structures we can keep in memory - we can't use more than\n work_mem/2 for \n that, because then we'd mostly eliminate space for the actual\n data. For \n example with 4MB, we know we can keep 128 batches - we need 128\n for \n outer and inner side, so 256 in total, and 256*8kB = 2MB. \n\n And then, we just increase the number of batches but instead of\n adding \n the BufFile entries, we split batches into slices that we can\n keep in \n memory (say, the 128 batches). And we keep BufFiles for the\n current one \n and an \"overflow file\" for the other slices. After processing a\n slice, \n we simply switch to the next one, and use the overflow file as a\n temp \n file for the first batch - we redistribute it into the other\n batches in \n the slice and another overflow file. \n\n That's what the v3 patch (named 'single overflow file') does. 
I\n does \n work, but unfortunately it significantly inflates the amount of\n data \n written to temporary files. Assume we need e.g. 1024 batches,\n but only \n 128 fit into memory. That means we'll need 8 slices, and during\n the \n first pass we'll handle 1/8 of the data and write 7/8 to the\n overflow \n file.� Then after processing the slice and switching to the next\n one, we \n repeat this dance - 1/8 gets processed, 6/8 written to another\n overflow \n file. So essentially we \"forward\" about \n\n �� 7/8 + 6/8 + 5/8 + ... + 1/8 = 28/8 = 3.5 \n\n of data between slices, and we need to re-shuffle data in each\n slice, \n which amounts to additional 1x data. That's pretty significant\n overhead, \n as will be clear from the measurements I'll present shortly. \n\n But luckily, there's a simple solution to this - instead of\n writing the \n data into a single overflow file, we can create one overflow\n file for \n each slice. That will leave us with the ~1x of additional writes\n when \n distributing data into batches in the current slice, but it\n eliminates \n the main source of write amplification - awalanche-like\n forwarding of \n data between slices. \n\n This relaxes the memory limit a bit again, because we can't\n really keep \n the number of overflow files constrained by work_mem, but we\n should only \n need few of them (much less than when adding one file per batch\n right \n away). For example with 128 in-memory batches, this reduces the\n amount \n of necessary memory 128x. \n\n And this is what v4 (per-slice overflow file) does, pretty much.\n \n\n\n Two more comments, regarding memory accounting in previous\n patches. It \n was a bit broken, because we actually need 2x the number of\n BufFiles. We \n needed nbatch files for outer side and nbatch files for inner\n side, but \n we only considered one of those - both when deciding when to\n increase \n the number of batches / increase spaceAllowed, and when\n reporting the \n memory usage. So with large number of batches the reported\n amount of \n used memory was roughly 1/2 of the actual value :-/ \n\n The memory accounting was a bit bogus for another reason -\n spaceUsed \n simply tracks the amount of memory for hash table contents. But\n at the \n end we were simply adding the current space for BufFile stuff,\n ignoring \n the fact that that's likely much larger than when the spacePeak\n value \n got stored. For example we might have kept early spaceUsed when\n it was \n almost work_mem, and then added the final large BufFile\n allocation. \n\n I've fixed both issues in the patches attached to this message.\n It does \n not make a huge difference in practice, but it makes it easier\n to \n compare values between patches. \n\n\n Now, some test results - I've repeated the simple test with\n uniform data \n set, which is pretty much ideal for hash joins (no unexlectedly\n large \n batches that can't be split, etc.). I've done this with 1M, 5M,\n 10M, 25M \n and 50M rows in the large table (which gets picked for the\n \"hash\" side), \n and measured how much memory gets used, how many batches, how\n long it \n takes and how much data gets written to temp files. \n\n See the hashjoin-test.sh script for more details. \n\n So, here are the results with work_mem = 4MB (so the number of\n in-memory \n batches for the last two entries is 128). 
The columns are: \n\n * nbatch - the final number of batches \n * memory - memory usage, as reported by explain analyze \n * time - duration of the query (without explain analyze) in\n seconds \n * size - size of the large table \n * temp - amount of data written to temp files \n * amplif - write amplification (temp / size) \n\n\n �1M rows \n�=================================================================== \n ���������������� nbatch� memory�� time� size (MB)� temp (MB)�\n amplif \n�------------------------------------------------------------------- \n �master������������ 256��� 7681��� 3.3������� 730������� 899���\n 1.23 \n �rebalance��������� 256��� 7711��� 3.3������� 730������� 884���\n 1.21 \n �single file������ 1024��� 4161��� 7.2������� 730������ 3168���\n 4.34 \n �per-slice file��� 1024��� 4161��� 4.7������� 730������ 1653���\n 2.26 \n\n\n �5M rows \n�=================================================================== \n ���������������� nbatch� memory�� time� size (MB)� temp (MB)�\n amplif \n�------------------------------------------------------------------- \n �master����������� 2048�� 36353���� 22������ 3652������ 5276���\n 1.44 \n �rebalance��������� 512�� 16515���� 18������ 3652������ 4169���\n 1.14 \n �single file������ 4096��� 4353��� 156������ 3652����� 53897��\n 14.76 \n �per-slice file��� 4096��� 4353���� 28������ 3652������ 8106���\n 2.21 \n\n\n �10M rows \n�=================================================================== \n ���������������� nbatch� memory�� time� size (MB)� temp (MB)�\n amplif \n�------------------------------------------------------------------- \n �master����������� 4096�� 69121���� 61������ 7303����� 10556���\n 1.45 \n �rebalance��������� 512�� 24326���� 46������ 7303������ 7405���\n 1.01 \n �single file������ 8192��� 4636��� 762������ 7303���� 211234��\n 28.92 \n �per-slice file��� 8192��� 4636���� 65������ 7303����� 16278���\n 2.23 \n\n\n �25M rows \n�=================================================================== \n ���������������� nbatch� memory�� time� size (MB)� temp (MB)�\n amplif \n�------------------------------------------------------------------- \n �master����������� 8192� 134657��� 190������ 7303����� 24279���\n 1.33 \n �rebalance�������� 1024�� 36611��� 158������ 7303����� 20024���\n 1.10 \n �single file����� 16384��� 6011�� 4054������ 7303��� 1046174��\n 57.32 \n �per-slice file�� 16384��� 6011��� 207������ 7303����� 39073���\n 2.14 \n\n\n �50M rows \n�=================================================================== \n ���������������� nbatch� memory�� time� size (MB)� temp (MB)�\n amplif \n�------------------------------------------------------------------- \n �master���������� 16384� 265729��� 531����� 36500����� 48519��\n 1.33 \n �rebalance�������� 2048�� 53241��� 447����� 36500����� 48077��\n 1.32 \n �single file��������� -������ -����� -����� 36500���������\n -����� - \n �per-slice file�� 32768��� 8125��� 451����� 36500����� 78662��\n 2.16 \n\n\n From those numbers it's pretty clear that per-slice overflow\n file does \n by far the best job in enforcing work_mem and minimizing the\n amount of \n data spilled to temp files. It does write a bit more data than\n both \n master and the simple rebalancing, but that's the cost for\n enforcing \n work_mem more strictly. It's generally a bit slower than those\n two \n approaches, although on the largest scale it's actually a bit\n faster \n than master. 
I think that's pretty acceptable, considering this\n is meant \n to address extreme underestimates where we currently just eat\n memory. \n\n The case with single overflow file performs rather poorly - I\n haven't \n even collected data from the largest scale, but considering it\n spilled \n 1TB of temp files with a dataset half the size, that's not an\n issue. \n (Note that this does not mean it needs 1TB of temp space, those\n writes \n are spread over time and the files are created/closed as we go.\n The \n system only has ~100GB of free disk space.) \n\n\n Gunther, could you try the v2 and v4 patches on your data set?\n That \n would be an interesting data point, I think. \n\n\n regards",
"msg_date": "Fri, 23 Aug 2019 10:19:51 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
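The vm.overcommit_* notes in the message above correspond to a handful of Linux sysctl keys. A minimal sketch of inspecting and applying them follows, assuming a Linux host; the values are simply the ones reported in the message (overcommit_memory=2, overcommit_ratio=50) plus the 75% ratio mentioned there as a possible alternative, not a general recommendation.

    # inspect the current overcommit policy and ratio
    sysctl vm.overcommit_memory vm.overcommit_ratio

    # strict accounting (mode 2): commits are capped at swap + overcommit_ratio% of RAM
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=50    # or 75, the alternative considered in the message

    # to persist across reboots, put the same keys into a file under /etc/sysctl.d/:
    #   vm.overcommit_memory = 2
    #   vm.overcommit_ratio = 50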
{
"msg_contents": "Gunther <[email protected]> writes:\n> Hi all, I am connecting to a discussion back from April this year. My \n> data has grown and now I am running into new out of memory situations. \n\nIt doesn't look like this has much of anything to do with the hash-table\ndiscussion. The big hog is an ExprContext:\n\n> ExprContext: 1107296256 total in 142 blocks; 6328 free (101 chunks); \n> 1107289928 used\n\nSo there's something leaking in there, but this isn't enough info\nto guess what.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Aug 2019 10:20:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "Thanks Tom, yes I'd say it's using a lot of memory, but wouldn't call it \n\"leak\" as it doesn't grow during the 30 min or so that this query runs. \nIt explodes to 4GB and then stays flat until done.\n\nYes, and this time the query is super complicated with many joins and \ntables involved. The query plan has 100 lines. Not easy to share for \nreproduce and I have my issue under control by adding some swap just in \ncase. The swap space was never actually used.\n\nthanks,\n-Gunther\n\nOn 8/23/2019 10:20, Tom Lane wrote:\n> Gunther <[email protected]> writes:\n>> Hi all, I am connecting to a discussion back from April this year. My\n>> data has grown and now I am running into new out of memory situations.\n> It doesn't look like this has much of anything to do with the hash-table\n> discussion. The big hog is an ExprContext:\n>\n>> ExprContext: 1107296256 total in 142 blocks; 6328 free (101 chunks);\n>> 1107289928 used\n> So there's something leaking in there, but this isn't enough info\n> to guess what.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Aug 2019 11:40:09 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sat, Aug 24, 2019 at 11:40:09AM -0400, Gunther wrote:\n>Thanks Tom, yes I'd say it's using a lot of memory, but wouldn't call \n>it \"leak\" as it doesn't grow during the 30 min or so that this query \n>runs. It explodes to 4GB and then stays flat until done.\n>\n\nWell, the memory context stats you've shared however show this:\n\ntotal: 1345345736 bytes in 1209 blocks; 4529600 free (270 chunks); 1340816136 used\n\nThat's only ~1.3GB, and ~1.1GB of that is the expression context. So\nwhen you say 4GB, when does that happen and can you share stats showing\nstate at that point?\n\n>Yes, and this time the query is super complicated with many joins and \n>tables involved. The query plan has 100 lines. Not easy to share for \n>reproduce and I have my issue under control by adding some swap just \n>in case. The swap space was never actually used.\n>\n\nStill, without the query plan we can hardly do any guesses about what\nmight be the issue.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 23:06:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
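The request above is for memory context statistics captured while the backend is near its ~4GB peak rather than only at the moment of failure. PostgreSQL 11 has no SQL-callable function for that (pg_log_backend_memory_contexts() only appeared in version 14), so a common approach is to attach a debugger to the running backend and dump the context tree to the server log; a sketch, assuming gdb and debug symbols are available:

    -- find the pid of the backend running the problem query
    SELECT pid, state, query FROM pg_stat_activity WHERE state = 'active';

    # attach to that pid and print the memory context statistics to the backend's stderr/log
    gdb -p <pid>
    (gdb) call MemoryContextStats(TopMemoryContext)
    (gdb) detach
    (gdb) quit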
{
"msg_contents": "On Fri, Aug 23, 2019 at 09:17:38AM -0400, Gunther wrote:\n>Hi all, I am connecting to a discussion back from April this year. My \n>data has grown and now I am running into new out of memory situations. \n>Meanwhile the world turned from 11.2 to 11.5 which I just installed \n>only to find the same out of memory error.\n>\n\nAs Tom already said, this seems like a quite independent issue. Next\ntime it'd be better to share it in a new thread, not to mix it up with\nthe old discussion.\n\n>Have any of the things discussed and proposed, especially this last \n>one by Tomas Vondra, been applied to the 11 releases? Should I try \n>these older patches from April?\n>\n\nUnfortunately, no. We're still discussing what would be the right fix\n(it's rather tricky and the patches I shared were way too experimental\nfor that). But I'm pretty sure whatever we end up doing it's going to be\nway too invasive for backpatch. I.e. the older branches will likely have\nthis issue until EOL.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 23:11:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 3:51 PM Gunther <[email protected]> wrote:\n>\n> For weeks now, I am banging my head at an \"out of memory\" situation. There is only one query I am running on an 8 GB system, whatever I try, I get knocked out on this out of memory. It is extremely impenetrable to understand and fix this error. I guess I could add a swap file, and then I would have to take the penalty of swapping. But how can I actually address an out of memory condition if the system doesn't tell me where it is happening?\n> We can't really see anything too worrisome. There is always lots of memory used by cache, which could have been mobilized. The only possible explanation I can think of is that in that moment of the crash the memory utilization suddenly skyrocketed in less than a second, so that the 2 second vmstat interval wouldn't show it??? Nah.\n>\n> I have already much reduced work_mem, which has helped in some other cases before. Now I am going to reduce the shared_buffers now, but that seems counter-intuitive because we are sitting on all that cache memory unused!\n>\n> Might this be a bug? It feels like a bug. It feels like those out of memory issues should be handled more gracefully (garbage collection attempt?) and that somehow there should be more information so the person can do anything about it.\n\nI kind of agree that nothing according to vmstat suggests you have a\nproblem. One thing you left out is the precise mechanics of the\nfailure; is the database getting nuked by the oom killer? Do you have\nthe logs?\n\n*) what are values of shared_buffers and work_mem and maintenance_work_mem?\n\n*) Is this a 32 bit build? (I'm guessing no, but worth asking)\n\n*) I see that you've disabled swap. Maybe it should be enabled?\n\n*) Can you get the query to run through? an 'explain analyze' might\npoint to gross misses in plan; say, sort memory overuse\n\n*) If you're still getting failures, maybe we need to look at sampling\nfrequency of memory usage.\n\n*) iowait is super high.\n\n*) I see optimization potential in this query; explain analyze would\nhelp here too.\n\nmerlin\n\n\n",
"msg_date": "Tue, 8 Oct 2019 12:44:39 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
},
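A minimal way to gather what is asked for above, i.e. the memory-related settings and a plan showing actual row counts and buffer usage, is sketched below; "<problem query>" is a placeholder for the query that triggers the failure.

    SHOW shared_buffers;
    SHOW work_mem;
    SHOW maintenance_work_mem;
    EXPLAIN (ANALYZE, BUFFERS) <problem query>;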
{
"msg_contents": "On Tue, Oct 8, 2019 at 12:44 PM Merlin Moncure <[email protected]> wrote:\n> On Sun, Apr 14, 2019 at 3:51 PM Gunther <[email protected]> wrote:\n> >\n> > For weeks now, I am banging my head at an \"out of memory\" situation. There is only one query I am running on an 8 GB system, whatever I try, I get knocked out on this out of memory. It is extremely impenetrable to understand and fix this error. I guess I could add a swap file, and then I would have to take the penalty of swapping. But how can I actually address an out of memory condition if the system doesn't tell me where it is happening?\n> > We can't really see anything too worrisome. There is always lots of memory used by cache, which could have been mobilized. The only possible explanation I can think of is that in that moment of the crash the memory utilization suddenly skyrocketed in less than a second, so that the 2 second vmstat interval wouldn't show it??? Nah.\n> >\n> > I have already much reduced work_mem, which has helped in some other cases before. Now I am going to reduce the shared_buffers now, but that seems counter-intuitive because we are sitting on all that cache memory unused!\n> >\n> > Might this be a bug? It feels like a bug. It feels like those out of memory issues should be handled more gracefully (garbage collection attempt?) and that somehow there should be more information so the person can do anything about it.\n>\n> I kind of agree that nothing according to vmstat suggests you have a\n> problem. One thing you left out is the precise mechanics of the\n> failure; is the database getting nuked by the oom killer? Do you have\n> the logs?\n\noops, I missed quite a bit of context upthread. sorry for repeat noise.\n\nmerlin\n\n\n",
"msg_date": "Tue, 8 Oct 2019 12:50:32 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of Memory errors are frustrating as heck!"
}
] |
[
{
"msg_contents": "Dear all\nI have a postgres db have a big data so I seek for any enhancement in\nperformance and I think for iscsi disks so the question is :-\nis iscsi will increase db performance on read and write ....etc\n2- what is the best config to make use of iscsi disk - I mean\npostgresql.conf configuration\n\nthanks all\n\nDear all I have a postgres db have a big data so I seek for any enhancement in performance and I think for iscsi disks so the question is :- is iscsi will increase db performance on read and write ....etc 2- what is the best config to make use of iscsi disk - I mean postgresql.conf configurationthanks all",
"msg_date": "Mon, 15 Apr 2019 17:02:20 +0200",
"msg_from": "Mahmoud Moharam <[email protected]>",
"msg_from_op": true,
"msg_subject": "iscsi performance"
},
{
"msg_contents": "---------- Forwarded message ---------\nFrom: Mahmoud Moharam <[email protected]>\nDate: Mon, Apr 15, 2019 at 5:02 PM\nSubject: iscsi performance\nTo: Pgsql-admin <[email protected]>\n\n\nDear all\nI have a postgres db have a big data so I seek for any enhancement in\nperformance and I think for iscsi disks so the question is :-\nis iscsi will increase db performance on read and write ....etc\n2- what is the best config to make use of iscsi disk - I mean\npostgresql.conf configuration\n\nthanks all\n\n---------- Forwarded message ---------From: Mahmoud Moharam <[email protected]>Date: Mon, Apr 15, 2019 at 5:02 PMSubject: iscsi performanceTo: Pgsql-admin <[email protected]>Dear all I have a postgres db have a big data so I seek for any enhancement in performance and I think for iscsi disks so the question is :- is iscsi will increase db performance on read and write ....etc 2- what is the best config to make use of iscsi disk - I mean postgresql.conf configurationthanks all",
"msg_date": "Thu, 18 Apr 2019 08:15:17 +0200",
"msg_from": "Mahmoud Moharam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: iscsi performance"
}
] |
[
{
"msg_contents": "Hello team,\n\nWe have a postgres11.2 on docker and we are migrating a kb_rep database from postgres 9.6 to postgres 11.2 via pg_dump/pg_restore\nWe have created a kb_rep schema in postgres 11.2 also but during pg_restore there is an error \"pg_restore: [archiver (db)] connection to database \"kb_rep\" failed: FATAL: database \"kb_rep\" does not exist\"\n\nSee below:\n\npsql (11.2 (Ubuntu 11.2-1.pgdg18.04+1))\nType \"help\" for help.\n\npostgres=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileges\n-----------+----------+----------+---------+---------+-----------------------\nkb_rep | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\nkbdb | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\npostgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\ntemplate0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +\n | | | | | postgres=CTc/postgres\ntemplate1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +\n | | | | | postgres=CTc/postgres\n(5 rows)\n\npostgres=#\n\n\npostgres@b06a42b503e9:/$ pg_restore -h 10.29.50.21 -p 5432 -d kb_rep -v /var/lib/kb_rep_backup16\npg_restore: connecting to database for restore\nPassword:\npg_restore: [archiver (db)] connection to database \"kb_rep\" failed: FATAL: database \"kb_rep\" does not exist\npostgres@b06a42b503e9:/$\n\nPlease help on this issue.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\n\nHello team, \n \nWe have a postgres11.2 on docker and we are migrating a kb_rep database from postgres 9.6 to postgres 11.2 via pg_dump/pg_restore\nWe have created a kb_rep schema in postgres 11.2 also but during pg_restore there is an error “pg_restore: [archiver (db)] connection to database \"kb_rep\" failed: FATAL: database \"kb_rep\" does not exist”\n \nSee below:\n \npsql (11.2 (Ubuntu 11.2-1.pgdg18.04+1))\nType \"help\" for help.\n \npostgres=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileges\n-----------+----------+----------+---------+---------+-----------------------\nkb_rep | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\nkbdb | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\npostgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\ntemplate0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +\n | | | | | postgres=CTc/postgres\ntemplate1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +\n | | | | | postgres=CTc/postgres\n(5 rows)\n \npostgres=#\n \n \npostgres@b06a42b503e9:/$ pg_restore -h 10.29.50.21 -p 5432 -d kb_rep -v /var/lib/kb_rep_backup16\npg_restore: connecting to database for restore\nPassword:\npg_restore: [archiver (db)] connection to database \"kb_rep\" failed: FATAL: database \"kb_rep\" does not exist\npostgres@b06a42b503e9:/$\n \nPlease help on this issue.\n \nRegards,\nDaulat",
"msg_date": "Tue, 16 Apr 2019 17:41:31 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres backup & restore"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 05:41:31PM +0000, Daulat Ram wrote:\n> postgres=# \\l\n> kb_rep | postgres | UTF8 | C.UTF-8 | C.UTF-8 |\n\n> postgres@b06a42b503e9:/$ pg_restore -h 10.29.50.21 -p 5432 -d kb_rep -v /var/lib/kb_rep_backup16\n> pg_restore: [archiver (db)] connection to database \"kb_rep\" failed: FATAL: database \"kb_rep\" does not exist\n\nAre you sure this DB is the one running on 10.29.50.21 port 5432 ?\n\nJustin\n\n\n",
"msg_date": "Tue, 16 Apr 2019 12:54:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres backup & restore"
}
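One quick way to answer that question is to connect to the same host and port that pg_restore is being given and list the databases the server actually knows about. A rough sketch via JDBC is below (running psql -l against the same host shows the same thing); the credentials are placeholders, only the host and port are taken from the post.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ListDatabases {
    public static void main(String[] args) throws Exception {
        // Connect to the maintenance database on the host/port given to pg_restore.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://10.29.50.21:5432/postgres", "postgres", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT datname FROM pg_database ORDER BY datname")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

If kb_rep is missing from that list, the \l output shown earlier came from a different instance than the one pg_restore is reaching.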
] |
[
{
"msg_contents": "Hi,\r\n\r\nI am working on PostgreSQL 10.5 and I have a discrepancy between clients regarding parallelism feature.\r\n\r\nFor a simple query (say a simple SELECT COUNT(*) FROM BIG_TABLE), I can see PostgreSQL use parallelism when the query is launched from psql or PgAdmin4. However the same query launched with DBeaver (ie connected through JDBC) does not use parallelism. \r\n\r\nSELECT current_setting('max_parallel_workers_per_gather') gives 10 from my session.\r\n\r\nIs there a client configuration that prevents from using parallelism ?\r\n\r\nThanks.\r\n\r\nLaurent\r\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.\n\n",
"msg_date": "Wed, 17 Apr 2019 06:30:28 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "\n\nAm 17.04.19 um 08:30 schrieb [email protected]:\n> SELECT current_setting('max_parallel_workers_per_gather') gives 10 from my session.\n>\n> Is there a client configuration that prevents from using parallelism ?\nunlikely.\n\nif i were you, i would compare all settings, using the different client \nsoftware. (show all, and compare)\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 10:07:09 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "[email protected] schrieb am 17.04.2019 um 08:30:\n> I am working on PostgreSQL 10.5 and I have a discrepancy between clients regarding parallelism feature.\n> \n> For a simple query (say a simple SELECT COUNT(*) FROM BIG_TABLE), I\n> can see PostgreSQL use parallelism when the query is launched from\n> psql or PgAdmin4. However the same query launched with DBeaver (ie\n> connected through JDBC) does not use parallelism.\n> \n> SELECT current_setting('max_parallel_workers_per_gather') gives 10\n> from my session.\n> \n> Is there a client configuration that prevents from using parallelism?\n\nMaybe DBeaver wraps the statement for some reason? (I have seen SQL clients do that)\nA CTE would prevent parallelism. \n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 10:33:43 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "På onsdag 17. april 2019 kl. 08:30:28, skrev <[email protected] \n<mailto:[email protected]>>: Hi,\n\n I am working on PostgreSQL 10.5 and I have a discrepancy between clients \nregarding parallelism feature.\n\n For a simple query (say a simple SELECT COUNT(*) FROM BIG_TABLE), I can see \nPostgreSQL use parallelism when the query is launched from psql or PgAdmin4. \nHowever the same query launched with DBeaver (ie connected through JDBC) does \nnot use parallelism.\n\n SELECT current_setting('max_parallel_workers_per_gather') gives 10 from my \nsession.\n\n Is there a client configuration that prevents from using parallelism ?\n\n Thanks.\n\n Laurent Set in postgresql.conf: log_statement = 'all' reload settings and \ncheck the logs for what statemets are acutally issued. -- Andreas Joseph Krogh \nCTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Wed, 17 Apr 2019 11:07:51 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Sv: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "Thanks for the tip. I have compared all settings and they are identical.\r\n\r\nVery strange.\r\n\r\n-----Message d'origine-----\r\nDe : Andreas Kretschmer [mailto:[email protected]] \r\nEnvoyé : mercredi 17 avril 2019 10:07\r\nÀ : [email protected]\r\nObjet : Re: Pg10 : Client Configuration for Parallelism ?\r\n\r\n\r\n\r\nAm 17.04.19 um 08:30 schrieb [email protected]:\r\n> SELECT current_setting('max_parallel_workers_per_gather') gives 10 from my session.\r\n>\r\n> Is there a client configuration that prevents from using parallelism ?\r\nunlikely.\r\n\r\nif i were you, i would compare all settings, using the different client \r\nsoftware. (show all, and compare)\r\n\r\n\r\n\r\nRegards, Andreas\r\n\r\n-- \r\n2ndQuadrant - The PostgreSQL Support Company.\r\nwww.2ndQuadrant.com\r\n\r\n\r\n\r\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.\n\n",
"msg_date": "Wed, 17 Apr 2019 09:22:40 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "As answered to Andreas Kretschmer all settings are identical.\r\n\r\nI have made some other tests, even testing a basic jdbc program (open connection, execute statement, display result, close connection)\r\n\r\nHere are the logs (with log_error_verbosity = verbose) :\r\n\r\n<DBEAVER>\r\n2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\r\n2019-04-17 11:31:08 CEST;35895;thedbuser;thedb;00000;LOG: 00000: duration: 25950.908 ms\r\n2019-04-17 11:31:08 CEST;35895;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:2031\r\n\r\n<BASIC JDBC>\r\n2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\r\n2019-04-17 11:31:32 CEST;37257;thedbuser;thedb;00000;LOG: 00000: duration: 11459.943 ms\r\n2019-04-17 11:31:32 CEST;37257;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:2031\r\n\r\n<PGADMIN4>\r\n2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: statement: SELECT COUNT(1) FROM big_table;\r\n2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: exec_simple_query, postgres.c:940\r\n2019-04-17 11:33:08 CEST;37324;thedbuser;thedb;00000;LOG: 00000: duration: 11334.677 ms\r\n2019-04-17 11:33:08 CEST;37313;thedbuser;thedb;00000;LOG: 00000: statement: SELECT oid, format_type(oid, NULL) AS typname FROM pg_type WHERE oid IN (20) ORDER BY oid;\r\n2019-04-17 11:33:08 CEST;37313;thedbuser;thedb;00000;LOCATION: exec_simple_query, postgres.c:940\r\n2019-04-17 11:33:08 CEST;37313;thedbuser;thedb;00000;LOG: 00000: duration: 0.900 ms\r\n2019-04-17 11:33:08 CEST;37313;thedbuser;thedb;00000;LOCATION: exec_simple_query, postgres.c:1170\r\n\r\nI don’t see any difference a part from the query duration. Note that while monitoring the server I saw that there was parallelism with JDBC program and PGAdmin4, but not with Dbeaver. And the JDBC driver is the same in both “Basic JDBC” and DBeaver.\r\n\r\nRegards.\r\n\r\nLaurent.\r\n\r\n\r\n\r\nDe : Andreas Joseph Krogh [mailto:[email protected]]\r\nEnvoyé : mercredi 17 avril 2019 11:08\r\nÀ : [email protected]\r\nObjet : Sv: Pg10 : Client Configuration for Parallelism ?\r\n\r\nPå onsdag 17. april 2019 kl. 08:30:28, skrev <[email protected]<mailto:[email protected]>>:\r\nHi,\r\n\r\nI am working on PostgreSQL 10.5 and I have a discrepancy between clients regarding parallelism feature.\r\n\r\nFor a simple query (say a simple SELECT COUNT(*) FROM BIG_TABLE), I can see PostgreSQL use parallelism when the query is launched from psql or PgAdmin4. 
However the same query launched with DBeaver (ie connected through JDBC) does not use parallelism.\r\n\r\nSELECT current_setting('max_parallel_workers_per_gather') gives 10 from my session.\r\n\r\nIs there a client configuration that prevents from using parallelism ?\r\n\r\nThanks.\r\n\r\nLaurent\r\n\r\nSet in postgresql.conf:\r\n\r\nlog_statement = 'all'\r\n\r\nreload settings and check the logs for what statemets are acutally issued.\r\n\r\n--\r\nAndreas Joseph Krogh\r\nCTO / Partner - Visena AS\r\nMobile: +47 909 56 963\r\[email protected]<mailto:[email protected]>\r\nwww.visena.com<https://www.visena.com>\r\n[cid:[email protected]]<https://www.visena.com>\r\n\r\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.",
"msg_date": "Wed, 17 Apr 2019 09:51:02 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "\n\nAm 17.04.19 um 11:51 schrieb [email protected]:\n>\n> Here are the logs (with log_error_verbosity = verbose) :\n>\n> <DBEAVER>\n>\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \n> execute <unnamed>: SELECT COUNT(1) FROM big_table\n>\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: \n> exec_execute_message, postgres.c:1959\n>\n> 2019-04-17 11:31:08 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \n> duration: 25950.908 ms\n>\n> <BASIC JDBC>\n>\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \n> execute <unnamed>: SELECT COUNT(1) FROM big_table\n>\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: \n> exec_execute_message, postgres.c:1959\n>\n> 2019-04-17 11:31:32 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \n> duration: 11459.943 ms\n>\n>\n> <PGADMIN4>\n>\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \n> statement: SELECT COUNT(1) FROM big_table;\n>\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: \n> exec_simple_query, postgres.c:940\n>\n> 2019-04-17 11:33:08 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \n> duration: 11334.677 ms\n>\n>\n\nThat's compareable. The first one took more time, cold cache. The 2nd \nand 3rd are faster, warm cache.\n\nBut: we can't see if the execution is paralell or not. If you want to \nknow that, install and use auto_explain.\n\n\nRegards, Andreas\n\n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 12:38:57 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "I can see whether there is parallelism with pg_top or barely top on the server. \r\n\r\n<DBEAVER>\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n 38584 postgres 20 0 8863828 8.153g 8.151g R 100.0 3.2 1:23.01 postgres\r\n 10 root 20 0 0 0 0 S 0.3 0.0 88:07.26 rcu_sched\r\n\r\n<BASIC JDBC>\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n 46687 postgres 20 0 8864620 0.978g 0.977g S 38.5 0.4 0:01.16 postgres\r\n 46689 postgres 20 0 8864348 996.4m 995.1m R 38.5 0.4 0:01.16 postgres\r\n 46690 postgres 20 0 8864348 987.2m 985.8m S 38.5 0.4 0:01.16 postgres\r\n 46691 postgres 20 0 8864348 998436 997084 R 38.5 0.4 0:01.16 postgres\r\n 46692 postgres 20 0 8864348 982612 981260 S 38.5 0.4 0:01.16 postgres\r\n 46693 postgres 20 0 8864348 979.9m 978.6m R 38.5 0.4 0:01.16 postgres\r\n 46694 postgres 20 0 8864348 987.9m 986.6m S 38.5 0.4 0:01.16 postgres\r\n 46696 postgres 20 0 8864348 996864 995512 S 38.5 0.4 0:01.16 postgres\r\n 46688 postgres 20 0 8864348 982.3m 981.0m R 38.2 0.4 0:01.15 postgres\r\n 46695 postgres 20 0 8864348 986.9m 985.6m S 38.2 0.4 0:01.15 postgres\r\n 21323 postgres 20 0 8862788 8.096g 8.095g S 0.7 3.2 2:24.75 postgres\r\n 46682 postgres 20 0 157996 2596 1548 R 0.7 0.0 0:00.05 top\r\n\r\nThis is not a matter of cache. If I execute the queries in a different order the result will be the same : DBeaver query is longer.\r\n\r\nThere is something in documentation that says that there won't be parallelism if \" The client sends an Execute message with a non-zero fetch count.\"\r\nI am not sure what this sentence means. \r\n\r\n-----Message d'origine-----\r\nDe : Andreas Kretschmer [mailto:[email protected]] \r\nEnvoyé : mercredi 17 avril 2019 12:39\r\nÀ : [email protected]\r\nObjet : Re: Pg10 : Client Configuration for Parallelism ?\r\n\r\n\r\n\r\nAm 17.04.19 um 11:51 schrieb [email protected]:\r\n>\r\n> Here are the logs (with log_error_verbosity = verbose) :\r\n>\r\n> <DBEAVER>\r\n>\r\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \r\n> execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n>\r\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: \r\n> exec_execute_message, postgres.c:1959\r\n>\r\n> 2019-04-17 11:31:08 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 25950.908 ms\r\n>\r\n> <BASIC JDBC>\r\n>\r\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \r\n> execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n>\r\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: \r\n> exec_execute_message, postgres.c:1959\r\n>\r\n> 2019-04-17 11:31:32 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 11459.943 ms\r\n>\r\n>\r\n> <PGADMIN4>\r\n>\r\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \r\n> statement: SELECT COUNT(1) FROM big_table;\r\n>\r\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: \r\n> exec_simple_query, postgres.c:940\r\n>\r\n> 2019-04-17 11:33:08 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 11334.677 ms\r\n>\r\n>\r\n\r\nThat's compareable. The first one took more time, cold cache. The 2nd \r\nand 3rd are faster, warm cache.\r\n\r\nBut: we can't see if the execution is paralell or not. 
If you want to \r\nknow that, install and use auto_explain.\r\n\r\n\r\nRegards, Andreas\r\n\r\n\r\n\r\n-- \r\n2ndQuadrant - The PostgreSQL Support Company.\r\nwww.2ndQuadrant.com\r\n\r\n\r\n\r\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.\n\n",
"msg_date": "Wed, 17 Apr 2019 11:26:07 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "On 4/17/2019 4:33, Thomas Kellerer wrote:\n> A CTE would prevent parallelism.\n\nYou mean like always? His\n\nSELECT count(1) FROM BigTable\n\nwould be parallel if run alone but as\n\nWITH Data AS (SELECT count(1) FROM BigTable) SELECT * FROM Data\n\nnothing would be parallel any more? How about:\n\nSELECT * FROM (SELECT count(1) FROM BigTable) x\n\nParallel or not?\n\n-Gunther\n\n\n\n\n\n\n\nOn 4/17/2019 4:33, Thomas Kellerer\n wrote:\n\n\nA CTE would prevent parallelism. \n\nYou mean like always? His \n\nSELECT count(1) FROM BigTable \n\nwould be parallel if run alone but as \n\nWITH Data AS (SELECT count(1) FROM BigTable) SELECT * FROM Data \n\nnothing would be parallel any more? How about:\nSELECT * FROM (SELECT count(1) FROM BigTable) x\nParallel or not?\n\n-Gunther",
"msg_date": "Wed, 17 Apr 2019 07:55:14 -0400",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "By the way\n\nOn 4/17/2019 7:26, [email protected] wrote:\n> I can see whether there is parallelism with pg_top or barely top on the server.\n>\n> <DBEAVER>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 38584 postgres 20 0 8863828 8.153g 8.151g R 100.0 3.2 1:23.01 postgres\n> 10 root 20 0 0 0 0 S 0.3 0.0 88:07.26 rcu_sched\n>\n> <BASIC JDBC>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 46687 postgres 20 0 8864620 0.978g 0.977g S 38.5 0.4 0:01.16 postgres\n> 46689 postgres 20 0 8864348 996.4m 995.1m R 38.5 0.4 0:01.16 postgres\n> 46690 postgres 20 0 8864348 987.2m 985.8m S 38.5 0.4 0:01.16 postgres\n> 46691 postgres 20 0 8864348 998436 997084 R 38.5 0.4 0:01.16 postgres\n> ...\n> 46682 postgres 20 0 157996 2596 1548 R 0.7 0.0 0:00.05 top\n\nIf you just use top with the -c option, you will see each postgres \nprocess identify itself as to its role, e.g.\n\npostgres: parallel worker for PID 46687\n\nor\n\npostgres: SELECT ...\n\nor\n\npostgres: wal writer\n\nextremely useful this.\n\n-Gunther\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 07:58:58 -0400",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
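If running top on the server is not an option, the same observation can be made from SQL: pg_stat_activity reports a backend_type of 'parallel worker' (PostgreSQL 10 and later) for each worker while it is running. Below is a rough JDBC sketch that polls that view while the test query runs in another session; the connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ParallelWorkerCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/thedb", "thedbuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT count(*) FROM pg_stat_activity " +
                 "WHERE backend_type = 'parallel worker'")) {
            // Poll a few times while the big query runs in another session.
            for (int i = 0; i < 10; i++) {
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    System.out.println("parallel workers active: " + rs.getLong(1));
                }
                Thread.sleep(1000);
            }
        }
    }
}
```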
{
"msg_contents": "Auto explain shows that in both cases there are workers planned, but with DBeaver they are not launched.\r\n\r\nHere's what I get with auto_explain : \r\n\r\n<DBEAVER> \r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOG: 00000: duration: 0.095 ms\r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOCATION: exec_parse_message, postgres.c:1433\r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOG: 00000: duration: 0.191 ms\r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOCATION: exec_bind_message, postgres.c:1813\r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n2019-04-17 14:46:09 CEST;54882;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\r\n2019-04-17 14:46:45 CEST;54882;thedbuser;thedb;00000;LOG: 00000: duration: 35842.146 ms\r\n2019-04-17 14:46:45 CEST;54882;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:2031\r\n2019-04-17 14:46:45 CEST;54882;thedbuser;thedb;00000;LOG: 00000: duration: 35842.110 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Finalize Aggregate (cost=3081157.61..3081157.62 rows=1 width=8) (actual time=35842.072..35842.072 rows=1 loops=1)\r\n Output: count(1)\r\n -> Gather (cost=3081156.68..3081157.59 rows=9 width=8) (actual time=35842.062..35842.062 rows=1 loops=1)\r\n Output: (PARTIAL count(1))\r\n Workers Planned: 9\r\n Workers Launched: 0\r\n -> Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=35842.060..35842.060 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.036..24038.340 rows=183778867 loops=1)\r\n Heap Fetches: 57043846\r\n2019-04-17 14:46:45 CEST;54882;thedbuser;thedb;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n\r\n<BASIC JDBC>\r\n2019-04-17 14:47:39 CEST;55222;thedbuser;thedb;00000;LOCATION: exec_parse_message, postgres.c:1433\r\n2019-04-17 14:47:39 CEST;55222;thedbuser;thedb;00000;LOG: 00000: duration: 2.077 ms\r\n2019-04-17 14:47:39 CEST;55222;thedbuser;thedb;00000;LOCATION: exec_bind_message, postgres.c:1813\r\n2019-04-17 14:47:39 CEST;55222;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n2019-04-17 14:47:39 CEST;55222;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\r\n2019-04-17 14:47:50 CEST;55235;;;00000;LOG: 00000: duration: 11317.118 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11317.095..11317.095 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.135..10036.104 rows=18161056 loops=1)\r\n Heap Fetches: 5569541\r\n2019-04-17 14:47:50 CEST;55235;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55236;;;00000;LOG: 00000: duration: 11316.071 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11316.043..11316.043 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.171..10000.782 rows=18377525 loops=1)\r\n Heap Fetches: 5735254\r\n2019-04-17 14:47:50 CEST;55236;;;00000;LOCATION: explain_ExecutorEnd, 
auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55237;;;00000;LOG: 00000: duration: 11315.871 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11315.851..11315.852 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.140..10042.102 rows=18082389 loops=1)\r\n Heap Fetches: 5579176\r\n2019-04-17 14:47:50 CEST;55237;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55232;;;00000;LOG: 00000: duration: 11317.573 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11317.553..11317.553 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.115..10047.908 rows=18732838 loops=1)\r\n Heap Fetches: 5849965\r\n2019-04-17 14:47:50 CEST;55232;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55234;;;00000;LOG: 00000: duration: 11317.221 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11317.202..11317.202 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.116..10027.937 rows=18517339 loops=1)\r\n Heap Fetches: 5849910\r\n2019-04-17 14:47:50 CEST;55234;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55238;;;00000;LOG: 00000: duration: 11316.571 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11316.553..11316.554 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.111..10047.353 rows=18722306 loops=1)\r\n Heap Fetches: 5829235\r\n2019-04-17 14:47:50 CEST;55238;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55230;;;00000;LOG: 00000: duration: 11320.223 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11320.198..11320.198 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.132..10040.186 rows=18164384 loops=1)\r\n Heap Fetches: 5585309\r\n2019-04-17 14:47:50 CEST;55230;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55231;;;00000;LOG: 00000: duration: 11319.001 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11318.981..11318.981 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.157..10035.136 rows=18189018 loops=1)\r\n Heap Fetches: 5638358\r\n2019-04-17 14:47:50 CEST;55231;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:50 CEST;55233;;;00000;LOG: 00000: duration: 11317.772 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Partial 
Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11317.750..11317.751 rows=1 loops=1)\r\n Output: PARTIAL count(1)\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.113..10036.766 rows=18198240 loops=1)\r\n Heap Fetches: 5627716\r\n2019-04-17 14:47:50 CEST;55233;;;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n2019-04-17 14:47:51 CEST;55222;thedbuser;thedb;00000;LOG: 00000: duration: 11735.201 ms\r\n2019-04-17 14:47:51 CEST;55222;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:2031\r\n2019-04-17 14:47:51 CEST;55222;thedbuser;thedb;00000;LOG: 00000: duration: 11735.174 ms plan:\r\n Query Text: SELECT COUNT(1) FROM big_table\r\n Finalize Aggregate (cost=3081157.61..3081157.62 rows=1 width=8) (actual time=11326.891..11326.891 rows=1 loops=1)\r\n Output: count(1)\r\n -> Gather (cost=3081156.68..3081157.59 rows=9 width=8) (actual time=11325.571..11735.108 rows=10 loops=1)\r\n Output: (PARTIAL count(1))\r\n Workers Planned: 9\r\n Workers Launched: 9\r\n -> Partial Aggregate (cost=3080156.68..3080156.69 rows=1 width=8) (actual time=11318.223..11318.223 rows=1 loops=10)\r\n Output: PARTIAL count(1)\r\n Worker 0: actual time=11316.553..11316.554 rows=1 loops=1\r\n Worker 1: actual time=11315.851..11315.852 rows=1 loops=1\r\n Worker 2: actual time=11316.043..11316.043 rows=1 loops=1\r\n Worker 3: actual time=11317.095..11317.095 rows=1 loops=1\r\n Worker 4: actual time=11317.202..11317.202 rows=1 loops=1\r\n Worker 5: actual time=11317.750..11317.751 rows=1 loops=1\r\n Worker 6: actual time=11317.553..11317.553 rows=1 loops=1\r\n Worker 7: actual time=11318.981..11318.981 rows=1 loops=1\r\n Worker 8: actual time=11320.198..11320.198 rows=1 loops=1\r\n -> Parallel Index Only Scan using idx_big_table__inact on big_table (cost=0.57..3029148.07 rows=20403444 width=0) (actual time=0.131..10036.680 rows=18377887 loops=10)\r\n Heap Fetches: 5779382\r\n Worker 0: actual time=0.111..10047.353 rows=18722306 loops=1\r\n Worker 1: actual time=0.140..10042.102 rows=18082389 loops=1\r\n Worker 2: actual time=0.171..10000.782 rows=18377525 loops=1\r\n Worker 3: actual time=0.135..10036.104 rows=18161056 loops=1\r\n Worker 4: actual time=0.116..10027.937 rows=18517339 loops=1\r\n Worker 5: actual time=0.113..10036.766 rows=18198240 loops=1\r\n Worker 6: actual time=0.115..10047.908 rows=18732838 loops=1\r\n Worker 7: actual time=0.157..10035.136 rows=18189018 loops=1\r\n Worker 8: actual time=0.132..10040.186 rows=18164384 loops=1\r\n2019-04-17 14:47:51 CEST;55222;thedbuser;thedb;00000;LOCATION: explain_ExecutorEnd, auto_explain.c:359\r\n\r\n\r\n\r\n-----Message d'origine-----\r\nDe : DECHAMBE Laurent DTSI/DSI \r\nEnvoyé : mercredi 17 avril 2019 13:26\r\nÀ : 'Andreas Kretschmer'; [email protected]\r\nObjet : RE: Pg10 : Client Configuration for Parallelism ?\r\n\r\nI can see whether there is parallelism with pg_top or barely top on the server. 
\r\n\r\n<DBEAVER>\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n 38584 postgres 20 0 8863828 8.153g 8.151g R 100.0 3.2 1:23.01 postgres\r\n 10 root 20 0 0 0 0 S 0.3 0.0 88:07.26 rcu_sched\r\n\r\n<BASIC JDBC>\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n 46687 postgres 20 0 8864620 0.978g 0.977g S 38.5 0.4 0:01.16 postgres\r\n 46689 postgres 20 0 8864348 996.4m 995.1m R 38.5 0.4 0:01.16 postgres\r\n 46690 postgres 20 0 8864348 987.2m 985.8m S 38.5 0.4 0:01.16 postgres\r\n 46691 postgres 20 0 8864348 998436 997084 R 38.5 0.4 0:01.16 postgres\r\n 46692 postgres 20 0 8864348 982612 981260 S 38.5 0.4 0:01.16 postgres\r\n 46693 postgres 20 0 8864348 979.9m 978.6m R 38.5 0.4 0:01.16 postgres\r\n 46694 postgres 20 0 8864348 987.9m 986.6m S 38.5 0.4 0:01.16 postgres\r\n 46696 postgres 20 0 8864348 996864 995512 S 38.5 0.4 0:01.16 postgres\r\n 46688 postgres 20 0 8864348 982.3m 981.0m R 38.2 0.4 0:01.15 postgres\r\n 46695 postgres 20 0 8864348 986.9m 985.6m S 38.2 0.4 0:01.15 postgres\r\n 21323 postgres 20 0 8862788 8.096g 8.095g S 0.7 3.2 2:24.75 postgres\r\n 46682 postgres 20 0 157996 2596 1548 R 0.7 0.0 0:00.05 top\r\n\r\nThis is not a matter of cache. If I execute the queries in a different order the result will be the same : DBeaver query is longer.\r\n\r\nThere is something in documentation that says that there won't be parallelism if \" The client sends an Execute message with a non-zero fetch count.\"\r\nI am not sure what this sentence means. \r\n\r\n-----Message d'origine-----\r\nDe : Andreas Kretschmer [mailto:[email protected]] \r\nEnvoyé : mercredi 17 avril 2019 12:39\r\nÀ : [email protected]\r\nObjet : Re: Pg10 : Client Configuration for Parallelism ?\r\n\r\n\r\n\r\nAm 17.04.19 um 11:51 schrieb [email protected]:\r\n>\r\n> Here are the logs (with log_error_verbosity = verbose) :\r\n>\r\n> <DBEAVER>\r\n>\r\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \r\n> execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n>\r\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: \r\n> exec_execute_message, postgres.c:1959\r\n>\r\n> 2019-04-17 11:31:08 CEST;35895;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 25950.908 ms\r\n>\r\n> <BASIC JDBC>\r\n>\r\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \r\n> execute <unnamed>: SELECT COUNT(1) FROM big_table\r\n>\r\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: \r\n> exec_execute_message, postgres.c:1959\r\n>\r\n> 2019-04-17 11:31:32 CEST;37257;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 11459.943 ms\r\n>\r\n>\r\n> <PGADMIN4>\r\n>\r\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \r\n> statement: SELECT COUNT(1) FROM big_table;\r\n>\r\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: \r\n> exec_simple_query, postgres.c:940\r\n>\r\n> 2019-04-17 11:33:08 CEST;37324;thedbuser;thedb;00000;LOG: 00000: \r\n> duration: 11334.677 ms\r\n>\r\n>\r\n\r\nThat's compareable. The first one took more time, cold cache. The 2nd \r\nand 3rd are faster, warm cache.\r\n\r\nBut: we can't see if the execution is paralell or not. 
If you want to \r\nknow that, install and use auto_explain.\r\n\r\n\r\nRegards, Andreas\r\n\r\n\r\n\r\n-- \r\n2ndQuadrant - The PostgreSQL Support Company.\r\nwww.2ndQuadrant.com\r\n\r\n\r\n\r\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.\n\n",
"msg_date": "Wed, 17 Apr 2019 12:56:39 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 09:51:02AM +0000, [email protected] wrote:\n> <DBEAVER>\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\n\n\"execute\" means it's using the extended protocol.\nhttps://www.postgresql.org/docs/11/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY\n\n> <BASIC JDBC>\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\n\nSame.\n\n> <PGADMIN4>\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: statement: SELECT COUNT(1) FROM big_table;\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: exec_simple_query, postgres.c:940\n\nThis is a \"simple query\", not using the \"extended protocol\".\n\nOn Wed, Apr 17, 2019 at 11:26:07AM +0000, [email protected] wrote:\n> There is something in documentation that says that there won't be parallelism if \" The client sends an Execute message with a non-zero fetch count.\"\n> I am not sure what this sentence means. \n\nThis is likely the cause of the difference.\n\nCould you run wireshark to watch the protocol traffic ?\n\nI think it'll show that dbeaver is retrieving a portion of the result set.\n\nJustin\n\n\n",
"msg_date": "Wed, 17 Apr 2019 08:57:18 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "Hello Justin and thank you for your clues.\n\nFinally I found that putting blank to the option that limits the number of rows to retrieve (which is normal for this kind of tool) allows PostgreSQL to parallelize the query.\n\nOn jdbc it seems this is equivalent to write :\nstatement. setMaxRows(0); // parallelism authorized, which is the default.\n\nThus on my jdbc basic program if I add :\nstatement. setMaxRows(100); // No parallelism allowed (at least in Pg10)\n\nThanks to all who were kind enough to help.\n\nLaurent\n\n-----Message d'origine-----\nDe : Justin Pryzby [mailto:[email protected]] \nEnvoyé : mercredi 17 avril 2019 15:57\nÀ : DECHAMBE Laurent DTSI/DSI\nCc : Andreas Joseph Krogh; [email protected]\nObjet : Re: Pg10 : Client Configuration for Parallelism ?\n\nOn Wed, Apr 17, 2019 at 09:51:02AM +0000, [email protected] wrote:\n> <DBEAVER>\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\n> 2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\n\n\"execute\" means it's using the extended protocol.\nhttps://www.postgresql.org/docs/11/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY\n\n> <BASIC JDBC>\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOG: 00000: execute <unnamed>: SELECT COUNT(1) FROM big_table\n> 2019-04-17 11:31:20 CEST;37257;thedbuser;thedb;00000;LOCATION: exec_execute_message, postgres.c:1959\n\nSame.\n\n> <PGADMIN4>\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOG: 00000: statement: SELECT COUNT(1) FROM big_table;\n> 2019-04-17 11:32:56 CEST;37324;thedbuser;thedb;00000;LOCATION: exec_simple_query, postgres.c:940\n\nThis is a \"simple query\", not using the \"extended protocol\".\n\nOn Wed, Apr 17, 2019 at 11:26:07AM +0000, [email protected] wrote:\n> There is something in documentation that says that there won't be parallelism if \" The client sends an Execute message with a non-zero fetch count.\"\n> I am not sure what this sentence means. \n\nThis is likely the cause of the difference.\n\nCould you run wireshark to watch the protocol traffic ?\n\nI think it'll show that dbeaver is retrieving a portion of the result set.\n\nJustin\n\n_________________________________________________________________________________________________________________________\n\nCe message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc\npas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler\na l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,\nOrange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law;\nthey should not be distributed, used or copied without authorisation.\nIf you have received this email in error, please notify the sender and delete this message and its attachments.\nAs emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.\nThank you.\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 14:33:28 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "[email protected] wrote:\n> There is something in documentation that says that there won't be parallelism\n> if \" The client sends an Execute message with a non-zero fetch count.\"\n> I am not sure what this sentence means.\n\nThe JDBC driver sends an \"Execute\" message to the server.\nhttps://www.postgresql.org/docs/current/protocol-message-formats.html says:\n\nExecute (F)\n\n Byte1('E')\n Identifies the message as an Execute command.\n Int32\n Length of message contents in bytes, including self.\n String\n The name of the portal to execute (an empty string selects the unnamed portal).\n Int32\n Maximum number of rows to return, if portal contains a query that returns rows\n (ignored otherwise). Zero denotes “no limit”.\n\nIf you use setMaxRows non-zero, that number is sent as the \"maximum number of rows\".\n\nParallelism currently cannot be used if there is a limit on the row count.\nImagine you want ten rows and already have nine, now if two workers are busy\ncalculating the next row, there is no good way to stop one of them when the other\nreturns a row.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 19:11:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "[email protected] schrieb am 17.04.2019 um 16:33:\n> Hello Justin and thank you for your clues.\n>\n> Finally I found that putting blank to the option that limits the\n> number of rows to retrieve (which is normal for this kind of tool)\n> allows PostgreSQL to parallelize the query.\n>\n> On jdbc it seems this is equivalent to write :\n> statement. setMaxRows(0); // parallelism authorized, which is the default.\n>\n> Thus on my jdbc basic program if I add :\n> statement. setMaxRows(100); // No parallelism allowed (at least in Pg10)\n>\n> Thanks to all who were kind enough to help.\n\nThis isn't limited to Statement.setMaxRows()\n\nIf you use \"LIMIT x\" in your SQL query, the same thing happens.\n\nThomas\n\n\n\n",
"msg_date": "Fri, 19 Apr 2019 08:52:24 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> [email protected] schrieb am 17.04.2019 um 16:33:\n>> On jdbc it seems this is equivalent to write :\n>> statement. setMaxRows(0); // parallelism authorized, which is the default.\n>> \n>> Thus on my jdbc basic program if I add :\n>> statement. setMaxRows(100); // No parallelism allowed (at least in Pg10)\n\n> This isn't limited to Statement.setMaxRows()\n> If you use \"LIMIT x\" in your SQL query, the same thing happens.\n\nNo, not true: queries with LIMIT x are perfectly parallelizable.\n\nThe trouble with the protocol-level limit (setMaxRows) is that it\nrequires being able to suspend the query and resume fetching rows\nlater. We don't allow that for parallel query because it would\ninvolve tying up vastly more resources, ie a bunch of worker\nprocesses, not just some extra memory in the client's own backend.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 09:43:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg10 : Client Configuration for Parallelism ?"
}
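A minimal JDBC sketch of the behaviour described above, assuming the standard PostgreSQL JDBC driver and a table named big_table; the connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxRowsParallelism {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/thedb", "thedbuser", "secret");
             Statement st = con.createStatement()) {

            // setMaxRows(0) means "no limit" in the protocol-level Execute
            // message, so the planner's Gather node can launch its workers.
            st.setMaxRows(0);
            runCount(st);

            // A non-zero setMaxRows is sent as a protocol-level fetch count;
            // the executor must be able to suspend and resume, so the planned
            // workers are never launched (Workers Launched: 0).
            st.setMaxRows(100);
            runCount(st);

            // A LIMIT written into the SQL itself is not a protocol-level
            // fetch count and does not, by itself, rule out parallelism.
            st.setMaxRows(0);
            try (ResultSet rs = st.executeQuery(
                    "SELECT * FROM big_table LIMIT 100")) {
                while (rs.next()) { /* consume rows */ }
            }
        }
    }

    private static void runCount(Statement st) throws Exception {
        try (ResultSet rs = st.executeQuery("SELECT COUNT(1) FROM big_table")) {
            rs.next();
            System.out.println("count = " + rs.getLong(1));
        }
    }
}
```

With auto_explain enabled, the COUNT run with setMaxRows(0) should report workers launched, while the same statement run with a non-zero setMaxRows should show "Workers Launched: 0", as in the plans earlier in the thread.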
] |
[
{
"msg_contents": "What would be the best filesystem to run PostgreSQL on, in Terms of Performance and data Integrity?\n\nBest regards,\n\nstephan\n\n\n\n\n\n\n\n\n\nWhat would be the best filesystem to run PostgreSQL on, in Terms of Performance and data Integrity?\n \nBest regards,\n \nstephan",
"msg_date": "Wed, 17 Apr 2019 20:59:13 +0000",
"msg_from": "Stephan Schmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best Filesystem for PostgreSQL "
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 08:59:13PM +0000, Stephan Schmidt wrote:\n> What would be the best filesystem to run PostgreSQL on, in Terms of Performance\n> and data Integrity?\n\nUh, which operating system? If it is Linux, many people like ext4 or\nxfs. Some like zfs. ext3/ext2 are not recommended due to fsync\nperformance.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:07:21 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best Filesystem for PostgreSQL"
},
{
"msg_contents": "> best filesystem to run PostgreSQL on, in Terms of Performance ...\n\ntest: PostgreSQL v10.3 + Linux 5.0 File-System Benchmarks: Btrfs vs.\nEXT4 vs. F2FS vs. XFS\nhttps://www.phoronix.com/scan.php?page=article&item=linux-50-filesystems&num=3\n\nImre\n\n\nStephan Schmidt <[email protected]> ezt írta (időpont: 2019. ápr. 17.,\nSze, 22:59):\n\n> What would be the best filesystem to run PostgreSQL on, in Terms of\n> Performance and data Integrity?\n>\n>\n>\n> Best regards,\n>\n>\n>\n> stephan\n>\n\n> best filesystem to run PostgreSQL on, in Terms of Performance ...test: PostgreSQL v10.3 + Linux 5.0 File-System Benchmarks: Btrfs vs. EXT4 vs. F2FS vs. XFShttps://www.phoronix.com/scan.php?page=article&item=linux-50-filesystems&num=3ImreStephan Schmidt <[email protected]> ezt írta (időpont: 2019. ápr. 17., Sze, 22:59):\n\n\nWhat would be the best filesystem to run PostgreSQL on, in Terms of Performance and data Integrity?\n \nBest regards,\n \nstephan",
"msg_date": "Thu, 18 Apr 2019 00:03:05 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best Filesystem for PostgreSQL"
},
{
"msg_contents": "On 4/17/2019 18:03, Imre Samu wrote:\n> test: PostgreSQL v10.3 + Linux 5.0 File-System Benchmarks: Btrfs \n> vs. EXT4 vs. F2FS vs. XFS\n> https://www.phoronix.com/scan.php?page=article&item=linux-50-filesystems&num=3\n>\nSo looks like XFS won. I like XFS for its ease of use especially when \ngrowing.\n\nAny ideas on how ZFS might do? ZFS is of course so much more flexible.\n\n-Gunther\n\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 18:38:47 -0400",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best Filesystem for PostgreSQL"
},
{
"msg_contents": "my Question was meant for a Debian 9 environment with heavy read/wright load and very high requirements towards Performance and data Consistency\n\n\n\nStephan\n\n\n\n________________________________\nVon: Bruce Momjian <[email protected]>\nGesendet: Wednesday, April 17, 2019 11:07:21 PM\nAn: Stephan Schmidt\nCc: [email protected]\nBetreff: Re: Best Filesystem for PostgreSQL\n\nOn Wed, Apr 17, 2019 at 08:59:13PM +0000, Stephan Schmidt wrote:\n> What would be the best filesystem to run PostgreSQL on, in Terms of Performance\n> and data Integrity?\n\nUh, which operating system? If it is Linux, many people like ext4 or\nxfs. Some like zfs. ext3/ext2 are not recommended due to fsync\nperformance.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n\n\n\n\n\n\n\n\n\nmy Question was meant for a Debian 9 environment with heavy read/wright load and very high requirements towards Performance and data Consistency\n \nStephan\n \n\n\nVon: Bruce Momjian <[email protected]>\nGesendet: Wednesday, April 17, 2019 11:07:21 PM\nAn: Stephan Schmidt\nCc: [email protected]\nBetreff: Re: Best Filesystem for PostgreSQL\n \n\n\n\nOn Wed, Apr 17, 2019 at 08:59:13PM +0000, Stephan Schmidt wrote:\n> What would be the best filesystem to run PostgreSQL on, in Terms of Performance\n> and data Integrity?\n\nUh, which operating system? If it is Linux, many people like ext4 or\nxfs. Some like zfs. ext3/ext2 are not recommended due to fsync\nperformance.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 18 Apr 2019 02:19:28 +0000",
"msg_from": "Stephan Schmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Best Filesystem for PostgreSQL"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 02:19:28AM +0000, Stephan Schmidt wrote:\n>my Question was meant for a Debian 9 environment with heavy read/wright\n>load and very high requirements towards Performance and data Consistency\n>\n\nWell, that's like asking \"which car is the best\" unfortunately. There's no\ngood answer, as it very much depends on your expectations, hardware etc.\nEveryone wants good performance, reliability and consistency.\n\nSimply said, if you're on current Linux and you don't have any additional\nrequirements (like snapshotting), then ext4/xfs are likely your best bet.\nThere are differences between these two filesystems, but it depends on the\nworkload, hardware etc. Overall the behavior is pretty close, though. So\neither you just go with either of those, or you do some testing with your\napplication on the actual hardware.\n\nIf you need something more advanced (like better snapshotting, etc.) then\nmaybe ZFS is the right choice for you. It also allos various advanced\nconfigurations with ZIL, L2ARC, ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:33:57 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best Filesystem for PostgreSQL"
},
{
"msg_contents": "On 4/17/19 6:38 PM, Gunther Schadow wrote:\n> So looks like XFS won. I like XFS for its ease of use especially when\n> growing.\n> \n> Any ideas on how ZFS might do? ZFS is of course so much more flexible.\n\n\nThat would totally depend on your data sets and expectations. If you're\ndoing a LOT of random inserts/updates/deletes, etc then you would have\nto tune the hell out of ZFS along with right caching layers in place.\nSame could be said of reads, but if you have a TON of memory in the\nserver that's greatly mitigated and work well.\n\nIf you're looking to warehouse big blobs of data or lots of archive and\nreporting; then by all means ZFS is a great choice.\n\nZFS certainly can provide higher levels of growth and resiliency vs\next4/xfs.\n\n-- \ninoc.net!rblayzor\nXMPP: rblayzor.AT.inoc.net\nPGP: https://inoc.net/~rblayzor/\n\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:11:58 -0400",
"msg_from": "Robert Blayzor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best Filesystem for PostgreSQL"
}
] |
[
{
"msg_contents": "Hello Team,\n\nWe are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment.\n\n90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA public postgres\npg_restore: [archiver (db)] could not execute query: ERROR: schema \"public\" already exists\nCommand was: CREATE SCHEMA public;\n\n\n\nPlease advise.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\nHello Team,\n \nWe are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment.\n\n \n90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA public postgres\npg_restore: [archiver (db)] could not execute query: ERROR: schema \"public\" already exists\nCommand was: CREATE SCHEMA public;\n \n \n \nPlease advise.\n \nRegards,\nDaulat",
"msg_date": "Sat, 20 Apr 2019 18:50:47 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 06:50:47PM +0000, Daulat Ram wrote:\n> Hello Team,\n>\n> \n>\n> We are getting below error while migrating pg_dump from Postgresql 9.6 to\n> Postgresql 11.2 via pg_restore in docker environment.\n>\n> \n>\n> 90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\n>\n> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n>\n> pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA\n> public postgres\n>\n> pg_restore: [archiver (db)] could not execute query: ERROR: schema\n> \"public\" already exists\n>\n> Command was: CREATE SCHEMA public;\n>\n> \n\nHow is this related to performance? Please send it to pgsql-general, and\ninclude information about how you created the dump.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 21 Apr 2019 18:18:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "Hello Team,\n\nWe are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment.\n\n90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA public postgres\npg_restore: [archiver (db)] could not execute query: ERROR: schema \"public\" already exists\nCommand was: CREATE SCHEMA public;\n\nScript used for pg_dump:\n-------------------------------------\n\npg_dump -h 10.26.33.3 -p 5432 -U postgres -W -F c -v -f tmp/postgres/backup/backup10/ kbcn_backup19 kbcn >& tmp/postgres/backup/backup10/ kbcn_backup19.log; echo $? > tmp/postgres/backup/backup10/_'date+%Y-%m-%d.%H:%M:%S'\n\n\n\nPlease advise.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\nHello Team,\n \nWe are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment.\n\n \n90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA public postgres\npg_restore: [archiver (db)] could not execute query: ERROR: schema \"public\" already exists\nCommand was: CREATE SCHEMA public;\n \nScript used for pg_dump:\n-------------------------------------\n \npg_dump -h 10.26.33.3 -p 5432 -U postgres -W -F c -v -f tmp/postgres/backup/backup10/ kbcn_backup19 kbcn >& tmp/postgres/backup/backup10/ kbcn_backup19.log; echo $? > tmp/postgres/backup/backup10/_'date+%Y-%m-%d.%H:%M:%S'\n \n \n \nPlease advise.\n \nRegards,\nDaulat",
"msg_date": "Sun, 21 Apr 2019 16:35:59 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On 4/21/19 9:35 AM, Daulat Ram wrote:\n> Hello Team,\n> \n> We are getting below error while migrating pg_dump from Postgresql 9.6 \n> to Postgresql 11.2 via pg_restore in docker environment.\n> \n> 90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\n> \n> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n> \n> pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA \n> public postgres\n> \n> pg_restore: [archiver (db)] could not execute query: ERROR: schema \n> \"public\" already exists\n> \n> Command was: CREATE SCHEMA public;\n\nExpected as the public schema is there by default. It is an \ninformational error, you can ignore it.\n\nIf you want to not see it and want a clean install on the 11.2 side use:\n\n-c\n--clean\n\n Output commands to clean (drop) database objects prior to \noutputting the commands for creating them. (Unless --if-exists is also \nspecified, restore might generate some harmless error messages, if any \nobjects were not present in the destination database.)\n\n This option is only meaningful for the plain-text format. For the \narchive formats, you can specify the option when you call pg_restore.\n\non pg_restore side(along with --if-exists to remove other harmless error \nmessages).\n\nFYI the -W on the pg_dump is redundant as the password will be prompted \nfor without it:\n\n-W\n--password\n\n Force pg_dump to prompt for a password before connecting to a database.\n\n This option is never essential, since pg_dump will automatically \nprompt for a password if the server demands password authentication. \nHowever, pg_dump will waste a connection attempt finding out that the \nserver wants a password. In some cases it is worth typing -W to avoid \nthe extra connection attempt.\n\n\n> \n> Script used for pg_dump:\n> \n> -------------------------------------\n> \n> pg_dump -h 10.26.33.3 -p 5432 -U postgres -W -F c -v -f \n> tmp/postgres/backup/backup10/ kbcn_backup19 �kbcn >& \n> tmp/postgres/backup/backup10/ kbcn_backup19.log; echo $? > \n> tmp/postgres/backup/backup10/_'date+%Y-%m-%d.%H:%M:%S'\n> \n> Please advise.\n> \n> Regards,\n> \n> Daulat\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Sun, 21 Apr 2019 11:46:09 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "Adrian Klaver <[email protected]> writes:\n> On 4/21/19 9:35 AM, Daulat Ram wrote:\n>> pg_restore: [archiver (db)] could not execute query: ERROR: schema \n>> \"public\" already exists\n>> Command was: CREATE SCHEMA public;\n\n> Expected as the public schema is there by default. It is an \n> informational error, you can ignore it.\n\nIt's expected only if you made a dump file with 9.6's pg_dump and\nrestored it with a later pg_restore; there were some changes in\nhow the public schema got handled between the two versions.\n\nThe usual recommendation when you are doing a version migration\nis to use the newer release's pg_dump to suck the data out of\nthe older server. If you can't do that, it'll (probably)\nstill work, but you may have cosmetic issues like this one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:25:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On 4/21/19 1:46 PM, Adrian Klaver wrote:\n> On 4/21/19 9:35 AM, Daulat Ram wrote:\n>> Hello Team,\n>>\n>> We are getting below error while migrating pg_dump from Postgresql 9.6 to \n>> Postgresql 11.2 via pg_restore in docker environment.\n>>\n>> 90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\n>>\n>> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n>>\n>> pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA \n>> public postgres\n>>\n>> pg_restore: [archiver (db)] could not execute query: ERROR: schema \n>> \"public\" already exists\n>>\n>> Command was: CREATE SCHEMA public;\n>\n> Expected as the public schema is there by default. It is an informational \n> error, you can ignore it.\n\n\"Informational error\" is a contradiction in terms.\n\n-- \nAngular momentum makes the world go 'round.\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:42:06 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On 4/21/19 1:42 PM, Ron wrote:\n> On 4/21/19 1:46 PM, Adrian Klaver wrote:\n>> On 4/21/19 9:35 AM, Daulat Ram wrote:\n>>> Hello Team,\n>>>\n>>> We are getting below error while migrating pg_dump from Postgresql \n>>> 9.6 to Postgresql 11.2 via pg_restore in docker environment.\n>>>\n>>> 90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\n>>>\n>>> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n>>>\n>>> pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA \n>>> public postgres\n>>>\n>>> pg_restore: [archiver (db)] could not execute query: ERROR: schema \n>>> \"public\" already exists\n>>>\n>>> Command was: CREATE SCHEMA public;\n>>\n>> Expected as the public schema is there by default. It is an \n>> informational error, you can ignore it.\n> \n> \"Informational error\" is a contradiction in terms.\n> \n\n\n1) Well the public schema was in the dump, so the OP wanted it.\n2) It also existed in the target database.\n3) The error let you know 1) & 2)\n4) To my way of thinking it was a 'no harm, no foul' situation where the \nerror just informed you that the target database took a side track to \nget where you wanted to be anyway.\n\nI see this sort of thing in monitoring systems e.g. environmental \ncontrols all the time. Things get flagged because they wander over set \npoints intermittently. It is technically an error but unless they stay \nover the line it is just another data point.\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Sun, 21 Apr 2019 13:58:16 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On 4/21/19 3:58 PM, Adrian Klaver wrote:\n> On 4/21/19 1:42 PM, Ron wrote:\n>> On 4/21/19 1:46 PM, Adrian Klaver wrote:\n>>> On 4/21/19 9:35 AM, Daulat Ram wrote:\n>>>> Hello Team,\n>>>>\n>>>> We are getting below error while migrating pg_dump from Postgresql 9.6 \n>>>> to Postgresql 11.2 via pg_restore in docker environment.\n>>>>\n>>>> 90d4c9f363c8:~$ pg_restore -d kbcn \"/var/lib/kbcn_backup19\"\n>>>>\n>>>> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n>>>>\n>>>> pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA \n>>>> public postgres\n>>>>\n>>>> pg_restore: [archiver (db)] could not execute query: ERROR: schema \n>>>> \"public\" already exists\n>>>>\n>>>> Command was: CREATE SCHEMA public;\n>>>\n>>> Expected as the public schema is there by default. It is an \n>>> informational error, you can ignore it.\n>>\n>> \"Informational error\" is a contradiction in terms.\n>>\n>\n>\n> 1) Well the public schema was in the dump, so the OP wanted it.\n> 2) It also existed in the target database.\n> 3) The error let you know 1) & 2)\n> 4) To my way of thinking it was a 'no harm, no foul' situation where the \n> error just informed you that the target database took a side track to get \n> where you wanted to be anyway.\n>\n> I see this sort of thing in monitoring systems e.g. environmental controls \n> all the time. Things get flagged because they wander over set points \n> intermittently. It is technically an error but unless they stay over the \n> line it is just another data point.\n\nErrors need to be fixed. If the restore can proceed without harm, then it's \nan Informational message.\n\n-- \nAngular momentum makes the world go 'round.\n\n\n",
"msg_date": "Sun, 21 Apr 2019 16:20:03 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
},
{
"msg_contents": "On 4/21/19 2:20 PM, Ron wrote:\n\n>> I see this sort of thing in monitoring systems e.g. environmental \n>> controls all the time. Things get flagged because they wander over set \n>> points intermittently. It is technically an error but unless they stay \n>> over the line it is just another data point.\n> \n> Errors need to be fixed. If the restore can proceed without harm, then \n> it's an Informational message.\n\nThat is a choice thing:\n\nhttps://www.postgresql.org/docs/11/app-pgrestore.html\n\n\"\n-e\n--exit-on-error\n\n Exit if an error is encountered while sending SQL commands to the \ndatabase. The default is to continue and to display a count of errors at \nthe end of the restoration.\n\"\n\n\n\nIt is also one of those eye of the beholder things as evidenced by:\n\nhttps://www.postgresql.org/docs/11/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT\n\nSeverity \tUsage \t\t\tsyslog \t\teventlog\n...\nERROR \t\tReports an error ... \tWARNING \tERROR\n...\n\nEdited to keep on one line.\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:13:23 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup and Restore (pg_dump & pg_restore)"
}
] |
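Putting the advice from this thread together (Tom Lane: take the dump with the newer release's pg_dump; Adrian Klaver: pass --clean with --if-exists to pg_restore and drop the redundant -W), a cross-version migration could look like the sketch below. The old server's host name is a placeholder; the database name and archive path are the ones quoted in the thread.

    # Take the dump with the 11.x pg_dump, pointed at the old 9.6 server.
    pg_dump -h old-96-host -p 5432 -U postgres -Fc -f /var/lib/kbcn_backup19 kbcn
    # Restore into the 11.2 cluster; --clean --if-exists drops pre-existing objects
    # (including the default public schema) instead of tripping over them.
    pg_restore -d kbcn --clean --if-exists /var/lib/kbcn_backup19

This avoids the harmless "schema public already exists" message rather than merely ignoring it.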
[
{
"msg_contents": "Hi,\nSince Postgres 9.2, for prepared statements, the CBO automatically switches from Custom Plan to Generic plan on the sixth iteration (reference backend/utils/cache/plancache.c).\nI am observing that the Generic plan for Prepared statement requires 5544.701 ms to execute where as custom plan for same query requires 3.497 ms.\nThe cost of execution is reduced from 402 (custom plan) to 12.68 (generic plan).\nHowever the execution time has gone up from 3.497 ms to 5544.701 ms.\n\nBelow are the details about this use case.\n\nPostgres version - PostgreSQL 9.6.6, compiled by Visual C++ build 1800, 64-bit\n1. Full Table and Index Schema -\n\n Table \"public.t776\"\n\n Column | Type | Modifiers | Storage | Stats target | Description\n\n-------------+---------+-----------+----------+--------------+-------------\n\nc1 | citext | not null | extended | |\n\nc2 | citext | | extended | |\n\nc3 | integer | | plain | |\n\nc4 | citext | | extended | |\n\nc5 | citext | not null | extended | |\n\nc6 | integer | | plain | |\n\nc7 | integer | | plain | |\n\nc8 | citext | not null | extended | |\n\nc112 | citext | | extended | |\n\nc179 | citext | | extended | |\n\nc60513 | citext | | extended | |\n\nc60914 | citext | | extended | |\n\nc60989 | citext | | extended | |\n\nc200000001 | citext | | extended | |\n\nc200000003 | citext | | extended | |\n\nc200000004 | citext | | extended | |\n\nc200000005 | citext | | extended | |\n\nc200000020 | citext | | extended | |\n\nc200003000 | citext | | extended | |\n\nc240000007 | citext | | extended | |\n\nc240000008 | citext | | extended | |\n\nc240001002 | citext | | extended | |\n\nc240001003 | citext | | extended | |\n\nc240001005 | citext | | extended | |\n\nc260100002 | integer | | plain | |\n\nc300927600 | integer | | plain | |\n\nc301002800 | citext | | extended | |\n\nc301002900 | citext | | extended | |\n\nc301003400 | citext | | extended | |\n\nc301047700 | citext | | extended | |\n\nc301047800 | citext | | extended | |\n\nc301089100 | citext | | extended | |\n\nc301118000 | integer | | plain | |\n\nc301136600 | citext | | extended | |\n\nc301136800 | citext | | extended | |\n\nc301136900 | integer | | plain | |\n\nc301137000 | integer | | plain | |\n\nc301137100 | citext | | extended | |\n\nc301137200 | citext | | extended | |\n\nc301137300 | citext | | extended | |\n\nc301137400 | citext | | extended | |\n\nc301172600 | integer | | plain | |\n\nc301186800 | citext | | extended | |\n\nc400079600 | citext | | extended | |\n\nc400124500 | integer | | plain | |\n\nc400127400 | citext | | extended | |\n\nc400128800 | citext | | extended | |\n\nc400128900 | citext | | extended | |\n\nc400129100 | integer | | plain | |\n\nc400129200 | citext | | extended | |\n\nc400130900 | citext | | extended | |\n\nc400131000 | citext | | extended | |\n\nc400131200 | citext | | extended | |\n\nc400131300 | citext | | extended | |\n\nc490001289 | citext | | extended | |\n\nc490008000 | citext | | extended | |\n\nc490008100 | citext | | extended | |\n\nc490009000 | citext | | extended | |\n\nc490009100 | citext | | extended | |\n\nc530010100 | citext | | extended | |\n\nc530010200 | citext | | extended | |\n\nc530014300 | integer | | plain | |\n\nc530014400 | integer | | plain | |\n\nc530014500 | integer | | plain | |\n\nc530019500 | citext | | extended | |\n\nc530031600 | integer | | plain | |\n\nc530032500 | integer | | plain | |\n\nc530035000 | citext | | extended | |\n\nc530035200 | citext | | extended | |\n\nc530041601 | integer | | plain | |\n\nc530054200 
| integer | | plain | |\n\nc530054400 | integer | | plain | |\n\nc530058400 | citext | | extended | |\n\nc530058500 | citext | | extended | |\n\nc530059800 | citext | | extended | |\n\nc530060100 | integer | | plain | |\n\nc530060200 | citext | | extended | |\n\nc530062400 | citext | | extended | |\n\nc530067430 | integer | | plain | |\n\nc530067920 | integer | | plain | |\n\nc530067930 | citext | | extended | |\n\nc530068090 | integer | | plain | |\n\nc530070390 | integer | | plain | |\n\nc530071130 | citext | | extended | |\n\nc530071180 | citext | | extended | |\n\nc530072336 | citext | | extended | |\n\nc530074016 | integer | | plain | |\n\nc200000006 | citext | | extended | |\n\nc200000007 | citext | | extended | |\n\nc200000012 | citext | | extended | |\n\nc240001004 | citext | | extended | |\n\nc260000001 | citext | | extended | |\n\nc260000005 | citext | | extended | |\n\nc260400003 | integer | | plain | |\n\nc1000000001 | citext | | extended | |\n\nIndexes:\n\n \"pk_t776\" PRIMARY KEY, btree (c1)\n\n \"i776_0_179_t776\" UNIQUE, btree (c179)\n\n \"i776_0_200000001_t776\" btree (c200000001)\n\n \"i776_0_240001002_t776\" btree (c240001002)\n\n \"i776_0_301186800_t776\" btree (c301186800, c400127400)\n\n \"i776_0_400079600_1136943505_t776\" btree (c400079600, c530041601, c179)\n\n \"i776_0_400079600_t776\" btree (c400079600)\n\n \"i776_0_400129200_1337395809_t776\" btree (c400129200, c400129100)\n\n \"i776_0_400129200_t776\" btree (c400129200, c400129100, c400127400, c1)\n\n \"i776_0_400131200_t776\" btree (c400131200)\n\n \"i776_0_400131300_t776\" btree (c400131300)\n\n \"i776_0_530010100_t776\" btree (c530010100, c400127400)\n\n \"i776_0_530060100_207771634_t776\" btree (c530060100, c6, c400129200)\n\n \"i776_0_530060100_t776\" btree (c530060100, c6, c400129100, c400129200)\n\n \"i776_0_530060200_t776\" btree (c530060200, c400127400)\n\nCheck constraints:\n\n \"len_c1\" CHECK (length(c1::text) <= 15)\n\n \"len_c112\" CHECK (length(c112::text) <= 255)\n\n \"len_c179\" CHECK (length(c179::text) <= 38)\n\n \"len_c2\" CHECK (length(c2::text) <= 254)\n\n \"len_c200000001\" CHECK (length(c200000001::text) <= 254)\n\n \"len_c200000003\" CHECK (length(c200000003::text) <= 60)\n\n \"len_c200000004\" CHECK (length(c200000004::text) <= 60)\n\n \"len_c200000005\" CHECK (length(c200000005::text) <= 60)\n\n \"len_c200000020\" CHECK (length(c200000020::text) <= 254)\n\n \"len_c240000007\" CHECK (length(c240000007::text) <= 254)\n\n \"len_c240001002\" CHECK (length(c240001002::text) <= 254)\n\n \"len_c240001003\" CHECK (length(c240001003::text) <= 254)\n\n \"len_c240001005\" CHECK (length(c240001005::text) <= 254)\n\n \"len_c301002800\" CHECK (length(c301002800::text) <= 254)\n\n \"len_c301002900\" CHECK (length(c301002900::text) <= 254)\n\n \"len_c301003400\" CHECK (length(c301003400::text) <= 255)\n\n \"len_c301047700\" CHECK (length(c301047700::text) <= 254)\n\n \"len_c301047800\" CHECK (length(c301047800::text) <= 38)\n\n \"len_c301089100\" CHECK (length(c301089100::text) <= 80)\n\n \"len_c301136600\" CHECK (length(c301136600::text) <= 254)\n\n \"len_c301136800\" CHECK (length(c301136800::text) <= 254)\n\n \"len_c301137100\" CHECK (length(c301137100::text) <= 254)\n\n \"len_c301137200\" CHECK (length(c301137200::text) <= 254)\n\n \"len_c301137300\" CHECK (length(c301137300::text) <= 254)\n\n \"len_c301137400\" CHECK (length(c301137400::text) <= 254)\n\n \"len_c301186800\" CHECK (length(c301186800::text) <= 254)\n\n \"len_c4\" CHECK (length(c4::text) <= 254)\n\n \"len_c400079600\" CHECK 
(length(c400079600::text) <= 38)\n\n \"len_c400127400\" CHECK (length(c400127400::text) <= 127)\n\n \"len_c400128800\" CHECK (length(c400128800::text) <= 255)\n\n \"len_c400128900\" CHECK (length(c400128900::text) <= 255)\n\n \"len_c400129200\" CHECK (length(c400129200::text) <= 38)\n\n \"len_c400130900\" CHECK (length(c400130900::text) <= 38)\n\n \"len_c400131000\" CHECK (length(c400131000::text) <= 38)\n\n \"len_c400131200\" CHECK (length(c400131200::text) <= 255)\n\n \"len_c400131300\" CHECK (length(c400131300::text) <= 255)\n\n \"len_c490001289\" CHECK (length(c490001289::text) <= 127)\n\n \"len_c490008000\" CHECK (length(c490008000::text) <= 40)\n\n \"len_c490008100\" CHECK (length(c490008100::text) <= 40)\n\n \"len_c490009000\" CHECK (length(c490009000::text) <= 40)\n\n \"len_c490009100\" CHECK (length(c490009100::text) <= 40)\n\n \"len_c5\" CHECK (length(c5::text) <= 254)\n\n \"len_c530010100\" CHECK (length(c530010100::text) <= 254)\n\n \"len_c530010200\" CHECK (length(c530010200::text) <= 254)\n\n \"len_c530035200\" CHECK (length(c530035200::text) <= 255)\n\n \"len_c530058400\" CHECK (length(c530058400::text) <= 254)\n\n \"len_c530058500\" CHECK (length(c530058500::text) <= 254)\n\n \"len_c530059800\" CHECK (length(c530059800::text) <= 255)\n\n \"len_c530060200\" CHECK (length(c530060200::text) <= 255)\n\n \"len_c530062400\" CHECK (length(c530062400::text) <= 254)\n\n \"len_c530067930\" CHECK (length(c530067930::text) <= 127)\n\n \"len_c530071130\" CHECK (length(c530071130::text) <= 128)\n\n \"len_c530071180\" CHECK (length(c530071180::text) <= 128)\n\n \"len_c530072336\" CHECK (length(c530072336::text) <= 254)\n\n \"len_c60513\" CHECK (length(c60513::text) <= 255)\n\n \"len_c60914\" CHECK (length(c60914::text) <= 255)\n\n \"len_c60989\" CHECK (length(c60989::text) <= 255)\n\n \"len_c8\" CHECK (length(c8::text) <= 254)\n\n\n\n\n\n\\d+: extra argument \">>c:/table_schemat.txt\" ignored\n\n\n\nNote : No custom functions used.\n\n\n\n\n3. SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='T776';\n\n't776',13295,'110743',0,'r',95,false,,'108920832'\n\n\n4. 
Explain (Analyze, Buffers)-\n\n\n\nPREPARE query (citext,citext,int,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext) as\n\nSELECT\n\n T776.C179,\n\n T776.C1\n\nFROM\n\n T776\n\nWHERE\n\n (\n\n(T776.C400129200 = $1)\n\n AND\n\n (\n\n T776.C400127400 = $2\n\n )\n\n AND\n\n (\n\n(T776.C400129100 <> $3)\n\n OR\n\n (\n\n T776.C400129100 IS NULL\n\n )\n\n )\n\n AND\n\n (\n\n(T776.C179 = $4)\n\n OR\n\n (\n\n T776.C179 = $5\n\n )\n\n OR\n\n (\n\n T776.C179 = $6\n\n )\n\n OR\n\n (\n\n T776.C179 = $7\n\n )\n\n OR\n\n (\n\n T776.C179 = $8\n\n )\n\n OR\n\n (\n\n T776.C179 = $9\n\n )\n\n OR\n\n (\n\n T776.C179 = $10\n\n )\n\n OR\n\n (\n\n T776.C179 = $11\n\n )\n\n OR\n\n (\n\n T776.C179 = $12\n\n )\n\n OR\n\n (\n\n T776.C179 = $13\n\n )\n\n OR\n\n (\n\n T776.C179 = $14\n\n )\n\n OR\n\n (\n\n T776.C179 = $15\n\n )\n\n OR\n\n (\n\n T776.C179 = $16\n\n )\n\n OR\n\n (\n\n T776.C179 = $17\n\n )\n\n OR\n\n (\n\n T776.C179 = $18\n\n )\n\n OR\n\n (\n\n T776.C179 = $19\n\n )\n\n OR\n\n (\n\n T776.C179 = $20\n\n )\n\n OR\n\n (\n\n T776.C179 = $21\n\n )\n\n OR\n\n (\n\n T776.C179 = $22\n\n )\n\n OR\n\n (\n\n T776.C179 = $23\n\n )\n\n OR\n\n (\n\n T776.C179 = $24\n\n )\n\n OR\n\n (\n\n T776.C179 = $25\n\n )\n\n OR\n\n (\n\n T776.C179 = $26\n\n )\n\n OR\n\n (\n\n T776.C179 = $27\n\n )\n\n OR\n\n (\n\n T776.C179 = $28\n\n )\n\n OR\n\n (\n\n T776.C179 = $29\n\n )\n\n OR\n\n (\n\n T776.C179 = $30\n\n )\n\n OR\n\n (\n\n T776.C179 = $31\n\n )\n\n OR\n\n (\n\n T776.C179 = $32\n\n )\n\n OR\n\n (\n\n T776.C179 = $33\n\n )\n\n OR\n\n (\n\n T776.C179 = $34\n\n )\n\n OR\n\n (\n\n T776.C179 = $35\n\n )\n\n OR\n\n (\n\n T776.C179 = $36\n\n )\n\n OR\n\n (\n\n T776.C179 = $37\n\n )\n\n OR\n\n (\n\n T776.C179 = $38\n\n )\n\n OR\n\n (\n\n T776.C179 = $39\n\n )\n\n OR\n\n (\n\n T776.C179 = $40\n\n )\n\n OR\n\n (\n\n T776.C179 = $41\n\n )\n\n OR\n\n (\n\n T776.C179 = $42\n\n )\n\n OR\n\n (\n\n T776.C179 = $43\n\n )\n\n OR\n\n (\n\n T776.C179 = $44\n\n )\n\n OR\n\n (\n\n T776.C179 = $45\n\n )\n\n OR\n\n (\n\n T776.C179 = $46\n\n )\n\n OR\n\n (\n\n T776.C179 = $47\n\n )\n\n OR\n\n (\n\n T776.C179 = $48\n\n )\n\n OR\n\n (\n\n T776.C179 = $49\n\n )\n\n OR\n\n (\n\n T776.C179 = $50\n\n )\n\n OR\n\n (\n\n T776.C179 = $51\n\n )\n\n )\n\n )\n\nORDER BY\n\n T776.C1 ASC LIMIT 2001 OFFSET 0;\n\n\n\n\n\nExplain (analyze,buffers) Execute query('0'::citext,'DATASET1M'::citext, 1,'OI-d791e838d0354ea59aa1c04622b7c8be'::citext, 'OI-44502144c7be49f4840d9d30c724f11b'::citext, 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext, 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext, 'OI-9239a9fa93c9459387d564940c0b4289'::citext, 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext, 'OI-8e365fa8461043a69950a638d3f3830a'::citext, 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext, 'OI-df0d9473d3934de29435d1c22fc9a269'::citext, 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext, 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext, 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext, 'OI-d0c049f6459e4174bb4e2ea025104298'::citext, 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext, 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext, 'OI-4316868d400d450fb60bb620a89778f2'::citext, 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext, 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext, 'OI-02577caeab904f37b6d13bb761805e02'::citext, 
'OI-ecde76cbefd847ed9602a2c875529123'::citext, 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext, 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext, 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext, 'OI-81071273eacc44c4a46180be3a7d6a04'::citext, 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext, 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext, 'OI-7fc180b8d2944391b41ed90d70915357'::citext, 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext, 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext, 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext, 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext, 'OI-78263146f1694c39935578c3fa4c6415'::citext, 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext, 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext, 'OI-83e223fa1b364ac8b20e396b21387758'::citext, 'OI-a6eb0ec674d242b793a26b259d15435f'::citext, 'OI-195dfbe207a64130b3bc686bfdabe051'::citext, 'OI-7ba86277cbce489694ba03c98e7d2059'::citext, 'OI-c7675935bd974244939ccac9181d9129'::citext, 'OI-64c958575289438bb86455ed81517df1'::citext, 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext, 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext, 'OI-42de43dda54a4a018c0038c0de241da1'::citext, 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext, 'OI-56e85efaaa5f42c0913fed3745687a23'::citext, 'OI-def2602379db49cfadf6c31d7dfc4872'::citext, 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext, 'OI-6f3333da01f349a3a17a5714a82530a6'::citext);\n\n\n\n\n\n\n\n4.a ) Explain (Analyze,Buffers) output for first 5 runs.\n\n'Limit (cost=402.71..402.74 rows=12 width=52) (actual time=3.185..3.266 rows=48 loops=1)'\n\n' Buffers: shared hit=184'\n\n' -> Sort (cost=402.71..402.74 rows=12 width=52) (actual time=3.179..3.207 rows=48 loops=1)'\n\n' Sort Key: c1'\n\n' Sort Method: quicksort Memory: 31kB'\n\n' Buffers: shared hit=184'\n\n' -> Bitmap Heap Scan on t776 (cost=212.54..402.49 rows=12 width=52) (actual time=2.629..2.794 rows=48 loops=1)'\n\n' Recheck Cond: ((c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext) OR (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext) OR (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext) OR (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext) OR (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext) OR (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext) OR (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext) OR (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext) OR (c179 = 'OI-df0d9473d3934de29435d1c22fc9a269'::citext) OR (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext) OR (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext) OR (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext) OR (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext) OR (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext) OR (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext) OR (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext) OR (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext) OR (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext) OR (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext) OR (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext) OR (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext) OR (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext) OR (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext) OR (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext) OR (c179 = 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext) OR (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext) OR (c179 = 'OI-7fc180b8d2944391b41ed90d70915357'::citext) OR (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext) OR (c179 = 
'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext) OR (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext) OR (c179 = 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext) OR (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext) OR (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext) OR (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext) OR (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext) OR (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext) OR (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext) OR (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext) OR (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext) OR (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext) OR (c179 = 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext) OR (c179 = 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext) OR (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext) OR (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext) OR (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext) OR (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext) OR (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext) OR (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext))'\n\n' Filter: (((c400129100 <> 1) OR (c400129100 IS NULL)) AND (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext))'\n\n' Heap Blocks: exact=39'\n\n' Buffers: shared hit=184'\n\n' -> BitmapOr (cost=212.54..212.54 rows=48 width=0) (actual time=2.607..2.607 rows=0 loops=1)'\n\n' Buffers: shared hit=145'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.065..0.065 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.087..0.087 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.056..0.056 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.061..0.061 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-df0d9473d3934de29435d1c22fc9a269'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual 
time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.044..0.044 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.041..0.041 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.044..0.044 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.055..0.055 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.101..0.101 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual 
time=0.048..0.048 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7fc180b8d2944391b41ed90d70915357'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.042..0.042 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.057..0.057 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.048..0.048 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual 
time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.058..0.058 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext)'\n\n' Buffers: shared hit=4'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext)'\n\n' Buffers: shared hit=3'\n\n'Execution time: 3.497 ms'\n\n\n\nLink to Analyze output for Custom Plan - https://explain.depesz.com/s/6u6H\n\n\n\n\n\n4.b) Explain (Analyze,Buffers) output from 6th run onwards\n\n\n\n\n\n\n\n'Limit (cost=12.67..12.68 rows=1 width=52) (actual time=5544.509..5544.590 rows=48 loops=1)'\n\n' Buffers: shared hit=55114'\n\n' -> Sort (cost=12.67..12.68 rows=1 width=52) (actual time=5544.507..5544.535 rows=48 loops=1)'\n\n' Sort Key: c1'\n\n' Sort Method: quicksort Memory: 31kB'\n\n' Buffers: shared hit=55114'\n\n' -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)'\n\n' Index Cond: ((c400129200 = $1) AND (c400127400 = $2))'\n\n' Filter: (((c400129100 <> $3) OR (c400129100 IS NULL)) AND ((c179 = $4) OR (c179 = $5) OR (c179 = $6) OR (c179 = $7) OR (c179 = $8) OR (c179 = $9) OR (c179 = $10) OR (c179 = $11) OR (c179 = $12) OR (c179 = $13) OR (c179 = $14) OR (c179 = $15) OR (c179 = $16) OR (c179 = $17) OR (c179 = $18) OR (c179 = $19) OR (c179 = $20) OR (c179 = $21) OR (c179 = $22) OR (c179 = $23) OR (c179 = $24) OR (c179 = $25) OR (c179 = $26) OR (c179 = $27) OR (c179 = $28) OR (c179 = $29) OR (c179 = $30) OR (c179 = $31) OR (c179 = $32) OR (c179 = $33) OR (c179 = $34) OR (c179 = $35) OR (c179 = $36) OR (c179 = $37) OR (c179 = $38) OR (c179 = $39) OR (c179 = $40) OR (c179 = $41) OR (c179 = $42) OR (c179 = $43) OR (c179 = $44) OR (c179 = $45) OR (c179 = $46) OR (c179 = $47) OR (c179 = $48) OR (c179 = $49) OR (c179 = $50) OR (c179 = $51)))'\n\n' Rows Removed by Filter: 55322'\n\n' Buffers: shared hit=55114'\n\n'Execution time: 5544.701 ms'\n\n\n\n\n\nLink to Analyze output for Generic Plan - https://explain.depesz.com/s/7jph\n\n\n5. History - Always slower on 6th iteration since Postgres 9.2\n6. 
System Information -\n\nOS Name Microsoft Windows Server 2008 R2 Enterprise\n\nVersion 6.1.7601 Service Pack 1 Build 7601\n\nOther OS Description Not Available\n\nOS Manufacturer Microsoft Corporation\n\nSystem Name VW-AUS-ATM-PG01\n\nSystem Manufacturer VMware, Inc.\n\nSystem Model VMware Virtual Platform\n\nSystem Type x64-based PC\n\nProcessor Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz, 2593 Mhz, 3 Core(s), 3 Logical Processor(s)\n\nProcessor Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz, 2593 Mhz, 3 Core(s), 3 Logical Processor(s)\n\nBIOS Version/Date Phoenix Technologies LTD 6.00, 9/21/2015\n\nSMBIOS Version 2.4\n\nWindows Directory C:\\Windows\n\nSystem Directory C:\\Windows\\system32\n\nBoot Device \\Device\\HarddiskVolume1\n\nLocale United States\n\nHardware Abstraction Layer Version = \"6.1.7601.24354\"\n\nUser Name Not Available\n\nTime Zone Central Daylight Time\n\nInstalled Physical Memory (RAM) 24.0 GB\n\nTotal Physical Memory 24.0 GB\n\nAvailable Physical Memory 21.1 GB\n\nTotal Virtual Memory 24.0 GB\n\nAvailable Virtual Memory 17.3 GB\n\nPage File Space 0 bytes\n\n\n-Thanks and Regards,\nSameer Naik\n\n\n\n\n\n\n\n\n\n\nHi,\nSince Postgres 9.2, for prepared statements, the CBO automatically switches from Custom Plan to Generic plan on the sixth iteration (reference backend/utils/cache/plancache.c).\nI am observing that the Generic plan for Prepared statement requires\n5544.701 ms to execute where as custom plan for same query requires 3.497 ms.\nThe cost of execution is reduced from 402 (custom plan) to 12.68 (generic plan).\nHowever the execution time has gone up from 3.497 ms to 5544.701 ms.\n \nBelow are the details about this use case.\n \nPostgres version - PostgreSQL 9.6.6, compiled by Visual C++ build 1800, 64-bit\n\n1. \nFull Table and Index Schema -\n\n Table \"public.t776\"\n\n Column | Type | Modifiers | Storage | Stats target | Description\n\n-------------+---------+-----------+----------+--------------+-------------\n\nc1 | citext | not null | extended | |\n\nc2 | citext | | extended | | \n\n\nc3 | integer | | plain | |\n\nc4 | citext | | extended | |\n\nc5 | citext | not null | extended | |\n\nc6 | integer | | plain | |\n\nc7 | integer | | plain | |\n\nc8 | citext | not null | extended | |\n\nc112 | citext | | extended | |\n\nc179 | citext | | extended | |\n\nc60513 | citext | | extended | |\n\nc60914 | citext | | extended | |\n\nc60989 | citext | | extended | |\n\nc200000001 | citext | | extended | |\n\nc200000003 | citext | | extended | |\n\nc200000004 | citext | | extended | |\n\nc200000005 | citext | | extended | |\n\nc200000020 | citext | | extended | |\n\nc200003000 | citext | | extended | |\n\nc240000007 | citext | | extended | |\n\nc240000008 | citext | | extended | |\n\nc240001002 | citext | | extended | |\n\nc240001003 | citext | | extended | |\n\nc240001005 | citext | | extended | |\n\nc260100002 | integer | | plain | |\n\nc300927600 | integer | | plain | |\n\nc301002800 | citext | | extended | |\n\nc301002900 | citext | | extended | |\n\nc301003400 | citext | | extended | |\n\nc301047700 | citext | | extended | |\n\nc301047800 | citext | | extended | |\n\nc301089100 | citext | | extended | |\n\nc301118000 | integer | | plain | |\n\nc301136600 | citext | | extended | |\n\nc301136800 | citext | | extended | |\n\nc301136900 | integer | | plain | |\n\nc301137000 | integer | | plain | |\n\nc301137100 | citext | | extended | |\n\nc301137200 | citext | | extended | |\n\nc301137300 | citext | | extended | |\n\nc301137400 | citext | | extended | |\n\nc301172600 
| integer | | plain | |\n\nc301186800 | citext | | extended | |\n\nc400079600 | citext | | extended | |\n\nc400124500 | integer | | plain | |\n\nc400127400 | citext | | extended | |\n\nc400128800 | citext | | extended | |\n\nc400128900 | citext | | extended | |\n\nc400129100 | integer | | plain | |\n\nc400129200 | citext | | extended | |\n\nc400130900 | citext | | extended | |\n\nc400131000 | citext | | extended | |\n\nc400131200 | citext | | extended | |\n\nc400131300 | citext | | extended | |\n\nc490001289 | citext | | extended | |\n\nc490008000 | citext | | extended | |\n\nc490008100 | citext | | extended | |\n\nc490009000 | citext | | extended | |\n\nc490009100 | citext | | extended | |\n\nc530010100 | citext | | extended | |\n\nc530010200 | citext | | extended | |\n\nc530014300 | integer | | plain | |\n\nc530014400 | integer | | plain | |\n\nc530014500 | integer | | plain | |\n\nc530019500 | citext | | extended | |\n\nc530031600 | integer | | plain | |\n\nc530032500 | integer | | plain | |\n\nc530035000 | citext | | extended | |\n\nc530035200 | citext | | extended | |\n\nc530041601 | integer | | plain | |\n\nc530054200 | integer | | plain | |\n\nc530054400 | integer | | plain | |\n\nc530058400 | citext | | extended | |\n\nc530058500 | citext | | extended | |\n\nc530059800 | citext | | extended | |\n\nc530060100 | integer | | plain | |\n\nc530060200 | citext | | extended | |\n\nc530062400 | citext | | extended | |\n\nc530067430 | integer | | plain | |\n\nc530067920 | integer | | plain | |\n\nc530067930 | citext | | extended | |\n\nc530068090 | integer | | plain | |\n\nc530070390 | integer | | plain | |\n\nc530071130 | citext | | extended | |\n\nc530071180 | citext | | extended | |\n\nc530072336 | citext | | extended | |\n\nc530074016 | integer | | plain | |\n\nc200000006 | citext | | extended | |\n\nc200000007 | citext | | extended | |\n\nc200000012 | citext | | extended | |\n\nc240001004 | citext | | extended | |\n\nc260000001 | citext | | extended | |\n\nc260000005 | citext | | extended | |\n\nc260400003 | integer | | plain | |\n\nc1000000001 | citext | | extended | |\n\nIndexes:\n\n \"pk_t776\" PRIMARY KEY, btree (c1)\n\n \"i776_0_179_t776\" UNIQUE, btree (c179)\n\n \"i776_0_200000001_t776\" btree (c200000001)\n\n \"i776_0_240001002_t776\" btree (c240001002)\n\n \"i776_0_301186800_t776\" btree (c301186800, c400127400)\n\n \"i776_0_400079600_1136943505_t776\" btree (c400079600, c530041601, c179)\n\n \"i776_0_400079600_t776\" btree (c400079600)\n\n \"i776_0_400129200_1337395809_t776\" btree (c400129200, c400129100)\n\n \"i776_0_400129200_t776\" btree (c400129200, c400129100, c400127400, c1)\n\n \"i776_0_400131200_t776\" btree (c400131200)\n\n \"i776_0_400131300_t776\" btree (c400131300)\n\n \"i776_0_530010100_t776\" btree (c530010100, c400127400)\n\n \"i776_0_530060100_207771634_t776\" btree (c530060100, c6, c400129200)\n\n \"i776_0_530060100_t776\" btree (c530060100, c6, c400129100, c400129200)\n\n \"i776_0_530060200_t776\" btree (c530060200, c400127400)\n\nCheck constraints:\n\n \"len_c1\" CHECK (length(c1::text) <= 15)\n\n \"len_c112\" CHECK (length(c112::text) <= 255)\n\n \"len_c179\" CHECK (length(c179::text) <= 38)\n\n \"len_c2\" CHECK (length(c2::text) <= 254)\n\n \"len_c200000001\" CHECK (length(c200000001::text) <= 254)\n\n \"len_c200000003\" CHECK (length(c200000003::text) <= 60)\n\n \"len_c200000004\" CHECK (length(c200000004::text) <= 60)\n\n \"len_c200000005\" CHECK (length(c200000005::text) <= 60)\n\n \"len_c200000020\" CHECK (length(c200000020::text) <= 254)\n\n 
\"len_c240000007\" CHECK (length(c240000007::text) <= 254)\n\n \"len_c240001002\" CHECK (length(c240001002::text) <= 254)\n\n \"len_c240001003\" CHECK (length(c240001003::text) <= 254)\n\n \"len_c240001005\" CHECK (length(c240001005::text) <= 254)\n\n \"len_c301002800\" CHECK (length(c301002800::text) <= 254)\n\n \"len_c301002900\" CHECK (length(c301002900::text) <= 254)\n\n \"len_c301003400\" CHECK (length(c301003400::text) <= 255)\n\n \"len_c301047700\" CHECK (length(c301047700::text) <= 254)\n\n \"len_c301047800\" CHECK (length(c301047800::text) <= 38)\n\n \"len_c301089100\" CHECK (length(c301089100::text) <= 80)\n\n \"len_c301136600\" CHECK (length(c301136600::text) <= 254)\n\n \"len_c301136800\" CHECK (length(c301136800::text) <= 254)\n\n \"len_c301137100\" CHECK (length(c301137100::text) <= 254)\n\n \"len_c301137200\" CHECK (length(c301137200::text) <= 254)\n\n \"len_c301137300\" CHECK (length(c301137300::text) <= 254)\n\n \"len_c301137400\" CHECK (length(c301137400::text) <= 254)\n\n \"len_c301186800\" CHECK (length(c301186800::text) <= 254)\n\n \"len_c4\" CHECK (length(c4::text) <= 254)\n\n \"len_c400079600\" CHECK (length(c400079600::text) <= 38)\n\n \"len_c400127400\" CHECK (length(c400127400::text) <= 127)\n\n \"len_c400128800\" CHECK (length(c400128800::text) <= 255)\n\n \"len_c400128900\" CHECK (length(c400128900::text) <= 255)\n\n \"len_c400129200\" CHECK (length(c400129200::text) <= 38)\n\n \"len_c400130900\" CHECK (length(c400130900::text) <= 38)\n\n \"len_c400131000\" CHECK (length(c400131000::text) <= 38)\n\n \"len_c400131200\" CHECK (length(c400131200::text) <= 255)\n\n \"len_c400131300\" CHECK (length(c400131300::text) <= 255)\n\n \"len_c490001289\" CHECK (length(c490001289::text) <= 127)\n\n \"len_c490008000\" CHECK (length(c490008000::text) <= 40)\n\n \"len_c490008100\" CHECK (length(c490008100::text) <= 40)\n\n \"len_c490009000\" CHECK (length(c490009000::text) <= 40)\n\n \"len_c490009100\" CHECK (length(c490009100::text) <= 40)\n\n \"len_c5\" CHECK (length(c5::text) <= 254)\n\n \"len_c530010100\" CHECK (length(c530010100::text) <= 254)\n\n \"len_c530010200\" CHECK (length(c530010200::text) <= 254)\n\n \"len_c530035200\" CHECK (length(c530035200::text) <= 255)\n\n \"len_c530058400\" CHECK (length(c530058400::text) <= 254)\n\n \"len_c530058500\" CHECK (length(c530058500::text) <= 254)\n\n \"len_c530059800\" CHECK (length(c530059800::text) <= 255)\n\n \"len_c530060200\" CHECK (length(c530060200::text) <= 255)\n\n \"len_c530062400\" CHECK (length(c530062400::text) <= 254)\n\n \"len_c530067930\" CHECK (length(c530067930::text) <= 127)\n\n \"len_c530071130\" CHECK (length(c530071130::text) <= 128)\n\n \"len_c530071180\" CHECK (length(c530071180::text) <= 128)\n\n \"len_c530072336\" CHECK (length(c530072336::text) <= 254)\n\n \"len_c60513\" CHECK (length(c60513::text) <= 255)\n\n \"len_c60914\" CHECK (length(c60914::text) <= 255)\n\n \"len_c60989\" CHECK (length(c60989::text) <= 255)\n\n \"len_c8\" CHECK (length(c8::text) <= 254)\n\n \n\n \n\n\\d+: extra argument \">>c:/table_schemat.txt\" ignored\n\n \n\nNote : No custom functions used.\n\n \n\n \n\n3. \nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='T776';\n\n't776',13295,'110743',0,'r',95,false,,'108920832'\n\n \n\n4. 
\nExplain (Analyze, Buffers)- \n\n \n\nPREPARE query (citext,citext,int,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext)\n as \n\nSELECT\n\n T776.C179,\n\n T776.C1 \n\nFROM\n\n T776 \n\nWHERE\n\n (\n\n(T776.C400129200 = $1) \n\n AND \n\n (\n\n T776.C400127400 = $2\n\n )\n\n AND \n\n (\n\n(T776.C400129100 <> $3) \n\n OR \n\n (\n\n T776.C400129100 IS NULL\n\n )\n\n )\n\n AND \n\n (\n\n(T776.C179 = $4) \n\n OR \n\n (\n\n T776.C179 = $5\n\n )\n\n OR \n\n (\n\n T776.C179 = $6\n\n )\n\n OR \n\n (\n\n T776.C179 = $7\n\n )\n\n OR \n\n (\n\n T776.C179 = $8\n\n )\n\n OR \n\n (\n\n T776.C179 = $9\n\n )\n\n OR \n\n (\n\n T776.C179 = $10\n\n )\n\n OR \n\n (\n\n T776.C179 = $11\n\n )\n\n OR \n\n (\n\n T776.C179 = $12\n\n )\n\n OR \n\n (\n\n T776.C179 = $13\n\n )\n\n OR \n\n (\n\n T776.C179 = $14\n\n )\n\n OR \n\n (\n\n T776.C179 = $15\n\n )\n\n OR \n\n (\n\n T776.C179 = $16\n\n )\n\n OR \n\n (\n\n T776.C179 = $17\n\n ) \n\n OR \n\n (\n\n T776.C179 = $18\n\n )\n\n OR \n\n (\n\n T776.C179 = $19\n\n )\n\n OR \n\n (\n\n T776.C179 = $20\n\n )\n\n OR \n\n (\n\n T776.C179 = $21\n\n )\n\n OR \n\n (\n\n T776.C179 = $22\n\n )\n\n OR \n\n (\n\n T776.C179 = $23\n\n )\n\n OR \n\n (\n\n T776.C179 = $24\n\n )\n\n OR \n\n (\n\n T776.C179 = $25\n\n )\n\n OR \n\n (\n\n T776.C179 = $26\n\n )\n\n OR \n\n (\n\n T776.C179 = $27\n\n )\n\n OR \n\n (\n\n T776.C179 = $28\n\n )\n\n OR \n\n (\n\n T776.C179 = $29\n\n )\n\n OR \n\n (\n\n T776.C179 = $30\n\n )\n\n OR \n\n (\n\n T776.C179 = $31\n\n )\n\n OR \n\n (\n\n T776.C179 = $32\n\n )\n\n OR \n\n (\n\n T776.C179 = $33\n\n )\n\n OR \n\n (\n\n T776.C179 = $34\n\n )\n\n OR \n\n (\n\n T776.C179 = $35\n\n )\n\n OR \n\n (\n\n T776.C179 = $36\n\n )\n\n OR \n\n (\n\n T776.C179 = $37\n\n )\n\n OR \n\n (\n\n T776.C179 = $38\n\n )\n\n OR \n\n (\n\n T776.C179 = $39\n\n )\n\n OR \n\n (\n\n T776.C179 = $40\n\n )\n\n OR \n\n (\n\n T776.C179 = $41\n\n )\n\n OR \n\n (\n\n T776.C179 = $42\n\n )\n\n OR \n\n (\n\n T776.C179 = $43\n\n )\n\n OR \n\n (\n\n T776.C179 = $44\n\n )\n\n OR \n\n (\n\n T776.C179 = $45\n\n )\n\n OR \n\n (\n\n T776.C179 = $46\n\n )\n\n OR \n\n (\n\n T776.C179 = $47\n\n )\n\n OR \n\n (\n\n T776.C179 = $48\n\n )\n\n OR \n\n (\n\n T776.C179 = $49\n\n )\n\n OR \n\n (\n\n T776.C179 = $50\n\n )\n\n OR \n\n (\n\n T776.C179 = $51\n\n )\n\n )\n\n )\n\nORDER BY\n\n T776.C1 ASC LIMIT 2001 OFFSET 0;\n\n \n\n \n\nExplain (analyze,buffers) Execute query('0'::citext,'DATASET1M'::citext, 1,'OI-d791e838d0354ea59aa1c04622b7c8be'::citext, 'OI-44502144c7be49f4840d9d30c724f11b'::citext, 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext, 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext,\n 'OI-9239a9fa93c9459387d564940c0b4289'::citext, 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext, 'OI-8e365fa8461043a69950a638d3f3830a'::citext, 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext, 'OI-df0d9473d3934de29435d1c22fc9a269'::citext, 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext,\n 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext, 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext, 'OI-d0c049f6459e4174bb4e2ea025104298'::citext, 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext, 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext, 'OI-4316868d400d450fb60bb620a89778f2'::citext,\n 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext, 
'OI-fbb28f59448d44adb65c1145b94e23fc'::citext, 'OI-02577caeab904f37b6d13bb761805e02'::citext, 'OI-ecde76cbefd847ed9602a2c875529123'::citext, 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext, 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext,\n 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext, 'OI-81071273eacc44c4a46180be3a7d6a04'::citext, 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext, 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext, 'OI-7fc180b8d2944391b41ed90d70915357'::citext, 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext,\n 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext, 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext, 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext, 'OI-78263146f1694c39935578c3fa4c6415'::citext, 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext, 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext,\n 'OI-83e223fa1b364ac8b20e396b21387758'::citext, 'OI-a6eb0ec674d242b793a26b259d15435f'::citext, 'OI-195dfbe207a64130b3bc686bfdabe051'::citext, 'OI-7ba86277cbce489694ba03c98e7d2059'::citext, 'OI-c7675935bd974244939ccac9181d9129'::citext, 'OI-64c958575289438bb86455ed81517df1'::citext,\n 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext, 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext, 'OI-42de43dda54a4a018c0038c0de241da1'::citext, 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext, 'OI-56e85efaaa5f42c0913fed3745687a23'::citext, 'OI-def2602379db49cfadf6c31d7dfc4872'::citext,\n 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext, 'OI-6f3333da01f349a3a17a5714a82530a6'::citext);\n\n \n \n \n\n4.a ) Explain (Analyze,Buffers) output for first 5 runs.\n\n'Limit (cost=402.71..402.74 rows=12 width=52) (actual time=3.185..3.266 rows=48 loops=1)'\n\n' Buffers: shared hit=184'\n\n' -> Sort (cost=402.71..402.74 rows=12 width=52) (actual time=3.179..3.207 rows=48 loops=1)'\n\n' Sort Key: c1'\n\n' Sort Method: quicksort Memory: 31kB'\n\n' Buffers: shared hit=184'\n\n' -> Bitmap Heap Scan on t776 (cost=212.54..402.49 rows=12 width=52) (actual time=2.629..2.794 rows=48 loops=1)'\n\n' Recheck Cond: ((c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext) OR (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext) OR (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext) OR (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext)\n OR (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext) OR (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext) OR (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext) OR (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext) OR (c179 = 'OI-df0d9473d3934de29435d1c22fc9a269'::citext)\n OR (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext) OR (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext) OR (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext) OR (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext) OR (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext)\n OR (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext) OR (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext) OR (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext) OR (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext) OR (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext)\n OR (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext) OR (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext) OR (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext) OR (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext) OR (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext)\n OR (c179 = 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext) OR (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext) OR (c179 = 
'OI-7fc180b8d2944391b41ed90d70915357'::citext) OR (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext) OR (c179 = 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext)\n OR (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext) OR (c179 = 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext) OR (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext) OR (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext) OR (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext)\n OR (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext) OR (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext) OR (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext) OR (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext) OR (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext)\n OR (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext) OR (c179 = 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext) OR (c179 = 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext) OR (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext) OR (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext)\n OR (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext) OR (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext) OR (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext) OR (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext))'\n\n' Filter: (((c400129100 <> 1) OR (c400129100 IS NULL)) AND (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext))'\n\n' Heap Blocks: exact=39'\n\n' Buffers: shared hit=184'\n\n' -> BitmapOr (cost=212.54..212.54 rows=48 width=0) (actual time=2.607..2.607 rows=0 loops=1)'\n\n' Buffers: shared hit=145'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.065..0.065 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.087..0.087 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.056..0.056 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.061..0.061 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 
'OI-df0d9473d3934de29435d1c22fc9a269'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.044..0.044 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.041..0.041 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.044..0.044 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.055..0.055 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.101..0.101 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 
'OI-74cf5387522b4a238483b258f3b0bb7a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.048..0.048 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.053..0.053 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7fc180b8d2944391b41ed90d70915357'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.042..0.042 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.057..0.057 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.048..0.048 rows=1 loops=1)'\n\n' Index Cond: (c179 = 
'OI-05e14b018be14c4ea60f977f91b3fe04'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.058..0.058 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext)'\n\n' Buffers: shared hit=4'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.049..0.049 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.054..0.054 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext)'\n\n' Buffers: shared hit=3'\n\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n\n' Index Cond: (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext)'\n\n' Buffers: shared hit=3'\n\n'Execution time: 3.497 ms'\n\n \n\nLink to Analyze output for Custom Plan - \nhttps://explain.depesz.com/s/6u6H\n\n \n\n \n\n4.b) Explain (Analyze,Buffers) output from 6th run onwards\n\n \n \n\n \n\n'Limit (cost=12.67..12.68 rows=1 width=52) (actual time=5544.509..5544.590 rows=48 loops=1)'\n\n' Buffers: shared hit=55114'\n\n' -> Sort (cost=12.67..12.68 rows=1 width=52) (actual time=5544.507..5544.535 rows=48 loops=1)'\n\n' Sort Key: c1'\n\n' Sort Method: quicksort Memory: 31kB'\n\n' Buffers: shared hit=55114'\n\n' -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)'\n\n' Index Cond: ((c400129200 = $1) AND (c400127400 = $2))'\n\n' Filter: (((c400129100 <> $3) OR (c400129100 IS NULL)) AND ((c179 = $4) OR (c179 = $5) OR (c179 = $6) OR (c179 = $7) OR (c179 = $8) OR (c179 = $9) OR (c179 = $10) OR (c179 = $11) OR (c179 = $12) OR (c179 = $13) OR (c179 = $14) OR (c179 = $15)\n OR (c179 = $16) OR (c179 = $17) OR (c179 = $18) OR (c179 = $19) OR (c179 = $20) OR (c179 = $21) OR (c179 = $22) OR (c179 = $23) OR (c179 = $24) OR (c179 = $25) OR (c179 = $26) OR (c179 = $27) OR (c179 = $28) OR (c179 = $29) OR (c179 = $30) OR (c179 = $31)\n OR (c179 = $32) OR (c179 = $33) OR (c179 = $34) OR (c179 = $35) OR (c179 = $36) OR (c179 = $37) OR (c179 = $38) OR (c179 = $39) OR (c179 = $40) OR (c179 = $41) OR (c179 = $42) OR (c179 = $43) OR (c179 = $44) OR (c179 = $45) OR (c179 = $46) OR (c179 = $47)\n OR (c179 = $48) OR (c179 = $49) OR (c179 = $50) OR (c179 = $51)))'\n\n' Rows Removed by Filter: 55322'\n\n' Buffers: shared hit=55114'\n\n'Execution time: 5544.701 ms'\n\n \n\n \n\nLink to Analyze output for Generic Plan - \nhttps://explain.depesz.com/s/7jph\n\n \n\n5. \nHistory - Always slower on 6th iteration since Postgres 9.2\n\n6. 
\nSystem Information -\n\nOS Name Microsoft Windows Server 2008 R2 Enterprise\n\nVersion 6.1.7601 Service Pack 1 Build 7601\n\nOther OS Description Not Available\n\nOS Manufacturer Microsoft Corporation\n\nSystem Name VW-AUS-ATM-PG01\n\nSystem Manufacturer VMware, Inc.\n\nSystem Model VMware Virtual Platform\n\nSystem Type x64-based PC\n\nProcessor Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz, 2593 Mhz, 3 Core(s), 3 Logical Processor(s)\n\nProcessor Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz, 2593 Mhz, 3 Core(s), 3 Logical Processor(s)\n\nBIOS Version/Date Phoenix Technologies LTD 6.00, 9/21/2015\n\nSMBIOS Version 2.4\n\nWindows Directory C:\\Windows\n\nSystem Directory C:\\Windows\\system32\n\nBoot Device \\Device\\HarddiskVolume1\n\nLocale United States\n\nHardware Abstraction Layer Version = \"6.1.7601.24354\"\n\nUser Name Not Available\n\nTime Zone Central Daylight Time\n\nInstalled Physical Memory (RAM) 24.0 GB\n\nTotal Physical Memory 24.0 GB\n\nAvailable Physical Memory 21.1 GB\n\nTotal Virtual Memory 24.0 GB\n\nAvailable Virtual Memory 17.3 GB\n\nPage File Space 0 bytes\n \n \n-Thanks and Regards,\nSameer Naik",
"msg_date": "Mon, 29 Apr 2019 10:36:20 +0000",
"msg_from": "\"Naik, Sameer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Generic Plans for Prepared Statement are 158155 times slower than\n Custom Plans"
},
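The switch described in the report above is easy to reproduce outside the OP's schema. The sketch below uses an invented two-column table (not the real T776) whose distribution mimics the one in this thread; it is only an illustration of the custom-then-generic behaviour, and the plan_cache_mode override mentioned in the comments exists only on PostgreSQL 12 and later, not on the 9.6/10 servers discussed here.

    CREATE TABLE demo (flag text, grp text);
    INSERT INTO demo SELECT '0', 'DATASET1M' FROM generate_series(1, 50000);
    INSERT INTO demo SELECT i::text, 'DATASET2M' FROM generate_series(1, 50000) i;
    ANALYZE demo;

    PREPARE p(text, text) AS
        SELECT count(*) FROM demo WHERE flag = $1 AND grp = $2;
    -- Executions 1-5 are planned with the supplied values ('0', 'DATASET1M').
    EXECUTE p('0', 'DATASET1M');
    -- From the 6th execution onward the cached generic plan may be used instead,
    -- if its estimated cost looks no worse than the average custom-plan cost.
    -- On PostgreSQL 12+ this can be overridden with:
    --   SET plan_cache_mode = force_custom_plan;
    EXPLAIN EXECUTE p('0', 'DATASET1M');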
{
"msg_contents": "On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\n> Hi,\n> \n> Since Postgres 9.2, for prepared statements, the CBO automatically switches\n> from Custom Plan to Generic plan on the sixth iteration (reference backend/\n> utils/cache/plancache.c).\n\nThis is not totally true. The PREPARE manual page for PG 11 says:\n\n Prepared statements can use generic plans rather than re-planning\n with each set of supplied EXECUTE values. This occurs immediately\n for prepared statements with no parameters; otherwise it occurs\n only after five or more executions produce plans whose estimated\n--> cost average (including planning overhead) is more expensive than\n--> the generic plan cost estimate. Once a generic plan is chosen, it\n is used for the remaining lifetime of the prepared statement. Using\n EXECUTE values which are rare in columns with many duplicates can\n generate custom plans that are so much cheaper than the generic\n plan, even after adding planning overhead, that the generic plan\n might never be used.\n\nAlso, PG 9.2 is EOL so are you actually using that or something more\nrecent? It would be interesting to see if this is true on a supported\nversion of Postgres.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 29 Apr 2019 09:36:07 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower\n than Custom Plans"
},
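One way to see which of the two behaviours the quoted paragraph describes is the shape of the EXPLAIN output itself, exactly as in sections 4.a and 4.b of the original report: a custom plan shows the supplied literals, a generic plan shows $n parameter markers. A minimal illustration, reusing the invented demo table from the sketch above:

    PREPARE p2(text, text) AS
        SELECT count(*) FROM demo WHERE flag = $1 AND grp = $2;
    EXPLAIN EXECUTE p2('0', 'DATASET1M');
    -- Custom plan:  Filter: ((flag = '0'::text) AND (grp = 'DATASET1M'::text))
    -- Generic plan: Filter: ((flag = $1) AND (grp = $2))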
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\n>> Since Postgres 9.2, for prepared statements, the CBO automatically switches\n>> from Custom Plan to Generic plan on the sixth iteration (reference backend/\n>> utils/cache/plancache.c).\n\n> This is not totally true.\n\nYeah, that's a pretty inaccurate statement of the behavior.\n\nThe problem seems to be that the actual values being used for\nc400129200 and c400127400 are quite common in the dataset,\nso that when considering\n\nFilter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\nthe planner makes a roughly correct assessment that there are a lot of\nsuch rows, so it prefers to index on the basis of the giant OR clause\ninstead, even though that's fairly expensive. But, when considering\nthe generic case\n\n -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\nit's evidently guessing that just a few rows will match the index\ncondition (no more than about 3 given the cost number), making this plan\nlook much cheaper, so it goes with this plan. I wonder what the actual\ndistribution of those keys is.\n\nIn v10 and later, it's quite possible that creating extended stats\non the combination of those two columns would produce a better\nestimate. Won't help OP on 9.6, though.\n\nThis isn't the first time we've seen a plan-choice failure of this sort.\nI've wondered if we should make the plancache simply disbelieve generic\ncost estimates that are actually cheaper than the custom plans, on the\ngrounds that they must be estimation errors. In principle a generic\nplan could never really be better than a custom plan; so if it looks\nthat way on a cost basis, what that probably means is that the actual\nparameter values are outliers of some sort (e.g. extremely common),\nand the custom plan \"knows\" that it's going to be taking a hit from\nthat, but the generic plan doesn't. In this sort of situation, going\nwith the generic plan could be really disastrous, which is exactly\nwhat the OP is seeing (and what we've seen reported before).\n\nHowever, I'm not sure how to tune this idea so that it doesn't end up\nrejecting perfectly good generic plans. It's likely that there will be\nsome variation in the cost estimates between the generic and specific\ncases, even if the plan structure is exactly the same; and that\nvariation could go in either direction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:35:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower than\n Custom Plans"
},
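A concrete form of the extended-statistics suggestion above, for v10 and later. The statistics object name is made up here; the table and column names are the ones from the thread, and whether it actually improves the generic estimate would still need to be verified with EXPLAIN on the OP's data:

    CREATE STATISTICS t776_flag_dataset_stats (ndistinct, dependencies)
        ON c400129200, c400127400 FROM t776;
    ANALYZE t776;
    -- Re-check the estimate afterwards:
    EXPLAIN SELECT c179, c1 FROM t776
    WHERE c400129200 = '0' AND c400127400 = 'DATASET1M';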
{
"msg_contents": "Hi,\n\nOn 2019-04-29 10:35:39 -0400, Tom Lane wrote:\n> This isn't the first time we've seen a plan-choice failure of this sort.\n> I've wondered if we should make the plancache simply disbelieve generic\n> cost estimates that are actually cheaper than the custom plans, on the\n> grounds that they must be estimation errors. In principle a generic\n> plan could never really be better than a custom plan; so if it looks\n> that way on a cost basis, what that probably means is that the actual\n> parameter values are outliers of some sort (e.g. extremely common),\n> and the custom plan \"knows\" that it's going to be taking a hit from\n> that, but the generic plan doesn't. In this sort of situation, going\n> with the generic plan could be really disastrous, which is exactly\n> what the OP is seeing (and what we've seen reported before).\n> \n> However, I'm not sure how to tune this idea so that it doesn't end up\n> rejecting perfectly good generic plans. It's likely that there will be\n> some variation in the cost estimates between the generic and specific\n> cases, even if the plan structure is exactly the same; and that\n> variation could go in either direction.\n\nYea, I've both seen the \"generic is cheaper due to averaged selectivity\"\nand the \"insignificant cost variations lead to always prefer custom\nplan\" problems in production.\n\nI've also - but less severely - seen that the \"planning cost\" we add to\nthe custom plan leads to the generic plan to always be preferred. In\nparticular for indexed queries, on system that set random_page_cost =\nseq_page_cost = 1 (due to SSD or expectation that workload is entirely\ncached), the added cost from cached_plan_cost() can be noticable in\ncomparison to the estimated cost of the total query.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 09:23:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower\n than Custom Plans"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2019-04-29 10:35:39 -0400, Tom Lane wrote:\n>> This isn't the first time we've seen a plan-choice failure of this sort.\n>> I've wondered if we should make the plancache simply disbelieve generic\n>> cost estimates that are actually cheaper than the custom plans, on the\n>> grounds that they must be estimation errors. In principle a generic\n>> plan could never really be better than a custom plan; so if it looks\n>> that way on a cost basis, what that probably means is that the actual\n>> parameter values are outliers of some sort (e.g. extremely common),\n>> and the custom plan \"knows\" that it's going to be taking a hit from\n>> that, but the generic plan doesn't. In this sort of situation, going\n>> with the generic plan could be really disastrous, which is exactly\n>> what the OP is seeing (and what we've seen reported before).\n>> \n>> However, I'm not sure how to tune this idea so that it doesn't end up\n>> rejecting perfectly good generic plans. It's likely that there will be\n>> some variation in the cost estimates between the generic and specific\n>> cases, even if the plan structure is exactly the same; and that\n>> variation could go in either direction.\n\n> Yea, I've both seen the \"generic is cheaper due to averaged selectivity\"\n> and the \"insignificant cost variations lead to always prefer custom\n> plan\" problems in production.\n\nI wonder if we couldn't do something based on having seen several\ndifferent custom plans before we try to make this decision. It'd be\njust about free to track the min and max custom cost estimates, along\nwith their average. The case where it is sensible to be switching to\na generic plan is where all the plans come out looking more or less\nalike --- if the workload is such that we get markedly different plans\nfor different inputs, then we'd probably better just eat the cost of\nplanning every time. So maybe the rule should be something like\n\"if the min and max custom costs, as well as the generic cost\nestimate, are all within 10% of the average custom cost, then it's\nokay to switch to generic\". We might need to collect more than 5\ncustom estimates before we put much faith in the decision, too.\n\n> I've also - but less severely - seen that the \"planning cost\" we add to\n> the custom plan leads to the generic plan to always be preferred.\n\nYeah; the planning cost business is very much of a hack, because we\ndon't have a good handle on how that really relates to execution\ncosts. But if we're thinking of the decision as being risk-based,\nwhich is basically what I'm suggesting above, maybe we could just\ndrop that whole component of the algorithm?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 12:51:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower than\n Custom Plans"
},
{
"msg_contents": ">The problem seems to be that the actual values being used for\n>c400129200 and c400127400 are quite common in the dataset, so that when considering\n\n>Filter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\n>the planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n> -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n> Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\n> it's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\n\nDistribution of the keys c400129200 and c400127400 .\n\nThe distribution of c400129200 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400129200 is 0. For each of the remaining 55,373 records the value of c400129200 is distinct.\n\n\nThe distribution of c400127400 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400127400 is 'DATASET1M' . For remaining 55,373 records the value of c400127400 the value is same and is ' 'DATASET2M' .\n\n\n-Thanks and Regards,\nSameer Naik\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, April 29, 2019 8:06 PM\nTo: Bruce Momjian <[email protected]>\nCc: Naik, Sameer <[email protected]>; [email protected]\nSubject: [EXTERNAL] Re: Generic> Plans for Prepared Statement are 158155 times slower than Custom Plans\n\nBruce Momjian <[email protected]> writes:\n> On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\n>> Since Postgres 9.2, for prepared statements, the CBO automatically \n>> switches from Custom Plan to Generic plan on the sixth iteration \n>> (reference backend/ utils/cache/plancache.c).\n\n> This is not totally true.\n\nYeah, that's a pretty inaccurate statement of the behavior.\n\nThe problem seems to be that the actual values being used for\nc400129200 and c400127400 are quite common in the dataset, so that when considering\n\nFilter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\nthe planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\nit's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\n\n\nIn v10 and later, it's quite possible that creating extended stats on the combination of those two columns would produce a better estimate. Won't help OP on 9.6, though.\n\nThis isn't the first time we've seen a plan-choice failure of this sort.\nI've wondered if we should make the plancache simply disbelieve generic cost estimates that are actually cheaper than the custom plans, on the grounds that they must be estimation errors. 
In principle a generic plan could never really be better than a custom plan; so if it looks that way on a cost basis, what that probably means is that the actual parameter values are outliers of some sort (e.g. extremely common), and the custom plan \"knows\" that it's going to be taking a hit from that, but the generic plan doesn't. In this sort of situation, going with the generic plan could be really disastrous, which is exactly what the OP is seeing (and what we've seen reported before).\n\nHowever, I'm not sure how to tune this idea so that it doesn't end up rejecting perfectly good generic plans. It's likely that there will be some variation in the cost estimates between the generic and specific cases, even if the plan structure is exactly the same; and that variation could go in either direction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 04:58:50 +0000",
"msg_from": "\"Naik, Sameer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Generic Plans for Prepared Statement are 158155 times slower\n than\n Custom Plans"
},
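Given that distribution (half the table sharing one value in each column, the rest unique), the numbers the planner works from can be read directly out of pg_stats. This is only a diagnostic sketch, using the table and column names from the thread:

    SELECT attname, null_frac, n_distinct,
           most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 't776'
      AND attname IN ('c400129200', 'c400127400');
    -- With literal values ('0', 'DATASET1M') the planner can match them against
    -- most_common_vals and sees roughly 50% selectivity; with parameters ($1, $2)
    -- it falls back to an average per-value estimate driven largely by n_distinct,
    -- which is consistent with the generic plan expecting rows=1.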
{
"msg_contents": "Sameer,were you able to resolve it? \n\nI am not sure if this is very common in postges - I doubt though but have not seen such a drastic performance degradation and that too when planner making the call. \n\nDeepak\n\n\n On Tuesday, April 30, 2019, 1:27:14 AM PDT, Naik, Sameer <[email protected]> wrote: \n \n >The problem seems to be that the actual values being used for\n>c400129200 and c400127400 are quite common in the dataset, so that when considering\n\n>Filter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\n>the planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n> -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n> Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\n> it's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\n\nDistribution of the keys c400129200 and c400127400 .\n\nThe distribution of c400129200 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400129200 is 0. For each of the remaining 55,373 records the value of c400129200 is distinct.\n\n\nThe distribution of c400127400 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400127400 is 'DATASET1M' . For remaining 55,373 records the value of c400127400 the value is same and is ' 'DATASET2M' .\n\n\n-Thanks and Regards,\nSameer Naik\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, April 29, 2019 8:06 PM\nTo: Bruce Momjian <[email protected]>\nCc: Naik, Sameer <[email protected]>; [email protected]\nSubject: [EXTERNAL] Re: Generic> Plans for Prepared Statement are 158155 times slower than Custom Plans\n\nBruce Momjian <[email protected]> writes:\n> On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\n>> Since Postgres 9.2, for prepared statements, the CBO automatically \n>> switches from Custom Plan to Generic plan on the sixth iteration \n>> (reference backend/ utils/cache/plancache.c).\n\n> This is not totally true.\n\nYeah, that's a pretty inaccurate statement of the behavior.\n\nThe problem seems to be that the actual values being used for\nc400129200 and c400127400 are quite common in the dataset, so that when considering\n\nFilter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\nthe planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\nit's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. 
I wonder what the actual distribution of those keys is.\n\n\nIn v10 and later, it's quite possible that creating extended stats on the combination of those two columns would produce a better estimate. Won't help OP on 9.6, though.\n\nThis isn't the first time we've seen a plan-choice failure of this sort.\nI've wondered if we should make the plancache simply disbelieve generic cost estimates that are actually cheaper than the custom plans, on the grounds that they must be estimation errors. In principle a generic plan could never really be better than a custom plan; so if it looks that way on a cost basis, what that probably means is that the actual parameter values are outliers of some sort (e.g. extremely common), and the custom plan \"knows\" that it's going to be taking a hit from that, but the generic plan doesn't. In this sort of situation, going with the generic plan could be really disastrous, which is exactly what the OP is seeing (and what we've seen reported before).\n\nHowever, I'm not sure how to tune this idea so that it doesn't end up rejecting perfectly good generic plans. It's likely that there will be some variation in the cost estimates between the generic and specific cases, even if the plan structure is exactly the same; and that variation could go in either direction.\n\n regards, tom lane\n",
"msg_date": "Thu, 9 May 2019 04:14:15 +0000 (UTC)",
"msg_from": "Deepak Somaiya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Generic Plans for Prepared Statement are 158155 times\n slower than Custom Plans"
},
{
"msg_contents": "Deepak,\r\nI changed the datatype from citext to text and now everything works fine.\r\nThe data distribution is same, plan is same, yet there is a huge performance degradation when citext is used instead of text.\r\nHowever the business case requires case insensitive string handling.\r\nI am looking forward to some expert advice here when dealing with citext data type.\r\n\r\n\r\n-Thanks and Regards,\r\nSameer Naik\r\n\r\nFrom: Deepak Somaiya [mailto:[email protected]]\r\nSent: Thursday, May 9, 2019 9:44 AM\r\nTo: Tom Lane <[email protected]>; Bruce Momjian <[email protected]>; Naik, Sameer <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL] Re: Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans\r\n\r\nSameer,\r\nwere you able to resolve it?\r\n\r\nI am not sure if this is very common in postges - I doubt though but have not seen such a drastic performance degradation and that too when planner making the call.\r\n\r\nDeepak\r\n\r\n\r\nOn Tuesday, April 30, 2019, 1:27:14 AM PDT, Naik, Sameer <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\n>The problem seems to be that the actual values being used for\r\n>c400129200 and c400127400 are quite common in the dataset, so that when considering\r\n\r\n>Filter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\r\n\r\n>the planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\r\n\r\n> -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\r\n> Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\r\n\r\n> it's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\r\n\r\nDistribution of the keys c400129200 and c400127400 .\r\n\r\nThe distribution of c400129200 is as follows-\r\nIn entire table having 110743 records, there are 55370 records for which the value of c400129200 is 0. For each of the remaining 55,373 records the value of c400129200 is distinct.\r\n\r\n\r\nThe distribution of c400127400 is as follows-\r\nIn entire table having 110743 records, there are 55370 records for which the value of c400127400 is 'DATASET1M' . 
For remaining 55,373 records the value of c400127400 the value is same and is ' 'DATASET2M' .\r\n\r\n\r\n-Thanks and Regards,\r\nSameer Naik\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Monday, April 29, 2019 8:06 PM\r\nTo: Bruce Momjian <[email protected]<mailto:[email protected]>>\r\nCc: Naik, Sameer <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>\r\nSubject: [EXTERNAL] Re: Generic> Plans for Prepared Statement are 158155 times slower than Custom Plans\r\n\r\nBruce Momjian <[email protected]<mailto:[email protected]>> writes:\r\n> On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\r\n>> Since Postgres 9.2, for prepared statements, the CBO automatically\r\n>> switches from Custom Plan to Generic plan on the sixth iteration\r\n>> (reference backend/ utils/cache/plancache.c).\r\n\r\n> This is not totally true.\r\n\r\nYeah, that's a pretty inaccurate statement of the behavior.\r\n\r\nThe problem seems to be that the actual values being used for\r\nc400129200 and c400127400 are quite common in the dataset, so that when considering\r\n\r\nFilter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\r\n\r\nthe planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\r\n\r\n -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\r\n Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\r\n\r\nit's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\r\n\r\n\r\nIn v10 and later, it's quite possible that creating extended stats on the combination of those two columns would produce a better estimate. Won't help OP on 9.6, though.\r\n\r\nThis isn't the first time we've seen a plan-choice failure of this sort.\r\nI've wondered if we should make the plancache simply disbelieve generic cost estimates that are actually cheaper than the custom plans, on the grounds that they must be estimation errors. In principle a generic plan could never really be better than a custom plan; so if it looks that way on a cost basis, what that probably means is that the actual parameter values are outliers of some sort (e.g. extremely common), and the custom plan \"knows\" that it's going to be taking a hit from that, but the generic plan doesn't. In this sort of situation, going with the generic plan could be really disastrous, which is exactly what the OP is seeing (and what we've seen reported before).\r\n\r\nHowever, I'm not sure how to tune this idea so that it doesn't end up rejecting perfectly good generic plans. 
It's likely that there will be some variation in the cost estimates between the generic and specific cases, even if the plan structure is exactly the same; and that variation could go in either direction.\r\n\r\n regards, tom lane\r\n",
"msg_date": "Fri, 17 May 2019 06:42:23 +0000",
"msg_from": "\"Naik, Sameer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Re: Generic Plans for Prepared Statement are 158155 times\n slower\n than Custom Plans"
},
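Two commonly used ways to keep case-insensitive lookups after moving the column to plain text; both are sketches only (the index and collation names are invented), and the second one requires PostgreSQL 12 or later:

    -- 1) An expression index plus lower() on both sides of every comparison:
    CREATE INDEX i776_lower_c179 ON t776 (lower(c179));
    SELECT c179, c1 FROM t776
    WHERE lower(c179) = lower('OI-d791e838d0354ea59aa1c04622b7c8be');

    -- 2) PostgreSQL 12+: a non-deterministic ICU collation on a text column
    --    (note that LIKE and regular-expression matching are not supported
    --    under such a collation):
    CREATE COLLATION case_insensitive
        (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
    -- ALTER TABLE t776 ALTER COLUMN c179 TYPE text COLLATE case_insensitive;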
{
"msg_contents": "wow this is interesting! \n@Tom, Bruce, David - Experts\nAny idea why would changing the datatype would cause so much degradation - this is even when plan remains the same ,data is same.\nDeepak\n On Friday, May 17, 2019, 2:36:05 AM PDT, Naik, Sameer <[email protected]> wrote: \n \n \nDeepak,\n \nI changed the datatype from citext to text and now everything works fine.\n \nThe data distribution is same, plan is same, yet there is a huge performance degradation when citext is used instead of text.\n \nHowever the business case requires case insensitive string handling.\n \nI am looking forward to some expert advice here when dealing with citext data type.\n \n \n \n \n \n-Thanks and Regards,\n \nSameer Naik\n \n \n \nFrom: Deepak Somaiya [mailto:[email protected]] \nSent: Thursday, May 9, 2019 9:44 AM\nTo: Tom Lane <[email protected]>; Bruce Momjian <[email protected]>; Naik, Sameer <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL] Re: Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans\n \n \n \nSameer,\n \nwere you able to resolve it?\n \n \n \nI am not sure if this is very common in postges - I doubt though but have not seen such a drastic performance degradation and that too when planner making the call. \n \n \n \nDeepak\n \n \n \n \n \nOn Tuesday, April 30, 2019, 1:27:14 AM PDT, Naik, Sameer <[email protected]> wrote:\n \n \n \n \n \n>The problem seems to be that the actual values being used for\n>c400129200 and c400127400 are quite common in the dataset, so that when considering\n\n>Filter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\n>the planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n> -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n> Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\n> it's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\n\nDistribution of the keys c400129200 and c400127400 .\n\nThe distribution of c400129200 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400129200 is 0. For each of the remaining 55,373 records the value of c400129200 is distinct.\n\n\nThe distribution of c400127400 is as follows- \nIn entire table having 110743 records, there are 55370 records for which the value of c400127400 is 'DATASET1M' . 
For remaining 55,373 records the value of c400127400 the value is same and is ' 'DATASET2M' .\n\n\n-Thanks and Regards,\nSameer Naik\n \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Monday, April 29, 2019 8:06 PM\nTo: Bruce Momjian <[email protected]>\nCc: Naik, Sameer <[email protected]>;[email protected]\nSubject: [EXTERNAL] Re: Generic> Plans for Prepared Statement are 158155 times slower than Custom Plans\n\nBruce Momjian <[email protected]> writes:\n> On Mon, Apr 29, 2019 at 10:36:20AM +0000, Naik, Sameer wrote:\n>> Since Postgres 9.2, for prepared statements, the CBO automatically \n>> switches from Custom Plan to Generic plan on the sixth iteration \n>> (reference backend/ utils/cache/plancache.c).\n\n> This is not totally true.\n\nYeah, that's a pretty inaccurate statement of the behavior.\n\nThe problem seems to be that the actual values being used for\nc400129200 and c400127400 are quite common in the dataset, so that when considering\n\nFilter: ... (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext)\n\nthe planner makes a roughly correct assessment that there are a lot of such rows, so it prefers to index on the basis of the giant OR clause instead, even though that's fairly expensive. But, when considering the generic case\n\n -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1190.399..5544.385 rows=48 loops=1)\n Index Cond: ((c400129200 = $1) AND (c400127400 = $2))\n\nit's evidently guessing that just a few rows will match the index condition (no more than about 3 given the cost number), making this plan look much cheaper, so it goes with this plan. I wonder what the actual distribution of those keys is.\n\n\nIn v10 and later, it's quite possible that creating extended stats on the combination of those two columns would produce a better estimate. Won't help OP on 9.6, though.\n\nThis isn't the first time we've seen a plan-choice failure of this sort.\nI've wondered if we should make the plancache simply disbelieve generic cost estimates that are actually cheaper than the custom plans, on the grounds that they must be estimation errors. In principle a generic plan could never really be better than a custom plan; so if it looks that way on a cost basis, what that probably means is that the actual parameter values are outliers of some sort (e.g. extremely common), and the custom plan \"knows\" that it's going to be taking a hit from that, but the generic plan doesn't. In this sort of situation, going with the generic plan could be really disastrous, which is exactly what the OP is seeing (and what we've seen reported before).\n\nHowever, I'm not sure how to tune this idea so that it doesn't end up rejecting perfectly good generic plans. It's likely that there will be some variation in the cost estimates between the generic and specific cases, even if the plan structure is exactly the same; and that variation could go in either direction.\n\n regards, tom lane\n",
"msg_date": "Mon, 20 May 2019 21:37:34 +0000 (UTC)",
"msg_from": "Deepak Somaiya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Generic Plans for Prepared Statement are 158155 times\n slower than Custom Plans"
},
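A minimal sketch of the extended-statistics idea quoted above, usable on PostgreSQL 10 and later (the statistics object name is illustrative, and whether it actually improves the generic-plan estimate for this table would have to be verified):

-- Tell the planner that these two columns are correlated, so the combined
-- selectivity of (c400129200 = ? AND c400127400 = ?) is not estimated as the
-- product of the individual selectivities.
CREATE STATISTICS t776_c400129200_c400127400_stx (dependencies)
    ON c400129200, c400127400
    FROM t776;

-- The statistics object is only populated by the next ANALYZE.
ANALYZE t776;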
{
"msg_contents": "On Mon, May 20, 2019 at 09:37:34PM +0000, Deepak Somaiya wrote:\n> wow this is interesting!�\n>@Tom, Bruce, David - Experts\n>Any idea why would changing the datatype would cause so much degradation - this is even when plan remains the same ,data is same.\n>Deepak\n> On Friday, May 17, 2019, 2:36:05 AM PDT, Naik, Sameer <[email protected]> wrote:\n>\n>\n>Deepak,\n>\n>I changed the datatype from citext to text and now everything works fine.\n>\n>The data distribution is same, plan is same, yet there is a huge performance degradation when citext is used instead of text.\n>\n>However the business case requires case insensitive string handling.\n>\n>I am looking forward to some expert advice here when dealing with citext data type.\n>\n> \n\nIt's generally a good idea to share explain analyze output for both\nversions of the query - both with citext and text.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 21 May 2019 00:16:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower\n than Custom Plans"
},
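For anyone trying to capture the comparison Tomas asks for: a prepared statement is planned against the actual parameter values for its first handful of executions, and the cached generic plan is only considered after that, so both plans can be obtained in one session. A trimmed-down sketch (the real statement in this thread takes 51 parameters; only two are kept here, so the plans will differ from the ones posted):

PREPARE q (citext, citext) AS
    SELECT c179, c1 FROM t776
    WHERE c400129200 = $1 AND c400127400 = $2;

-- The first five executions are planned for the supplied values (custom plans).
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');
-- From here on the generic plan (shown with $1/$2 in the output) may be used,
-- if its estimated cost looks no worse than the custom plans seen so far.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q('0', 'DATASET1M');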
{
"msg_contents": "@Tom, Bruce, David\n>> It's generally a good idea to share explain analyze output for both versions of the query - both with citext and text.\n\nBelow are the queries and explain plan output(custom plan and generic plan) for both versions (with citext and text)\n\nCase Insensitive -\n\nPREPARE slowQuery (citext,citext,int,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext,citext) as \nSELECT\n T776.C179,\n T776.C1 \nFROM\n T776 \nWHERE\n (\n(T776.C400129200 = $1) \n AND \n (\n T776.C400127400 = $2\n )\n AND \n (\n(T776.C400129100 <> $3) \n OR \n (\n T776.C400129100 IS NULL\n )\n )\n AND \n (\n(T776.C179 = $4) \n OR \n (\n T776.C179 = $5\n )\n OR \n (\n T776.C179 = $6\n )\n OR \n (\n T776.C179 = $7\n )\n OR \n (\n T776.C179 = $8\n )\n OR \n (\n T776.C179 = $9\n )\n OR \n (\n T776.C179 = $10\n )\n OR \n (\n T776.C179 = $11\n )\n OR \n (\n T776.C179 = $12\n )\n OR \n (\n T776.C179 = $13\n )\n OR \n (\n T776.C179 = $14\n )\n OR \n (\n T776.C179 = $15\n )\n OR \n (\n T776.C179 = $16\n )\n OR \n (\n T776.C179 = $17\n ) \n OR \n (\n T776.C179 = $18\n )\n OR \n (\n T776.C179 = $19\n )\n OR \n (\n T776.C179 = $20\n )\n OR \n (\n T776.C179 = $21\n )\n OR \n (\n T776.C179 = $22\n )\n OR \n (\n T776.C179 = $23\n )\n OR \n (\n T776.C179 = $24\n )\n OR \n (\n T776.C179 = $25\n )\n OR \n (\n T776.C179 = $26\n )\n OR \n (\n T776.C179 = $27\n )\n OR \n (\n T776.C179 = $28\n )\n OR \n (\n T776.C179 = $29\n )\n OR \n (\n T776.C179 = $30\n )\n OR \n (\n T776.C179 = $31\n )\n OR \n (\n T776.C179 = $32\n )\n OR \n (\n T776.C179 = $33\n )\n OR \n (\n T776.C179 = $34\n )\n OR \n (\n T776.C179 = $35\n )\n OR \n (\n T776.C179 = $36\n )\n OR \n (\n T776.C179 = $37\n )\n OR \n (\n T776.C179 = $38\n )\n OR \n (\n T776.C179 = $39\n )\n OR \n (\n T776.C179 = $40\n )\n OR \n (\n T776.C179 = $41\n )\n OR \n (\n T776.C179 = $42\n )\n OR \n (\n T776.C179 = $43\n )\n OR \n (\n T776.C179 = $44\n )\n OR \n (\n T776.C179 = $45\n )\n OR \n (\n T776.C179 = $46\n )\n OR \n (\n T776.C179 = $47\n )\n OR \n (\n T776.C179 = $48\n )\n OR \n (\n T776.C179 = $49\n )\n OR \n (\n T776.C179 = $50\n )\n OR \n (\n T776.C179 = $51\n )\n )\n )\nORDER BY\n T776.C1 ASC LIMIT 2001 OFFSET 0\n \n select count(*) from T776 where C400129200='0'\n \n Explain (analyze,buffers) Execute slowQuery('0'::citext,'DATASET1M'::citext, 1,'OI-d791e838d0354ea59aa1c04622b7c8be'::citext, 'OI-44502144c7be49f4840d9d30c724f11b'::citext, 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext, 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext, 'OI-9239a9fa93c9459387d564940c0b4289'::citext, 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext, 'OI-8e365fa8461043a69950a638d3f3830a'::citext, 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext, 'OI-df0d9473d3934de29435d1c22fc9a269'::citext, 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext, 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext, 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext, 'OI-d0c049f6459e4174bb4e2ea025104298'::citext, 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext, 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext, 'OI-4316868d400d450fb60bb620a89778f2'::citext, 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext, 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext, 'OI-02577caeab904f37b6d13bb761805e02'::citext, 'OI-ecde76cbefd847ed9602a2c875529123'::citext, 
'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext, 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext, 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext, 'OI-81071273eacc44c4a46180be3a7d6a04'::citext, 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext, 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext, 'OI-7fc180b8d2944391b41ed90d70915357'::citext, 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext, 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext, 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext, 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext, 'OI-78263146f1694c39935578c3fa4c6415'::citext, 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext, 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext, 'OI-83e223fa1b364ac8b20e396b21387758'::citext, 'OI-a6eb0ec674d242b793a26b259d15435f'::citext, 'OI-195dfbe207a64130b3bc686bfdabe051'::citext, 'OI-7ba86277cbce489694ba03c98e7d2059'::citext, 'OI-c7675935bd974244939ccac9181d9129'::citext, 'OI-64c958575289438bb86455ed81517df1'::citext, 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext, 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext, 'OI-42de43dda54a4a018c0038c0de241da1'::citext, 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext, 'OI-56e85efaaa5f42c0913fed3745687a23'::citext, 'OI-def2602379db49cfadf6c31d7dfc4872'::citext, 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext, 'OI-6f3333da01f349a3a17a5714a82530a6'::citext)\n\n\n\nCustom Plan for Case Insensitive ---\n'Limit (cost=402.71..402.74 rows=12 width=52) (actual time=4.724..4.803 rows=48 loops=1)'\n' Buffers: shared hit=139 read=53'\n' -> Sort (cost=402.71..402.74 rows=12 width=52) (actual time=4.720..4.747 rows=48 loops=1)'\n' Sort Key: c1'\n' Sort Method: quicksort Memory: 31kB'\n' Buffers: shared hit=139 read=53'\n' -> Bitmap Heap Scan on t776 (cost=212.54..402.49 rows=12 width=52) (actual time=3.715..4.040 rows=48 loops=1)'\n' Recheck Cond: ((c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext) OR (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext) OR (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext) OR (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext) OR (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext) OR (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext) OR (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext) OR (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext) OR (c179 = 'OI-df0d9473d3934de29435d1c22fc9a269'::citext) OR (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext) OR (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext) OR (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext) OR (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext) OR (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext) OR (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext) OR (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext) OR (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext) OR (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext) OR (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext) OR (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext) OR (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext) OR (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext) OR (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext) OR (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext) OR (c179 = 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext) OR (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext) OR (c179 = 'OI-7fc180b8d2944391b41ed90d70915357'::citext) OR (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext) OR (c179 = 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext) OR (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext) OR (c179 = 
'OI-8f298f3c25c24f28943dd8cd98df748f'::citext) OR (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext) OR (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext) OR (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext) OR (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext) OR (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext) OR (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext) OR (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext) OR (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext) OR (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext) OR (c179 = 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext) OR (c179 = 'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext) OR (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext) OR (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext) OR (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext) OR (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext) OR (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext) OR (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext))'\n' Filter: (((c400129100 <> 1) OR (c400129100 IS NULL)) AND (c400129200 = '0'::citext) AND (c400127400 = 'DATASET1M'::citext))'\n' Heap Blocks: exact=39'\n' Buffers: shared hit=131 read=53'\n' -> BitmapOr (cost=212.54..212.54 rows=48 width=0) (actual time=3.690..3.690 rows=0 loops=1)'\n' Buffers: shared hit=92 read=53'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.157..0.157 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-d791e838d0354ea59aa1c04622b7c8be'::citext)'\n' Buffers: shared read=3'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.163..0.163 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-44502144c7be49f4840d9d30c724f11b'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.075..0.075 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-4c4f9f3bb1a344f294612cfeb1ac6838'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.077..0.077 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-dd23d23ea6ca459ab6fc3256682df66a'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.091..0.091 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-9239a9fa93c9459387d564940c0b4289'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.101..0.101 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-f268ba1f12014f07b1b34fd9050aa92d'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.071..0.071 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-8e365fa8461043a69950a638d3f3830a'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.067..0.067 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-da2e9a38f45b41e9baea8c35b45577dc'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.073..0.073 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-df0d9473d3934de29435d1c22fc9a269'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.096..0.096 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-bd704daa55d24f12a54da6d5df68d05c'::citext)'\n' 
Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-4bed7c372fd44b2e96dd4bce44e2ab79'::citext)'\n' Buffers: shared hit=3'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-4c0afdbbcb394670b8d93e39aa403e86'::citext)'\n' Buffers: shared hit=3'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.070..0.070 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-d0c049f6459e4174bb4e2ea025104298'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.101..0.101 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-f5fca0c13c454a04939b6f6a4871d647'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.055..0.055 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-fb0e56e0b896448cbd3adff8212b3ddc'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.066..0.066 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-4316868d400d450fb60bb620a89778f2'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.069..0.069 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-4abdb84db1414bd1abbb66f2a35de267'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.063..0.063 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-fbb28f59448d44adb65c1145b94e23fc'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.080..0.080 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-02577caeab904f37b6d13bb761805e02'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.072..0.072 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-ecde76cbefd847ed9602a2c875529123'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.071..0.071 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-7b6e946f4e074cf6a8cd2fcec864cc3e'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.069..0.069 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-55cf16be8f6e43aba7813d7dd898432c'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.070..0.070 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-e1903455cdc14ce1a8f05a43ee452a7f'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.066..0.066 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-81071273eacc44c4a46180be3a7d6a04'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.066..0.066 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-74cf5387522b4a238483b258f3b0bb7a'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.064..0.064 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-0ed0ff8956a84c598226f7e71f37f012'::citext)'\n' 
Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.072..0.072 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-7fc180b8d2944391b41ed90d70915357'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.088..0.088 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-1f9e9cc0d2c4481199f98c898abf8b1b'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.068..0.068 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-5dfbe9c70fe64a4080052f1d36ad654a'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.057..0.057 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-ff83ae4d7a5a4906b97f2f78122324e4'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.091..0.091 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-8f298f3c25c24f28943dd8cd98df748f'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.068..0.068 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-78263146f1694c39935578c3fa4c6415'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.071..0.071 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-ce1c830ed02540a58c3aaea265fa52af'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.069..0.069 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-8dd73d417cf84827bc3708a362c7ee40'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.070..0.070 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-83e223fa1b364ac8b20e396b21387758'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.083..0.083 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-a6eb0ec674d242b793a26b259d15435f'::citext)'\n' Buffers: shared hit=1 read=2'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.073..0.073 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-195dfbe207a64130b3bc686bfdabe051'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.051..0.051 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-7ba86277cbce489694ba03c98e7d2059'::citext)'\n' Buffers: shared hit=3'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.079..0.079 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-c7675935bd974244939ccac9181d9129'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.081..0.081 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-64c958575289438bb86455ed81517df1'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.084..0.084 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-05e14b018be14c4ea60f977f91b3fe04'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.077..0.077 rows=1 loops=1)'\n' Index Cond: (c179 = 
'OI-462d7db8d54541b996bbc977e3f4e6ec'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.069..0.069 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-42de43dda54a4a018c0038c0de241da1'::citext)'\n' Buffers: shared hit=3 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.067..0.067 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-e31f38e2a95e44bfa8b71ee1d31a66fa'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.066..0.066 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-56e85efaaa5f42c0913fed3745687a23'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.050..0.050 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-def2602379db49cfadf6c31d7dfc4872'::citext)'\n' Buffers: shared hit=3'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.070..0.070 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-d81dc80af7af4ad8a8383e9834207e0b'::citext)'\n' Buffers: shared hit=2 read=1'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.067..0.067 rows=1 loops=1)'\n' Index Cond: (c179 = 'OI-6f3333da01f349a3a17a5714a82530a6'::citext)'\n' Buffers: shared hit=2 read=1'\n'Execution time: 5.150 ms'\n\n\nGeneric Plan for Case Insensitive ---\n\n'Limit (cost=12.67..12.68 rows=1 width=52) (actual time=5531.555..5531.634 rows=48 loops=1)'\n' Buffers: shared hit=54716 read=398'\n' -> Sort (cost=12.67..12.68 rows=1 width=52) (actual time=5531.552..5531.580 rows=48 loops=1)'\n' Sort Key: c1'\n' Sort Method: quicksort Memory: 31kB'\n' Buffers: shared hit=54716 read=398'\n' -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1187.686..5531.421 rows=48 loops=1)'\n' Index Cond: ((c400129200 = $1) AND (c400127400 = $2))'\n' Filter: (((c400129100 <> $3) OR (c400129100 IS NULL)) AND ((c179 = $4) OR (c179 = $5) OR (c179 = $6) OR (c179 = $7) OR (c179 = $8) OR (c179 = $9) OR (c179 = $10) OR (c179 = $11) OR (c179 = $12) OR (c179 = $13) OR (c179 = $14) OR (c179 = $15) OR (c179 = $16) OR (c179 = $17) OR (c179 = $18) OR (c179 = $19) OR (c179 = $20) OR (c179 = $21) OR (c179 = $22) OR (c179 = $23) OR (c179 = $24) OR (c179 = $25) OR (c179 = $26) OR (c179 = $27) OR (c179 = $28) OR (c179 = $29) OR (c179 = $30) OR (c179 = $31) OR (c179 = $32) OR (c179 = $33) OR (c179 = $34) OR (c179 = $35) OR (c179 = $36) OR (c179 = $37) OR (c179 = $38) OR (c179 = $39) OR (c179 = $40) OR (c179 = $41) OR (c179 = $42) OR (c179 = $43) OR (c179 = $44) OR (c179 = $45) OR (c179 = $46) OR (c179 = $47) OR (c179 = $48) OR (c179 = $49) OR (c179 = $50) OR (c179 = $51)))'\n' Rows Removed by Filter: 55322'\n' Buffers: shared hit=54716 read=398'\n'Execution time: 5531.741 ms'\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nCase Sensitive -\n\nPREPARE fastquery (text,text,int,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text) as \nSELECT\n T776.C179,\n T776.C1,\n T776.C400129200\nFROM\n T776 \nWHERE\n (\n(T776.C400129200 = $1) 
\n AND \n (\n T776.C400127400 = $2\n )\n AND \n (\n(T776.C400129100 <> $3) \n OR \n (\n T776.C400129100 IS NULL\n )\n )\n AND \n (\n(T776.C179 = $4) \n OR \n (\n T776.C179 = $5\n )\n OR \n (\n T776.C179 = $6\n )\n OR \n (\n T776.C179 = $7\n )\n OR \n (\n T776.C179 = $8\n )\n OR \n (\n T776.C179 = $9\n )\n OR \n (\n T776.C179 = $10\n )\n OR \n (\n T776.C179 = $11\n )\n OR \n (\n T776.C179 = $12\n )\n OR \n (\n T776.C179 = $13\n )\n OR \n (\n T776.C179 = $14\n )\n OR \n (\n T776.C179 = $15\n )\n OR \n (\n T776.C179 = $16\n )\n OR \n (\n T776.C179 = $17\n ) \n OR \n (\n T776.C179 = $18\n )\n OR \n (\n T776.C179 = $19\n )\n OR \n (\n T776.C179 = $20\n )\n OR \n (\n T776.C179 = $21\n )\n OR \n (\n T776.C179 = $22\n )\n OR \n (\n T776.C179 = $23\n )\n OR \n (\n T776.C179 = $24\n )\n OR \n (\n T776.C179 = $25\n )\n OR \n (\n T776.C179 = $26\n )\n OR \n (\n T776.C179 = $27\n )\n OR \n (\n T776.C179 = $28\n )\n OR \n (\n T776.C179 = $29\n )\n OR \n (\n T776.C179 = $30\n )\n OR \n (\n T776.C179 = $31\n )\n OR \n (\n T776.C179 = $32\n )\n OR \n (\n T776.C179 = $33\n )\n OR \n (\n T776.C179 = $34\n )\n OR \n (\n T776.C179 = $35\n )\n OR \n (\n T776.C179 = $36\n )\n OR \n (\n T776.C179 = $37\n )\n OR \n (\n T776.C179 = $38\n )\n OR \n (\n T776.C179 = $39\n )\n OR \n (\n T776.C179 = $40\n )\n OR \n (\n T776.C179 = $41\n )\n OR \n (\n T776.C179 = $42\n )\n OR \n (\n T776.C179 = $43\n )\n OR \n (\n T776.C179 = $44\n )\n OR \n (\n T776.C179 = $45\n )\n OR \n (\n T776.C179 = $46\n )\n OR \n (\n T776.C179 = $47\n )\n OR \n (\n T776.C179 = $48\n )\n OR \n (\n T776.C179 = $49\n )\n OR \n (\n T776.C179 = $50\n )\n OR \n (\n T776.C179 = $51\n )\n )\n )\nORDER BY\n T776.C1 ASC LIMIT 2001 OFFSET 0;\n \nEXPLAIN analyze EXECUTE fastquery ('0','DATASET1M', 1,N'OI-941ed5dc3b644849afd6bae91ebf02d1','OI-476186266411406ba9967c732fc6f1f2','OI-d627a532701942129f531c74ab40e05b','OI-6d2c55fa269c47789130f05afc8ffa6d','OI-f1734c5368c4496c9a13035b8b236d13','OI-a63664f325144f958332044a4ea2705c','OI-70f148ef11e241409191faf63650a8a8','OI-c24bc2a9e24b4c8b8c9c11061a1bf631','OI-27ec4c51369d49958fc04ae9a6fe547f','OI-0555e41446ef420d93a78214f5253e1c','OI-95e0ca98affb4d5ebab38fe1990cf4be','OI-800e9fb833724a8585920f7a169556eb','OI-1c11e40c56904ecea9a78653f04bde84','OI-4b8f52e78d124ba89d7fde2b0fb6a720','OI-1d64f5df07ee490c88cdacabb5eb740a','OI-af68ae5b648f46ab926d9fafde6a5bb7','OI-5a0f26ba1d35460d953316496f7b7899','OI-3709034c00774804801227d21a5b1e41','OI-11fe926e91db4950b1c24159bb2022da','OI-836924722a304f8a86ff88783166e437','OI-c3a1738a5d384544b70dc3670831033f','OI-467d16d39a0e45dbbefdf20ec3c68b0c','OI-ceee9fa8436a4f72991883387074b744','OI-523324e70f8f4ae3b717b29a82776f33','OI-1a790b65e7c7458ba1567bd2c2ff35be','OI-4115e27566474081b0881ea8de0fcb88','OI-b9366dd534ae4d16a92e17abca8ae097','OI-3c3d9217564e4a82b43a230aa6e3f091','OI-8ca511ce33a84941868bd59b3e54b6b0','OI-77b1d7fa60ce4aa9899c4a56b6037cc6','OI-cd099418c1394100b7c14de9306521bd','OI-fc32fa20d0fb4e40bfad8c361889bcb6','OI-0e7ff2d492d5476b8d390456b4d619f0','OI-289fbe99682948ae86eb8e1fbf7e2350','OI-1e8ac9e7b1924505919c5e703838be54','OI-15672685a4ee4642a9f2f4926c8dace0','OI-1d6eb6a8fb0c437593d46099ef8544ed','OI-ba1326a7763240b19f0ac49934e815ac','OI-ce1e718ec2a844c383743755b976fc70','OI-454967f97851473baba213b03f4099d3','OI-699ac5def19744bf9ceee531b1c4b05d','OI-8f7140b0c06b482e8c8d9123cfe23d73','OI-295d7dc1291f45e1abf8354e735a191a','OI-813ad79d8ed14dff82a6ae0960c65515','OI-28d4d1da3a284f2e8ce5de08d8049819','OI-e0da6cbc49f44977b147cecf9da3c0c2','OI-2bf0a9c92a0543019fcefeb7b227dbf8','OI-e4f
d3311fe7240019b6344ad0e357c4c')\n\n\nCustom Plan for Case Sensitive-\n'Limit (cost=404.05..404.08 rows=12 width=70) (actual time=0.740..0.818 rows=48 loops=1)'\n' -> Sort (cost=404.05..404.08 rows=12 width=70) (actual time=0.737..0.765 rows=48 loops=1)'\n' Sort Key: c1'\n' Sort Method: quicksort Memory: 31kB'\n' -> Bitmap Heap Scan on t776 (cost=212.54..403.83 rows=12 width=70) (actual time=0.530..0.624 rows=48 loops=1)'\n' Recheck Cond: (((c179)::text = 'OI-941ed5dc3b644849afd6bae91ebf02d1'::text) OR ((c179)::text = 'OI-476186266411406ba9967c732fc6f1f2'::text) OR ((c179)::text = 'OI-d627a532701942129f531c74ab40e05b'::text) OR ((c179)::text = 'OI-6d2c55fa269c47789130f05afc8ffa6d'::text) OR ((c179)::text = 'OI-f1734c5368c4496c9a13035b8b236d13'::text) OR ((c179)::text = 'OI-a63664f325144f958332044a4ea2705c'::text) OR ((c179)::text = 'OI-70f148ef11e241409191faf63650a8a8'::text) OR ((c179)::text = 'OI-c24bc2a9e24b4c8b8c9c11061a1bf631'::text) OR ((c179)::text = 'OI-27ec4c51369d49958fc04ae9a6fe547f'::text) OR ((c179)::text = 'OI-0555e41446ef420d93a78214f5253e1c'::text) OR ((c179)::text = 'OI-95e0ca98affb4d5ebab38fe1990cf4be'::text) OR ((c179)::text = 'OI-800e9fb833724a8585920f7a169556eb'::text) OR ((c179)::text = 'OI-1c11e40c56904ecea9a78653f04bde84'::text) OR ((c179)::text = 'OI-4b8f52e78d124ba89d7fde2b0fb6a720'::text) OR ((c179)::text = 'OI-1d64f5df07ee490c88cdacabb5eb740a'::text) OR ((c179)::text = 'OI-af68ae5b648f46ab926d9fafde6a5bb7'::text) OR ((c179)::text = 'OI-5a0f26ba1d35460d953316496f7b7899'::text) OR ((c179)::text = 'OI-3709034c00774804801227d21a5b1e41'::text) OR ((c179)::text = 'OI-11fe926e91db4950b1c24159bb2022da'::text) OR ((c179)::text = 'OI-836924722a304f8a86ff88783166e437'::text) OR ((c179)::text = 'OI-c3a1738a5d384544b70dc3670831033f'::text) OR ((c179)::text = 'OI-467d16d39a0e45dbbefdf20ec3c68b0c'::text) OR ((c179)::text = 'OI-ceee9fa8436a4f72991883387074b744'::text) OR ((c179)::text = 'OI-523324e70f8f4ae3b717b29a82776f33'::text) OR ((c179)::text = 'OI-1a790b65e7c7458ba1567bd2c2ff35be'::text) OR ((c179)::text = 'OI-4115e27566474081b0881ea8de0fcb88'::text) OR ((c179)::text = 'OI-b9366dd534ae4d16a92e17abca8ae097'::text) OR ((c179)::text = 'OI-3c3d9217564e4a82b43a230aa6e3f091'::text) OR ((c179)::text = 'OI-8ca511ce33a84941868bd59b3e54b6b0'::text) OR ((c179)::text = 'OI-77b1d7fa60ce4aa9899c4a56b6037cc6'::text) OR ((c179)::text = 'OI-cd099418c1394100b7c14de9306521bd'::text) OR ((c179)::text = 'OI-fc32fa20d0fb4e40bfad8c361889bcb6'::text) OR ((c179)::text = 'OI-0e7ff2d492d5476b8d390456b4d619f0'::text) OR ((c179)::text = 'OI-289fbe99682948ae86eb8e1fbf7e2350'::text) OR ((c179)::text = 'OI-1e8ac9e7b1924505919c5e703838be54'::text) OR ((c179)::text = 'OI-15672685a4ee4642a9f2f4926c8dace0'::text) OR ((c179)::text = 'OI-1d6eb6a8fb0c437593d46099ef8544ed'::text) OR ((c179)::text = 'OI-ba1326a7763240b19f0ac49934e815ac'::text) OR ((c179)::text = 'OI-ce1e718ec2a844c383743755b976fc70'::text) OR ((c179)::text = 'OI-454967f97851473baba213b03f4099d3'::text) OR ((c179)::text = 'OI-699ac5def19744bf9ceee531b1c4b05d'::text) OR ((c179)::text = 'OI-8f7140b0c06b482e8c8d9123cfe23d73'::text) OR ((c179)::text = 'OI-295d7dc1291f45e1abf8354e735a191a'::text) OR ((c179)::text = 'OI-813ad79d8ed14dff82a6ae0960c65515'::text) OR ((c179)::text = 'OI-28d4d1da3a284f2e8ce5de08d8049819'::text) OR ((c179)::text = 'OI-e0da6cbc49f44977b147cecf9da3c0c2'::text) OR ((c179)::text = 'OI-2bf0a9c92a0543019fcefeb7b227dbf8'::text) OR ((c179)::text = 'OI-e4fd3311fe7240019b6344ad0e357c4c'::text))'\n' Filter: (((c400129100 <> 1) OR 
(c400129100 IS NULL)) AND ((c400129200)::text = '0'::text) AND ((c400127400)::text = 'DATASET1M'::text))'\n' Heap Blocks: exact=41'\n' -> BitmapOr (cost=212.54..212.54 rows=48 width=0) (actual time=0.516..0.516 rows=0 loops=1)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.023..0.023 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-941ed5dc3b644849afd6bae91ebf02d1'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.011..0.011 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-476186266411406ba9967c732fc6f1f2'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.010..0.010 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-d627a532701942129f531c74ab40e05b'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-6d2c55fa269c47789130f05afc8ffa6d'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-f1734c5368c4496c9a13035b8b236d13'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-a63664f325144f958332044a4ea2705c'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-70f148ef11e241409191faf63650a8a8'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-c24bc2a9e24b4c8b8c9c11061a1bf631'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-27ec4c51369d49958fc04ae9a6fe547f'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-0555e41446ef420d93a78214f5253e1c'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-95e0ca98affb4d5ebab38fe1990cf4be'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-800e9fb833724a8585920f7a169556eb'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-1c11e40c56904ecea9a78653f04bde84'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-4b8f52e78d124ba89d7fde2b0fb6a720'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-1d64f5df07ee490c88cdacabb5eb740a'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-af68ae5b648f46ab926d9fafde6a5bb7'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 
'OI-5a0f26ba1d35460d953316496f7b7899'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-3709034c00774804801227d21a5b1e41'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-11fe926e91db4950b1c24159bb2022da'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-836924722a304f8a86ff88783166e437'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-c3a1738a5d384544b70dc3670831033f'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.043..0.043 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-467d16d39a0e45dbbefdf20ec3c68b0c'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.010..0.010 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-ceee9fa8436a4f72991883387074b744'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-523324e70f8f4ae3b717b29a82776f33'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-1a790b65e7c7458ba1567bd2c2ff35be'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-4115e27566474081b0881ea8de0fcb88'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-b9366dd534ae4d16a92e17abca8ae097'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-3c3d9217564e4a82b43a230aa6e3f091'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-8ca511ce33a84941868bd59b3e54b6b0'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-77b1d7fa60ce4aa9899c4a56b6037cc6'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-cd099418c1394100b7c14de9306521bd'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-fc32fa20d0fb4e40bfad8c361889bcb6'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-0e7ff2d492d5476b8d390456b4d619f0'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-289fbe99682948ae86eb8e1fbf7e2350'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 
'OI-1e8ac9e7b1924505919c5e703838be54'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-15672685a4ee4642a9f2f4926c8dace0'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-1d6eb6a8fb0c437593d46099ef8544ed'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.010..0.010 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-ba1326a7763240b19f0ac49934e815ac'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-ce1e718ec2a844c383743755b976fc70'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-454967f97851473baba213b03f4099d3'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-699ac5def19744bf9ceee531b1c4b05d'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-8f7140b0c06b482e8c8d9123cfe23d73'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-295d7dc1291f45e1abf8354e735a191a'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-813ad79d8ed14dff82a6ae0960c65515'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-28d4d1da3a284f2e8ce5de08d8049819'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-e0da6cbc49f44977b147cecf9da3c0c2'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-2bf0a9c92a0543019fcefeb7b227dbf8'::text)'\n' -> Bitmap Index Scan on i776_0_179_t776 (cost=0.00..4.43 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)'\n' Index Cond: ((c179)::text = 'OI-e4fd3311fe7240019b6344ad0e357c4c'::text)'\n'Execution time: 1.013 ms'\n\n\nGeneric Plan for Case Sensitive -\n\n'Limit (cost=12.74..12.75 rows=1 width=70) (actual time=185.728..185.806 rows=48 loops=1)'\n' -> Sort (cost=12.74..12.75 rows=1 width=70) (actual time=185.726..185.753 rows=48 loops=1)'\n' Sort Key: c1'\n' Sort Method: quicksort Memory: 31kB'\n' -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.73 rows=1 width=70) (actual time=39.277..185.650 rows=48 loops=1)'\n' Index Cond: (((c400129200)::text = $1) AND ((c400127400)::text = $2))'\n' Filter: (((c400129100 <> $3) OR (c400129100 IS NULL)) AND (((c179)::text = $4) OR ((c179)::text = $5) OR ((c179)::text = $6) OR ((c179)::text = $7) OR ((c179)::text = $8) OR ((c179)::text = $9) OR ((c179)::text = $10) OR ((c179)::text = $11) OR ((c179)::text = $12) OR ((c179)::text = $13) OR ((c179)::text = $14) OR ((c179)::text = $15) OR ((c179)::text = $16) OR ((c179)::text = $17) OR ((c179)::text = $18) OR ((c179)::text = $19) 
OR ((c179)::text = $20) OR ((c179)::text = $21) OR ((c179)::text = $22) OR ((c179)::text = $23) OR ((c179)::text = $24) OR ((c179)::text = $25) OR ((c179)::text = $26) OR ((c179)::text = $27) OR ((c179)::text = $28) OR ((c179)::text = $29) OR ((c179)::text = $30) OR ((c179)::text = $31) OR ((c179)::text = $32) OR ((c179)::text = $33) OR ((c179)::text = $34) OR ((c179)::text = $35) OR ((c179)::text = $36) OR ((c179)::text = $37) OR ((c179)::text = $38) OR ((c179)::text = $39) OR ((c179)::text = $40) OR ((c179)::text = $41) OR ((c179)::text = $42) OR ((c179)::text = $43) OR ((c179)::text = $44) OR ((c179)::text = $45) OR ((c179)::text = $46) OR ((c179)::text = $47) OR ((c179)::text = $48) OR ((c179)::text = $49) OR ((c179)::text = $50) OR ((c179)::text = $51)))'\n' Rows Removed by Filter: 55322'\n'Execution time: 185.916 ms'\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n-Thanks and Regards,\nSameer Naik\n\n-----Original Message-----\nFrom: Tomas Vondra <[email protected]> \nSent: Tuesday, May 21, 2019 3:47 AM\nTo: Deepak Somaiya <[email protected]>\nCc: Tom Lane <[email protected]>; Bruce Momjian <[email protected]>; [email protected]; Naik, Sameer <[email protected]>; [email protected]\nSubject: [EXTERNAL] Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans\n\nOn Mon, May 20, 2019 at 09:37:34PM +0000, Deepak Somaiya wrote:\n> wow this is interesting!\n>@Tom, Bruce, David - Experts\n>Any idea why would changing the datatype would cause so much degradation - this is even when plan remains the same ,data is same.\n>Deepak\n> On Friday, May 17, 2019, 2:36:05 AM PDT, Naik, Sameer <[email protected]> wrote:\n>\n>\n>Deepak,\n>\n>I changed the datatype from citext to text and now everything works fine.\n>\n>The data distribution is same, plan is same, yet there is a huge performance degradation when citext is used instead of text.\n>\n>However the business case requires case insensitive string handling.\n>\n>I am looking forward to some expert advice here when dealing with citext data type.\n>\n> \n\nIt's generally a good idea to share explain analyze output for both versions of the query - both with citext and text.\n\n\nregards\n\n-- \nTomas Vondra https://urldefense.proofpoint.com/v2/url?u=http-3A__www.2ndQuadrant.com&d=DwIDAw&c=UrUhmHsiTVT5qkaA4d_oSzcamb9hmamiCDMzBAEwC7E&r=K893err8oTutgRKCeLUAsHd_iqcPBdCmI71ID5BjsTk&m=3dYLVBgo4Y0o0EkCgQ-pKShXctMnCCJCaKme72rIPeI&s=XeEyBe6Oi1N5Bqgt9HnirKF_kBqs5QYEgNtxf8UZiyc&e=\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 23 May 2019 06:37:19 +0000",
"msg_from": "\"Naik, Sameer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Generic Plans for Prepared Statement are 158155 times slower\n than\n Custom Plans"
},
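Possible workarounds for the slow generic plan, not part of the thread's resolution, sketched under the assumption that re-planning on every execution is acceptable for this query:

-- PostgreSQL 12 or later: never switch this session to generic plans.
SET plan_cache_mode = force_custom_plan;

-- Pre-12 releases have no such setting; dropping and re-creating the prepared
-- statement puts it back into the custom-plan phase for its first executions.
DEALLOCATE slowQuery;
-- ... then re-run the original PREPARE slowQuery (...) AS SELECT ... from above.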
{
"msg_contents": "\"Naik, Sameer\" <[email protected]> writes:\n> On Mon, May 20, 2019 at 09:37:34PM +0000, Deepak Somaiya wrote:\n>> wow this is interesting!\n>> @Tom, Bruce, David - Experts\n>> Any idea why would changing the datatype would cause so much degradation - this is even when plan remains the same ,data is same.\n\nI see nothing very exciting here. text equality comparison reduces to\na memcmp, while citext equality comparison is quite expensive, since\nit has to case-fold both inputs before it can memcmp them.\n\nFor the given test case:\n\n> ' -> Index Scan using i776_0_400129200_t776 on t776 (cost=0.42..12.66 rows=1 width=52) (actual time=1187.686..5531.421 rows=48 loops=1)'\n> ' Index Cond: ((c400129200 = $1) AND (c400127400 = $2))'\n> ' Filter: (((c400129100 <> $3) OR (c400129100 IS NULL)) AND ((c179 = $4) OR (c179 = $5) OR (c179 = $6) OR (c179 = $7) OR (c179 = $8) OR (c179 = $9) OR (c179 = $10) OR (c179 = $11) OR (c179 = $12) OR (c179 = $13) OR (c179 = $14) OR (c179 = $15) OR (c179 = $16) OR (c179 = $17) OR (c179 = $18) OR (c179 = $19) OR (c179 = $20) OR (c179 = $21) OR (c179 = $22) OR (c179 = $23) OR (c179 = $24) OR (c179 = $25) OR (c179 = $26) OR (c179 = $27) OR (c179 = $28) OR (c179 = $29) OR (c179 = $30) OR (c179 = $31) OR (c179 = $32) OR (c179 = $33) OR (c179 = $34) OR (c179 = $35) OR (c179 = $36) OR (c179 = $37) OR (c179 = $38) OR (c179 = $39) OR (c179 = $40) OR (c179 = $41) OR (c179 = $42) OR (c179 = $43) OR (c179 = $44) OR (c179 = $45) OR (c179 = $46) OR (c179 = $47) OR (c179 = $48) OR (c179 = $49) OR (c179 = $50) OR (c179 = $51)))'\n> ' Rows Removed by Filter: 55322'\n\nit's reasonable to suppose that not many of the rows are failing the\nc400129100 conditions, so that in order to decide that a row doesn't\npass the filter, we are forced to perform each of the OR'd c179\ncomparisons. So this query did something like 48 * 55322 equality\ncomparisons for c179. If the cost of a citexteq evaluation is\naround 2 microseconds, that'd fully explain the runtime differential.\n\nThe OP didn't say what locale or encoding he's using. Maybe switching\nto some other settings would improve matters ... though if non-ASCII\ncase folding is a business requirement, that likely won't go far.\n\nOr you could get rid of the need for the repetitive case-folding,\nsay by storing lower(c179) in a separate column and doing plain\ntext comparisons to pre-lowercased input values.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 13:17:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Generic Plans for Prepared Statement are 158155 times slower than\n Custom Plans"
}
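A minimal sketch of that last suggestion (a pre-lowercased copy of c179 compared as plain text), with illustrative column and index names:

ALTER TABLE t776 ADD COLUMN c179_lower text;
UPDATE t776 SET c179_lower = lower(c179::text);
CREATE INDEX i776_c179_lower ON t776 (c179_lower);

-- Queries then compare pre-lowercased text values, so each row comparison is a
-- plain memcmp instead of a per-row citext case-fold:
SELECT c179, c1
  FROM t776
 WHERE c179_lower = lower('OI-d791e838d0354ea59aa1c04622b7c8be');

The copy has to be kept in sync with c179, for example with a trigger, or on PostgreSQL 12 and later by declaring it as a stored generated column.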
] |
[
{
"msg_contents": "Hi,\n\nThe queries in what follows can be executed on the following fiddle:\n*https://dbfiddle.uk/?rdbms=postgres_10&fiddle=64542f2d987d3ce0d85bbc40ddadf7d6\n<https://dbfiddle.uk/?rdbms=postgres_10&fiddle=64542f2d987d3ce0d85bbc40ddadf7d6>*\n- Please\nnote that the queries/functions might look silly/pointless, I extracted the\nperformance issue I am seeing to a minimal reproducible example.\n\nI have the following schema:\n\ncreate table parent\n(\n id int primary key\n);\ncreate table child\n(\n id int primary key,\n parent_id int references parent(id)\n);\ncreate index on child(parent_id);\n\nLet's start with the following inlinable setof-returning function which\nbasically returns the child rows for a given parent identifier:\n\ncreate function child_get_1(int) returns table(a int) as\n$$\n select child.id\n from child\n left join parent\n on parent.id = child.parent_id\n where child.parent_id = $1;\n$$\nlanguage sql stable;\n\nNote that the left join branch is intentionally not used, and thus could be\neliminated by the planner.\n\nWhen executing the following query, I get a satisfying hash (left) join\n(and the left join to parent is indeed eliminated):\n\nexplain analyze\nselect ch.* from parent p, child_get_1(p.id) ch;\n\n+--------------------------------------------------------------------------------------------------------------------+\n| Hash Join (cost=3.25..17163.23 rows=999900 width=4) (actual\ntime=0.025..194.279 rows=999900 loops=1) |\n| Hash Cond: (child.parent_id = p.id)\n |\n| -> Seq Scan on child (cost=0.00..14424.00 rows=999900 width=8)\n(actual time=0.005..47.215 rows=999900 loops=1) |\n| -> Hash (cost=2.00..2.00 rows=100 width=4) (actual time=0.016..0.016\nrows=100 loops=1) |\n| Buckets: 1024 Batches: 1 Memory Usage: 12kB\n |\n| -> Seq Scan on parent p (cost=0.00..2.00 rows=100 width=4)\n(actual time=0.001..0.007 rows=100 loops=1) |\n+--------------------------------------------------------------------------------------------------------------------+\n\n\nNow, I introduce a convenience function, also inlinable, which fetches the\nchild rows by its parent id:\n\ncreate function t(int) returns setof child as\n$$\n select child.* from child where child.parent_id = $1;\n$$\nlanguage sql stable;\n\nI refactor `child_get_1(int)` from above as following:\n\ncreate function child_get_2(int) returns table(a int) as\n$$\n select child.id\n from t($1) child\n left join parent\n on parent.id = child.parent_id;\n$$\nlanguage sql stable;\n\nexplain analyze\nselect ch.* from parent p, child_get_2(p.id) ch;\n\nNow, I get a nested loop, which as expected performs quite badly:\n\n+--------------------------------------------------------------------------------------------------------------------------------------------+\n| Nested Loop (cost=189.92..493990.48 rows=999900 width=4) (actual\ntime=1.519..713.680 rows=999900 loops=1) |\n| -> Seq Scan on parent p (cost=0.00..2.00 rows=100 width=4) (actual\ntime=0.004..0.081 rows=100 loops=1) |\n| -> Bitmap Heap Scan on child (cost=189.92..4739.90 rows=9999 width=4)\n(actual time=1.365..6.332 rows=9999 loops=100) |\n| Recheck Cond: (parent_id = p.id)\n |\n| Heap Blocks: exact=442476\n |\n| -> Bitmap Index Scan on child_parent_id_idx (cost=0.00..187.42\nrows=9999 width=0) (actual time=0.838..0.838 rows=9999 loops=100) |\n| Index Cond: (parent_id = p.id)\n |\n+--------------------------------------------------------------------------------------------------------------------------------------------+\n\nFor some reason I cannot 
explain we now end up with a nested loop, instead\nan hash join. The fairly trivial introduction of `t(int)` messes up with\nreordering, but I fail to see why. I manually upped the from and join\ncollapse limit to 32 - just to be sure -, but no effect. Also, the left\njoin branch could not be eliminated. I believe this is related to the usage\nof the implicit lateral join to `child_get_2(p.id)` in the main query,\nwhich somehow messes up with the reordering of `from t($1) as child` in\n`child_get_2(int)`, though I am not 100% sure.\n\nAlso, note that when we apply an inner join instead of a left join, the\nproblem goes away. The planner now manages to end up with a hash join in\nboth cases.\n\nI am seeing this on v10 and v11.\n\nAny ideas?\n\nThank you. Best regards.",
"msg_date": "Tue, 30 Apr 2019 20:57:07 +0200",
"msg_from": "Peter Billen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Failure to reordering in case of a lateral join in combination with a\n left join (not inner join) resulting in suboptimal nested loop plan"
},
{
"msg_contents": "Peter Billen <[email protected]> writes:\n> For some reason I cannot explain we now end up with a nested loop, instead\n> an hash join. The fairly trivial introduction of `t(int)` messes up with\n> reordering, but I fail to see why.\n\nI traced through this and determined that it's got nothing to do with\nfunction inlining; you can reproduce the same plan with the functions\nwritten out by hand:\n\nexplain\nselect ch.* from parent p,\nlateral ( select child.id\n from \n ( select child.* from child where child.parent_id = p.id ) child\n left join parent\n on parent.id = child.parent_id\n ) ch;\n\nThe problem here actually is that the planner refuses to flatten the\nLATERAL subquery. You don't see a SubqueryScan in the finished plan,\nbut that's just because it gets optimized away at the end. Because\nof the lack of flattening, we don't get a terribly good plan\nfor the outermost join.\n\nThe reason for the flattening failure is some probably-overly-conservative\nanalysis in is_simple_subquery and jointree_contains_lateral_outer_refs:\n\n /*\n * The subquery's WHERE and JOIN/ON quals mustn't contain any lateral\n * references to rels outside a higher outer join (including the case\n * where the outer join is within the subquery itself). In such a\n * case, pulling up would result in a situation where we need to\n * postpone quals from below an outer join to above it, which is\n * probably completely wrong and in any case is a complication that\n * doesn't seem worth addressing at the moment.\n */\n\nThe lateral reference to p.id is syntactically underneath the LEFT JOIN\nin the subquery, so this restriction is violated.\n\nIt seems like we could possibly conclude that the restriction doesn't\nhave to apply to the outer side of the LEFT JOIN, but proving that and\nthen tightening up the logic is not a task I care to undertake right now.\n\nThis code dates back to c64de21e9625acad57e2caf8f22435e1617fb1ce\nif you want to do some excavation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 16:38:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failure to reordering in case of a lateral join in combination\n with a left join (not inner join) resulting in suboptimal nested loop plan"
}
] |
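A sketch of the workaround this thread itself points at, using the example schema from the first message (child_get_3 is a made-up name): per the report, replacing the left join with an inner join lets the lateral subquery be flattened again, so the planner can go back to a hash join. For rows coming out of t($1) the parent row always exists (parent_id is filtered to $1 and is a foreign key to parent.id), so the inner join returns the same result here.

    create function child_get_3(int) returns table(a int) as
    $$
      -- same as child_get_2, but with no outer join the lateral reference
      -- to $1 no longer blocks subquery pull-up (per the analysis above)
      select child.id
      from t($1) child
      join parent on parent.id = child.parent_id;
    $$
    language sql stable;

    explain analyze
    select ch.* from parent p, child_get_3(p.id) ch;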
[
{
"msg_contents": "Hello all,\n\nI faced strange behavior of PostgreSQL during the query execution.\n\nSo, I have to databases: local and foreign. There are foreign server\ndefinitions in the local database (via postgres_fdw). The local database\nhas table 'local_table'. The foreign database has table 'foreign_table'.\nBoth of them have only 1 column: 'primary_uuid'. This column in both\ndatabases is a primary key column. Schema on a local server that stores\nremote server definitions is 'foreign_server'. Each table has 100K rows.\nVacuum analyze has been run for both servers.\n\nWhen I run a query:\nSELECT *\nFROM\n(\nSELECT foreign_table.primary_uuid\n FROM foreign_server.foreign_table\n UNION ALL\n SELECT local_table.primary_uuid\n FROM local_table\n)\njoin_view\nWHERE\n join_view.primary_uuid in (select\n'19b2db7e-db89-48eb-90b1-0bd468a2346b'::uuid)\n\nI expect that the server will use the pkey index for the local table. But\nit uses seq scan instead!\n\n\"Hash Semi Join (cost=100.03..3346.23 rows=51024 width=16) (actual\ntime=482.235..482.235 rows=0 loops=1)\"\n\" Output: foreign_table.primary_uuid\"\n\" Hash Cond: (foreign_table.primary_uuid =\n('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid))\"\n\" -> Append (cost=100.00..2510.68 rows=102048 width=16) (actual\ntime=0.529..463.563 rows=200000 loops=1)\"\n\" -> Foreign Scan on foreign_server.foreign_table\n(cost=100.00..171.44 rows=2048 width=16) (actual time=0.528..446.715\nrows=100000 loops=1)\"\n\" Output: foreign_table.primary_uuid\"\n\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"\n\" -> Seq Scan on public.local_table (cost=0.00..1829.00\nrows=100000 width=16) (actual time=0.021..6.358 rows=100000 loops=1)\"\n\" Output: local_table.primary_uuid\"\n\" -> Hash (cost=0.02..0.02 rows=1 width=16) (actual time=0.006..0.006\nrows=1 loops=1)\"\n\" Output: ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\n\" -> Result (cost=0.00..0.01 rows=1 width=16) (actual\ntime=0.001..0.001 rows=1 loops=1)\"\n\" Output: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\n\"Planning Time: 0.126 ms\"\n\"Execution Time: 482.572 ms\"\"Execution Time: 509.315 ms\"\n\nSo, as you can see, the execution time is 509 ms! 
It could be very fast if\nPostgreSQL used primary key index!\n\nAlso, please, note, that SQL without WHERE clause has been set to the\nforeign server:\n\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"\n\nSo, the optimizer doesn't select optimal plans for such executions :(\n\nLooks like it's an optimizer inadequacy.\n\nDoes someone know, how to optimize this query without query rewriting\n(queries like this are generated from the Data Access layer and it's hard\nto rebuild that layer)?\n\nThank you\n\nP.S.: Answers to standard questions:\n> PostgreSQL version number you are running:\nPostgreSQL 11.2, compiled by Visual C++ build 1914, 64-bit\n\n> How you installed PostgreSQL:\nBy downloaded standard Windows 64 installer\n\n> Changes made to the settings in the postgresql.conf file:\nshared_preload_libraries = '$libdir/pg_stat_statements'\n\n> Operating system and version:\nWindows 10 Enterprise 64-bit\n\n> What program you're using to connect to PostgreSQL:\npgAdmin III\n\n> Is there anything relevant or unusual in the PostgreSQL server logs?:\nNope\n\n\nP.P.S.: DDL scripts:\nfor the foreign database:\nCREATE TABLE public.foreign_table\n(\n primary_uuid uuid NOT NULL,\n CONSTRAINT \"PKEY\" PRIMARY KEY (primary_uuid)\n)\n\nfor local database:\nCREATE TABLE public.local_table\n(\n primary_uuid uuid NOT NULL,\n CONSTRAINT local_table_pkey PRIMARY KEY (primary_uuid)\n)\n\nᐧ\n\nHello all,I faced strange behavior of PostgreSQL during the query execution.So, I have to databases: local and foreign. There are foreign server definitions in the local database (via postgres_fdw). The local database has table 'local_table'. The foreign database has table 'foreign_table'. Both of them have only 1 column: 'primary_uuid'. This column in both databases is a primary key column. Schema on a local server that stores remote server definitions is 'foreign_server'. Each table has 100K rows. Vacuum analyze has been run for both servers.When I run a query:SELECT *FROM (SELECT foreign_table.primary_uuid FROM foreign_server.foreign_table UNION ALL SELECT local_table.primary_uuid FROM local_table)join_viewWHERE join_view.primary_uuid in (select '19b2db7e-db89-48eb-90b1-0bd468a2346b'::uuid)I expect that the server will use the pkey index for the local table. But it uses seq scan instead!\"Hash Semi Join (cost=100.03..3346.23 rows=51024 width=16) (actual time=482.235..482.235 rows=0 loops=1)\"\" Output: foreign_table.primary_uuid\"\" Hash Cond: (foreign_table.primary_uuid = ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid))\"\" -> Append (cost=100.00..2510.68 rows=102048 width=16) (actual time=0.529..463.563 rows=200000 loops=1)\"\" -> Foreign Scan on foreign_server.foreign_table (cost=100.00..171.44 rows=2048 width=16) (actual time=0.528..446.715 rows=100000 loops=1)\"\" Output: foreign_table.primary_uuid\"\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"\" -> Seq Scan on public.local_table (cost=0.00..1829.00 rows=100000 width=16) (actual time=0.021..6.358 rows=100000 loops=1)\"\" Output: local_table.primary_uuid\"\" -> Hash (cost=0.02..0.02 rows=1 width=16) (actual time=0.006..0.006 rows=1 loops=1)\"\" Output: ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid)\"\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\" -> Result (cost=0.00..0.01 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=1)\"\" Output: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\"Planning Time: 0.126 ms\"\"Execution Time: 482.572 ms\"\"Execution Time: 509.315 ms\"So, as you can see, the execution time is 509 ms! 
It could be very fast if PostgreSQL used primary key index!Also, please, note, that SQL without WHERE clause has been set to the foreign server:\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"So, the optimizer doesn't select optimal plans for such executions :(Looks like it's an optimizer inadequacy.Does someone know, how to optimize this query without query rewriting (queries like this are generated from the Data Access layer and it's hard to rebuild that layer)? Thank youP.S.: Answers to standard questions:> PostgreSQL version number you are running:PostgreSQL 11.2, compiled by Visual C++ build 1914, 64-bit> How you installed PostgreSQL:By downloaded standard Windows 64 installer> Changes made to the settings in the postgresql.conf file: shared_preload_libraries = '$libdir/pg_stat_statements'> Operating system and version:Windows 10 Enterprise 64-bit> What program you're using to connect to PostgreSQL:pgAdmin III > Is there anything relevant or unusual in the PostgreSQL server logs?:NopeP.P.S.: DDL scripts:for the foreign database:CREATE TABLE public.foreign_table( primary_uuid uuid NOT NULL, CONSTRAINT \"PKEY\" PRIMARY KEY (primary_uuid))for local database:CREATE TABLE public.local_table( primary_uuid uuid NOT NULL, CONSTRAINT local_table_pkey PRIMARY KEY (primary_uuid))ᐧ",
"msg_date": "Mon, 6 May 2019 17:43:39 +0300",
"msg_from": "Vitaly Baranovsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL optimizer use seq scan instead of pkey index only scan (in\n queries with postgres_fdw)"
},
{
"msg_contents": "On Mon, May 6, 2019 at 10:44 AM Vitaly Baranovsky <[email protected]>\nwrote:\n\n> Hello all,\n>\n> I faced strange behavior of PostgreSQL during the query execution.\n>\n\n ...\n\n\n> Also, please, note, that SQL without WHERE clause has been set to the\n> foreign server:\n> \" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"\n>\n> So, the optimizer doesn't select optimal plans for such executions :(\n>\n\nIt works the way you want in version 12, which is currently under\ndevelopment and should be released in 5 months or so.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, May 6, 2019 at 10:44 AM Vitaly Baranovsky <[email protected]> wrote:Hello all,I faced strange behavior of PostgreSQL during the query execution. ... Also, please, note, that SQL without WHERE clause has been set to the foreign server:\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"So, the optimizer doesn't select optimal plans for such executions :(It works the way you want in version 12, which is currently under development and should be released in 5 months or so.Cheers,Jeff",
"msg_date": "Mon, 6 May 2019 11:32:03 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
},
{
"msg_contents": "Thank you, Jeff!\n\nWe'll be looking forward to the next version of Postgres in this case.\n\nAs far as I understand, you've answered about sending filtering condition\nto a foreign server... Could you, please, clarify about another (the first)\npart of my question? Why the server choose seq scan instead of pk key index\nonly scan for the local table?\n\nThank you\nᐧ\n\nOn Mon, May 6, 2019 at 6:32 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, May 6, 2019 at 10:44 AM Vitaly Baranovsky <\n> [email protected]> wrote:\n>\n>> Hello all,\n>>\n>> I faced strange behavior of PostgreSQL during the query execution.\n>>\n>\n> ...\n>\n>\n>> Also, please, note, that SQL without WHERE clause has been set to the\n>> foreign server:\n>> \" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"\n>>\n>> So, the optimizer doesn't select optimal plans for such executions :(\n>>\n>\n> It works the way you want in version 12, which is currently under\n> development and should be released in 5 months or so.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nThank you, Jeff!We'll be looking forward to the next version of Postgres in this case.As far as I understand, you've answered about sending filtering condition to a foreign server... Could you, please, clarify about another (the first) part of my question? Why the server choose seq scan instead of pk key index only scan for the local table? Thank youᐧOn Mon, May 6, 2019 at 6:32 PM Jeff Janes <[email protected]> wrote:On Mon, May 6, 2019 at 10:44 AM Vitaly Baranovsky <[email protected]> wrote:Hello all,I faced strange behavior of PostgreSQL during the query execution. ... Also, please, note, that SQL without WHERE clause has been set to the foreign server:\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\"So, the optimizer doesn't select optimal plans for such executions :(It works the way you want in version 12, which is currently under development and should be released in 5 months or so.Cheers,Jeff",
"msg_date": "Mon, 6 May 2019 18:38:42 +0300",
"msg_from": "Vitaly Baranovsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> It works the way you want in version 12, which is currently under\n> development and should be released in 5 months or so.\n\nEven in older versions, the OP would get a significantly smarter\nplan after setting use_remote_estimate = on. I think the core\nissue here is that we won't generate remote parameterized paths\nwithout that:\n\n\t/*\n\t * If we're not using remote estimates, stop here. We have no way to\n\t * estimate whether any join clauses would be worth sending across, so\n\t * don't bother building parameterized paths.\n\t */\n\tif (!fpinfo->use_remote_estimate)\n\t\treturn;\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 11:53:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
},
{
"msg_contents": "On Mon, May 6, 2019 at 11:38 AM Vitaly Baranovsky <[email protected]>\nwrote:\n\n> Thank you, Jeff!\n>\n> We'll be looking forward to the next version of Postgres in this case.\n>\n> As far as I understand, you've answered about sending filtering condition\n> to a foreign server... Could you, please, clarify about another (the first)\n> part of my question? Why the server choose seq scan instead of pk key index\n> only scan for the local table?\n>\n> Thank you\n>\n>\nAren't those the same thing? The foreign server can't use the where\nclause, if it doesn't get sent.\n\nCheers,\n\nJeff\n\nOn Mon, May 6, 2019 at 11:38 AM Vitaly Baranovsky <[email protected]> wrote:Thank you, Jeff!We'll be looking forward to the next version of Postgres in this case.As far as I understand, you've answered about sending filtering condition to a foreign server... Could you, please, clarify about another (the first) part of my question? Why the server choose seq scan instead of pk key index only scan for the local table? Thank youAren't those the same thing? The foreign server can't use the where clause, if it doesn't get sent. Cheers,Jeff",
"msg_date": "Mon, 6 May 2019 11:53:46 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
},
{
"msg_contents": "Ough, I believed I had use_remote_estimate = true in my database, but it\nwas false :(\n\nWith use_remote_estimate = true everything works well!\n\nHere is explain analyze with use_remote_estimate = true:\n\"Nested Loop (cost=100.45..108.97 rows=100000 width=16) (actual\ntime=1.037..1.037 rows=0 loops=1)\"\n\" Output: foreign_table.primary_uuid\"\n\" -> HashAggregate (cost=0.02..0.03 rows=1 width=16) (actual\ntime=0.004..0.004 rows=1 loops=1)\"\n\" Output: ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid)\"\n\" Group Key: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\n\" -> Result (cost=0.00..0.01 rows=1 width=16) (actual\ntime=0.001..0.001 rows=1 loops=1)\"\n\" Output: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\n\" -> Append (cost=100.43..108.92 rows=2 width=16) (actual\ntime=1.032..1.032 rows=0 loops=1)\"\n\" -> Foreign Scan on foreign_server.foreign_table\n(cost=100.43..104.47 rows=1 width=16) (actual time=0.994..0.994 rows=0\nloops=1)\"\n\" Output: foreign_table.primary_uuid\"\n\" Remote SQL: SELECT primary_uuid FROM public.foreign_table\nWHERE (($1::uuid = primary_uuid))\"\n\" -> Index Only Scan using local_table_pkey on public.local_table\n(cost=0.42..4.44 rows=1 width=16) (actual time=0.035..0.035 rows=0 loops=1)\"\n\" Output: local_table.primary_uuid\"\n\" Index Cond: (local_table.primary_uuid =\n('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid))\"\n\" Heap Fetches: 0\"\n\"Planning Time: 100.619 ms\"\n\"Execution Time: 1.243 ms\"\n\nI tried this with use_remote_estimate = true for different real queries\nwith a lot of joins and everything works well!\nᐧ\n\nOn Mon, May 6, 2019 at 6:53 PM Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > It works the way you want in version 12, which is currently under\n> > development and should be released in 5 months or so.\n>\n> Even in older versions, the OP would get a significantly smarter\n> plan after setting use_remote_estimate = on. I think the core\n> issue here is that we won't generate remote parameterized paths\n> without that:\n>\n> /*\n> * If we're not using remote estimates, stop here. 
We have no way\n> to\n> * estimate whether any join clauses would be worth sending\n> across, so\n> * don't bother building parameterized paths.\n> */\n> if (!fpinfo->use_remote_estimate)\n> return;\n>\n> regards, tom lane\n>\n\nOugh, I believed I had use_remote_estimate = true in my database, but it was false :(With use_remote_estimate = true everything works well!Here is explain analyze with use_remote_estimate = true:\"Nested Loop (cost=100.45..108.97 rows=100000 width=16) (actual time=1.037..1.037 rows=0 loops=1)\"\" Output: foreign_table.primary_uuid\"\" -> HashAggregate (cost=0.02..0.03 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=1)\"\" Output: ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid)\"\" Group Key: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\" -> Result (cost=0.00..0.01 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=1)\"\" Output: 'ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid\"\" -> Append (cost=100.43..108.92 rows=2 width=16) (actual time=1.032..1.032 rows=0 loops=1)\"\" -> Foreign Scan on foreign_server.foreign_table (cost=100.43..104.47 rows=1 width=16) (actual time=0.994..0.994 rows=0 loops=1)\"\" Output: foreign_table.primary_uuid\"\" Remote SQL: SELECT primary_uuid FROM public.foreign_table WHERE (($1::uuid = primary_uuid))\"\" -> Index Only Scan using local_table_pkey on public.local_table (cost=0.42..4.44 rows=1 width=16) (actual time=0.035..0.035 rows=0 loops=1)\"\" Output: local_table.primary_uuid\"\" Index Cond: (local_table.primary_uuid = ('ef89a151-3eab-42af-8ecc-8850053aa1bb'::uuid))\"\" Heap Fetches: 0\"\"Planning Time: 100.619 ms\"\"Execution Time: 1.243 ms\"I tried this with use_remote_estimate = true for different real queries with a lot of joins and everything works well!ᐧOn Mon, May 6, 2019 at 6:53 PM Tom Lane <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> It works the way you want in version 12, which is currently under\n> development and should be released in 5 months or so.\n\nEven in older versions, the OP would get a significantly smarter\nplan after setting use_remote_estimate = on. I think the core\nissue here is that we won't generate remote parameterized paths\nwithout that:\n\n /*\n * If we're not using remote estimates, stop here. We have no way to\n * estimate whether any join clauses would be worth sending across, so\n * don't bother building parameterized paths.\n */\n if (!fpinfo->use_remote_estimate)\n return;\n\n regards, tom lane",
"msg_date": "Mon, 6 May 2019 19:36:56 +0300",
"msg_from": "Vitaly Baranovsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
},
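For reference, use_remote_estimate is a postgres_fdw option that can be enabled per foreign server or per foreign table. A sketch with a placeholder server name (the thread only names the local schema, foreign_server, not the server object); use SET instead of ADD if the option is already present:

    -- enable remote estimates for everything on this server
    ALTER SERVER my_remote_server
        OPTIONS (ADD use_remote_estimate 'true');

    -- or just for the one affected foreign table
    ALTER FOREIGN TABLE foreign_server.foreign_table
        OPTIONS (ADD use_remote_estimate 'true');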
{
"msg_contents": "On Mon, May 6, 2019 at 11:53 AM Jeff Janes <[email protected]> wrote:\n\n> On Mon, May 6, 2019 at 11:38 AM Vitaly Baranovsky <\n> [email protected]> wrote:\n>\n>> Thank you, Jeff!\n>>\n>> We'll be looking forward to the next version of Postgres in this case.\n>>\n>> As far as I understand, you've answered about sending filtering condition\n>> to a foreign server... Could you, please, clarify about another (the first)\n>> part of my question? Why the server choose seq scan instead of pk key index\n>> only scan for the local table?\n>>\n>> Thank you\n>>\n>>\n> Aren't those the same thing? The foreign server can't use the where\n> clause, if it doesn't get sent.\n>\n\nNevermind. When you said local table, I had some tunnel vision and was\nthinking of the foreign table as viewed from the perspective of the foreign\nserver (to which it is local), not the actual local table. That too is\n\"fixed\" in the same commit to the 12dev branch as the other issue is:\n\ncommit 4be058fe9ec5e630239b656af21fc083371f30ed\nDate: Mon Jan 28 17:54:10 2019 -0500\n\n In the planner, replace an empty FROM clause with a dummy RTE.\n\n\nMy tests are all done with empty, unanalyzed tables as I just took you DDL\nwithout inventing my own DML, so may be different than what what you were\nseeing with your populated tables.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, May 6, 2019 at 11:53 AM Jeff Janes <[email protected]> wrote:On Mon, May 6, 2019 at 11:38 AM Vitaly Baranovsky <[email protected]> wrote:Thank you, Jeff!We'll be looking forward to the next version of Postgres in this case.As far as I understand, you've answered about sending filtering condition to a foreign server... Could you, please, clarify about another (the first) part of my question? Why the server choose seq scan instead of pk key index only scan for the local table? Thank youAren't those the same thing? The foreign server can't use the where clause, if it doesn't get sent. Nevermind. When you said local table, I had some tunnel vision and was thinking of the foreign table as viewed from the perspective of the foreign server (to which it is local), not the actual local table. That too is \"fixed\" in the same commit to the 12dev branch as the other issue is: commit 4be058fe9ec5e630239b656af21fc083371f30edDate: Mon Jan 28 17:54:10 2019 -0500 In the planner, replace an empty FROM clause with a dummy RTE. My tests are all done with empty, unanalyzed tables as I just took you DDL without inventing my own DML, so may be different than what what you were seeing with your populated tables.Cheers,Jeff",
"msg_date": "Mon, 6 May 2019 13:17:36 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL optimizer use seq scan instead of pkey index only scan\n (in queries with postgres_fdw)"
}
] |
[
{
"msg_contents": "Hi,\n\n(Apologies if this isn't the right place to post this)\n\nA few days ago a blog post appeared on phoronix.com[1] comparing GCC 8.3.0 against 9.0.1 on Intel cascadelake processors.\nA notable difference was seen in the PostgreSQL benchmark (v10.3, pgbench, read/write, more detail below), both when compiling with -march=native and -march=skylake:\n\nGCC version | -march= | TPS\n 8.3.0 | skylake | 5667\n 9.0.1 | skylake | 11684 (2.06x speed up)\n 8.3.0 | native | 8075\n 9.0.1 | native | 11274 (1.40x speed up)\n\nI'm interested to know the devs' take on this is - does GCC 9 contain some new feature(s) that are particularly well suited to compiling and optimising Postgres? Or was GCC 8 particularly bad?\n\n\nThe test script seems to be this one[2], and goes something like this:\n\n- Postgres 10.3 is configure using --without-readline and --without-zlib (after patching it so that it can run as root). The remaining compiler options seem to be (implicitly?) \"-fno-strict-aliasing -fwrapv -O3 -lpgcommon -lpq -lpthread -lrt -lcrypt -ldl -lm\", plus the -march setting under test.\n\n- initdb is run with --encoding=SQL_ASCII --locale=C\n\n- the db is started with \"pg_ctl start -o '-c autovacuum=false'\"\n\n- createdb pgbench\n\n- pgbench -i -s <system memory in MB * 0.003> pgbench\n\n- pgbench -j <number of cores> -c <number of cores * 4> -T 60 pgbench\n\n\nCheers,\nSteven.\n\n[1] https://www.phoronix.com/scan.php?page=news_item&px=Intel-Cascade-Lake-GCC9\n[2] https://openbenchmarking.org/innhold/b53a0ca6dcfdc9b8597a7b144fae2110fa6af1fb\n\n\n\nThis email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. \nThe registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. \nSee - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. \nIf you cannot access this link, please notify us by reply message and we will send the contents to you.\n\nGAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. \nFull details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. \nPlease familiarise yourself with this policy and check it from time to time for updates as it supplements this notice.\n\nHi,\n\n(Apologies if this isn't the right place to post this)\n\nA few days ago a blog post appeared on phoronix.com[1] comparing GCC 8.3.0 against 9.0.1 on Intel cascadelake processors.\nA notable difference was seen in the PostgreSQL benchmark (v10.3, pgbench, read/write, more detail below), both when compiling with -march=native and -march=skylake:\n\nGCC version | -march= | TPS\n 8.3.0 | skylake | 5667\n 9.0.1 | skylake | 11684 (2.06x speed up)\n 8.3.0 | native | 8075\n 9.0.1 | native | 11274 (1.40x speed up)\n\nI'm interested to know the devs' take on this is - does GCC 9 contain some new feature(s) that are particularly well suited to compiling and optimising Postgres? Or was GCC 8 particularly bad?\n\n\nThe test script seems to be this one[2], and goes something like this:\n\n- Postgres 10.3 is configure using --without-readline and --without-zlib (after patching it so that it can run as root). 
The remaining compiler options seem to be (implicitly?) \"-fno-strict-aliasing -fwrapv -O3 -lpgcommon -lpq -lpthread -lrt -lcrypt -ldl -lm\", plus the -march setting under test.\n\n- initdb is run with --encoding=SQL_ASCII --locale=C\n\n- the db is started with \"pg_ctl start -o '-c autovacuum=false'\"\n\n- createdb pgbench\n\n- pgbench -i -s <system memory in MB * 0.003> pgbench\n\n- pgbench -j <number of cores> -c <number of cores * 4> -T 60 pgbench\n\n\nCheers,\nSteven.\n\n[1] https://www.phoronix.com/scan.php?page=news_item&px=Intel-Cascade-Lake-GCC9\n[2] https://openbenchmarking.org/innhold/b53a0ca6dcfdc9b8597a7b144fae2110fa6af1fb\n\n\n\n This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you.GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice",
"msg_date": "Tue, 7 May 2019 16:14:43 +0000",
"msg_from": "Steven Winfield <[email protected]>",
"msg_from_op": true,
"msg_subject": "GCC 8.3.0 vs. 9.0.1"
},
{
"msg_contents": "Steven Winfield <[email protected]> writes:\n> A few days ago a blog post appeared on phoronix.com[1] comparing GCC 8.3.0 against 9.0.1 on Intel cascadelake processors.\n> A notable difference was seen in the PostgreSQL benchmark (v10.3, pgbench, read/write, more detail below), both when compiling with -march=native and -march=skylake:\n> I'm interested to know the devs' take on this is - does GCC 9 contain some new feature(s) that are particularly well suited to compiling and optimising Postgres? Or was GCC 8 particularly bad?\n\nGiven the described test setup, I'd put basically no stock in these\nnumbers. It's unlikely that this test case's performance is CPU-bound\nper se; more likely, I/O and lock contention are dominant factors.\nSo I'm afraid whatever they're measuring is a more-or-less chance\neffect rather than a real system-wide code improvement.\n\nIt is an interesting report, all the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 13:05:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GCC 8.3.0 vs. 9.0.1"
},
{
"msg_contents": "On Tue, May 7, 2019 at 10:06 AM Tom Lane <[email protected]> wrote:\n> Given the described test setup, I'd put basically no stock in these\n> numbers. It's unlikely that this test case's performance is CPU-bound\n> per se; more likely, I/O and lock contention are dominant factors.\n> So I'm afraid whatever they're measuring is a more-or-less chance\n> effect rather than a real system-wide code improvement.\n\nOr a compiler bug. Link-time optimizations give the compiler a view of\nthe program as a whole, not just a single TU at a time. This enables\nit to perform additional aggressive optimization.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 7 May 2019 10:28:16 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GCC 8.3.0 vs. 9.0.1"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 16:14:43 +0000, Steven Winfield wrote:\n> (Apologies if this isn't the right place to post this)\n\nSeems right.\n\n\n> A few days ago a blog post appeared on phoronix.com[1] comparing GCC 8.3.0 against 9.0.1 on Intel cascadelake processors.\n> A notable difference was seen in the PostgreSQL benchmark (v10.3, pgbench, read/write, more detail below), both when compiling with -march=native and -march=skylake:\n> \n> GCC version | -march= | TPS\n> 8.3.0 | skylake | 5667\n> 9.0.1 | skylake | 11684 (2.06x speed up)\n> 8.3.0 | native | 8075\n> 9.0.1 | native | 11274 (1.40x speed up)\n> \n> I'm interested to know the devs' take on this is - does GCC 9 contain some new feature(s) that are particularly well suited to compiling and optimising Postgres? Or was GCC 8 particularly bad?\n\nI think those numbers are just plain bogus. read/write pgbench is\ncommonly IO bound. My suspicion is much more that the tests for gcc 8\nand 9 were executed in the same postgres cluster (in which case the\nsecond will be faster, because it'll have pre-initialized WAL files).\nOr something of that vein.\n\n\n> (after patching it so that it can run as root)\n\nThat, uh, seems odd.\n\n\n> - pgbench -i -s <system memory in MB * 0.003> pgbench\n\nThat's pretty small, but whatever.\n\n\nHere's my results:\n\nI ran:\n\npgbench -i -q -s 96 && pgbench -n -c 8 -j 8 -T 100 -P1\n\n\ngcc 8.3, march=native (on skylake):\n\nfirst run:\ntps = 14436.465265 (excluding connections establishing)\n\nsecond run:\ntps = 13293.266789 (excluding connections establishing)\n\nthird run after postgres restart (and thus a checkpoint):\ntps = 14270.248273 (excluding connections establishing)\n\n\ngcc 9.1, march=native (on skylake):\n\nfirst run:\ntps = 13836.231981 (excluding connections establishing)\n\nsecond run:\ntps = 13304.975550 (excluding connections establishing)\n\nthird run after postgres restart (and thus a checkpoint):\ntps = 14390.246324 (excluding connections establishing)\n\n\nAs you can see the test results are somewhat unstable - the test\nduration of 60s is just not long enough. But there's no meaningful\nevidence of a large speedup here.\n\n\n\n\n> This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. \n> The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. \n> See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. \n> If you cannot access this link, please notify us by reply message and we will send the contents to you.\n> \n> GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. \n> Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. \n> Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice.\n\nThis is a public list.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 10:32:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GCC 8.3.0 vs. 9.0.1"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 10:28:16 -0700, Peter Geoghegan wrote:\n> On Tue, May 7, 2019 at 10:06 AM Tom Lane <[email protected]> wrote:\n> > Given the described test setup, I'd put basically no stock in these\n> > numbers. It's unlikely that this test case's performance is CPU-bound\n> > per se; more likely, I/O and lock contention are dominant factors.\n> > So I'm afraid whatever they're measuring is a more-or-less chance\n> > effect rather than a real system-wide code improvement.\n> \n> Or a compiler bug. Link-time optimizations give the compiler a view of\n> the program as a whole, not just a single TU at a time. This enables\n> it to perform additional aggressive optimization.\n\nNote that the flags described don't enable LTO.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 10:42:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GCC 8.3.0 vs. 9.0.1"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 10:32:45 -0700, Andres Freund wrote:\n> pgbench -i -q -s 96 && pgbench -n -c 8 -j 8 -T 100 -P1\n\npossibly also worthwhile to note: Adding -M prepared (which I think\nphoronix doesn't specify) makes this considerably faster...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 11:04:22 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GCC 8.3.0 vs. 9.0.1"
},
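Spelled out, the benchmark invocation from the runs above with the suggested -M prepared added would look like this (same scale and client counts as quoted; absolute numbers will of course differ per machine):

    pgbench -i -q -s 96
    pgbench -n -M prepared -c 8 -j 8 -T 100 -P 1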
{
"msg_contents": "Thanks, everyone, for your comments.\nI guess if something looks too good to be true then it usually is!\n\nSteven.\n\n(P.S. Apologies for the email disclaimer - it is added by our mail server, not my mail client, and its exclusion list is on the fritz)\n\n\n\n",
"msg_date": "Thu, 9 May 2019 14:04:26 +0000",
"msg_from": "Steven Winfield <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: GCC 8.3.0 vs. 9.0.1"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe would need to integrate Postgres Users Authentication with our own LDAP Server.\n\nBasically as of now we are able to login to Postgress DB with a user/password credential.\n[cid:[email protected]]\n\nThese user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\nWe would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n\nCan you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n\nRegards\nTarkeshwar",
"msg_date": "Thu, 9 May 2019 04:51:02 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On 9/5/19 7:51 π.μ., M Tarkeshwar Rao wrote:\n>\n> Hi all,\n>\n> We would need to integrate Postgres Users Authentication with our own LDAP Server.\n>\n> Basically as of now we are able to login to Postgress DB with a user/password credential.\n>\n> These user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\n>\n> We would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n>\n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n>\nThe users must be existent as postgresql users. Authorization : roles, privileges etc also will be taken by postgresql definitions, grants, etc. But the authentication will be done in LDAP.\nIt is done in pg_hba.conf. There are two ways to do this (with 1 or 2 phases). We have successfully used both Lotus Notes LDAP and FreeIPA LDAP with our production PostgreSQL servers, I have tested \nwith openldap as well, so I guess chances are that it will work with yours.\n>\n> Regards\n>\n> Tarkeshwar\n>\n\n\n-- \nAchilleas Mantzios\nIT DEV Lead\nIT DEPT\nDynacom Tankers Mgmt",
"msg_date": "Thu, 9 May 2019 09:17:37 +0300",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, 2019-05-09 at 04:51 +0000, M Tarkeshwar Rao wrote:\n> We would need to integrate Postgres Users Authentication with our own LDAP Server. \n> \n> Basically as of now we are able to login to Postgress DB with a user/password credential.\n>\n> [roles \"pg_signal_backend\" and \"postgres\"]\n> \n> These user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\n> We would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n> \n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n\nLDAP authentication is well documented:\nhttps://www.postgresql.org/docs/current/auth-ldap.html\n\nBut I don't think you are on the right track.\n\n\"pg_signal_backend\" cannot login, it is a role to which you add a login user\nto give it certain privileges. So you don't need to authenticate the role.\n\n\"postgres\" is the installation superuser. If security is important for you,\nyou won't set a password for that user and you won't allow remote logins\nwith that user.\n\nBut for your application users LDAP authentication is a fine thing, and not\nhard to set up if you know a little bit about LDAP.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 09 May 2019 08:42:28 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
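A sketch of the two pg_hba.conf variants referred to above (simple bind and search+bind). Every host name, DN and attribute below is a placeholder, not a value from the thread:

    # simple bind: the server binds as uid=<login>,ou=people,dc=example,dc=com
    host  all  all  0.0.0.0/0  ldap  ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=people,dc=example,dc=com"

    # search+bind: look the user up first, then re-bind with the DN that was found
    host  all  all  0.0.0.0/0  ldap  ldapserver=ldap.example.com ldapbasedn="ou=people,dc=example,dc=com" ldapsearchattribute=uid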
{
"msg_contents": "We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\r\n\r\nWe are getting issue with special characters by following steps given in postgres documentation. \r\nIt is not accepting any special characters as special characters are mandatory in our use case.\r\n\r\nCan you please help us or have you any steps by which we can configure any postgres with LDAP?\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Thursday, May 9, 2019 12:12 PM\r\nTo: M Tarkeshwar Rao <[email protected]>; pgsql-general <[email protected]>; '[email protected]' <[email protected]>; '[email protected]' <[email protected]>; [email protected]; [email protected]; '[email protected]' <[email protected]>; Aashish Nagpaul <[email protected]>\r\nSubject: Re: integrate Postgres Users Authentication with our own LDAP Server\r\n\r\nOn Thu, 2019-05-09 at 04:51 +0000, M Tarkeshwar Rao wrote:\r\n> We would need to integrate Postgres Users Authentication with our own LDAP Server. \r\n> \r\n> Basically as of now we are able to login to Postgress DB with a user/password credential.\r\n>\r\n> [roles \"pg_signal_backend\" and \"postgres\"]\r\n> \r\n> These user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\r\n> We would want the authentication to be done with LDAP, so basically \r\n> the user credentials should be store in LDAP server\r\n> \r\n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\r\n\r\nLDAP authentication is well documented:\r\nhttps://www.postgresql.org/docs/current/auth-ldap.html\r\n\r\nBut I don't think you are on the right track.\r\n\r\n\"pg_signal_backend\" cannot login, it is a role to which you add a login user to give it certain privileges. So you don't need to authenticate the role.\r\n\r\n\"postgres\" is the installation superuser. If security is important for you, you won't set a password for that user and you won't allow remote logins with that user.\r\n\r\nBut for your application users LDAP authentication is a fine thing, and not hard to set up if you know a little bit about LDAP.\r\n\r\nYours,\r\nLaurenz Albe\r\n--\r\nCybertec | https://protect2.fireeye.com/url?k=4f372c5d-13a52101-4f376cc6-0cc47ad93d46-aed009fdc0b3e18f&u=https://www.cybertec-postgresql.com/\r\n\r\n",
"msg_date": "Thu, 9 May 2019 07:11:24 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, 2019-05-09 at 07:11 +0000, M Tarkeshwar Rao wrote:\n> We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\n> \n> We are getting issue with special characters by following steps given in postgres documentation. \n> It is not accepting any special characters as special characters are mandatory in our use case.\n> \n> Can you please help us or have you any steps by which we can configure any postgres with LDAP?\n\nIt was very inconsiderate of you to write to 100 PostgreSQL lists at once (and I was stupid\nenough not to notice right away).\n\nThen, please don't top-post on these lists. Write your reply *below* what you quote.\n\nWhat exactly is your problem? \"We are getting issues\" is not detailed enough.\nYou probably just have to get the encoding right.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 09 May 2019 09:23:50 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, May 09, 2019 at 07:11:24AM +0000, M Tarkeshwar Rao wrote:\n>We want to setup ldap authentication in pg_hba.conf, for Postgresql\n>users(other than postgres super user).\n>\n>We are getting issue with special characters by following steps given in\n>postgres documentation. It is not accepting any special characters as\n>special characters are mandatory in our use case.\n>\n>Can you please help us or have you any steps by which we can configure\n>any postgres with LDAP?\n\nPlease don't cross-post - this is a fairly generic question, it has\nnothing to do with performance or development, so the right thing is to\nsend it to pgsql-general. Likewise, it makes little sense to send\nquestions to the \"owner\". I've removed the other lists from CC.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 9 May 2019 14:43:20 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "Greetings,\n\n(Dropping all the extra mailing lists and such, please do *not*\ncross-post like that)\n\n* M Tarkeshwar Rao ([email protected]) wrote:\n> We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\n> \n> We are getting issue with special characters by following steps given in postgres documentation. \n> It is not accepting any special characters as special characters are mandatory in our use case.\n> \n> Can you please help us or have you any steps by which we can configure any postgres with LDAP?\n\nIs this an active directory environment? If so, you should probably be\nusing GSSAPI anyway and not LDAP for the actual authentication.\n\nAs for the \"special characters\", you really need to provide specifics\nand be able to show us the actual errors that you're getting.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 May 2019 15:24:44 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
}
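If it is an Active Directory domain, the GSSAPI route mentioned above is likewise just a pg_hba.conf entry, roughly of this shape (the realm is a placeholder); clients then authenticate with their Kerberos ticket rather than an LDAP-stored password:

    host  all  all  0.0.0.0/0  gss  include_realm=0 krb_realm=EXAMPLE.COM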
] |
[
{
"msg_contents": "Hello\n\nHow can we generate in the log of executed querys (directory pg_log) the\namount of bytes transferred between the server and the client of the result\nof a query?\n\nExample:\n\na) select now (); - few bytes transferred\nb) select * from large_table; - 20,000,000 bytes transferred\n\nI understand that this parameter can reduce the performance of the database\nin general. I intend to use this information to measure the impact of each\nquery on the total volume of bytes transferred by the network interface by\nIP address in a log analysis tool such as pgBadger\n\n-- \nregards,\n\nFranklin\n\nHelloHow can we generate in the log of executed querys (directory pg_log) the amount of bytes transferred between the server and the client of the result of a query?Example:a) select now (); - few bytes transferredb) select * from large_table; - 20,000,000 bytes transferredI understand that this parameter can reduce the performance of the database in general. I intend to use this information to measure the impact of each query on the total volume of bytes transferred by the network interface by IP address in a log analysis tool such as pgBadger-- regards,Franklin",
"msg_date": "Fri, 10 May 2019 11:10:48 -0300",
"msg_from": "Franklin Haut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Log size in bytes of query result"
},
{
"msg_contents": "Franklin Haut wrote:\n> How can we generate in the log of executed querys (directory pg_log)\n> the amount of bytes transferred between the server and the client\n> of the result of a query?\n\nAs far as I know, there is no parameter to do that.\n\nYou'd have to write an extension that hooks into PostgreSQL, but I\nhave no idea how hard that would be.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 10 May 2019 18:09:41 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Log size in bytes of query result"
},
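There is indeed no such log parameter, but for one-off measurements the size of a query's result can be approximated from SQL itself. A sketch using the large_table example from the first message; it sums the text rendering of every row, which is an approximation rather than an exact wire-protocol byte count:

    SELECT count(*)                   AS row_count,
           sum(octet_length(t::text)) AS approx_result_bytes
    FROM (SELECT * FROM large_table) AS t;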
{
"msg_contents": "Hi\n\n> extension that hooks into PostgreSQL\n\nWe have any hooks that can be used for such purposes?\nSometimes I think how to implement counters \"bytes sent to client\"/\"bytes recv from client\" in pg_stat_statements but did not found good place. Where we can accumulate such counters and how they can be accessible from extension? Extend DestReceiver or add counter directly in src/backend/libpq/pqcomm.c ?\n\nPS: some returned with feedback old patch: https://www.postgresql.org/message-id/flat/CAHhq2wJXRqTMJXZwMAOdtQOkxSKxg_aMxxofhvCo%3DRGXvh0AUg%40mail.gmail.com\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 10 May 2019 19:42:18 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Log size in bytes of query result"
},
{
"msg_contents": "@Sergei and @Laurenz Thank you for reply.\n\nI think it is important to have other resource indicators consumed by the\nquery besides the execution time as the amount of Bytes sent / received by\neach query, how many blocks / bytes were read / written from the cache or\nhad to be loaded from the disk or even CPU cycles .\n\nWith this information we can make more efficient adjustments to the queries\nand improve the management of the resources, especially in virtualized\nenvironments, where we often can not identify the real reason for poor\nperformance delivery, whether it is in the structure modeling or if there\nis a dispute over resources by other virtual machines.\n\nBy my analysis, I see that the most efficient way to perform this control\nwould be in the existing medium in postgresql that is the log file (pg_log)\nadding a few more variables for each query executed.\n\n* Bytes Received (ethernet)\n* Bytes Sent (ethernet)\n* Bytes Written (disk)\n* Bytes read only cache (cache hit)\n* Bytes Read from disk (cache miss)\n* CPU Time\n\nUnfortunately I do not have the necessary knowledge to assist in the\nimplementation of these features and I want to leave it as a suggestion for\na new version.\n\nregards,\n\nEm sex, 10 de mai de 2019 às 13:42, Sergei Kornilov <[email protected]> escreveu:\n\n> Hi\n>\n> > extension that hooks into PostgreSQL\n>\n> We have any hooks that can be used for such purposes?\n> Sometimes I think how to implement counters \"bytes sent to client\"/\"bytes\n> recv from client\" in pg_stat_statements but did not found good place. Where\n> we can accumulate such counters and how they can be accessible from\n> extension? Extend DestReceiver or add counter directly in\n> src/backend/libpq/pqcomm.c ?\n>\n> PS: some returned with feedback old patch:\n> https://www.postgresql.org/message-id/flat/CAHhq2wJXRqTMJXZwMAOdtQOkxSKxg_aMxxofhvCo%3DRGXvh0AUg%40mail.gmail.com\n>\n> regards, Sergei\n>\n\n\n-- \nAtenciosamente,\n\n\nFranklin Haut\n\n@Sergei and @Laurenz Thank you for reply.I think it is important to have other resource indicators consumed by the query besides the execution time as the amount of Bytes sent / received by each query, how many blocks / bytes were read / written from the cache or had to be loaded from the disk or even CPU cycles .With this information we can make more efficient adjustments to the queries and improve the management of the resources, especially in virtualized environments, where we often can not identify the real reason for poor performance delivery, whether it is in the structure modeling or if there is a dispute over resources by other virtual machines.By my analysis, I see that the most efficient way to perform this control would be in the existing medium in postgresql that is the log file (pg_log) adding a few more variables for each query executed.* Bytes Received (ethernet)* Bytes Sent (ethernet)* Bytes Written (disk)* Bytes read only cache (cache hit)* Bytes Read from disk (cache miss)* CPU TimeUnfortunately I do not have the necessary knowledge to assist in the implementation of these features and I want to leave it as a suggestion for a new version.regards,Em sex, 10 de mai de 2019 às 13:42, Sergei Kornilov <[email protected]> escreveu:Hi\n\n> extension that hooks into PostgreSQL\n\nWe have any hooks that can be used for such purposes?\nSometimes I think how to implement counters \"bytes sent to client\"/\"bytes recv from client\" in pg_stat_statements but did not found good place. 
Where we can accumulate such counters and how they can be accessible from extension? Extend DestReceiver or add counter directly in src/backend/libpq/pqcomm.c ?\n\nPS: some returned with feedback old patch: https://www.postgresql.org/message-id/flat/CAHhq2wJXRqTMJXZwMAOdtQOkxSKxg_aMxxofhvCo%3DRGXvh0AUg%40mail.gmail.com\n\nregards, Sergei\n-- Atenciosamente,Franklin Haut",
"msg_date": "Wed, 22 May 2019 09:50:52 -0300",
"msg_from": "Franklin Haut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Log size in bytes of query result"
},
{
"msg_contents": "> On Wed, May 22, 2019 at 2:51 PM Franklin Haut <[email protected]> wrote:\n>\n> By my analysis, I see that the most efficient way to perform this control\n> would be in the existing medium in postgresql that is the log file (pg_log)\n> adding a few more variables for each query executed.\n\n> On Fri, May 10, 2019 at 6:42 PM Sergei Kornilov <[email protected]> wrote:\n>\n> We have any hooks that can be used for such purposes? Sometimes I think how\n> to implement counters \"bytes sent to client\"/\"bytes recv from client\" in\n> pg_stat_statements but did not found good place. Where we can accumulate such\n> counters and how they can be accessible from extension? Extend DestReceiver\n> or add counter directly in src/backend/libpq/pqcomm.c ?\n\nFor the records, I guess on Linux you can gather such kind of information via\nebpf, even without hooks in Postgres (if you avoid too frequent context\nswitches between kernel/user space via e.g. relying on send/recv, it should be\nalso efficient). I have a POC in my postgres-bcc repo, it looks like this:\n\n $ net_per_query.py bin/postgres -c $container_id\n attaching...\n listening...\n detaching...\n\n sent\n [16397:4026532567] copy pgbench_accounts from stdin: 16b\n [16397:4026532567] alter table pgbench_accounts add primary key (aid): 96b\n [16428:4026532567] postgres: backend 16428: 2k\n\n received\n [16397:4026532567] copy pgbench_accounts from stdin: 16m\n\n\n",
"msg_date": "Wed, 22 May 2019 15:15:16 +0200",
"msg_from": "Dmitry Dolgov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Log size in bytes of query result"
}
] |
[
{
"msg_contents": "Hi,\n\nI recently stumbled across an interesting query performance question\nover at StackOverflow [1], which caught my attention and I started to\ninvestigate the issue further.\n\nOne of the things consuming most of the time was an Index Only Scan\nexecuted millions of times. And on top came the Nested Loop which\nfinally reduced the rows but also took a lot of time to do so.\n\nExplain plan: https://explain.depesz.com/s/4GYT\n\nMy initial test using a ~200M row table [2] revealed a completely\ndifferent plan with a no-brainer performance of ~10.7 milliseconds vs.\nthe original 6.25 minutes (10x more rows cannot explain that).\n\nAs the query included a LIMIT I also started to play around with OFFSET\nbecause I know that at least OFFSET + LIMIT (sorted) rows have to be\nread, with interesting result using one of my optimization attempts:\n\nTo eliminate the JOIN and the massive amount of loops I decided to\ngenerate a huge UNION ALL query using a function [3] on-the-fly and\nalthough performing pretty poor for no OFFSET, it makes quite a huge\ndifference with higher ones.\n\nHere, the times for \"exec_a\" are with plans as generated by PostgreSQL\n11.2 and \"exec_d\" with my UNION ALL version (all times include planning\nand execution):\n\n c_offset | exec_a | exec_d\n----------+-----------+-----------\n 0 | 10.694 | 746.892\n 10000 | 175.858 | 653.218\n 100000 | 1632.205 | 791.913\n 1000000 | 11244.091 | 2274.160\n 5000000 | 11567.438 | 9428.352\n 10000000 | 13442.229 | 17026.783\n\nComplete plans for all executions here:\nexec_a: https://explain.depesz.com/s/Ck1\nexec_d: https://explain.depesz.com/s/ZoUu\n\nA retest after upgrading to PostgreSQL 11.3 and adding another 200M rows\nrevealed even different numbers:\n\n c_offset | exec_a | exec_a_x2 | exec_d | exec_d_x2\n----------+-----------+-----------+-----------+-----------\n 0 | 10.694 | 16.616 | 746.892 | 630.440\n 10000 | 175.858 | 182.922 | 653.218 | 646.173\n 100000 | 1632.205 | 1682.033 | 791.913 | 782.874\n 1000000 | 11244.091 | 24781.706 | 2274.160 | 2306.577\n 5000000 | 11567.438 | 24798.120 | 9428.352 | 8886.781\n 10000000 | 13442.229 | 27315.650 | 17026.783 | 16808.223\n\nOne major difference for the \"exec_a\" plans is that starting with OFFSET\nof 1000000, the planner switches from a \"Merge Append\" + \"Nested Loop\"\nto a \"Parallel Append\" + \"Hash Join\" + \"Sort\" + \"Gather Merge\", whereas\nthe plans for \"exec_d\" always remain single-threaded.\n\nMy question now is why can't the optimizer generate a plan that in this\ncase does 114 loops of \"events\" scans instead of a million loops on the\n\"subscription_signal\"? There even is an index that spans both relevant\ncolumns here (see [2]) which is used extensively in my UNION variant (as\nintended and expected).\n\nAlso I observed that while the parallel append is going to be faster\neventually due to better I/O scalability (at least on my system using an\nSSD separately for log and different index/data tablespaces) it leads to\na lot of CPU cores being saturated as well as a lot more I/O in general\nand also includes the bottleneck of per-worker disk-sorts. From the\nperspective of system resources this is not really helpful and it also\ndoesn't seem to bring much benefit in my case as parallel append just\nsaves ~10-20% (for OFFSET 1000000) vs. 
standard Append (with parallel\nindex/seq scans of partitions).\n\nUsing a single-threaded approach (to preserve resources for concurrent\nqueries, max_parallel_workers_per_gather = 0), the UNION ALL approach is\nsuperior starting at offset 100000:\n\n c_offset | exec_a | exec_d\n----------+-----------+-----------\n 0 | 18.028 | 292.762\n 10000 | 188.548 | 308.824\n 100000 | 1710.029 | 455.101\n 1000000 | 81325.527 | 1993.886\n 5000000 | 84206.901 | 8638.194\n 10000000 | 84846.488 | 16814.890\n\nOne thing that really disturbs me in this case is the decision of the\noptimizer to generate an Append + Hash starting with offset 1000000\ninstead of simply continuing with a Merge Append, which pushes down\nlimits and returns just 10M intermediate rows whereas Append does not -\nyet - and results into 270M intermediate rows, resulting these numbers\n(enable_hashjoin turned off to force a Merge Append):\n\n c_offset | exec_a | exec_a_m\n----------+-----------+------------\n 1000000 | 81325.527 | 16517.566\n\n...but then degrades further because it switches to Append again (no way\nto test a Merge Append performance here, I guess):\n 5000000 | 84206.901 | 107161.533\n 10000000 | 84846.488 | 109368.087\n\n\nIs there anything I can do about it (apart from my generated huge UNION)\nto speed things up?\n\nPlease note that I'm using Timescale extension just as a simple way of\nmanaging the partitions and indexes and intentionally set the time\ncolumn to a one different from the query filter to not have it optimize\nthings away under the hood.\n\nLooking forward to any pointers here.\n\nCheers,\n\n\tAncoron\n\nRefs:\n[1] https://stackoverflow.com/questions/55470713\n[2] https://paste.ofcode.org/szj7f7fCSYk7jQNdd5Wvbx\n[3] https://paste.ofcode.org/ibZ8fNmNFDrsyxa3NktdWB\n\n\n\n",
"msg_date": "Sun, 12 May 2019 16:08:44 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Huge generated UNION ALL faster than JOIN?"
},
{
"msg_contents": "Ancoron Luciferis <[email protected]> writes:\n> One of the things consuming most of the time was an Index Only Scan\n> executed millions of times. And on top came the Nested Loop which\n> finally reduced the rows but also took a lot of time to do so.\n\n> Explain plan: https://explain.depesz.com/s/4GYT\n\nThe core problem you've got there is the misestimation of the join size:\n\nNested Loop (cost=0.71..0.30 rows=72,839,557 width=33) (actual time=19,504.096..315,933.158 rows=274 loops=1)\n\nAnytime the planner is off by a factor of 250000x, it's not going to end\nwell. In this case, it's imagining that the LIMIT will kick in after just\na very small part of the join is executed --- but in reality, the LIMIT\nis larger than the join output, so that we have to execute the whole join.\nWith a more accurate idea of the join result size, it would have chosen\na different plan.\n\nWhat you ought to look into is why is that estimate so badly off.\nMaybe out-of-date stats, or you need to raise the stats target for\none or both tables?\n\n> My question now is why can't the optimizer generate a plan that in this\n> case does 114 loops of \"events\" scans instead of a million loops on the\n> \"subscription_signal\"?\n\nI don't see any \"events\" table in that query, so this question isn't\nmaking a lot of sense to me. But in any case, the answer probably boils\ndown to \"it's guessing that a plan like this will stop early without\nhaving to scan all of the large table\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 May 2019 14:08:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge generated UNION ALL faster than JOIN?"
},
{
"msg_contents": "On 12/05/2019 20:08, Tom Lane wrote:\n> Ancoron Luciferis <[email protected]> writes:\n>> One of the things consuming most of the time was an Index Only Scan\n>> executed millions of times. And on top came the Nested Loop which\n>> finally reduced the rows but also took a lot of time to do so.\n> \n>> Explain plan: https://explain.depesz.com/s/4GYT\n> \n> The core problem you've got there is the misestimation of the join size:\n> \n> Nested Loop (cost=0.71..0.30 rows=72,839,557 width=33) (actual time=19,504.096..315,933.158 rows=274 loops=1)\n> \n> Anytime the planner is off by a factor of 250000x, it's not going to end\n> well. In this case, it's imagining that the LIMIT will kick in after just\n> a very small part of the join is executed --- but in reality, the LIMIT\n> is larger than the join output, so that we have to execute the whole join.\n> With a more accurate idea of the join result size, it would have chosen\n> a different plan.\n> \n> What you ought to look into is why is that estimate so badly off.\n> Maybe out-of-date stats, or you need to raise the stats target for\n> one or both tables?\n> \n\nI thought so as well and that's why I started investigating, but after\ncreating my own data set and a final analyze of both tables I ended up\nwith similar difference in estimation vs. actual:\n\nhttps://explain.depesz.com/s/R7jp\n\nNested Loop (cost=25.17..514,965,251.12 rows=27,021,979 width=56)\n(actual time=0.568..5.686 rows=274 loops=1)\n\n...but this was fast due to the Merge Append being used and pushed-down\nLIMIT.\n\n>> My question now is why can't the optimizer generate a plan that in this\n>> case does 114 loops of \"events\" scans instead of a million loops on the\n>> \"subscription_signal\"?\n> \n> I don't see any \"events\" table in that query, so this question isn't\n> making a lot of sense to me. But in any case, the answer probably boils\n> down to \"it's guessing that a plan like this will stop early without\n> having to scan all of the large table\".\n> \n> \t\t\tregards, tom lane\n> \n\nYes, Timescale extension is mangling the partition names quite a lot. I\nwonder if it would be possible to hold the result of the estimated\nsmaller reference data (114 subscription_signal.signal_id entries in\nthis case) in a VALUES list and then use that to filter the table with\nthe larger estimate instead of looping over.\n\nCheers,\n\n\tAncoron\n\n\n\n\n",
"msg_date": "Mon, 13 May 2019 00:23:14 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge generated UNION ALL faster than JOIN?"
}
] |
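Following Tom's pointer about the badly misestimated join size, a first step is usually to refresh and strengthen the statistics on the join columns before resorting to query rewrites. The following is a sketch only, reusing the subscription_signal table and signal_id column named in the thread; the statistics target of 1000 is illustrative, not a recommendation.

    -- Sketch: raise the per-column statistics target and re-analyze,
    -- so the planner has a better basis for the join selectivity estimate.
    ALTER TABLE subscription_signal ALTER COLUMN signal_id SET STATISTICS 1000;
    ANALYZE subscription_signal;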
[
{
"msg_contents": "Hello,\n\nI have an inefficient query execution for queries with postgres_fdw.\n\nI have an ineffective query with remote tables (postgres_fdw) that works\nfor about 1 second. The same query with local tables (with the same data)\ninstead of foreign ones has execution time less than 5 ms. So, the\ndifference is almost 200 times.\n\nSo, the query works with 3 tables. 2 of them are remote, 1 is a local one.\n\nRemote db (works on the same Postgres instance):\n\nforeign_table\n\nforeign_filter_table\n\nLocal db:\n\nlocal_table\n\nThe idea that local_table and foreign_table have the same structure. They\nhave 2 columns: primary key (primary_uuid) and foreign key (fkey_uuid).\n\nAlso, we have foreign_filter_table, that is a master table for 2 tables\nmentioned above. In addition to the primary key (primary_uuid), it also has\na column filter_uuid.\n\nWhat is the aim of a query: to filter the master table by filter_uuid, and\nthen select corresponding data from unioned (union all) local_table and\nforeign_table, and make pagination sorted by the foreign key.\n\nThe master table (foreign_filter_table) contains 100K records. Slave tables\ncontain about 100K records each, where they refer to about 5% of master\ntable (each slave table contains 20 rows for 5% of master table rows).\n\nNote, use_remote_estimate is true for the foreign server.\n\nLogically, query execution should be: select rows from the master query and\nmake merge join between them. It works this way when I use local tables\ninstead of remote ones.\n\nBut with foreign tables it union child tables first, and then make a nested\nloop for each row with the mater table. As a result, the query becomes very\nslow...\n\nSo, the query is:\n\nselect *\n\nfrom\n\n(select * from local_table lt\n\nunion all\n\nselect * from foreign_server.foreign_table ft) a\n\njoin foreign_server.foreign_filter_table on a.fkey_uuid =\nforeign_server.foreign_filter_table.primary_uuid\n\nwhere foreign_server.foreign_filter_table.filter_uuid between\n'56c77b02-8309-42f1-ae02-8d6922ea7dba' and\n'67c77b02-8309-42f1-ae02-8d6922ea7dba'\n\norder by a.fkey_uuid\n\nlimit 10 offset 90\n\nA query plan is:\n\n\"Limit (cost=527.23..563.45 rows=10 width=80) (actual\ntime=915.919..920.302 rows=10 loops=1)\"\n\n\" Output: lt.fkey_uuid, lt.primary_uuid,\nforeign_filter_table.primary_uuid, foreign_filter_table.filter_uuid,\nlt.fkey_uuid\"\n\n\" Buffers: shared hit=949\"\n\n\" -> Nested Loop (cost=201.28..20859148.42 rows=5759398 width=80) (actual\ntime=118.138..920.282 rows=100 loops=1)\"\n\n\" Output: lt.fkey_uuid, lt.primary_uuid,\nforeign_filter_table.primary_uuid, foreign_filter_table.filter_uuid,\nlt.fkey_uuid\"\n\n\" Buffers: shared hit=949\"\n\n\" -> Merge Append (cost=100.85..19272.58 rows=192108 width=32)\n(actual time=1.133..16.119 rows=1864 loops=1)\"\n\n\" Sort Key: lt.fkey_uuid\"\n\n\" Buffers: shared hit=949\"\n\n\" -> Index Scan using fkey_uuid_idx on public.local_table lt\n (cost=0.42..8937.22 rows=96054 width=32) (actual time=0.021..5.159\nrows=940 loops=1)\"\n\n\" Output: lt.fkey_uuid, lt.primary_uuid\"\n\n\" Buffers: shared hit=949\"\n\n\" -> Foreign Scan on foreign_server.foreign_table ft\n (cost=100.42..8414.27 rows=96054 width=32) (actual time=1.109..9.756\nrows=925 loops=1)\"\n\n\" Output: ft.fkey_uuid, ft.primary_uuid\"\n\n\" Remote SQL: SELECT fkey_uuid, primary_uuid FROM\npublic.foreign_table ORDER BY fkey_uuid ASC NULLS LAST\"\n\n\" -> Foreign Scan on foreign_server.foreign_filter_table\n (cost=100.43..108.47 rows=1 width=32) (actual 
time=0.380..0.380 rows=0\nloops=1864)\"\n\n\" Output: foreign_filter_table.primary_uuid,\nforeign_filter_table.filter_uuid\"\n\n\" Remote SQL: SELECT primary_uuid, filter_uuid FROM\npublic.foreign_filter_table WHERE ((filter_uuid >=\n'56c77b02-8309-42f1-ae02-8d6922ea7dba'::uuid)) AND ((filter_uuid <=\n'67c77b02-8309-42f1-ae02-8d6922ea7dba'::uuid)) AND (($1::uuid = primary_u\n(...)\"\n\n\"Planning Time: 1.825 ms\"\n\n\"Execution Time: 920.617 ms\"\n\nBut, when I do the same locally on a remote database with a query:\n\nselect *\n\nfrom\n\n(select * from foreign_table ft) a\n\njoin foreign_filter_table on a.fkey_uuid = foreign_filter_table.primary_uuid\n\nwhere foreign_filter_table.filter_uuid between\n'57c77b02-8309-42f1-ae02-8d6922ea7dba' and\n'67c77b02-8309-42f1-ae02-8d6922ea7dba'\n\norder by a.fkey_uuid\n\nlimit 10 offset 90\n\nI get a query plan:\n\n\"Limit (cost=248.72..272.37 rows=10 width=80) (actual time=4.366..4.384\nrows=10 loops=1)\"\n\n\" Output: ft.fkey_uuid, ft.primary_uuid,\nforeign_filter_table.primary_uuid, foreign_filter_table.filter_uuid,\nft.fkey_uuid\"\n\n\" -> Merge Join (cost=35.91..13665.71 rows=5764 width=80) (actual\ntime=0.558..4.378 rows=100 loops=1)\"\n\n\" Output: ft.fkey_uuid, ft.primary_uuid,\nforeign_filter_table.primary_uuid, foreign_filter_table.filter_uuid,\nft.fkey_uuid\"\n\n\" Inner Unique: true\"\n\n\" Merge Cond: (ft.fkey_uuid = foreign_filter_table.primary_uuid)\"\n\n\" -> Index Scan using fkey_uuid_idx on public.foreign_table ft\n (cost=0.42..6393.19 rows=96054 width=32) (actual time=0.005..2.556\nrows=1297 loops=1)\"\n\n\" Output: ft.fkey_uuid, ft.primary_uuid\"\n\n\" -> Index Scan using filter_table_pk on public.foreign_filter_table\n (cost=0.42..6994.83 rows=5996 width=32) (actual time=0.043..1.684 rows=85\nloops=1)\"\n\n\" Output: foreign_filter_table.primary_uuid,\nforeign_filter_table.filter_uuid\"\n\n\" Filter: ((foreign_filter_table.filter_uuid >=\n'57c77b02-8309-42f1-ae02-8d6922ea7dba'::uuid) AND\n(foreign_filter_table.filter_uuid <=\n'67c77b02-8309-42f1-ae02-8d6922ea7dba'::uuid))\"\n\n\" Rows Removed by Filter: 1095\"\n\n\"Planning Time: 1.816 ms\"\n\n\"Execution Time: 4.605 ms\"\n\n\nSo, why the behavior of the Postgres is like this? How can I optimize such\na query? It looks like query optimizer builds an ineffective plan, but\nmaybe I’m wrong\n\nThank you.\n\nP.S.: scripts are attached\nᐧ",
"msg_date": "Tue, 14 May 2019 19:08:58 +0300",
"msg_from": "Vitaly Baranovsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "The wrong (?) query plan for queries with remote (postgres_fdw)\n tables"
}
] |
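For what it's worth, the slow plan above spends most of its time in the inner Foreign Scan: roughly 1,864 loops at about 0.38 ms each already accounts for some 700 ms of the 920 ms total, i.e. one remote round trip per outer row. Two things that sometimes help are analyzing the foreign tables so local statistics exist, and raising the costs postgres_fdw charges for remote access so per-row remote probes look as expensive as they really are. This is a sketch under the assumption that the foreign server is also named foreign_server (the query only shows a local schema of that name); the cost values are illustrative.

    -- Sketch: store local statistics for the foreign tables and make remote
    -- probes more expensive in the planner's eyes.
    ANALYZE foreign_server.foreign_filter_table;
    ANALYZE foreign_server.foreign_table;
    -- Use SET instead of ADD if these options are already defined on the server.
    ALTER SERVER foreign_server
        OPTIONS (ADD fdw_startup_cost '500', ADD fdw_tuple_cost '0.05');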
[
{
"msg_contents": "Hello,\n\n\n\nWe have several select statements whose performance is greatly improved by\ndeleting some stats from pg_statistic. With the stats present the database\nreaches 100% cpu at 13k queries per second. Without these stats, the same\nmachine can handle over 29k queries per second. We were able replicate this\nbehavior with just a single join that all these queries contain. When the\nstats are present the planner chooses to hash join, and without stats\nperform a nested loop. The plan using a hash join has a higher estimated\ncost, and as previously mentioned, uses more cpu.\n\n\n\nThe two tables involved in this query are described below; bag_type and\nbag. There are 6 bag_type rows and around 6 million bag rows. During this\nsimplified scenario, no writes were occurring. Under normal circumstances\nrows can be inserted into bag, and no rows in these tables are updated or\ndeleted.\n\n\n\n\\d bag_type\n\n Table \"public.bag_type\"\n\n Column | Type | Collation | Nullable | Default\n\n-----------+---------+-----------+----------+--------------------------------------\n\n id | bigint | | not null |\nnextval('bag_type_id_seq'::regclass)\n\n name | text | | not null |\n\n has_slots | boolean | | not null |\n\n game | text | | not null |\n\nIndexes:\n\n \"bag_type_pk\" PRIMARY KEY, btree (id)\n\n \"bag_name_u1\" UNIQUE CONSTRAINT, btree (name, game)\n\nReferenced by:\n\n TABLE \"bag\" CONSTRAINT \"bag_fk1\" FOREIGN KEY (bag_type_id) REFERENCES\nbag_type(id)\n\n\n\n\\d bag\n\n Table \"public.bag\"\n\n Column | Type | Collation | Nullable | Default\n\n-------------+--------+-----------+----------+---------------------------------\n\n id | bigint | | not null |\nnextval('bag_id_seq'::regclass)\n\n owner_id | uuid | | not null |\n\n bag_type_id | bigint | | not null |\n\nIndexes:\n\n \"bag_pk\" PRIMARY KEY, btree (id)\n\n \"bag_owner_type_u1\" UNIQUE CONSTRAINT, btree (owner_id, bag_type_id)\n\nForeign-key constraints:\n\n \"bag_fk1\" FOREIGN KEY (bag_type_id) REFERENCES bag_type(id)\n\nReferenced by:\n\n TABLE \"item\" CONSTRAINT \"item_fk1\" FOREIGN KEY (bag_id) REFERENCES\nbag(id)\n\n\n\nThe pared down query joins the two tables.\n\n\n\nEXPLAIN (ANALYZE, BUFFERS)\n\nSELECT 1\n\nFROM bag\n\nINNER JOIN bag_type ON bag.bag_type_id = bag_type.id\n\nWHERE owner_id = '00000000-0000-0000-0000-000000076100'\n\nAND game = 'test_alpha'\n\nAND name = ANY(ARRAY['item','wallet','buildingFixed']);\n\n\n\nWith stats on the bag_type table present, the planner uses a hash join. I\nnoticed that the estimate of the index scan of bag_owner_type_u1 is too\nhigh at 8 rows. No owner can have more than 6 bags, so 8 should be\nlogically impossible. Also, given 3 bag_types and a specific owner, there\ncan't be more than 3 rows due to the bag_owner_type_u1 index.\n\n\n\nANALYZE bag_type;\n\n\n\nhttps://explain.depesz.com/s/zcI <https://explain.depesz.com/s/uRXC>(Slower,\nhash join)\n\n\n\nIf I remove the stats on the bag_type table, the planner estimates 1 row\nand uses a nested loop.\n\n\n\nDELETE FROM pg_statistic s\n\nUSING pg_class c\n\nWHERE c.oid = s.starelid\n\nAND c.relname = 'bag_type';\n\n\n\nhttps://explain.depesz.com/s/yBuEo <https://explain.depesz.com/s/2AyP>\n(nested loop)\n\n\n\nBelow are various stats and configuration options, in case they are\nhelpful. I've tried reindexing everything, clustering the tables and ran\nvacuum full as well. I've tried increasing the default statistics target\n(this actually made performance much worse). 
I’ve tested this on fresh\nvolumes with synthetic data, as well as on replicas of prod data. I’ve\nalso tested this on different ec2 instance types (r4.16xl and c4.8xl). In\nall cases the bag_type stats resulted in worse performance. I was hoping\nsomeone would be able to give advice on how to improve these queries that\ndoesn’t involve deleting stats.\n\n\n\nThanks\n\n--Jeremy\n\n\n\n\n\nSELECT version();\n\n version\n\n-------------------------------------------------------------------------------------------------------------------------\n\nPostgreSQL 10.7 (Debian 10.7-1.pgdg80+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian 4.9.2-10+deb8u2) 4.9.2, 64-bit\n\n(1 row)\n\n\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname='bag_type' OR relname = 'bag';\n\nrelname | relpages | reltuples | relallvisible | relkind | relnatts |\nrelhassubclass | reloptions | pg_table_size\n\n----------+----------+-------------+---------------+---------+----------+----------------+------------+---------------\n\n bag | 44115 | 5.99964e+06 | 0 | r | 3 |\nf | | 361390080\n\n bag_type | 1 | 6 | 0 | r | 4 |\nf | | 16384\n\n(2 rows)\n\n\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='bag_type_id' AND\ntablename='bag' ORDER BY 1 DESC;\n\nfrac_mcv | tablename | attname | inherited | null_frac | n_distinct |\nn_mcv | n_hist | correlation\n\n----------+-----------+-------------+-----------+-----------+------------+-------+--------+-------------\n\n 1 | bag | bag_type_id | f | 0 | 6\n| 6 | | 0.167682\n\n(1 row)\n\n\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='owner_id' AND\ntablename='bag' ORDER BY 1 DESC;\n\n frac_mcv | tablename | attname | inherited | null_frac | n_distinct |\nn_mcv | n_hist | correlation\n\n------------+-----------+----------+-----------+-----------+------------+-------+--------+-------------\n\n 0.00680001 | bag | owner_id | f | 0 | -0.123982 |\n100 | 101 | 0.994306\n\n(1 row)\n\n\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='name' AND\ntablename='bag_type' ORDER BY 1 DESC;\n\nfrac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv\n| n_hist | correlation\n\n----------+-----------+---------+-----------+-----------+------------+-------+--------+-------------\n\n | bag_type | name | f | 0 | -1\n| | 6 | -0.428571\n\n(1 row)\n\n\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='game' AND\ntablename='bag_type' ORDER BY 1 DESC;\n\nfrac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv\n| n_hist | correlation\n\n----------+-----------+---------+-----------+-----------+------------+-------+--------+-------------\n\n 1 | bag_type | game | f | 0 | -0.166667 |\n1 | | 1\n\n(1 
row)\n\n\n\nSELECT name, current_setting(name), SOURCE\n\nFROM pg_settings\n\nWHERE SOURCE NOT IN ('default', 'override');\n\n name |\ncurrent_setting | source\n\n-------------------------------------+---------------------------------------------+--------------------\n\n application_name |\npsql | client\n\n archive_command | /wal-e-shim wal-push\n%p | configuration file\n\n archive_mode |\non | configuration file\n\n archive_timeout |\n1min | configuration file\n\n autovacuum |\non | configuration file\n\n autovacuum_max_workers |\n6 | configuration file\n\n autovacuum_vacuum_scale_factor |\n0 | configuration file\n\n autovacuum_vacuum_threshold |\n10000 | configuration file\n\n autovacuum_work_mem |\n-1 | configuration file\n\n checkpoint_completion_target |\n0.9 | configuration file\n\n checkpoint_timeout |\n30min | configuration file\n\n checkpoint_warning |\n30s | configuration file\n\n client_encoding |\nSQL_ASCII | client\n\n DateStyle | ISO,\nMDY | configuration file\n\n dynamic_shared_memory_type |\nposix | configuration file\n\n effective_cache_size |\n42432000kB | configuration file\n\n fsync |\non | configuration file\n\n full_page_writes |\non | configuration file\n\n huge_pages |\ntry | configuration file\n\n idle_in_transaction_session_timeout |\n10min | configuration file\n\n lc_messages |\nC | configuration file\n\n lc_monetary |\nC | configuration file\n\n lc_numeric |\nC | configuration file\n\n lc_time |\nC | configuration file\n\n listen_addresses |\n* | configuration file\n\n log_autovacuum_min_duration |\n0 | configuration file\n\n log_checkpoints |\non | configuration file\n\n log_destination |\nstderr | configuration file\n\n log_line_prefix | %t [%p-%l] %q%u@%d\n| configuration file\n\n log_lock_waits |\non | configuration file\n\n log_min_duration_statement |\n1s | configuration file\n\n log_temp_files |\n0 | configuration file\n\n log_timezone |\nUTC | configuration file\n\n maintenance_work_mem |\n3536000kB | configuration file\n\n max_connections |\n400 | configuration file\n\n max_prepared_transactions |\n100 | configuration file\n\n max_stack_depth |\n2MB | configuration file\n\n max_wal_senders |\n5 | configuration file\n\n max_wal_size |\n34GB | configuration file\n\n pg_partman_bgw.dbname | redacted\n | configuration file\n\n pg_partman_bgw.interval |\n3600 | configuration file\n\n pg_partman_bgw.role |\npostgres | configuration file\n\n pg_stat_statements.track |\nall | configuration file\n\n port |\n5432 | command line\n\n random_page_cost |\n1.1 | configuration file\n\n shared_buffers |\n14144000kB | configuration file\n\n shared_preload_libraries | plpgsql, pg_partman_bgw,\npg_stat_statements | configuration file\n\n stats_temp_directory |\n/var/run/postgresql/pg_stat_tmp | configuration file\n\n superuser_reserved_connections |\n5 | configuration file\n\n synchronous_commit |\non | configuration file\n\n TimeZone |\nUTC | configuration file\n\n unix_socket_directories |\n/var/run/postgresql | configuration file\n\n unix_socket_group |\npostgres | configuration file\n\n unix_socket_permissions |\n0700 | configuration file\n\n wal_keep_segments |\n64 | configuration file\n\n wal_level |\nreplica | configuration file\n\n wal_sync_method |\nfsync | configuration file\n\n work_mem |\n141440kB | configuration file\n\n(58 rows)\n",
"msg_date": "Thu, 16 May 2019 16:03:18 -0400",
"msg_from": "Jeremy Altavilla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Analyze results in more expensive query plan"
},
{
"msg_contents": "Jeremy Altavilla <[email protected]> writes:\n> We have several select statements whose performance is greatly improved by\n> deleting some stats from pg_statistic.\n\nYou might have better results by setting up some \"extended stats\" for\nthe combination of bag_type columns that this query depends on. Per your\ndescription, there's a fair amount of cross-column correlation, which\nthe planner will not expect without some extended stats to tell it so.\n\nhttps://www.postgresql.org/docs/10/planner-stats.html#PLANNER-STATS-EXTENDED\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 09:35:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze results in more expensive query plan"
},
{
"msg_contents": "Thanks for the suggestion. I created extended statistics objects for the\ntwo tables in question. Unfortunately the resulting plan was the same (and\nhad the same estimates). It looks like the extended stats discovered a\npotentially useful correlation on bag: \"2, 3 => 1\" (owner_id, bag_type_id\n=> id). I'm guessing this wasn't usable because the docs state \"They are\nnot used to improve estimates for equality conditions comparing two\ncolumns\".\n\nI created functional dependency extended stats (none of our queries use\ngroup by), and ran analyze. The resulting objects are below. For\ncorrelations of 1, the results seemed logically correct (I'm not sure how\nto interpret the .966 values). The limitations section said that extended\nstats are only applied for simple equality conditions, so I modified the\nquery to use equality instead of any. That still resulted in the same plan\nand estimate. Just to be thorough, I tried with all permutations of zero,\none or both stats objects. In all cases the resulting plan and estimates\ndidn't change from the slow hash join.\n\ncreate statistics bag_type_stats (dependencies) on id, name, game from\nbag_type;\nanalyze bag_type;\ncreate statistics bag_stats (dependencies) on id, owner_id, bag_type_id\nfrom bag;\nanalyze bag;\n\nselect * from pg_statistic_ext;\n-[ RECORD 1\n]---+------------------------------------------------------------------------------------------------------------------------------------------------------\nstxrelid | 16411\nstxname | bag_stats\nstxnamespace | 2200\nstxowner | 10\nstxkeys | 1 2 3\nstxkind | {f}\nstxndistinct |\nstxdependencies | {\"1 => 2\": 1.000000, \"1 => 3\": 1.000000, \"2 => 1\":\n0.966567, \"2 => 3\": 0.966567, \"1, 2 => 3\": 1.000000, \"1, 3 => 2\": 1.000000,\n\"2, 3 => 1\": 1.000000}\n-[ RECORD 2\n]---+------------------------------------------------------------------------------------------------------------------------------------------------------\nstxrelid | 16398\nstxname | bag_type_stats\nstxnamespace | 2200\nstxowner | 10\nstxkeys | 1 2 4\nstxkind | {f}\nstxndistinct |\nstxdependencies | {\"1 => 2\": 1.000000, \"1 => 4\": 1.000000, \"2 => 1\":\n1.000000, \"2 => 4\": 1.000000, \"1, 2 => 4\": 1.000000, \"1, 4 => 2\": 1.000000,\n\"2, 4 => 1\": 1.000000}\n\nFor bag keys 1, 2, 3 are id, owner_id and bag_type_id. For bag_type 1, 2, 4\nare id, name and game.\n\n--Thanks\n--Jeremy\n\nOn Fri, May 17, 2019 at 9:35 AM Tom Lane <[email protected]> wrote:\n>\n> Jeremy Altavilla <[email protected]> writes:\n> > We have several select statements whose performance is greatly improved\nby\n> > deleting some stats from pg_statistic.\n>\n> You might have better results by setting up some \"extended stats\" for\n> the combination of bag_type columns that this query depends on. Per your\n> description, there's a fair amount of cross-column correlation, which\n> the planner will not expect without some extended stats to tell it so.\n>\n>\nhttps://www.postgresql.org/docs/10/planner-stats.html#PLANNER-STATS-EXTENDED\n>\n> regards, tom lane\n\nThanks for the suggestion. I created extended statistics objects for the two tables in question. Unfortunately the resulting plan was the same (and had the same estimates). \nIt looks like the extended stats discovered a potentially useful \ncorrelation on bag: \"2, 3 => 1\" (owner_id, bag_type_id => id). 
I'm guessing this wasn't usable because the docs state \"They are not used to improve estimates for equality conditions comparing two columns\".\n\nI created functional dependency extended stats (none of our queries use group by), and ran analyze. The resulting objects are below. For correlations of 1, the results seemed logically correct (I'm not sure how to interpret the .966 values). The limitations section said that extended stats are only applied for simple equality conditions, so I modified the query to use equality instead of any. That still resulted in the same plan and estimate. Just to be thorough, I tried with all permutations of zero, one or both stats objects. In all cases the resulting plan and estimates didn't change from the slow hash join. create statistics bag_type_stats (dependencies) on id, name, game from bag_type;analyze bag_type;create statistics bag_stats (dependencies) on id, owner_id, bag_type_id from bag;analyze bag;select * from pg_statistic_ext;-[ RECORD 1 ]---+------------------------------------------------------------------------------------------------------------------------------------------------------stxrelid | 16411stxname | bag_statsstxnamespace | 2200stxowner | 10stxkeys | 1 2 3stxkind | {f}stxndistinct |stxdependencies | {\"1 => 2\": 1.000000, \"1 => 3\": 1.000000, \"2 => 1\": 0.966567, \"2 => 3\": 0.966567, \"1, 2 => 3\": 1.000000, \"1, 3 => 2\": 1.000000, \"2, 3 => 1\": 1.000000}-[ RECORD 2 ]---+------------------------------------------------------------------------------------------------------------------------------------------------------stxrelid | 16398stxname | bag_type_statsstxnamespace | 2200stxowner | 10stxkeys | 1 2 4stxkind | {f}stxndistinct |stxdependencies | {\"1 => 2\": 1.000000, \"1 => 4\": 1.000000, \"2 => 1\": 1.000000, \"2 => 4\": 1.000000, \"1, 2 => 4\": 1.000000, \"1, 4 => 2\": 1.000000, \"2, 4 => 1\": 1.000000}For bag keys 1, 2, 3 are id, owner_id and bag_type_id. For bag_type 1, 2, 4 are id, name and game. --Thanks--Jeremy\nOn Fri, May 17, 2019 at 9:35 AM Tom Lane <[email protected]> wrote:>> Jeremy Altavilla <[email protected]> writes:> > We have several select statements whose performance is greatly improved by> > deleting some stats from pg_statistic.>> You might have better results by setting up some \"extended stats\" for> the combination of bag_type columns that this query depends on. Per your> description, there's a fair amount of cross-column correlation, which> the planner will not expect without some extended stats to tell it so.>> https://www.postgresql.org/docs/10/planner-stats.html#PLANNER-STATS-EXTENDED>> regards, tom lane",
"msg_date": "Mon, 20 May 2019 16:19:48 -0400",
"msg_from": "Jeremy Altavilla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze results in more expensive query plan"
},
{
"msg_contents": "On Tue, 21 May 2019 at 08:23, Jeremy Altavilla\n<[email protected]> wrote:\n>\n> Thanks for the suggestion. I created extended statistics objects for the two tables in question. Unfortunately the resulting plan was the same (and had the same estimates). It looks like the extended stats discovered a potentially useful correlation on bag: \"2, 3 => 1\" (owner_id, bag_type_id => id). I'm guessing this wasn't usable because the docs state \"They are not used to improve estimates for equality conditions comparing two columns\".\n\nI'd say that since the time spent planning is near 3x what is spent\nduring execution that you're wasting your time trying to speed up the\nexecution. What you should be thinking about is using PREPAREd\nstatements to avoid the planning overhead completely. If that's not\npossible then you've more chance of reducing the time spent planning\nby reducing the statistics on the table rather than adding more\nplanning overhead by adding extended stats. You might want to\nexperiment with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS ..;\nand setting those down a bit then analyzing the tables again.\nAlthough, that's likely only going to make a very small difference, if\nany, than getting rid of the planning completely.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 14:04:43 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze results in more expensive query plan"
},
{
"msg_contents": "Thanks for the help. In our prod environment, we shouldn't be\nplanning unnecessarily. Our app uses the extended query protocol\n(prepare/bind/exec) to call pg/plsql stored procedures. I left out a\nlot of context and background in my question, because I hoped it\nsimplified things. I might have left out too much though. Immediately\nafter we upgraded our prod database from postgres 9.6 to 10, it\nstarted using 2-3x more cpu with no change in requests per second. We\nhave a load test for this app / database; using it we eventually\ndiscovered the effect of having stats on the bag_type table. After\nthat, It made sense that the upgrade triggered this, as a step in the\nupgrade process is to run analyze new cluster.\n\nI experimented with changing the per column statistics value. Setting\nname and game to 0, results in no stats for those columns, and the\nplanner choosing the better plan. Pretty much any other set of values\nresulted in the more expensive plan. I'm not sure if this is fixing\nthe problem, or hiding the problem, but it's definitely less fragile\nthan hoping the table never gets analyzed.\n\n--Thanks\n--Jeremy\n\n\nOn Mon, May 20, 2019 at 10:04 PM David Rowley\n<[email protected]> wrote:\n>\n> On Tue, 21 May 2019 at 08:23, Jeremy Altavilla\n> <[email protected]> wrote:\n> >\n> > Thanks for the suggestion. I created extended statistics objects for the two tables in question. Unfortunately the resulting plan was the same (and had the same estimates). It looks like the extended stats discovered a potentially useful correlation on bag: \"2, 3 => 1\" (owner_id, bag_type_id => id). I'm guessing this wasn't usable because the docs state \"They are not used to improve estimates for equality conditions comparing two columns\".\n>\n> I'd say that since the time spent planning is near 3x what is spent\n> during execution that you're wasting your time trying to speed up the\n> execution. What you should be thinking about is using PREPAREd\n> statements to avoid the planning overhead completely. If that's not\n> possible then you've more chance of reducing the time spent planning\n> by reducing the statistics on the table rather than adding more\n> planning overhead by adding extended stats. You might want to\n> experiment with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS ..;\n> and setting those down a bit then analyzing the tables again.\n> Although, that's likely only going to make a very small difference, if\n> any, than getting rid of the planning completely.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 18:21:38 -0400",
"msg_from": "Jeremy Altavilla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze results in more expensive query plan"
}
] |
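David's point about avoiding repeated planning can be sketched with a prepared statement built from the pared-down query in the first message of this thread; the statement name is arbitrary and this is only an illustration, not the application's actual access path.

    -- Sketch: plan once per session and reuse the plan.
    -- (After several executions PostgreSQL may switch to a cached generic plan.)
    PREPARE bag_lookup(uuid, text, text[]) AS
    SELECT 1
    FROM bag
    INNER JOIN bag_type ON bag.bag_type_id = bag_type.id
    WHERE bag.owner_id = $1
      AND bag_type.game = $2
      AND bag_type.name = ANY($3);

    EXECUTE bag_lookup('00000000-0000-0000-0000-000000076100',
                       'test_alpha',
                       ARRAY['item','wallet','buildingFixed']);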
[
{
"msg_contents": "Hi,\n\nI am joining the union of three tables with another table. Postgresql uses\nthe index when only two tables are in the union. If I add one more table to\nthe union, it switches to seq scan. Apparently it also uses the index when\nonly one table is joined.\n\nThe SQL is:\nselect * from (\nSELECT 'NEWS' datatype, n.id, mbct_id FROM news n\nunion all\nSELECT 'SPEECH' datatype, s.id, mbct_id FROM speech s\nunion all\nSELECT 'NOTICE' datatype, notice.id, mbct_id FROM notice\n) x join NBSMultiBroadcast y on x.mbct_id=y.id where y.zhtw_grp_bct between\n'2019-05-10' and '2019-05-17';\n\nThe estimated number of rows is not off against the actual number of rows,\nwhich is around 120. So, I don't really understand why PostgreSQL seems to\nbelieve it should use Seq Scan due to a relatively large number of rows are\nexpected.\n\nI am using v11.3:\n\nPostgreSQL 11.3 (Ubuntu 11.3-1.pgdg16.04+1) on i686-pc-linux-gnu, compiled\nby gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, 32-bit\n\nThe output of explain analyze is:\n Hash Join (cost=1153.01..6273.58 rows=134 width=1856) (actual\ntime=46.937..50.557 rows=120 loops=1)\n Hash Cond: (n.mbct_id = y.id)\n -> Append (cost=0.00..5043.33 rows=29422 width=48) (actual\ntime=0.015..42.237 rows=29422 loops=1)\n -> Seq Scan on news n (cost=0.00..4588.30 rows=27430 width=48)\n(actual time=0.015..35.902 rows=27430 loops=1)\n -> Seq Scan on speech s (cost=0.00..26.26 rows=226 width=48)\n(actual time=0.009..0.182 rows=226 loops=1)\n -> Seq Scan on notice (cost=0.00..281.66 rows=1766 width=48)\n(actual time=0.005..1.283 rows=1766 loops=1)\n -> Hash (cost=1151.24..1151.24 rows=142 width=1808) (actual\ntime=2.466..2.466 rows=130 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Index Scan using zhtw_grp_bct on nbsmultibroadcast y\n(cost=0.29..1151.24 rows=142 width=1808) (actual time=2.279..2.396 rows=130\nloops=1)\n Index Cond: ((zhtw_grp_bct >= '2019-05-10\n00:00:00'::timestamp without time zone) AND (zhtw_grp_bct <= '2019-05-17\n00:00:00'::timestamp without time zone))\n Planning Time: 0.749 ms\n Execution Time: 50.637 ms\n\nThe output of explain analyze for just two tables in the union is:\n Nested Loop (cost=0.57..5863.96 rows=126 width=1856) (actual\ntime=2.199..21.513 rows=103 loops=1)\n -> Index Scan using zhtw_grp_bct on nbsmultibroadcast y\n(cost=0.29..1151.24 rows=142 width=1808) (actual time=2.172..2.313 rows=130\nloops=1)\n Index Cond: ((zhtw_grp_bct >= '2019-05-10 00:00:00'::timestamp\nwithout time zone) AND (zhtw_grp_bct <= '2019-05-17 00:00:00'::timestamp\nwithout time zone))\n -> Append (cost=0.29..33.17 rows=2 width=48) (actual time=0.035..0.146\nrows=1 loops=130)\n -> Index Scan using news_mbct_id_idx on news n (cost=0.29..6.33\nrows=1 width=48) (actual time=0.004..0.005 rows=1 loops=130)\n Index Cond: (mbct_id = y.id)\n -> Seq Scan on speech s (cost=0.00..26.82 rows=1 width=48)\n(actual time=0.139..0.139 rows=0 loops=130)\n Filter: (y.id = mbct_id)\n Rows Removed by Filter: 226\n Planning Time: 0.639 ms\n Execution Time: 21.604 m\n\nThe size of the three tables are 27430, 226 and 1766 respectively.\n\nMany thanks for any help!\n-- \nKent Tong\nIT author and consultant, child education coach\n\nHi,I am joining the union of three tables with another table. Postgresql uses the index when only two tables are in the union. If I add one more table to the union, it switches to seq scan. 
Apparently it also uses the index when only one table is joined.The SQL is:select * from (SELECT 'NEWS' datatype, n.id, mbct_id FROM news nunion allSELECT 'SPEECH' datatype, s.id, mbct_id FROM speech sunion allSELECT 'NOTICE' datatype, notice.id, mbct_id FROM notice ) x join NBSMultiBroadcast y on x.mbct_id=y.id where y.zhtw_grp_bct between '2019-05-10' and '2019-05-17';The estimated number of rows is not off against the actual number of rows, which is around 120. So, I don't really understand why PostgreSQL seems to believe it should use Seq Scan due to a relatively large number of rows are expected.I am using v11.3:PostgreSQL 11.3 (Ubuntu 11.3-1.pgdg16.04+1) on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, 32-bitThe output of explain analyze is: Hash Join (cost=1153.01..6273.58 rows=134 width=1856) (actual time=46.937..50.557 rows=120 loops=1) Hash Cond: (n.mbct_id = y.id) -> Append (cost=0.00..5043.33 rows=29422 width=48) (actual time=0.015..42.237 rows=29422 loops=1) -> Seq Scan on news n (cost=0.00..4588.30 rows=27430 width=48) (actual time=0.015..35.902 rows=27430 loops=1) -> Seq Scan on speech s (cost=0.00..26.26 rows=226 width=48) (actual time=0.009..0.182 rows=226 loops=1) -> Seq Scan on notice (cost=0.00..281.66 rows=1766 width=48) (actual time=0.005..1.283 rows=1766 loops=1) -> Hash (cost=1151.24..1151.24 rows=142 width=1808) (actual time=2.466..2.466 rows=130 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 34kB -> Index Scan using zhtw_grp_bct on nbsmultibroadcast y (cost=0.29..1151.24 rows=142 width=1808) (actual time=2.279..2.396 rows=130 loops=1) Index Cond: ((zhtw_grp_bct >= '2019-05-10 00:00:00'::timestamp without time zone) AND (zhtw_grp_bct <= '2019-05-17 00:00:00'::timestamp without time zone)) Planning Time: 0.749 ms Execution Time: 50.637 msThe output of explain analyze for just two tables in the union is: Nested Loop (cost=0.57..5863.96 rows=126 width=1856) (actual time=2.199..21.513 rows=103 loops=1) -> Index Scan using zhtw_grp_bct on nbsmultibroadcast y (cost=0.29..1151.24 rows=142 width=1808) (actual time=2.172..2.313 rows=130 loops=1) Index Cond: ((zhtw_grp_bct >= '2019-05-10 00:00:00'::timestamp without time zone) AND (zhtw_grp_bct <= '2019-05-17 00:00:00'::timestamp without time zone)) -> Append (cost=0.29..33.17 rows=2 width=48) (actual time=0.035..0.146 rows=1 loops=130) -> Index Scan using news_mbct_id_idx on news n (cost=0.29..6.33 rows=1 width=48) (actual time=0.004..0.005 rows=1 loops=130) Index Cond: (mbct_id = y.id) -> Seq Scan on speech s (cost=0.00..26.82 rows=1 width=48) (actual time=0.139..0.139 rows=0 loops=130) Filter: (y.id = mbct_id) Rows Removed by Filter: 226 Planning Time: 0.639 ms Execution Time: 21.604 mThe size of the three tables are 27430, 226 and 1766 respectively.Many thanks for any help!-- Kent TongIT author and consultant, child education coach",
"msg_date": "Fri, 17 May 2019 17:15:19 +0800",
"msg_from": "Kent Tong <[email protected]>",
"msg_from_op": true,
"msg_subject": "using sequential scan instead of index for join with a union"
},
{
"msg_contents": "Hi\n\nPlease check datatypes in union all part. Possible, notice.id or notice.mbct_id datatypes does not match with other tables.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 17 May 2019 12:23:19 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using sequential scan instead of index for join with a union"
},
{
"msg_contents": "Hi, Sergei\n\nThanks! I've just double checked and they are the same:\n\n\\d notice\n id | bigint | | not null |\nnextval('notice_id_seq'::regclass)\n mbct_id | bigint | | |\n\n\\d news\n id | bigint | | not null |\nnextval('news_id_seq'::regclass)\n mbct_id | bigint | | |\n\n\n\nOn Fri, May 17, 2019 at 5:23 PM Sergei Kornilov <[email protected]> wrote:\n\n> Hi\n>\n> Please check datatypes in union all part. Possible, notice.id or\n> notice.mbct_id datatypes does not match with other tables.\n>\n> regards, Sergei\n>\n\n\n-- \nKent Tong\nIT author and consultant, child education coach\n\nHi, SergeiThanks! I've just double checked and they are the same:\\d notice id | bigint | | not null | nextval('notice_id_seq'::regclass) mbct_id | bigint | | |\\d news id | bigint | | not null | nextval('news_id_seq'::regclass) mbct_id | bigint | | | On Fri, May 17, 2019 at 5:23 PM Sergei Kornilov <[email protected]> wrote:Hi\n\nPlease check datatypes in union all part. Possible, notice.id or notice.mbct_id datatypes does not match with other tables.\n\nregards, Sergei\n-- Kent TongIT author and consultant, child education coach",
"msg_date": "Fri, 17 May 2019 17:36:03 +0800",
"msg_from": "Kent Tong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using sequential scan instead of index for join with a union"
}
] |
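To follow up on Sergei's suggestion without running \d against each table by hand, the declared types of the relevant columns can be compared in a single catalog query. This is a sketch only; it reuses the table and column names from the thread and assumes they live in the default search path.

    -- Sketch: compare the declared types of the unioned and joined columns.
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_name IN ('news', 'speech', 'notice', 'nbsmultibroadcast')
      AND column_name IN ('id', 'mbct_id')
    ORDER BY table_name, column_name;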
[
{
"msg_contents": "Hey,\nI'm trying to handle a corruption that one of our customers is facing.\nHis disk space was full and as a result of that he decided to run\npg_resetxlog a few times(bad idea..) .\nWhen I connected to the machine I saw that the db was down.\nWhen I started the db (service postgresql start) I saw the next error in\nthe logs :\n\nDETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or\ndirectory.\n\nThe pg_multixact/offset dir contained one file (0025).\nThe pg_multixact/members dir contains 2 files : 0000 and 0001.\n\nI tried to follow the documentation of pg_resetxlog, and run pg_resetxlog\nwith -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.\nHowever, it didnt help and the same error appeared.\nSo I tried to rename the file to 0000 and then the db searched for a file\nin members that wasnt exist.\nI followed the documentation and changed the multitransaction offset (-O)\nand the transactions id (-c ) based on the doc and then the db was started\nsuccesfully.\nHowever after it started I saw the next msg in the logs :\nMultixact member wraparound protections are disabled because oldest\ncheckpointed Multixact 65536 doesnt exist. In addition, no one is able to\nconnect to the db (we keep getting errors database doesnt exist or user\ndoesnt exist , even for postgresql user).\n\ncurrent relevant rows from the control data :\n\npg_control version number: 960\n\nCatalog version number: 201608131\n\nDatabase system identifier: 6692952810876880414\n\nDatabase cluster state: shut down\n\npg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\n\nLatest checkpoint location: 1837/E3000028\n\nPrior checkpoint location: 1837/E2000028\n\nLatest checkpoint's REDO location: 1837/E3000028\n\nLatest checkpoint's REDO WAL file: 0000000100001837000000E3\n\nLatest checkpoint's TimeLineID: 1\n\nLatest checkpoint's PrevTimeLineID: 1\n\nLatest checkpoint's full_page_writes: on\n\nLatest checkpoint's NextXID: 0:3\n\nLatest checkpoint's NextOID: 10000\n\nLatest checkpoint's NextMultiXactId: 131072\n\nLatest checkpoint's NextMultiOffset: 52352\n\nLatest checkpoint's oldestXID: 3\n\nLatest checkpoint's oldestXID's DB: 0\n\nLatest checkpoint's oldestActiveXID: 0\n\nLatest checkpoint's oldestMultiXid: 65536\n\nLatest checkpoint's oldestMulti's DB: 0\n\nLatest checkpoint's oldestCommitTsXid:4604\n\nLatest checkpoint's newestCommitTsXid:5041\n\n\n\nI also checked and I saw that the customer has all the wals (backed up) but\nwithout any basebackup..\nAny recommendations how to handle the case ?\n\nHey,I'm trying to handle a corruption that one of our customers is facing.His disk space was full and as a result of that he decided to run pg_resetxlog a few times(bad idea..) .When I connected to the machine I saw that the db was down. 
When I started the db (service postgresql start) I saw the next error in the logs :DETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or directory.The pg_multixact/offset dir contained one file (0025).The pg_multixact/members dir contains 2 files : 0000 and 0001.I tried to follow the documentation of pg_resetxlog, and run pg_resetxlog with -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.However, it didnt help and the same error appeared.So I tried to rename the file to 0000 and then the db searched for a file in members that wasnt exist.I followed the documentation and changed the multitransaction offset (-O) and the transactions id (-c ) based on the doc and then the db was started succesfully.However after it started I saw the next msg in the logs : Multixact member wraparound protections are disabled because oldest checkpointed Multixact 65536 doesnt exist. In addition, no one is able to connect to the db (we keep getting errors database doesnt exist or user doesnt exist , even for postgresql user).current relevant rows from the control data : pg_control version number: 960\nCatalog version number: 201608131\nDatabase system identifier: 6692952810876880414\nDatabase cluster state: shut down\npg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\nLatest checkpoint location: 1837/E3000028\nPrior checkpoint location: 1837/E2000028\nLatest checkpoint's REDO location: 1837/E3000028\nLatest checkpoint's REDO WAL file: 0000000100001837000000E3\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's PrevTimeLineID: 1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:3\nLatest checkpoint's NextOID: 10000\nLatest checkpoint's NextMultiXactId: 131072\nLatest checkpoint's NextMultiOffset: 52352\nLatest checkpoint's oldestXID: 3\nLatest checkpoint's oldestXID's DB: 0\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 65536\nLatest checkpoint's oldestMulti's DB: 0\nLatest checkpoint's oldestCommitTsXid:4604\nLatest checkpoint's newestCommitTsXid:5041I also checked and I saw that the customer has all the wals (backed up) but without any basebackup..Any recommendations how to handle the case ?",
"msg_date": "Mon, 20 May 2019 17:39:48 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trying to handle db corruption 9.6"
},
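For reference, the 9.6 pg_resetxlog documentation derives the two -m values from the file names in pg_multixact/offsets: the "next" value is (largest file name + 1) * 65536 and the "oldest" value is the smallest file name * 65536. A minimal shell sketch of that arithmetic, assuming the directory really does contain only segment 0025 as reported above (shown only to illustrate the calculation, not as an encouragement to keep resetting):

    ls $PGDATA/pg_multixact/offsets                  # per the report: only 0025
    printf '0x%X\n' $(( (0x25 + 1) * 0x10000 ))      # 0x260000 -> "next" multixact ID
    printf '0x%X\n' $((  0x25      * 0x10000 ))      # 0x250000 -> "oldest" multixact ID
    # i.e. the documented values for this layout would be:
    #   pg_resetxlog -m 0x260000,0x250000 $PGDATA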
{
"msg_contents": "Hi,\n\nFirst of all, as stated in the wiki, you'll need to do a filesystem level\ncopy of the database files and put them on another drive before attempting\nto do anything else !\n\nhttps://wiki.postgresql.org/wiki/Corruption\n\nregards,\nFlo\n\nOn Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey,\n> I'm trying to handle a corruption that one of our customers is facing.\n> His disk space was full and as a result of that he decided to run\n> pg_resetxlog a few times(bad idea..) .\n> When I connected to the machine I saw that the db was down.\n> When I started the db (service postgresql start) I saw the next error in\n> the logs :\n>\n> DETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or\n> directory.\n>\n> The pg_multixact/offset dir contained one file (0025).\n> The pg_multixact/members dir contains 2 files : 0000 and 0001.\n>\n> I tried to follow the documentation of pg_resetxlog, and run pg_resetxlog\n> with -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.\n> However, it didnt help and the same error appeared.\n> So I tried to rename the file to 0000 and then the db searched for a file\n> in members that wasnt exist.\n> I followed the documentation and changed the multitransaction offset (-O)\n> and the transactions id (-c ) based on the doc and then the db was started\n> succesfully.\n> However after it started I saw the next msg in the logs :\n> Multixact member wraparound protections are disabled because oldest\n> checkpointed Multixact 65536 doesnt exist. In addition, no one is able to\n> connect to the db (we keep getting errors database doesnt exist or user\n> doesnt exist , even for postgresql user).\n>\n> current relevant rows from the control data :\n>\n> pg_control version number: 960\n>\n> Catalog version number: 201608131\n>\n> Database system identifier: 6692952810876880414\n>\n> Database cluster state: shut down\n>\n> pg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\n>\n> Latest checkpoint location: 1837/E3000028\n>\n> Prior checkpoint location: 1837/E2000028\n>\n> Latest checkpoint's REDO location: 1837/E3000028\n>\n> Latest checkpoint's REDO WAL file: 0000000100001837000000E3\n>\n> Latest checkpoint's TimeLineID: 1\n>\n> Latest checkpoint's PrevTimeLineID: 1\n>\n> Latest checkpoint's full_page_writes: on\n>\n> Latest checkpoint's NextXID: 0:3\n>\n> Latest checkpoint's NextOID: 10000\n>\n> Latest checkpoint's NextMultiXactId: 131072\n>\n> Latest checkpoint's NextMultiOffset: 52352\n>\n> Latest checkpoint's oldestXID: 3\n>\n> Latest checkpoint's oldestXID's DB: 0\n>\n> Latest checkpoint's oldestActiveXID: 0\n>\n> Latest checkpoint's oldestMultiXid: 65536\n>\n> Latest checkpoint's oldestMulti's DB: 0\n>\n> Latest checkpoint's oldestCommitTsXid:4604\n>\n> Latest checkpoint's newestCommitTsXid:5041\n>\n>\n>\n> I also checked and I saw that the customer has all the wals (backed up)\n> but without any basebackup..\n> Any recommendations how to handle the case ?\n>\n\nHi,First of all, as stated in the wiki, you'll need to do a filesystem level copy of the database files and put them on another drive before attempting to do anything else !https://wiki.postgresql.org/wiki/Corruptionregards,FloOn Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <[email protected]> wrote:Hey,I'm trying to handle a corruption that one of our customers is facing.His disk space was full and as a result of that he decided to run pg_resetxlog a few times(bad idea..) 
.When I connected to the machine I saw that the db was down. When I started the db (service postgresql start) I saw the next error in the logs :DETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or directory.The pg_multixact/offset dir contained one file (0025).The pg_multixact/members dir contains 2 files : 0000 and 0001.I tried to follow the documentation of pg_resetxlog, and run pg_resetxlog with -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.However, it didnt help and the same error appeared.So I tried to rename the file to 0000 and then the db searched for a file in members that wasnt exist.I followed the documentation and changed the multitransaction offset (-O) and the transactions id (-c ) based on the doc and then the db was started succesfully.However after it started I saw the next msg in the logs : Multixact member wraparound protections are disabled because oldest checkpointed Multixact 65536 doesnt exist. In addition, no one is able to connect to the db (we keep getting errors database doesnt exist or user doesnt exist , even for postgresql user).current relevant rows from the control data : pg_control version number: 960\nCatalog version number: 201608131\nDatabase system identifier: 6692952810876880414\nDatabase cluster state: shut down\npg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\nLatest checkpoint location: 1837/E3000028\nPrior checkpoint location: 1837/E2000028\nLatest checkpoint's REDO location: 1837/E3000028\nLatest checkpoint's REDO WAL file: 0000000100001837000000E3\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's PrevTimeLineID: 1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:3\nLatest checkpoint's NextOID: 10000\nLatest checkpoint's NextMultiXactId: 131072\nLatest checkpoint's NextMultiOffset: 52352\nLatest checkpoint's oldestXID: 3\nLatest checkpoint's oldestXID's DB: 0\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 65536\nLatest checkpoint's oldestMulti's DB: 0\nLatest checkpoint's oldestCommitTsXid:4604\nLatest checkpoint's newestCommitTsXid:5041I also checked and I saw that the customer has all the wals (backed up) but without any basebackup..Any recommendations how to handle the case ?",
"msg_date": "Mon, 20 May 2019 16:49:35 +0200",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
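A minimal sketch of the file-level copy the wiki asks for, taken with the cluster stopped; the destination path is an example, and any tablespaces or a relocated pg_xlog directory would need to be copied as well:

    service postgresql stop
    rsync -a /var/lib/pgsql/9.6/data/ /mnt/rescue/data_copy_$(date +%Y%m%d)/
    # verify the copy completed before experimenting any further on the original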
{
"msg_contents": "Yes I understand that.. I'm trying to handle it after the backup that I\nhave taken..\n\nOn Mon, May 20, 2019, 5:49 PM Flo Rance <[email protected]> wrote:\n\n> Hi,\n>\n> First of all, as stated in the wiki, you'll need to do a filesystem level\n> copy of the database files and put them on another drive before attempting\n> to do anything else !\n>\n> https://wiki.postgresql.org/wiki/Corruption\n>\n> regards,\n> Flo\n>\n> On Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Hey,\n>> I'm trying to handle a corruption that one of our customers is facing.\n>> His disk space was full and as a result of that he decided to run\n>> pg_resetxlog a few times(bad idea..) .\n>> When I connected to the machine I saw that the db was down.\n>> When I started the db (service postgresql start) I saw the next error in\n>> the logs :\n>>\n>> DETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or\n>> directory.\n>>\n>> The pg_multixact/offset dir contained one file (0025).\n>> The pg_multixact/members dir contains 2 files : 0000 and 0001.\n>>\n>> I tried to follow the documentation of pg_resetxlog, and run pg_resetxlog\n>> with -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.\n>> However, it didnt help and the same error appeared.\n>> So I tried to rename the file to 0000 and then the db searched for a file\n>> in members that wasnt exist.\n>> I followed the documentation and changed the multitransaction offset\n>> (-O) and the transactions id (-c ) based on the doc and then the db was\n>> started succesfully.\n>> However after it started I saw the next msg in the logs :\n>> Multixact member wraparound protections are disabled because oldest\n>> checkpointed Multixact 65536 doesnt exist. In addition, no one is able to\n>> connect to the db (we keep getting errors database doesnt exist or user\n>> doesnt exist , even for postgresql user).\n>>\n>> current relevant rows from the control data :\n>>\n>> pg_control version number: 960\n>>\n>> Catalog version number: 201608131\n>>\n>> Database system identifier: 6692952810876880414\n>>\n>> Database cluster state: shut down\n>>\n>> pg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\n>>\n>> Latest checkpoint location: 1837/E3000028\n>>\n>> Prior checkpoint location: 1837/E2000028\n>>\n>> Latest checkpoint's REDO location: 1837/E3000028\n>>\n>> Latest checkpoint's REDO WAL file: 0000000100001837000000E3\n>>\n>> Latest checkpoint's TimeLineID: 1\n>>\n>> Latest checkpoint's PrevTimeLineID: 1\n>>\n>> Latest checkpoint's full_page_writes: on\n>>\n>> Latest checkpoint's NextXID: 0:3\n>>\n>> Latest checkpoint's NextOID: 10000\n>>\n>> Latest checkpoint's NextMultiXactId: 131072\n>>\n>> Latest checkpoint's NextMultiOffset: 52352\n>>\n>> Latest checkpoint's oldestXID: 3\n>>\n>> Latest checkpoint's oldestXID's DB: 0\n>>\n>> Latest checkpoint's oldestActiveXID: 0\n>>\n>> Latest checkpoint's oldestMultiXid: 65536\n>>\n>> Latest checkpoint's oldestMulti's DB: 0\n>>\n>> Latest checkpoint's oldestCommitTsXid:4604\n>>\n>> Latest checkpoint's newestCommitTsXid:5041\n>>\n>>\n>>\n>> I also checked and I saw that the customer has all the wals (backed up)\n>> but without any basebackup..\n>> Any recommendations how to handle the case ?\n>>\n>\n\nYes I understand that.. 
I'm trying to handle it after the backup that I have taken..On Mon, May 20, 2019, 5:49 PM Flo Rance <[email protected]> wrote:Hi,First of all, as stated in the wiki, you'll need to do a filesystem level copy of the database files and put them on another drive before attempting to do anything else !https://wiki.postgresql.org/wiki/Corruptionregards,FloOn Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <[email protected]> wrote:Hey,I'm trying to handle a corruption that one of our customers is facing.His disk space was full and as a result of that he decided to run pg_resetxlog a few times(bad idea..) .When I connected to the machine I saw that the db was down. When I started the db (service postgresql start) I saw the next error in the logs :DETAIL: Could not open file \"pg_multixact/offsets/0000\": No such file or directory.The pg_multixact/offset dir contained one file (0025).The pg_multixact/members dir contains 2 files : 0000 and 0001.I tried to follow the documentation of pg_resetxlog, and run pg_resetxlog with -m 0xF0A604,0xEA50CE which are 0025*65536 and 0026*65536 in hexa.However, it didnt help and the same error appeared.So I tried to rename the file to 0000 and then the db searched for a file in members that wasnt exist.I followed the documentation and changed the multitransaction offset (-O) and the transactions id (-c ) based on the doc and then the db was started succesfully.However after it started I saw the next msg in the logs : Multixact member wraparound protections are disabled because oldest checkpointed Multixact 65536 doesnt exist. In addition, no one is able to connect to the db (we keep getting errors database doesnt exist or user doesnt exist , even for postgresql user).current relevant rows from the control data : pg_control version number: 960\nCatalog version number: 201608131\nDatabase system identifier: 6692952810876880414\nDatabase cluster state: shut down\npg_control last modified: Mon 20 May 2019 07:07:30 AM PDT\nLatest checkpoint location: 1837/E3000028\nPrior checkpoint location: 1837/E2000028\nLatest checkpoint's REDO location: 1837/E3000028\nLatest checkpoint's REDO WAL file: 0000000100001837000000E3\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's PrevTimeLineID: 1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:3\nLatest checkpoint's NextOID: 10000\nLatest checkpoint's NextMultiXactId: 131072\nLatest checkpoint's NextMultiOffset: 52352\nLatest checkpoint's oldestXID: 3\nLatest checkpoint's oldestXID's DB: 0\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 65536\nLatest checkpoint's oldestMulti's DB: 0\nLatest checkpoint's oldestCommitTsXid:4604\nLatest checkpoint's newestCommitTsXid:5041I also checked and I saw that the customer has all the wals (backed up) but without any basebackup..Any recommendations how to handle the case ?",
"msg_date": "Mon, 20 May 2019 18:00:11 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
{
"msg_contents": "A backup was made after the corruption appeared but before I tried using\nthe pg_resetxlog command. Basically I just want to start the database with\nthe data that is available in the files(I'm ok with loosing data that was\nin the cache and wasnt written to disk).\nMy question is how can I continue from here ?\nI also sent this mail to pgadmin mail list..\n\nבתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <\[email protected]>:\n\n> > Yes I understand that.. I'm trying to handle it after the backup that I\n> have taken..\n>\n>\n> IMHO the best option here is to keep safe a copy as you have already done\n> and then restore from a backup, and replay whatever WAL you have. The\n> database you have is terminally corrupted, and should never be relied upon\n> going forward.\n>\n>\n>\n> You can try to get it running, and then extract the data with pg_dump...\n> but even then you will need to manually verify it’s OK because you have no\n> idea which dirty blocks from memory have been written to disk and which\n> have not. Without the WAL you have no way of making it consistent, and if\n> they have been destroyed then you’re out of luck.\n>\n>\n>\n> If you don’t have backups and archived WAL then fixing what you’ve got may\n> be your only option, but you should only go down that route if you have\n> to. If you have to “repair”, then I’d recommend engaging a reputable\n> PostgreSQL consultancy to help you.\n>\n>\n>\n> Regards,\n>\n> Greg.\n>\n> P.S. This conversation should probably be moved to something like\n> pgsql-admin\n>\n>\n>\n> ------------------------------\n>\n> This e-mail, including accompanying communications and attachments, is\n> strictly confidential and only for the intended recipient. Any retention,\n> use or disclosure not expressly authorised by IHSMarkit is prohibited. This\n> email is subject to all waivers and other terms at the following link:\n> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>\n> Please visit www.ihsmarkit.com/about/contact-us.html for contact\n> information on our offices worldwide.\n>\n\nA backup was made after the corruption appeared but before I tried using the pg_resetxlog command. Basically I just want to start the database with the data that is available in the files(I'm ok with loosing data that was in the cache and wasnt written to disk).My question is how can I continue from here ?I also sent this mail to pgadmin mail list..בתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <[email protected]>:\n\n\n> Yes I understand that.. I'm trying to handle it after the backup that I have taken..\n\nIMHO the best option here is to keep safe a copy as you have already done and then restore from a backup, and replay whatever WAL you have. The database you have is terminally corrupted, and should never be relied upon going forward.\n \nYou can try to get it running, and then extract the data with pg_dump... but even then you will need to manually verify it’s OK because you have no idea which dirty blocks from memory have been written to disk and which\n have not. Without the WAL you have no way of making it consistent, and if they have been destroyed then you’re out of luck.\n \nIf you don’t have backups and archived WAL then fixing what you’ve got may be your only option, but you should only go down that route if you have to. If you have to “repair”, then I’d recommend engaging a reputable\n PostgreSQL consultancy to help you.\n \nRegards,\nGreg.\n\nP.S. 
This conversation should probably be moved to something like pgsql-admin \n\n \n\n\n\n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and\n other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.",
"msg_date": "Mon, 20 May 2019 19:07:45 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
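A rough sketch of the "extract the data with pg_dump" route Greg describes, assuming the damaged cluster can be started at all; the new data directory, port and output path are examples, and every restored table still needs manual verification afterwards:

    /usr/pgsql-9.6/bin/initdb -D /var/lib/pgsql/9.6/data_new
    /usr/pgsql-9.6/bin/pg_ctl -D /var/lib/pgsql/9.6/data_new -o '-p 5433' -w start
    pg_dumpall -p 5432 > /mnt/rescue/salvage.sql       # dump from the suspect cluster
    psql -p 5433 -f /mnt/rescue/salvage.sql postgres   # load into the fresh cluster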
{
"msg_contents": "I had ran into same issue about year back, luckily I had standby to quickly promote. But, I wish there was better a documentation on how to handle WAL log fill up and resetting them. \n On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky <[email protected]> wrote: \n \n A backup was made after the corruption appeared but before I tried using the pg_resetxlog command. Basically I just want to start the database with the data that is available in the files(I'm ok with loosing data that was in the cache and wasnt written to disk).My question is how can I continue from here ?I also sent this mail to pgadmin mail list..\nבתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <[email protected]>:\n\n\n> Yes I understand that.. I'm trying to handle it after the backup that I have taken..\n\n\nIMHO the best option here is to keep safe a copy as you have already done and then restore from a backup, and replay whatever WAL you have. The database you have is terminally corrupted, and should never be relied upon going forward.\n\n \n\nYou can try to get it running, and then extract the data with pg_dump... but even then you will need to manually verify it’s OK because you have no idea which dirty blocks from memory have been written to disk and which have not. Without the WAL you have no way of making it consistent, and if they have been destroyed then you’re out of luck.\n\n \n\nIf you don’t have backups and archived WAL then fixing what you’ve got may be your only option, but you should only go down that route if you have to. If you have to “repair”, then I’d recommend engaging a reputable PostgreSQL consultancy to help you.\n\n \n\nRegards,\n\nGreg.\n\nP.S. This conversation should probably be moved to something like pgsql-admin \n\n \n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.\n\n \n\nI had ran into same issue about year back, luckily I had standby to quickly promote. But, I wish there was better a documentation on how to handle WAL log fill up and resetting them. \n\n\n\n On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky <[email protected]> wrote:\n \n\n\nA backup was made after the corruption appeared but before I tried using the pg_resetxlog command. Basically I just want to start the database with the data that is available in the files(I'm ok with loosing data that was in the cache and wasnt written to disk).My question is how can I continue from here ?I also sent this mail to pgadmin mail list..בתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <[email protected]>:\n\n\n> Yes I understand that.. I'm trying to handle it after the backup that I have taken..\n\nIMHO the best option here is to keep safe a copy as you have already done and then restore from a backup, and replay whatever WAL you have. The database you have is terminally corrupted, and should never be relied upon going forward.\n \nYou can try to get it running, and then extract the data with pg_dump... but even then you will need to manually verify it’s OK because you have no idea which dirty blocks from memory have been written to disk and which\n have not. 
Without the WAL you have no way of making it consistent, and if they have been destroyed then you’re out of luck.\n \nIf you don’t have backups and archived WAL then fixing what you’ve got may be your only option, but you should only go down that route if you have to. If you have to “repair”, then I’d recommend engaging a reputable\n PostgreSQL consultancy to help you.\n \nRegards,\nGreg.\n\nP.S. This conversation should probably be moved to something like pgsql-admin \n\n \n\n\n\n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and\n other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.",
"msg_date": "Mon, 20 May 2019 16:20:45 +0000 (UTC)",
"msg_from": "Bimal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
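For completeness, promoting a 9.6 standby, which is what Bimal describes doing, is a one-liner; the data directory path is an example, and the trigger_file variant only applies if recovery.conf defines one:

    /usr/pgsql-9.6/bin/pg_ctl -D /var/lib/pgsql/9.6/data promote
    # or, when recovery.conf contains trigger_file = '/tmp/promote_standby':
    # touch /tmp/promote_standby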
{
"msg_contents": "Hey Greg,\nBasically my backup was made after the first pg_resetxlog so I was wrong.\nHowever, the customer had a secondary machine that wasn't synced for a\nmonth. I have all the walls since the moment the secondary went out of\nsync. Once I started it I hoped that it will start recover the wals and\nfill the gap. However I got an error in the secondary :\n 2019-05-20 10:11:28 PDT 19021 LOG: entering standby mode\n2019-05-20 10:11:28 PDT 19021 LOG: invalid primary checkpoint record\n2019-05-20 10:11:28 PDT 19021 LOG: invalid secondary checkpoint link in\ncontrol file\n2019-05-20 10:11:28 PDT 19021 PANIC: could not locate a valid checkpoint\nrecord\n2019-05-20 10:11:28 PDT 19018 LOG: startup process (PID 19021) was\nterminated by signal 6: Aborted\n2019-05-20 10:11:28 PDT 19018 LOG: aborting startup due to startup\nprocess failure\n2019-05-20 10:11:28 PDT 19018 LOG: database system is shut down.\n I checked my secondary archive dir and pg_xlog dir and\nit seems that the restore command doesnt work. My restore_command:\nrestore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f\n/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\narchive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n/var/lib/pgsql/archive %r'\n\n\nOn Mon, May 20, 2019, 7:20 PM Bimal <[email protected]> wrote:\n\n> I had ran into same issue about year back, luckily I had standby to\n> quickly promote. But, I wish there was better a documentation on how to\n> handle WAL log fill up and resetting them.\n>\n> On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>\n> A backup was made after the corruption appeared but before I tried using\n> the pg_resetxlog command. Basically I just want to start the database with\n> the data that is available in the files(I'm ok with loosing data that was\n> in the cache and wasnt written to disk).\n> My question is how can I continue from here ?\n> I also sent this mail to pgadmin mail list..\n>\n> בתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <\n> [email protected]>:\n>\n> > Yes I understand that.. I'm trying to handle it after the backup that I\n> have taken..\n>\n>\n> IMHO the best option here is to keep safe a copy as you have already done\n> and then restore from a backup, and replay whatever WAL you have. The\n> database you have is terminally corrupted, and should never be relied upon\n> going forward.\n>\n>\n>\n> You can try to get it running, and then extract the data with pg_dump...\n> but even then you will need to manually verify it’s OK because you have no\n> idea which dirty blocks from memory have been written to disk and which\n> have not. Without the WAL you have no way of making it consistent, and if\n> they have been destroyed then you’re out of luck.\n>\n>\n>\n> If you don’t have backups and archived WAL then fixing what you’ve got may\n> be your only option, but you should only go down that route if you have\n> to. If you have to “repair”, then I’d recommend engaging a reputable\n> PostgreSQL consultancy to help you.\n>\n>\n>\n> Regards,\n>\n> Greg.\n>\n> P.S. This conversation should probably be moved to something like\n> pgsql-admin\n>\n>\n>\n> ------------------------------\n>\n> This e-mail, including accompanying communications and attachments, is\n> strictly confidential and only for the intended recipient. Any retention,\n> use or disclosure not expressly authorised by IHSMarkit is prohibited. 
This\n> email is subject to all waivers and other terms at the following link:\n> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>\n> Please visit www.ihsmarkit.com/about/contact-us.html for contact\n> information on our offices worldwide.\n>\n>\n\nHey Greg,Basically my backup was made after the first pg_resetxlog so I was wrong. However, the customer had a secondary machine that wasn't synced for a month. I have all the walls since the moment the secondary went out of sync. Once I started it I hoped that it will start recover the wals and fill the gap. However I got an error in the secondary : 2019-05-20 10:11:28 PDT 19021 LOG: entering standby mode2019-05-20 10:11:28 PDT 19021 LOG: invalid primary checkpoint record2019-05-20 10:11:28 PDT 19021 LOG: invalid secondary checkpoint link in control file2019-05-20 10:11:28 PDT 19021 PANIC: could not locate a valid checkpoint record2019-05-20 10:11:28 PDT 19018 LOG: startup process (PID 19021) was terminated by signal 6: Aborted2019-05-20 10:11:28 PDT 19018 LOG: aborting startup due to startup process failure2019-05-20 10:11:28 PDT 19018 LOG: database system is shut down. I checked my secondary archive dir and pg_xlog dir and it seems that the restore command doesnt work. My restore_command: restore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup /var/lib/pgsql/archive %r'On Mon, May 20, 2019, 7:20 PM Bimal <[email protected]> wrote:\nI had ran into same issue about year back, luckily I had standby to quickly promote. But, I wish there was better a documentation on how to handle WAL log fill up and resetting them. \n\n\n\n On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky <[email protected]> wrote:\n \n\n\nA backup was made after the corruption appeared but before I tried using the pg_resetxlog command. Basically I just want to start the database with the data that is available in the files(I'm ok with loosing data that was in the cache and wasnt written to disk).My question is how can I continue from here ?I also sent this mail to pgadmin mail list..בתאריך יום ב׳, 20 במאי 2019 ב-18:59 מאת Greg Clough <[email protected]>:\n\n\n> Yes I understand that.. I'm trying to handle it after the backup that I have taken..\n\nIMHO the best option here is to keep safe a copy as you have already done and then restore from a backup, and replay whatever WAL you have. The database you have is terminally corrupted, and should never be relied upon going forward.\n \nYou can try to get it running, and then extract the data with pg_dump... but even then you will need to manually verify it’s OK because you have no idea which dirty blocks from memory have been written to disk and which\n have not. Without the WAL you have no way of making it consistent, and if they have been destroyed then you’re out of luck.\n \nIf you don’t have backups and archived WAL then fixing what you’ve got may be your only option, but you should only go down that route if you have to. If you have to “repair”, then I’d recommend engaging a reputable\n PostgreSQL consultancy to help you.\n \nRegards,\nGreg.\n\nP.S. This conversation should probably be moved to something like pgsql-admin \n\n \n\n\n\n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. 
This email is subject to all waivers and\n other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.",
"msg_date": "Mon, 20 May 2019 20:20:33 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
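One hedged adjustment to the recovery.conf shown above: joining the two steps with '&&' instead of ';' keeps gunzip from running when the copy itself failed, which makes failures much easier to spot in the log. PRIMARY_HOST below is a placeholder for the real primary, which is not shown in the thread:

    standby_mode = 'on'
    restore_command = 'rsync -aqe ssh postgres@PRIMARY_HOST:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f && gunzip < /var/lib/pgsql/archive/%f > %p'
    archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup /var/lib/pgsql/archive %r'
    # a non-zero exit for a file that does not exist (e.g. a .history probe) is normal;
    # it simply tells recovery that the archive has no such segment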
{
"msg_contents": "On Mon, May 20, 2019 at 04:20:45PM +0000, Bimal wrote:\n> I had ran into same issue about year back, luckily I had standby to\n> quickly promote. But, I wish there was better a documentation on how to\n> handle WAL log fill up and resetting them. \n\npg_resetxlog is not a tool to deal with \"WAL fill up\". It's a last\nresort option to deal with corrupted WAL, and can easily make matters\nworse when used without due consideration. That seems to be the case\nhere, unfortunately.\n\nOn a properly behaving system, running out of disk space for pg_xlog\nresults in database shutdown. If you also get corrupted WAL, you have\nbigger problems, I'm afraid.\n\nAlso, data corruption issues are one-off events, mostly unique. That\nmakes it rather difficult (~impossible) to write docs about recovering\nfrom them. And it's why there are no magic tools.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 20 May 2019 22:55:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
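As an aside, the usual suspects when pg_xlog keeps growing can be checked without resetting anything; a sketch, using 9.6 names (pg_xlog became pg_wal in version 10):

    psql -c "SELECT failed_count, last_failed_wal FROM pg_stat_archiver;"        # archiving stuck?
    psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"   # abandoned slot holding WAL?
    psql -c "SHOW wal_keep_segments;"
    du -sh /var/lib/pgsql/9.6/data/pg_xlog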
{
"msg_contents": "On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote:\n> Hey Greg,\n> Basically my backup was made after the first pg_resetxlog so I was wrong.\n\nBummer.\n\n> However, the customer had a secondary machine that wasn't synced for a\n> month. I have all the walls since the moment the secondary went out of\n> sync. Once I started it I hoped that it will start recover the wals and\n> fill the gap. However I got an error in the secondary :� � � � �\n> �2019-05-20 10:11:28 PDT� 19021� LOG:� entering standby mode\n> 2019-05-20 10:11:28 PDT� 19021� LOG:� invalid primary checkpoint record\n> 2019-05-20 10:11:28 PDT� 19021� LOG:� invalid secondary checkpoint link in\n> control file\n> 2019-05-20 10:11:28 PDT� 19021� PANIC:� could not locate a valid\n> checkpoint record\n> 2019-05-20 10:11:28 PDT� 19018� LOG:� startup process (PID 19021) was\n> terminated by signal 6: Aborted\n> 2019-05-20 10:11:28 PDT� 19018� LOG:� aborting startup due to startup\n> process failure\n> 2019-05-20 10:11:28 PDT� 19018� LOG:� database system is shut down.� � � �\n> � � � � � � � � � � I checked my secondary archive dir and pg_xlog dir and\n> it seems that the restore command doesnt work. My restore_command:� � ��\n> restore_command = 'rsync -avzhe ssh\n> [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> gunzip < /var/lib/pgsql/archive/%f > %p'\n> archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> /var/lib/pgsql/archive %r'\n\nWell, when you say it does not work, why do you think so? Does it print\nsome error, or what? Does it even get executed? It does not seem to be\nthe case, judging by the log (there's no archive_command message).\n\nHow was the \"secondary machine\" created? You said you have all the WAL\nsince then - how do you know that?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 20 May 2019 23:04:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
{
"msg_contents": "Tomas :\n\nWell, when you say it does not work, why do you think so? Does it print\nsome error, or what? Does it even get executed? It does not seem to be\nthe case, judging by the log (there's no archive_command message).\n\nHow was the \"secondary machine\" created? You said you have all the WAL\nsince then - how do you know that?\n\nWell, when I start the secondary in recovery mode (the primary is down,\nauto failover is disabled..) it doesnt start recovering the archive wals\nfrom the primary. The logs of the secondary :\nreceiving incremental file list\nrsync: link_stat \"/var/lib/pgsql/archive/00000002.history\" failed: No such\nfile or directory (2)\n\nsent 8 bytes received 10 bytes 36.00 bytes/sec\ntotal size is 0 speedup is 0.00\nrsync error: some files/attrs were not transferred (see previous errors)\n(code 23) at main.c(1505) [receiver=3.0.6]\nsh: /var/lib/pgsql/archive/00000002.history: No such file or directory\n2019-05-20 09:41:33 PDT 18558 LOG: entering standby mode\n2019-05-20 09:41:33 PDT 18558 LOG: invalid primary checkpoint record\n2019-05-20 09:41:33 PDT 18558 LOG: invalid secondary checkpoint link in\ncontrol file\n2019-05-20 09:41:33 PDT 18558 PANIC: could not locate a valid checkpoint\nrecord\n2019-05-20 09:41:33 PDT 18555 LOG: startup process (PID 18558) was\nterminated by signal 6: Aborted\n2019-05-20 09:41:33 PDT 18555 LOG: aborting startup due to startup\nprocess failure\n2019-05-20 09:41:33 PDT 18555 LOG: database system is shut down\n2019-05-20 09:56:12 PDT 18701 LOG: database system was shut down in\nrecovery at 2019-05-01 09:40:02 PDT\n\nAs I said, the secondary was down for a month and I have all the archives\nof the wals in my primary. I was hoping that the secondary will use the\nrestore_command to restore them :\nrestore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f\n/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\n\nmy archive_command on the primary was :\narchive_command = 'gzip < %p > /var/lib/pgsql/archive/%f'\n\nAm I missing something ?\n\nAnother question, If I'll run initdb and initiate a new cluster and i'll\ncopy the data files of my old cluster into the new one, is there any chance\nthat it will work ?\nI mean right now, my primary is down and cant start up because it is\nmissing an offset file in the pg_multixtrans/offset dir.\n\nבתאריך יום ג׳, 21 במאי 2019 ב-0:04 מאת Tomas Vondra <\[email protected]>:\n\n> On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote:\n> > Hey Greg,\n> > Basically my backup was made after the first pg_resetxlog so I was\n> wrong.\n>\n> Bummer.\n>\n> > However, the customer had a secondary machine that wasn't synced for a\n> > month. I have all the walls since the moment the secondary went out of\n> > sync. Once I started it I hoped that it will start recover the wals and\n> > fill the gap. 
However I got an error in the secondary :\n> > 2019-05-20 10:11:28 PDT 19021 LOG: entering standby mode\n> > 2019-05-20 10:11:28 PDT 19021 LOG: invalid primary checkpoint record\n> > 2019-05-20 10:11:28 PDT 19021 LOG: invalid secondary checkpoint\n> link in\n> > control file\n> > 2019-05-20 10:11:28 PDT 19021 PANIC: could not locate a valid\n> > checkpoint record\n> > 2019-05-20 10:11:28 PDT 19018 LOG: startup process (PID 19021) was\n> > terminated by signal 6: Aborted\n> > 2019-05-20 10:11:28 PDT 19018 LOG: aborting startup due to startup\n> > process failure\n> > 2019-05-20 10:11:28 PDT 19018 LOG: database system is shut down.\n>\n> > I checked my secondary archive dir and pg_xlog dir\n> and\n> > it seems that the restore command doesnt work. My restore_command:\n>\n> > restore_command = 'rsync -avzhe ssh\n> > [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > /var/lib/pgsql/archive %r'\n>\n> Well, when you say it does not work, why do you think so? Does it print\n> some error, or what? Does it even get executed? It does not seem to be\n> the case, judging by the log (there's no archive_command message).\n>\n> How was the \"secondary machine\" created? You said you have all the WAL\n> since then - how do you know that?\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nTomas : Well, when you say it does not work, why do you think so? Does it printsome error, or what? Does it even get executed? It does not seem to bethe case, judging by the log (there's no archive_command message).How was the \"secondary machine\" created? You said you have all the WALsince then - how do you know that? Well, when I start the secondary in recovery mode (the primary is down, auto failover is disabled..) it doesnt start recovering the archive wals from the primary. The logs of the secondary : receiving incremental file listrsync: link_stat \"/var/lib/pgsql/archive/00000002.history\" failed: No such file or directory (2)sent 8 bytes received 10 bytes 36.00 bytes/sectotal size is 0 speedup is 0.00rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1505) [receiver=3.0.6]sh: /var/lib/pgsql/archive/00000002.history: No such file or directory2019-05-20 09:41:33 PDT 18558 LOG: entering standby mode2019-05-20 09:41:33 PDT 18558 LOG: invalid primary checkpoint record2019-05-20 09:41:33 PDT 18558 LOG: invalid secondary checkpoint link in control file2019-05-20 09:41:33 PDT 18558 PANIC: could not locate a valid checkpoint record2019-05-20 09:41:33 PDT 18555 LOG: startup process (PID 18558) was terminated by signal 6: Aborted2019-05-20 09:41:33 PDT 18555 LOG: aborting startup due to startup process failure2019-05-20 09:41:33 PDT 18555 LOG: database system is shut down2019-05-20 09:56:12 PDT 18701 LOG: database system was shut down in recovery at 2019-05-01 09:40:02 PDTAs I said, the secondary was down for a month and I have all the archives of the wals in my primary. 
I was hoping that the secondary will use the restore_command to restore them :restore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'my archive_command on the primary was : archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f'Am I missing something ?Another question, If I'll run initdb and initiate a new cluster and i'll copy the data files of my old cluster into the new one, is there any chance that it will work ?I mean right now, my primary is down and cant start up because it is missing an offset file in the pg_multixtrans/offset dir.בתאריך יום ג׳, 21 במאי 2019 ב-0:04 מאת Tomas Vondra <[email protected]>:On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote:\n> Hey Greg,\n> Basically my backup was made after the first pg_resetxlog so I was wrong.\n\nBummer.\n\n> However, the customer had a secondary machine that wasn't synced for a\n> month. I have all the walls since the moment the secondary went out of\n> sync. Once I started it I hoped that it will start recover the wals and\n> fill the gap. However I got an error in the secondary : \n> 2019-05-20 10:11:28 PDT 19021 LOG: entering standby mode\n> 2019-05-20 10:11:28 PDT 19021 LOG: invalid primary checkpoint record\n> 2019-05-20 10:11:28 PDT 19021 LOG: invalid secondary checkpoint link in\n> control file\n> 2019-05-20 10:11:28 PDT 19021 PANIC: could not locate a valid\n> checkpoint record\n> 2019-05-20 10:11:28 PDT 19018 LOG: startup process (PID 19021) was\n> terminated by signal 6: Aborted\n> 2019-05-20 10:11:28 PDT 19018 LOG: aborting startup due to startup\n> process failure\n> 2019-05-20 10:11:28 PDT 19018 LOG: database system is shut down. \n> I checked my secondary archive dir and pg_xlog dir and\n> it seems that the restore command doesnt work. My restore_command: \n> restore_command = 'rsync -avzhe ssh\n> [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> gunzip < /var/lib/pgsql/archive/%f > %p'\n> archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> /var/lib/pgsql/archive %r'\n\nWell, when you say it does not work, why do you think so? Does it print\nsome error, or what? Does it even get executed? It does not seem to be\nthe case, judging by the log (there's no archive_command message).\n\nHow was the \"secondary machine\" created? You said you have all the WAL\nsince then - how do you know that?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 21 May 2019 12:01:31 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
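The PANIC about an invalid checkpoint comes from the standby's own control file rather than from the archive, so it can be worth inspecting pg_control and pg_xlog on the standby directly; a sketch, with the data directory path as an example:

    /usr/pgsql-9.6/bin/pg_controldata /var/lib/pgsql/9.6/data | grep -i checkpoint
    ls /var/lib/pgsql/9.6/data/pg_xlog | sort | head    # oldest segment still present
    ls /var/lib/pgsql/9.6/data/pg_xlog | sort | tail    # newest segment still present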
{
"msg_contents": "On Tue, May 21, 2019 at 12:01:31PM +0300, Mariel Cherkassky wrote:\n>Tomas :\n>\n>Well, when you say it does not work, why do you think so? Does it print\n>some error, or what? Does it even get executed? It does not seem to be\n>the case, judging by the log (there's no archive_command message).\n>\n>How was the \"secondary machine\" created? You said you have all the WAL\n>since then - how do you know that?\n>\n>Well, when I start the secondary in recovery mode (the primary is down,\n>auto failover is disabled..) it doesnt start recovering the archive wals\n>from the primary. The logs of the secondary :\n>receiving incremental file list\n>rsync: link_stat \"/var/lib/pgsql/archive/00000002.history\" failed: No such\n>file or directory (2)\n>\n>sent 8 bytes received 10 bytes 36.00 bytes/sec\n>total size is 0 speedup is 0.00\n>rsync error: some files/attrs were not transferred (see previous errors)\n>(code 23) at main.c(1505) [receiver=3.0.6]\n>sh: /var/lib/pgsql/archive/00000002.history: No such file or directory\n>2019-05-20 09:41:33 PDT 18558 LOG: entering standby mode\n>2019-05-20 09:41:33 PDT 18558 LOG: invalid primary checkpoint record\n>2019-05-20 09:41:33 PDT 18558 LOG: invalid secondary checkpoint link in\n>control file\n>2019-05-20 09:41:33 PDT 18558 PANIC: could not locate a valid checkpoint\n>record\n>2019-05-20 09:41:33 PDT 18555 LOG: startup process (PID 18558) was\n>terminated by signal 6: Aborted\n>2019-05-20 09:41:33 PDT 18555 LOG: aborting startup due to startup\n>process failure\n>2019-05-20 09:41:33 PDT 18555 LOG: database system is shut down\n>2019-05-20 09:56:12 PDT 18701 LOG: database system was shut down in\n>recovery at 2019-05-01 09:40:02 PDT\n>\n>As I said, the secondary was down for a month and I have all the archives\n>of the wals in my primary. I was hoping that the secondary will use the\n>restore_command to restore them :\n>restore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f\n>/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\n>\n>my archive_command on the primary was :\n>archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f'\n>\n>Am I missing something ?\n>\n\nFirst of all, the way you quote message is damn confusing - there's no\nclear difference between your message and the message you quote. I don't\nknow which mail client you're using, but I suppose it can be configured to\nquote sensibly ...\n\nWell, clearly the standby tries to fetch WAL from archive, but the rsync\ncommand fails for some reason. You're in the position to investigate\nfurther, because you can run it manually - we can't. This has nothing to\ndo with PostgreSQL. My guess is you don't have /var/lib/pgsql/archive on\nthe standby, and it's confusing because archive uses the same path.\n\n\n>Another question, If I'll run initdb and initiate a new cluster and i'll\n>copy the data files of my old cluster into the new one, is there any chance\n>that it will work ?\n>I mean right now, my primary is down and cant start up because it is\n>missing an offset file in the pg_multixtrans/offset dir.\n>\n\nNo, because you won't have contents of system catalogs, mapping the data\nfiles to relations (tables, indexes) and containing information about the\nstructure (which columns / data types are in the data).\n\nThe data files are pretty useless on their own. It might be possible to do\nsome manualy recovery - say, you might create the same tables in the new\nschema, and then guess which data files belong to them. 
But there are\nvarious caveats e.g. due to dropped columns, etc.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 21 May 2019 15:07:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
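A sketch of the manual test suggested above: run the restore_command by hand on the standby, substituting a concrete segment name (the one from the pg_controldata output earlier in the thread) for %f and a scratch path for %p. PRIMARY_HOST is a placeholder for the redacted primary:

    rsync -avzhe ssh postgres@PRIMARY_HOST:/var/lib/pgsql/archive/0000000100001837000000E3 /var/lib/pgsql/archive/0000000100001837000000E3
    gunzip < /var/lib/pgsql/archive/0000000100001837000000E3 > /tmp/0000000100001837000000E3
    ls -l /tmp/0000000100001837000000E3     # a complete segment is 16 MB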
{
"msg_contents": "Tomas - Well, when I run the restore_command manually it works (archive\ndir exists on the secondary..). Thank for the explanation on the system\ncatalogs..\n\nGreg - My restore command copy the wals from archive dir in the primary to\nan archive dir in the secondary(different from the pg_xlog in the\nsecondary). Should I run it manually and see if the archives are copied to\nthe archive dir in the secondary or should I just copy all of them to the\nxlog dir in the secondary ?\nI tried to start the secondary as a primary (I have a backup..) but I still\ngot an error (invalid checkpoint record from primary./ secondary). Does it\nmeans that my backup is corrupted ?\n\nבתאריך יום ג׳, 21 במאי 2019 ב-16:07 מאת Tomas Vondra <\[email protected]>:\n\n> On Tue, May 21, 2019 at 12:01:31PM +0300, Mariel Cherkassky wrote:\n> >Tomas :\n> >\n> >Well, when you say it does not work, why do you think so? Does it print\n> >some error, or what? Does it even get executed? It does not seem to be\n> >the case, judging by the log (there's no archive_command message).\n> >\n> >How was the \"secondary machine\" created? You said you have all the WAL\n> >since then - how do you know that?\n> >\n> >Well, when I start the secondary in recovery mode (the primary is down,\n> >auto failover is disabled..) it doesnt start recovering the archive wals\n> >from the primary. The logs of the secondary :\n> >receiving incremental file list\n> >rsync: link_stat \"/var/lib/pgsql/archive/00000002.history\" failed: No such\n> >file or directory (2)\n> >\n> >sent 8 bytes received 10 bytes 36.00 bytes/sec\n> >total size is 0 speedup is 0.00\n> >rsync error: some files/attrs were not transferred (see previous errors)\n> >(code 23) at main.c(1505) [receiver=3.0.6]\n> >sh: /var/lib/pgsql/archive/00000002.history: No such file or directory\n> >2019-05-20 09:41:33 PDT 18558 LOG: entering standby mode\n> >2019-05-20 09:41:33 PDT 18558 LOG: invalid primary checkpoint record\n> >2019-05-20 09:41:33 PDT 18558 LOG: invalid secondary checkpoint link in\n> >control file\n> >2019-05-20 09:41:33 PDT 18558 PANIC: could not locate a valid\n> checkpoint\n> >record\n> >2019-05-20 09:41:33 PDT 18555 LOG: startup process (PID 18558) was\n> >terminated by signal 6: Aborted\n> >2019-05-20 09:41:33 PDT 18555 LOG: aborting startup due to startup\n> >process failure\n> >2019-05-20 09:41:33 PDT 18555 LOG: database system is shut down\n> >2019-05-20 09:56:12 PDT 18701 LOG: database system was shut down in\n> >recovery at 2019-05-01 09:40:02 PDT\n> >\n> >As I said, the secondary was down for a month and I have all the archives\n> >of the wals in my primary. I was hoping that the secondary will use the\n> >restore_command to restore them :\n> >restore_command = 'rsync -avzhe ssh [email protected]\n> :/var/lib/pgsql/archive/%f\n> >/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\n> >\n> >my archive_command on the primary was :\n> >archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f'\n> >\n> >Am I missing something ?\n> >\n>\n> First of all, the way you quote message is damn confusing - there's no\n> clear difference between your message and the message you quote. I don't\n> know which mail client you're using, but I suppose it can be configured to\n> quote sensibly ...\n>\n> Well, clearly the standby tries to fetch WAL from archive, but the rsync\n> command fails for some reason. You're in the position to investigate\n> further, because you can run it manually - we can't. This has nothing to\n> do with PostgreSQL. 
My guess is you don't have /var/lib/pgsql/archive on\n> the standby, and it's confusing because archive uses the same path.\n>\n>\n> >Another question, If I'll run initdb and initiate a new cluster and i'll\n> >copy the data files of my old cluster into the new one, is there any\n> chance\n> >that it will work ?\n> >I mean right now, my primary is down and cant start up because it is\n> >missing an offset file in the pg_multixtrans/offset dir.\n> >\n>\n> No, because you won't have contents of system catalogs, mapping the data\n> files to relations (tables, indexes) and containing information about the\n> structure (which columns / data types are in the data).\n>\n> The data files are pretty useless on their own. It might be possible to do\n> some manualy recovery - say, you might create the same tables in the new\n> schema, and then guess which data files belong to them. But there are\n> various caveats e.g. due to dropped columns, etc.\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\nTomas - Well, when I run the restore_command manually it works (archive dir exists on the secondary..). Thank for the explanation on the system catalogs..Greg - My restore command copy the wals from archive dir in the primary to an archive dir in the secondary(different from the pg_xlog in the secondary). Should I run it manually and see if the archives are copied to the archive dir in the secondary or should I just copy all of them to the xlog dir in the secondary ? I tried to start the secondary as a primary (I have a backup..) but I still got an error (invalid checkpoint record from primary./ secondary). Does it means that my backup is corrupted ?בתאריך יום ג׳, 21 במאי 2019 ב-16:07 מאת Tomas Vondra <[email protected]>:On Tue, May 21, 2019 at 12:01:31PM +0300, Mariel Cherkassky wrote:\n>Tomas :\n>\n>Well, when you say it does not work, why do you think so? Does it print\n>some error, or what? Does it even get executed? It does not seem to be\n>the case, judging by the log (there's no archive_command message).\n>\n>How was the \"secondary machine\" created? You said you have all the WAL\n>since then - how do you know that?\n>\n>Well, when I start the secondary in recovery mode (the primary is down,\n>auto failover is disabled..) it doesnt start recovering the archive wals\n>from the primary. 
The logs of the secondary :\n>receiving incremental file list\n>rsync: link_stat \"/var/lib/pgsql/archive/00000002.history\" failed: No such\n>file or directory (2)\n>\n>sent 8 bytes received 10 bytes 36.00 bytes/sec\n>total size is 0 speedup is 0.00\n>rsync error: some files/attrs were not transferred (see previous errors)\n>(code 23) at main.c(1505) [receiver=3.0.6]\n>sh: /var/lib/pgsql/archive/00000002.history: No such file or directory\n>2019-05-20 09:41:33 PDT 18558 LOG: entering standby mode\n>2019-05-20 09:41:33 PDT 18558 LOG: invalid primary checkpoint record\n>2019-05-20 09:41:33 PDT 18558 LOG: invalid secondary checkpoint link in\n>control file\n>2019-05-20 09:41:33 PDT 18558 PANIC: could not locate a valid checkpoint\n>record\n>2019-05-20 09:41:33 PDT 18555 LOG: startup process (PID 18558) was\n>terminated by signal 6: Aborted\n>2019-05-20 09:41:33 PDT 18555 LOG: aborting startup due to startup\n>process failure\n>2019-05-20 09:41:33 PDT 18555 LOG: database system is shut down\n>2019-05-20 09:56:12 PDT 18701 LOG: database system was shut down in\n>recovery at 2019-05-01 09:40:02 PDT\n>\n>As I said, the secondary was down for a month and I have all the archives\n>of the wals in my primary. I was hoping that the secondary will use the\n>restore_command to restore them :\n>restore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f\n>/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\n>\n>my archive_command on the primary was :\n>archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f'\n>\n>Am I missing something ?\n>\n\nFirst of all, the way you quote message is damn confusing - there's no\nclear difference between your message and the message you quote. I don't\nknow which mail client you're using, but I suppose it can be configured to\nquote sensibly ...\n\nWell, clearly the standby tries to fetch WAL from archive, but the rsync\ncommand fails for some reason. You're in the position to investigate\nfurther, because you can run it manually - we can't. This has nothing to\ndo with PostgreSQL. My guess is you don't have /var/lib/pgsql/archive on\nthe standby, and it's confusing because archive uses the same path.\n\n\n>Another question, If I'll run initdb and initiate a new cluster and i'll\n>copy the data files of my old cluster into the new one, is there any chance\n>that it will work ?\n>I mean right now, my primary is down and cant start up because it is\n>missing an offset file in the pg_multixtrans/offset dir.\n>\n\nNo, because you won't have contents of system catalogs, mapping the data\nfiles to relations (tables, indexes) and containing information about the\nstructure (which columns / data types are in the data).\n\nThe data files are pretty useless on their own. It might be possible to do\nsome manualy recovery - say, you might create the same tables in the new\nschema, and then guess which data files belong to them. But there are\nvarious caveats e.g. due to dropped columns, etc.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 21 May 2019 16:12:57 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to handle db corruption 9.6"
},
{
"msg_contents": "On Tue, May 21, 2019 at 04:03:52PM +0000, Greg Clough wrote:\n>> My restore command copy the wals from archive dir in the primary to an\n>> archive dir in the secondary(different from the pg_xlog in the\n>> secondary)\n>\n>I think that you're restore command puts them back into the archive, and\n>then uncompresses them into pg_xlog, which is what %p represents.\n>\n>\n>> Should I run it manually and see if the archives are copied to the\n>> archive dir in the secondary or should I just copy all of them to the\n>> xlog dir in the secondary ?\n>\n>That would be my first test, but as Thomas mentioned, you don't have any\n>hint of WAL archives being restored in the postgresql.log... so it's not\n>even trying. It's not likely that archive_command is your problem at the\n>moment.\n>\n>\n>> I tried to start the secondary as a primary (I have a backup..) but I\n>> still got an error (invalid checkpoint record from primary./\n>> secondary). Does it means that my backup is corrupted ?\n>\n>I think so, but Thomas could probably confirm if all hope is lost. Also,\n>I'm not sure if there is a terminology difference but a \"standby\" is\n>never considered a \"backup\". I realise it's late in the day, but even if\n>you have a correctly configured Standby you should also take backups with\n>pg_basebackup, Barman, pgBackRest, etc.\n>\n\nWell, I have no idea. We still got no information about how the standby\nwas created, if it was ever running fine, and so on. Considering it does\nnot seem to be getting data from the archive, it might be the case it was\ncreated in some strange way and never really worked. And if there really\nare no log messages about the restore_command, it probably fails before\nthe standby even tries to execute it.\n\nSo I don't know.\n\n>Restoring backups is where I would be heading now, as things seem\n>terribly broken.\n>\n\nRight. But my impression is there are no backups ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 21 May 2019 18:37:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to handle db corruption 9.6"
}
]
[
{
"msg_contents": "We had a mysterious (to us) slowdown today that I'm hoping someone can\nexplain just based on PG's principles of operation. It got better by itself\nso it seems like it was \"normal\" behavior -- I just don't know what\nbehavior it was exhibiting.\n\nWe have a table of user notifications containing about 80 million rows. It\ngets a lot of inserts continually, and is cleaned up once a day. There are\nno updates. In all history there have been about 330 million rows created.\n\nToday we deleted about 15 million rows in one transaction from this table.\nImmediately afterwards, a particular SELECT started running very slowly --\n500 to 3000 ms rather than the usual <1ms.\n\nWe did an EXPLAIN ANALYZE on this select and it was still doing an index\nscan as usual. The *planning time* for the query is what had gotten slow.\nThe query itself was still executing in <1ms.\n\nOver the next few hours the time slowly improved, until it returned to the\nformer performance. You can see a graph at https://imgur.com/a/zIfqkF5.\n\nIs this sort of thing expected after a large delete, and if so, can someone\nexplain the mechanism behind it? I've looked for an explanation of what\ncould cause this kind of excess planning time and haven't found one. I'm\nhoping someone will just recognize what's going on here.\n\nHere is the pg_class data for the table and index:\n\nrelname=notifications\nrelpages=2799880\nreltuples=7.15229e+07\nrelallvisible=1219791\nrelkind=r\nrelnatts=11\nrelhassubclass=f\nreloptions=\npg_table_size=22943326208\n\nrelname=index_notifications_on_person_id_and_created_at\nrelpages=473208\nreltuples=7.03404e+07\nrelallvisible=0\nrelkind=i\nrelnatts=2\nrelhassubclass=f\nreloptions=\npg_table_size=3877494784\n\nThanks,\nWalter\n\nWe had a mysterious (to us) slowdown today that I'm hoping someone can explain just based on PG's principles of operation. It got better by itself so it seems like it was \"normal\" behavior -- I just don't know what behavior it was exhibiting.We have a table of user notifications containing about 80 million rows. It gets a lot of inserts continually, and is cleaned up once a day. There are no updates. In all history there have been about 330 million rows created.Today we deleted about 15 million rows in one transaction from this table. Immediately afterwards, a particular SELECT started running very slowly -- 500 to 3000 ms rather than the usual <1ms.We did an EXPLAIN ANALYZE on this select and it was still doing an index scan as usual. The *planning time* for the query is what had gotten slow. The query itself was still executing in <1ms.Over the next few hours the time slowly improved, until it returned to the former performance. You can see a graph at https://imgur.com/a/zIfqkF5.Is this sort of thing expected after a large delete, and if so, can someone explain the mechanism behind it? I've looked for an explanation of what could cause this kind of excess planning time and haven't found one. I'm hoping someone will just recognize what's going on here.Here is the pg_class data for the table and index:relname=notificationsrelpages=2799880reltuples=7.15229e+07relallvisible=1219791relkind=rrelnatts=11relhassubclass=freloptions=pg_table_size=22943326208relname=index_notifications_on_person_id_and_created_atrelpages=473208reltuples=7.03404e+07relallvisible=0relkind=irelnatts=2relhassubclass=freloptions=pg_table_size=3877494784Thanks,Walter",
"msg_date": "Mon, 20 May 2019 17:43:45 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, 21 May 2019 at 12:44, Walter Smith <[email protected]> wrote:\n>\n> We had a mysterious (to us) slowdown today that I'm hoping someone can explain just based on PG's principles of operation. It got better by itself so it seems like it was \"normal\" behavior -- I just don't know what behavior it was exhibiting.\n>\n> We have a table of user notifications containing about 80 million rows. It gets a lot of inserts continually, and is cleaned up once a day. There are no updates. In all history there have been about 330 million rows created.\n>\n> Today we deleted about 15 million rows in one transaction from this table. Immediately afterwards, a particular SELECT started running very slowly -- 500 to 3000 ms rather than the usual <1ms.\n>\n> We did an EXPLAIN ANALYZE on this select and it was still doing an index scan as usual. The *planning time* for the query is what had gotten slow. The query itself was still executing in <1ms.\n\nIt would be good to know which version you're running here. It\nbasically sounds very much like get_actual_variable_range() will be\nthe culprit. Basically, if a constant value that's being used by the\nplanner to determine row estimates with falls outside the statistic's\nhistogram and a btree index exists that we can use to look up the\nactual bound of the data, then we do so in that function. If you've\njust deleted a bunch of rows then that index scan may have to traverse\na bunch of dead tuples before finding that first live tuple. This\ncode has changed a few times in recent times, see fccebe421 and\n3ca930fc3, which is why your version is of interest.\n\nApart from that, if you want to confirm that's the issue and you just\nwant it fixed, just VACUUM the table. You should likely be doing that\nanyway directly after your bulk delete.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 13:00:38 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
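A minimal sketch of the advice above, assuming the affected table is the thread's "notifications" table on a 9.6-era server (both assumptions; adjust names as needed):

-- Clean up dead index entries right after the bulk delete so the
-- planner's get_actual_variable_range() probe over the index stays cheap.
VACUUM (VERBOSE, ANALYZE) notifications;

-- Sanity check: dead tuples should drop and last_vacuum should update.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'notifications';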
{
"msg_contents": "Walter Smith <[email protected]> writes:\n> Today we deleted about 15 million rows in one transaction from this table.\n> Immediately afterwards, a particular SELECT started running very slowly --\n> 500 to 3000 ms rather than the usual <1ms.\n\n> We did an EXPLAIN ANALYZE on this select and it was still doing an index\n> scan as usual. The *planning time* for the query is what had gotten slow.\n> The query itself was still executing in <1ms.\n\n> Over the next few hours the time slowly improved, until it returned to the\n> former performance. You can see a graph at https://imgur.com/a/zIfqkF5.\n\nWere the deleted rows all at one end of the index in question?\n\nIf so, this is likely down to the planner trying to use the index to\nidentify the extremal live value of the column, which it wants to know\nin connection with planning mergejoins (so I'm assuming your problem\nquery involved a join on the indexed column --- whether or not the\nfinal plan did a mergejoin, the planner would consider this). As\nlong as there's a live value near the end of the index, this is pretty\ncheap. If the planner has to trawl through a bunch of dead entries\nto find the nearest-to-the-end live one, not so much.\n\nSubsequent vacuuming would eventually delete the dead index entries\nand return things to normal; although usually the performance comes\nback all-of-a-sudden at the next (auto)VACUUM of the table. So I'm\na bit intrigued by your seeing it \"gradually\" improve. Maybe you\nhad old open transactions that were limiting VACUUM's ability to\nremove rows?\n\nWe've made a couple of rounds of adjustments of the behavior to try\nto avoid/reduce this penalty, but since you didn't say what PG version\nyou're running, it's hard to tell whether an upgrade would help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 21:04:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
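One way to check for the old open transactions Tom mentions, as a hedged sketch against the 9.6-era system views:

-- Sessions holding a transaction open, oldest first; a large xact_age here
-- keeps VACUUM from removing dead rows and dead index entries.
SELECT pid, state, xact_start, now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;

-- Forgotten prepared transactions can pin the xmin horizon just as well.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;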
{
"msg_contents": "I'm so sorry -- I meant to give the version, of course. It's 9.6.13.\n\nThanks,\nWalter\n\n\nOn Mon, May 20, 2019 at 6:05 PM Tom Lane <[email protected]> wrote:\n\n> Walter Smith <[email protected]> writes:\n> > Today we deleted about 15 million rows in one transaction from this\n> table.\n> > Immediately afterwards, a particular SELECT started running very slowly\n> --\n> > 500 to 3000 ms rather than the usual <1ms.\n>\n> > We did an EXPLAIN ANALYZE on this select and it was still doing an index\n> > scan as usual. The *planning time* for the query is what had gotten slow.\n> > The query itself was still executing in <1ms.\n>\n> > Over the next few hours the time slowly improved, until it returned to\n> the\n> > former performance. You can see a graph at https://imgur.com/a/zIfqkF5.\n>\n> Were the deleted rows all at one end of the index in question?\n>\n> If so, this is likely down to the planner trying to use the index to\n> identify the extremal live value of the column, which it wants to know\n> in connection with planning mergejoins (so I'm assuming your problem\n> query involved a join on the indexed column --- whether or not the\n> final plan did a mergejoin, the planner would consider this). As\n> long as there's a live value near the end of the index, this is pretty\n> cheap. If the planner has to trawl through a bunch of dead entries\n> to find the nearest-to-the-end live one, not so much.\n>\n> Subsequent vacuuming would eventually delete the dead index entries\n> and return things to normal; although usually the performance comes\n> back all-of-a-sudden at the next (auto)VACUUM of the table. So I'm\n> a bit intrigued by your seeing it \"gradually\" improve. Maybe you\n> had old open transactions that were limiting VACUUM's ability to\n> remove rows?\n>\n> We've made a couple of rounds of adjustments of the behavior to try\n> to avoid/reduce this penalty, but since you didn't say what PG version\n> you're running, it's hard to tell whether an upgrade would help.\n>\n> regards, tom lane\n>\n\nI'm so sorry -- I meant to give the version, of course. It's 9.6.13.Thanks,WalterOn Mon, May 20, 2019 at 6:05 PM Tom Lane <[email protected]> wrote:Walter Smith <[email protected]> writes:\n> Today we deleted about 15 million rows in one transaction from this table.\n> Immediately afterwards, a particular SELECT started running very slowly --\n> 500 to 3000 ms rather than the usual <1ms.\n\n> We did an EXPLAIN ANALYZE on this select and it was still doing an index\n> scan as usual. The *planning time* for the query is what had gotten slow.\n> The query itself was still executing in <1ms.\n\n> Over the next few hours the time slowly improved, until it returned to the\n> former performance. You can see a graph at https://imgur.com/a/zIfqkF5.\n\nWere the deleted rows all at one end of the index in question?\n\nIf so, this is likely down to the planner trying to use the index to\nidentify the extremal live value of the column, which it wants to know\nin connection with planning mergejoins (so I'm assuming your problem\nquery involved a join on the indexed column --- whether or not the\nfinal plan did a mergejoin, the planner would consider this). As\nlong as there's a live value near the end of the index, this is pretty\ncheap. 
If the planner has to trawl through a bunch of dead entries\nto find the nearest-to-the-end live one, not so much.\n\nSubsequent vacuuming would eventually delete the dead index entries\nand return things to normal; although usually the performance comes\nback all-of-a-sudden at the next (auto)VACUUM of the table. So I'm\na bit intrigued by your seeing it \"gradually\" improve. Maybe you\nhad old open transactions that were limiting VACUUM's ability to\nremove rows?\n\nWe've made a couple of rounds of adjustments of the behavior to try\nto avoid/reduce this penalty, but since you didn't say what PG version\nyou're running, it's hard to tell whether an upgrade would help.\n\n regards, tom lane",
"msg_date": "Mon, 20 May 2019 19:04:44 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n>I'm assuming your problem\n>query involved a join on the indexed column --- whether or not the\n>final plan did a mergejoin, the planner would consider this\n\nThere's no join -- the query is\n\nSELECT \"notifications\".*\nFROM \"notifications\"\nWHERE \"notifications\".\"person_id\" = ? AND\n \"notifications\".\"app_category\" = ? AND\n (id > ?)\nORDER BY created_at DESC\nLIMIT ?\n\nAnd the whole query plan is one step:\n\nIndex Scan using index_notifications_on_person_id_and_created_at on\nnotifications (cost=0.57..212.16 rows=52 width=231)\n\n>Subsequent vacuuming would eventually delete the dead index entries\n>and return things to normal; although usually the performance comes\n>back all-of-a-sudden at the next (auto)VACUUM of the table. So I'm\n>a bit intrigued by your seeing it \"gradually\" improve. Maybe you\n>had old open transactions that were limiting VACUUM's ability to\n>remove rows?'\n\nWe shouldn't have any long-running transactions at all, certainly not open\nfor a couple of hours.\n\nThanks,\nWalter\n\nTom Lane <[email protected]> writes:>I'm assuming your problem>query involved a join on the indexed column --- whether or not the>final plan did a mergejoin, the planner would consider thisThere's no join -- the query isSELECT \"notifications\".*FROM \"notifications\"WHERE \"notifications\".\"person_id\" = ? AND \"notifications\".\"app_category\" = ? AND (id > ?)ORDER BY created_at DESCLIMIT ? And the whole query plan is one step:Index Scan using index_notifications_on_person_id_and_created_at on notifications (cost=0.57..212.16 rows=52 width=231)>Subsequent vacuuming would eventually delete the dead index entries>and return things to normal; although usually the performance comes>back all-of-a-sudden at the next (auto)VACUUM of the table. So I'm>a bit intrigued by your seeing it \"gradually\" improve. Maybe you>had old open transactions that were limiting VACUUM's ability to>remove rows?'We shouldn't have any long-running transactions at all, certainly not open for a couple of hours.Thanks,Walter",
"msg_date": "Mon, 20 May 2019 19:12:05 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, 21 May 2019 at 14:04, Walter Smith <[email protected]> wrote:\n> I'm so sorry -- I meant to give the version, of course. It's 9.6.13.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ca930fc3\nhas been applied since then.\n\nIt would be good if you could confirm the problem is resolved after a\nvacuum. Maybe run VACUUM VERBOSE on the table and double check\nthere's not some large amount of tuples that are \"nonremovable\".\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 14:15:09 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Mon, May 20, 2019 at 7:15 PM David Rowley <[email protected]>\nwrote:\n\n> It would be good if you could confirm the problem is resolved after a\n> vacuum. Maybe run VACUUM VERBOSE on the table and double check\n> there's not some large amount of tuples that are \"nonremovable\".\n>\n\nAs I say, the problem resolved itself over the next couple of hours.\nPerhaps something autovacuum did? Or if the index extrema hypothesis is\ncorrect, perhaps the new rows being inserted for various users slowly\nchanged that situation?\n\nI did a VACUUM overnight and got the following. The thing that stands out\nto me is that one index (index_unproc_notifications_on_notifiable_type)\ntook 100x longer to scan than the others. That's not the index used in the\nslow query, though.\n\nINFO: vacuuming \"public.notifications\"\nINFO: scanned index \"notifications_pkey\" to remove 16596527 row versions\nDETAIL: CPU 12.11s/11.04u sec elapsed 39.62 sec\nINFO: scanned index \"index_notifications_on_person_id_and_created_at\" to\nremove 16596527 row versions\nDETAIL: CPU 15.86s/49.85u sec elapsed 92.07 sec\nINFO: scanned index \"index_unproc_notifications_on_notifiable_type\" to\nremove 16596527 row versions\nDETAIL: CPU 224.08s/10934.81u sec elapsed 11208.37 sec\nINFO: scanned index \"index_notifications_on_person_id_id\" to remove\n16596527 row versions\nDETAIL: CPU 11.58s/59.54u sec elapsed 91.40 sec\nINFO: scanned index \"index_notifications_on_about_id\" to remove 16596527\nrow versions\nDETAIL: CPU 11.70s/57.75u sec elapsed 87.81 sec\nINFO: scanned index\n\"index_notifications_on_notifiable_type_and_notifiable_id\" to remove\n16596527 row versions\nDETAIL: CPU 19.95s/70.46u sec elapsed 126.08 sec\nINFO: scanned index \"index_notifications_on_created_at\" to remove 16596527\nrow versions\nDETAIL: CPU 5.87s/13.07u sec elapsed 30.69 sec\nINFO: \"notifications\": removed 16596527 row versions in 2569217 pages\nDETAIL: CPU 84.77s/35.24u sec elapsed 295.30 sec\nINFO: index \"notifications_pkey\" now contains 56704023 row versions in\n930088 pages\nDETAIL: 902246 index row versions were removed.\n570997 index pages have been deleted, 570906 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: index \"index_notifications_on_person_id_and_created_at\" now contains\n56704024 row versions in 473208 pages\nDETAIL: 902246 index row versions were removed.\n8765 index pages have been deleted, 8743 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"index_unproc_notifications_on_notifiable_type\" now contains\n56705182 row versions in 1549089 pages\nDETAIL: 13354803 index row versions were removed.\n934133 index pages have been deleted, 182731 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: index \"index_notifications_on_person_id_id\" now contains 56705323\nrow versions in 331156 pages\nDETAIL: 16594039 index row versions were removed.\n4786 index pages have been deleted, 1674 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: index \"index_notifications_on_about_id\" now contains 56705325 row\nversions in 332666 pages\nDETAIL: 16596527 index row versions were removed.\n11240 index pages have been deleted, 2835 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"index_notifications_on_notifiable_type_and_notifiable_id\" now\ncontains 56705325 row versions in 666755 pages\nDETAIL: 16596527 index row versions were removed.\n52936 index pages have been deleted, 2693 are currently reusable.\nCPU 
0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"index_notifications_on_created_at\" now contains 56705331 row\nversions in 196271 pages\nDETAIL: 14874162 index row versions were removed.\n37884 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"notifications\": found 890395 removable, 56698057 nonremovable row\nversions in 2797452 out of 2799880 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 29497175 unused item pointers.\nSkipped 0 pages due to buffer pins.\n0 pages are entirely empty.\nCPU 452.97s/11252.42u sec elapsed 12186.90 sec.\nINFO: vacuuming \"pg_toast.pg_toast_27436\"\nINFO: index \"pg_toast_27436_index\" now contains 72 row versions in 2 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_27436\": found 0 removable, 2 nonremovable row versions in\n1 out of 36 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 3 unused item pointers.\nSkipped 0 pages due to buffer pins.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nThanks,\nWalter\n\nOn Mon, May 20, 2019 at 7:15 PM David Rowley <[email protected]> wrote:It would be good if you could confirm the problem is resolved after a\nvacuum. Maybe run VACUUM VERBOSE on the table and double check\nthere's not some large amount of tuples that are \"nonremovable\".As I say, the problem resolved itself over the next couple of hours. Perhaps something autovacuum did? Or if the index extrema hypothesis is correct, perhaps the new rows being inserted for various users slowly changed that situation?I did a VACUUM overnight and got the following. The thing that stands out to me is that one index (index_unproc_notifications_on_notifiable_type) took 100x longer to scan than the others. 
That's not the index used in the slow query, though.INFO: vacuuming \"public.notifications\"INFO: scanned index \"notifications_pkey\" to remove 16596527 row versionsDETAIL: CPU 12.11s/11.04u sec elapsed 39.62 secINFO: scanned index \"index_notifications_on_person_id_and_created_at\" to remove 16596527 row versionsDETAIL: CPU 15.86s/49.85u sec elapsed 92.07 secINFO: scanned index \"index_unproc_notifications_on_notifiable_type\" to remove 16596527 row versionsDETAIL: CPU 224.08s/10934.81u sec elapsed 11208.37 secINFO: scanned index \"index_notifications_on_person_id_id\" to remove 16596527 row versionsDETAIL: CPU 11.58s/59.54u sec elapsed 91.40 secINFO: scanned index \"index_notifications_on_about_id\" to remove 16596527 row versionsDETAIL: CPU 11.70s/57.75u sec elapsed 87.81 secINFO: scanned index \"index_notifications_on_notifiable_type_and_notifiable_id\" to remove 16596527 row versionsDETAIL: CPU 19.95s/70.46u sec elapsed 126.08 secINFO: scanned index \"index_notifications_on_created_at\" to remove 16596527 row versionsDETAIL: CPU 5.87s/13.07u sec elapsed 30.69 secINFO: \"notifications\": removed 16596527 row versions in 2569217 pagesDETAIL: CPU 84.77s/35.24u sec elapsed 295.30 secINFO: index \"notifications_pkey\" now contains 56704023 row versions in 930088 pagesDETAIL: 902246 index row versions were removed.570997 index pages have been deleted, 570906 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.01 sec.INFO: index \"index_notifications_on_person_id_and_created_at\" now contains 56704024 row versions in 473208 pagesDETAIL: 902246 index row versions were removed.8765 index pages have been deleted, 8743 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"index_unproc_notifications_on_notifiable_type\" now contains 56705182 row versions in 1549089 pagesDETAIL: 13354803 index row versions were removed.934133 index pages have been deleted, 182731 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.02 sec.INFO: index \"index_notifications_on_person_id_id\" now contains 56705323 row versions in 331156 pagesDETAIL: 16594039 index row versions were removed.4786 index pages have been deleted, 1674 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.01 sec.INFO: index \"index_notifications_on_about_id\" now contains 56705325 row versions in 332666 pagesDETAIL: 16596527 index row versions were removed.11240 index pages have been deleted, 2835 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"index_notifications_on_notifiable_type_and_notifiable_id\" now contains 56705325 row versions in 666755 pagesDETAIL: 16596527 index row versions were removed.52936 index pages have been deleted, 2693 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"index_notifications_on_created_at\" now contains 56705331 row versions in 196271 pagesDETAIL: 14874162 index row versions were removed.37884 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: \"notifications\": found 890395 removable, 56698057 nonremovable row versions in 2797452 out of 2799880 pagesDETAIL: 0 dead row versions cannot be removed yet.There were 29497175 unused item pointers.Skipped 0 pages due to buffer pins.0 pages are entirely empty.CPU 452.97s/11252.42u sec elapsed 12186.90 sec.INFO: vacuuming \"pg_toast.pg_toast_27436\"INFO: index \"pg_toast_27436_index\" now contains 72 row versions in 2 pagesDETAIL: 0 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 
0.00 sec.INFO: \"pg_toast_27436\": found 0 removable, 2 nonremovable row versions in 1 out of 36 pagesDETAIL: 0 dead row versions cannot be removed yet.There were 3 unused item pointers.Skipped 0 pages due to buffer pins.0 pages are entirely empty.CPU 0.00s/0.00u sec elapsed 0.00 sec.Thanks,Walter",
"msg_date": "Tue, 21 May 2019 11:12:32 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:12 AM Walter Smith <[email protected]> wrote\n> I did a VACUUM overnight and got the following. The thing that stands out to me is that one index (index_unproc_notifications_on_notifiable_type) took 100x longer to scan than the others. That's not the index used in the slow query, though.\n\nWhat columns are indexed by\nindex_unproc_notifications_on_notifiable_type, and what are their\ndatatypes?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 May 2019 11:15:07 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:15 AM Peter Geoghegan <[email protected]> wrote:\n\n> On Tue, May 21, 2019 at 11:12 AM Walter Smith <[email protected]> wrote\n> > I did a VACUUM overnight and got the following. The thing that stands\n> out to me is that one index (index_unproc_notifications_on_notifiable_type)\n> took 100x longer to scan than the others. That's not the index used in the\n> slow query, though.\n>\n> What columns are indexed by\n> index_unproc_notifications_on_notifiable_type, and what are their\n> datatypes?\n>\n\nIt occurs to me that is a somewhat unusual index -- it tracks unprocessed\nnotifications so it gets an insert and delete for every row, and is\nnormally almost empty.\n\nIndex \"public.index_unproc_notifications_on_notifiable_type\"\n Column | Type | Definition\n-----------------+------------------------+-----------------\n notifiable_type | character varying(255) | notifiable_type\nbtree, for table \"public.notifications\", predicate (processed = false)\n\nThanks,\nWalter\n\nOn Tue, May 21, 2019 at 11:15 AM Peter Geoghegan <[email protected]> wrote:On Tue, May 21, 2019 at 11:12 AM Walter Smith <[email protected]> wrote\n> I did a VACUUM overnight and got the following. The thing that stands out to me is that one index (index_unproc_notifications_on_notifiable_type) took 100x longer to scan than the others. That's not the index used in the slow query, though.\n\nWhat columns are indexed by\nindex_unproc_notifications_on_notifiable_type, and what are their\ndatatypes?It occurs to me that is a somewhat unusual index -- it tracks unprocessed notifications so it gets an insert and delete for every row, and is normally almost empty.Index \"public.index_unproc_notifications_on_notifiable_type\" Column | Type | Definition-----------------+------------------------+----------------- notifiable_type | character varying(255) | notifiable_typebtree, for table \"public.notifications\", predicate (processed = false)Thanks,Walter",
"msg_date": "Tue, 21 May 2019 11:16:05 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:16 AM Walter Smith <[email protected]> wrote:\n> It occurs to me that is a somewhat unusual index -- it tracks unprocessed notifications so it gets an insert and delete for every row, and is normally almost empty.\n\nIs it a very low cardinality index? In other words, is the total\nnumber of distinct keys rather low? Not just at any given time, but\nover time?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 May 2019 11:17:31 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:17 AM Peter Geoghegan <[email protected]> wrote:\n\n> On Tue, May 21, 2019 at 11:16 AM Walter Smith <[email protected]> wrote:\n> > It occurs to me that is a somewhat unusual index -- it tracks\n> unprocessed notifications so it gets an insert and delete for every row,\n> and is normally almost empty.\n>\n> Is it a very low cardinality index? In other words, is the total\n> number of distinct keys rather low? Not just at any given time, but\n> over time?\n\n\nVery low. Probably less than ten over all time. I suspect the only use of\nthe index is to rapidly find the processed=false rows, so the\nnotifiable_type value isn’t important, really. It would probably work just\nas well on any other column.\n\n— Walter\n\nOn Tue, May 21, 2019 at 11:17 AM Peter Geoghegan <[email protected]> wrote:On Tue, May 21, 2019 at 11:16 AM Walter Smith <[email protected]> wrote:\n> It occurs to me that is a somewhat unusual index -- it tracks unprocessed notifications so it gets an insert and delete for every row, and is normally almost empty.\n\nIs it a very low cardinality index? In other words, is the total\nnumber of distinct keys rather low? Not just at any given time, but\nover time?Very low. Probably less than ten over all time. I suspect the only use of the index is to rapidly find the processed=false rows, so the notifiable_type value isn’t important, really. It would probably work just as well on any other column.— Walter",
"msg_date": "Tue, 21 May 2019 11:26:49 -0700",
"msg_from": "Walter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 8:27 PM Walter Smith <[email protected]> wrote:\n\n> On Tue, May 21, 2019 at 11:17 AM Peter Geoghegan <[email protected]> wrote:\n>\n>> On Tue, May 21, 2019 at 11:16 AM Walter Smith <[email protected]>\n>> wrote:\n>> > It occurs to me that is a somewhat unusual index -- it tracks\n>> unprocessed notifications so it gets an insert and delete for every row,\n>> and is normally almost empty.\n>>\n>> Is it a very low cardinality index? In other words, is the total\n>> number of distinct keys rather low? Not just at any given time, but\n>> over time?\n>\n>\n> Very low. Probably less than ten over all time. I suspect the only use of\n> the index is to rapidly find the processed=false rows, so the\n> notifiable_type value isn’t important, really. It would probably work just\n> as well on any other column.\n>\n> — Walter\n>\n>\n>\n>\n\nOn Tue, May 21, 2019 at 8:27 PM Walter Smith <[email protected]> wrote:On Tue, May 21, 2019 at 11:17 AM Peter Geoghegan <[email protected]> wrote:On Tue, May 21, 2019 at 11:16 AM Walter Smith <[email protected]> wrote:\n> It occurs to me that is a somewhat unusual index -- it tracks unprocessed notifications so it gets an insert and delete for every row, and is normally almost empty.\n\nIs it a very low cardinality index? In other words, is the total\nnumber of distinct keys rather low? Not just at any given time, but\nover time?Very low. Probably less than ten over all time. I suspect the only use of the index is to rapidly find the processed=false rows, so the notifiable_type value isn’t important, really. It would probably work just as well on any other column.— Walter",
"msg_date": "Tue, 21 May 2019 20:32:06 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:27 AM Walter Smith <[email protected]> wrote:\n> Very low. Probably less than ten over all time. I suspect the only use of the index is to rapidly find the processed=false rows, so the notifiable_type value isn’t important, really. It would probably work just as well on any other column.\n\nThis problem has been fixed in Postgres 12, which treats heap TID as a\ntiebreaker column within B-Tree indexes. It sounds like you have the\nright idea about how to work around the problem.\n\nVACUUM will need to kill tuples in random locations in the low\ncardinality index, since the order of tuples is unspecified between\nduplicate tuples -- it is more or less random. VACUUM will tend to\ndirty far more pages than is truly necessary in this scenario, because\nthere is no natural temporal locality that concentrates dead tuples in\none or two particular places in the index. This has a far more\nnoticeable impact on VACUUM duration than you might expect, since\nautovacuum is throttled by delays that vary according to how many\npages were dirtied (and other such factors).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 May 2019 11:36:30 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily very slow planning time after a big delete"
}
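On a pre-12 server, one workaround in the spirit of this exchange is to key the partial index on a monotonically increasing column instead of the near-constant notifiable_type, so dead entries stay clustered at one end of the index rather than scattered throughout it. A hedged sketch only (the index name and key column are hypothetical, not something confirmed in the thread):

-- Same predicate, different key: queries that only need the
-- processed = false rows can still use this index.
CREATE INDEX CONCURRENTLY index_unproc_notifications_on_id
    ON notifications (id)
    WHERE processed = false;

-- Once the new index is in place, drop the low-cardinality one.
DROP INDEX CONCURRENTLY index_unproc_notifications_on_notifiable_type;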
] |
[
{
"msg_contents": "Is it efficient to use Postgres as a column store by creating one table per\ncolumn?\n\nI would query it with something like `[...] UNION SELECT value AS <table>\nFROM <table> WHERE value = <value> UNION [...]` to build a row.\n\nI'm thinking since Postgres stores tables in continuous blocks of 16MB each\n(I think that's the default page size?) I would get efficient reads and\nwith parallel queries I could benefit from multiple cores.\n\nThanks!\n\nBest,\nLev\n\nIs it efficient to use Postgres as a column store by creating one table per column?I would query it with something like `[...] UNION SELECT value AS <table> FROM <table> WHERE value = <value> UNION [...]` to build a row.I'm thinking since Postgres stores tables in continuous blocks of 16MB each (I think that's the default page size?) I would get efficient reads and with parallel queries I could benefit from multiple cores.Thanks!Best,Lev",
"msg_date": "Tue, 21 May 2019 21:28:07 -0700",
"msg_from": "Lev Kokotov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use Postgres as a column store by creating one table per column"
},
{
"msg_contents": "On Tue, May 21, 2019 at 09:28:07PM -0700, Lev Kokotov wrote:\n> Is it efficient to use Postgres as a column store by creating one table per\n> column?\n> \n> I would query it with something like `[...] UNION SELECT value AS <table>\n> FROM <table> WHERE value = <value> UNION [...]` to build a row.\n\nI think you mean JOIN not UNION.\n\nIt'd be awful (At one point I tried it very briefly). If you were joining 2,\n10 column tables, that'd be 19 joins. I imagine the tables would be \"serial id\nunique, float value\" or similar, so the execution might not be terrible, as\nit'd be using an index lookup for each column. But the planner would suffer,\nbadly. Don't even try to read EXPLAIN.\n\nActually, the execution would also be hitting at least 2x files per \"column\"\n(one for the index and one for the table data), so that's not great.\n\nAlso, the overhead of a 2-column table is high, so your DB would be much bigger\nand have very high overhead. Sorry to reference a 2ndary source, but..\nhttps://stackoverflow.com/questions/13570613/making-sense-of-postgres-row-sizes\n\n> I'm thinking since Postgres stores tables in continuous blocks of 16MB each\n> (I think that's the default page size?) I would get efficient reads and\n> with parallel queries I could benefit from multiple cores.\n\nDefault page size is 8kb\n\nJustin\n\n\n",
"msg_date": "Tue, 21 May 2019 23:43:09 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use Postgres as a column store by creating one table per column"
},
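To make the layout under discussion concrete, here is a hedged sketch (all names invented) of "one table per column" and the join needed to rebuild a row; every extra column costs another join plus roughly two dozen bytes of per-tuple overhead:

-- One narrow table per logical column, all sharing a row identifier.
CREATE TABLE col_a (rowid bigint PRIMARY KEY, value double precision);
CREATE TABLE col_b (rowid bigint PRIMARY KEY, value text);
CREATE TABLE col_c (rowid bigint PRIMARY KEY, value timestamptz);

-- Reassembling three columns of one logical row already takes two joins.
SELECT a.rowid, a.value AS a, b.value AS b, c.value AS c
FROM col_a a
JOIN col_b b USING (rowid)
JOIN col_c c USING (rowid)
WHERE a.value > 42;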
{
"msg_contents": "Greetings,\n\n* Lev Kokotov ([email protected]) wrote:\n> Is it efficient to use Postgres as a column store by creating one table per\n> column?\n\nShort answer is no, not in a traditional arrangement, anyway. The tuple\noverhead would be extremely painful. It's possible to improve on that,\nbut it requires sacrificing what the tuple header gives you- visibility\ninformation, along with some other things. The question will be if\nthat's acceptable or not.\n\n> I'm thinking since Postgres stores tables in continuous blocks of 16MB each\n> (I think that's the default page size?) I would get efficient reads and\n> with parallel queries I could benefit from multiple cores.\n\nThe page size in PG is 8k, not 16MB.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 15:38:15 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use Postgres as a column store by creating one table per column"
},
{
"msg_contents": "\nOn 22/05/19 4:28 PM, Lev Kokotov wrote:\n> Is it efficient to use Postgres as a column store by creating one \n> table per column?\n>\n> I would query it with something like `[...] UNION SELECT value AS \n> <table> FROM <table> WHERE value = <value> UNION [...]` to build a row.\n>\n> I'm thinking since Postgres stores tables in continuous blocks of 16MB \n> each (I think that's the default page size?) I would get efficient \n> reads and with parallel queries I could benefit from multiple cores.\n>\n>\nTake a look at Zedstore, which is a column store built to plug into v12 \nstorage layer:\n\nhttps://www.postgresql.org/message-id/CALfoeiuF-m5jg51mJUPm5GN8u396o5sA2AF5N97vTRAEDYac7w%40mail.gmail.com\n\n\n\n",
"msg_date": "Sat, 25 May 2019 15:55:52 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use Postgres as a column store by creating one table per column"
}
] |
[
{
"msg_contents": "Hey,\nI'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\nEach dump contains only one database.\nThe sizes :\nA-10GB\nB-20GB\nC-5GB.\n\nFor unclear reason the restore of the third database is taking alot of\ntime. It isnt stuck but it continues creating db rules. This database has\nmore then 400K rules.\n\nI changed a few postgresql.conf parameters :\nshared_buffers = 2GB\neffective_cache_size = 65GB\ncheckpoint_segments =20\ncheckpoint_completion_target = 0.9\nmaintenance_work_mem = 10GB\ncheckpoint_timeout=30min\nwork_mem=64MB\nautovacuum = off\nfull_page_writes=off\nwal_buffers=50MB\n\nmy machine has 31 cpu and 130GB of ram.\n\nAny idea why the restore of the two dbs takes about 15 minutes while the\nthird db which is the smallest takes more than 1 hour ?\nI restore the dump with pg_restore with 5 jobs (-j).\n\nI know that it is an old version, just trying to help..\n\nHey,I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).Each dump contains only one database.The sizes : A-10GBB-20GBC-5GB.For unclear reason the restore of the third database is taking alot of time. It isnt stuck but it continues creating db rules. This database has more then 400K rules. I changed a few postgresql.conf parameters :shared_buffers = 2GBeffective_cache_size = 65GBcheckpoint_segments =20 checkpoint_completion_target = 0.9maintenance_work_mem = 10GBcheckpoint_timeout=30minwork_mem=64MBautovacuum = offfull_page_writes=offwal_buffers=50MBmy machine has 31 cpu and 130GB of ram.Any idea why the restore of the two dbs takes about 15 minutes while the third db which is the smallest takes more than 1 hour ? I restore the dump with pg_restore with 5 jobs (-j). I know that it is an old version, just trying to help..",
"msg_date": "Wed, 22 May 2019 18:26:49 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_restore takes more time on creation of rules"
},
{
"msg_contents": "On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n>Hey,\n>I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n>Each dump contains only one database.\n>The sizes :\n>A-10GB\n>B-20GB\n>C-5GB.\n>\n>For unclear reason the restore of the third database is taking alot of\n>time. It isnt stuck but it continues creating db rules. This database has\n>more then 400K rules.\n>\n\nWhat do you mean by \"rules\"?\n\n>I changed a few postgresql.conf parameters :\n>shared_buffers = 2GB\n>effective_cache_size = 65GB\n>checkpoint_segments =20\n>checkpoint_completion_target = 0.9\n>maintenance_work_mem = 10GB\n>checkpoint_timeout=30min\n>work_mem=64MB\n>autovacuum = off\n>full_page_writes=off\n>wal_buffers=50MB\n>\n>my machine has 31 cpu and 130GB of ram.\n>\n>Any idea why the restore of the two dbs takes about 15 minutes while the\n>third db which is the smallest takes more than 1 hour ? I restore the\n>dump with pg_restore with 5 jobs (-j).\n>\n\nWell, presumably the third database has complexity in other places,\npossibly spending a lot of time on CPU, while the other databases don't\nhave such issue.\n\nWhat would help is a CPU profile, e.g. from perf.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 22 May 2019 17:41:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
{
"msg_contents": "By rules I mean DB rules (simillar to triggers but different)\n\nבתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra <\[email protected]>:\n\n> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n> >Hey,\n> >I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n> >Each dump contains only one database.\n> >The sizes :\n> >A-10GB\n> >B-20GB\n> >C-5GB.\n> >\n> >For unclear reason the restore of the third database is taking alot of\n> >time. It isnt stuck but it continues creating db rules. This database has\n> >more then 400K rules.\n> >\n>\n> What do you mean by \"rules\"?\n>\n> >I changed a few postgresql.conf parameters :\n> >shared_buffers = 2GB\n> >effective_cache_size = 65GB\n> >checkpoint_segments =20\n> >checkpoint_completion_target = 0.9\n> >maintenance_work_mem = 10GB\n> >checkpoint_timeout=30min\n> >work_mem=64MB\n> >autovacuum = off\n> >full_page_writes=off\n> >wal_buffers=50MB\n> >\n> >my machine has 31 cpu and 130GB of ram.\n> >\n> >Any idea why the restore of the two dbs takes about 15 minutes while the\n> >third db which is the smallest takes more than 1 hour ? I restore the\n> >dump with pg_restore with 5 jobs (-j).\n> >\n>\n> Well, presumably the third database has complexity in other places,\n> possibly spending a lot of time on CPU, while the other databases don't\n> have such issue.\n>\n> What would help is a CPU profile, e.g. from perf.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\nBy rules I mean DB rules (simillar to triggers but different)בתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra <[email protected]>:On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n>Hey,\n>I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n>Each dump contains only one database.\n>The sizes :\n>A-10GB\n>B-20GB\n>C-5GB.\n>\n>For unclear reason the restore of the third database is taking alot of\n>time. It isnt stuck but it continues creating db rules. This database has\n>more then 400K rules.\n>\n\nWhat do you mean by \"rules\"?\n\n>I changed a few postgresql.conf parameters :\n>shared_buffers = 2GB\n>effective_cache_size = 65GB\n>checkpoint_segments =20\n>checkpoint_completion_target = 0.9\n>maintenance_work_mem = 10GB\n>checkpoint_timeout=30min\n>work_mem=64MB\n>autovacuum = off\n>full_page_writes=off\n>wal_buffers=50MB\n>\n>my machine has 31 cpu and 130GB of ram.\n>\n>Any idea why the restore of the two dbs takes about 15 minutes while the\n>third db which is the smallest takes more than 1 hour ? I restore the\n>dump with pg_restore with 5 jobs (-j).\n>\n\nWell, presumably the third database has complexity in other places,\npossibly spending a lot of time on CPU, while the other databases don't\nhave such issue.\n\nWhat would help is a CPU profile, e.g. from perf.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 22 May 2019 18:44:29 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
{
"msg_contents": "I'd redirect stderr to a file and tail it to monitor progress.\n\nOn 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n> By rules I mean DB rules (simillar to triggers but different)\n>\n> בתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra \n> <[email protected] <mailto:[email protected]>>:\n>\n> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n> >Hey,\n> >I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n> >Each dump contains only one database.\n> >The sizes :\n> >A-10GB\n> >B-20GB\n> >C-5GB.\n> >\n> >For unclear reason the restore of the third database is taking alot of\n> >time. It isnt stuck but it continues creating db rules. This database has\n> >more then 400K rules.\n> >\n>\n> What do you mean by \"rules\"?\n>\n> >I changed a few postgresql.conf parameters :\n> >shared_buffers = 2GB\n> >effective_cache_size = 65GB\n> >checkpoint_segments =20\n> >checkpoint_completion_target = 0.9\n> >maintenance_work_mem = 10GB\n> >checkpoint_timeout=30min\n> >work_mem=64MB\n> >autovacuum = off\n> >full_page_writes=off\n> >wal_buffers=50MB\n> >\n> >my machine has 31 cpu and 130GB of ram.\n> >\n> >Any idea why the restore of the two dbs takes about 15 minutes while the\n> >third db which is the smallest takes more than 1 hour ? I restore the\n> >dump with pg_restore with 5 jobs (-j).\n> >\n>\n> Well, presumably the third database has complexity in other places,\n> possibly spending a lot of time on CPU, while the other databases don't\n> have such issue.\n>\n> What would help is a CPU profile, e.g. from perf.\n>\n>\n> regards\n>\n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n I'd redirect stderr to a file and tail it to monitor progress.\n\nOn 5/22/19 10:44 AM, Mariel Cherkassky\n wrote:\n\n\n\n\nBy rules I mean DB rules (simillar to\n triggers but different)\n\n\n\nבתאריך יום ד׳, 22 במאי 2019\n ב-18:41 מאת Tomas Vondra <[email protected]>:\n\nOn\n Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky\n wrote:\n >Hey,\n >I'm trying to restore a cluster (9.2) from 3 binary dumps\n (pg_dump -Fc).\n >Each dump contains only one database.\n >The sizes :\n >A-10GB\n >B-20GB\n >C-5GB.\n >\n >For unclear reason the restore of the third database is\n taking alot of\n >time. It isnt stuck but it continues creating db rules.\n This database has\n >more then 400K rules.\n >\n\n What do you mean by \"rules\"?\n\n >I changed a few postgresql.conf parameters :\n >shared_buffers = 2GB\n >effective_cache_size = 65GB\n >checkpoint_segments =20\n >checkpoint_completion_target = 0.9\n >maintenance_work_mem = 10GB\n >checkpoint_timeout=30min\n >work_mem=64MB\n >autovacuum = off\n >full_page_writes=off\n >wal_buffers=50MB\n >\n >my machine has 31 cpu and 130GB of ram.\n >\n >Any idea why the restore of the two dbs takes about 15\n minutes while the\n >third db which is the smallest takes more than 1 hour ? I\n restore the\n >dump with pg_restore with 5 jobs (-j).\n >\n\n Well, presumably the third database has complexity in other\n places,\n possibly spending a lot of time on CPU, while the other\n databases don't\n have such issue.\n\n What would help is a CPU profile, e.g. from perf.\n\n\n regards\n\n -- \n Tomas Vondra http://www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Remote DBA, Training\n & Services\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Wed, 22 May 2019 11:00:51 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
{
"msg_contents": "Basically if I didnt redirect stderr i should be redirected to the screen ?\nI didnt see any errors, I used -v (verbose) when I run the restore, and I\nsee that it restores the rules (but a lot of them..)\n\nבתאריך יום ד׳, 22 במאי 2019 ב-19:01 מאת Ron <[email protected]\n>:\n\n> I'd redirect stderr to a file and tail it to monitor progress.\n>\n> On 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n>\n> By rules I mean DB rules (simillar to triggers but different)\n>\n> בתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra <\n> [email protected]>:\n>\n>> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n>> >Hey,\n>> >I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n>> >Each dump contains only one database.\n>> >The sizes :\n>> >A-10GB\n>> >B-20GB\n>> >C-5GB.\n>> >\n>> >For unclear reason the restore of the third database is taking alot of\n>> >time. It isnt stuck but it continues creating db rules. This database has\n>> >more then 400K rules.\n>> >\n>>\n>> What do you mean by \"rules\"?\n>>\n>> >I changed a few postgresql.conf parameters :\n>> >shared_buffers = 2GB\n>> >effective_cache_size = 65GB\n>> >checkpoint_segments =20\n>> >checkpoint_completion_target = 0.9\n>> >maintenance_work_mem = 10GB\n>> >checkpoint_timeout=30min\n>> >work_mem=64MB\n>> >autovacuum = off\n>> >full_page_writes=off\n>> >wal_buffers=50MB\n>> >\n>> >my machine has 31 cpu and 130GB of ram.\n>> >\n>> >Any idea why the restore of the two dbs takes about 15 minutes while the\n>> >third db which is the smallest takes more than 1 hour ? I restore the\n>> >dump with pg_restore with 5 jobs (-j).\n>> >\n>>\n>> Well, presumably the third database has complexity in other places,\n>> possibly spending a lot of time on CPU, while the other databases don't\n>> have such issue.\n>>\n>> What would help is a CPU profile, e.g. from perf.\n>>\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>>\n> --\n> Angular momentum makes the world go 'round.\n>\n\nBasically if I didnt redirect stderr i should be redirected to the screen ? I didnt see any errors, I used -v (verbose) when I run the restore, and I see that it restores the rules (but a lot of them..)בתאריך יום ד׳, 22 במאי 2019 ב-19:01 מאת Ron <[email protected]>:\n\n I'd redirect stderr to a file and tail it to monitor progress.\n\nOn 5/22/19 10:44 AM, Mariel Cherkassky\n wrote:\n\n\n\nBy rules I mean DB rules (simillar to\n triggers but different)\n\n\n\nבתאריך יום ד׳, 22 במאי 2019\n ב-18:41 מאת Tomas Vondra <[email protected]>:\n\nOn\n Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky\n wrote:\n >Hey,\n >I'm trying to restore a cluster (9.2) from 3 binary dumps\n (pg_dump -Fc).\n >Each dump contains only one database.\n >The sizes :\n >A-10GB\n >B-20GB\n >C-5GB.\n >\n >For unclear reason the restore of the third database is\n taking alot of\n >time. 
It isnt stuck but it continues creating db rules.\n This database has\n >more then 400K rules.\n >\n\n What do you mean by \"rules\"?\n\n >I changed a few postgresql.conf parameters :\n >shared_buffers = 2GB\n >effective_cache_size = 65GB\n >checkpoint_segments =20\n >checkpoint_completion_target = 0.9\n >maintenance_work_mem = 10GB\n >checkpoint_timeout=30min\n >work_mem=64MB\n >autovacuum = off\n >full_page_writes=off\n >wal_buffers=50MB\n >\n >my machine has 31 cpu and 130GB of ram.\n >\n >Any idea why the restore of the two dbs takes about 15\n minutes while the\n >third db which is the smallest takes more than 1 hour ? I\n restore the\n >dump with pg_restore with 5 jobs (-j).\n >\n\n Well, presumably the third database has complexity in other\n places,\n possibly spending a lot of time on CPU, while the other\n databases don't\n have such issue.\n\n What would help is a CPU profile, e.g. from perf.\n\n\n regards\n\n -- \n Tomas Vondra http://www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Remote DBA, Training\n & Services\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Wed, 22 May 2019 19:02:59 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
{
"msg_contents": "Not looking for errors, but redirecting it to a file for posterity.\n\nWere/are the rules being created, but just slowly?\n\nOn 5/22/19 11:02 AM, Mariel Cherkassky wrote:\n> Basically if I didnt redirect stderr i should be redirected to the screen \n> ? I didnt see any errors, I used -v (verbose) when I run the restore, and \n> I see that it restores the rules (but a lot of them..)\n>\n> בתאריך יום ד׳, 22 במאי 2019 ב-19:01 מאת Ron <[email protected] \n> <mailto:[email protected]>>:\n>\n> I'd redirect stderr to a file and tail it to monitor progress.\n>\n> On 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n>> By rules I mean DB rules (simillar to triggers but different)\n>>\n>> בתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra\n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n>> >Hey,\n>> >I'm trying to restore a cluster (9.2) from 3 binary dumps\n>> (pg_dump -Fc).\n>> >Each dump contains only one database.\n>> >The sizes :\n>> >A-10GB\n>> >B-20GB\n>> >C-5GB.\n>> >\n>> >For unclear reason the restore of the third database is taking\n>> alot of\n>> >time. It isnt stuck but it continues creating db rules. This\n>> database has\n>> >more then 400K rules.\n>> >\n>>\n>> What do you mean by \"rules\"?\n>>\n>> >I changed a few postgresql.conf parameters :\n>> >shared_buffers = 2GB\n>> >effective_cache_size = 65GB\n>> >checkpoint_segments =20\n>> >checkpoint_completion_target = 0.9\n>> >maintenance_work_mem = 10GB\n>> >checkpoint_timeout=30min\n>> >work_mem=64MB\n>> >autovacuum = off\n>> >full_page_writes=off\n>> >wal_buffers=50MB\n>> >\n>> >my machine has 31 cpu and 130GB of ram.\n>> >\n>> >Any idea why the restore of the two dbs takes about 15 minutes\n>> while the\n>> >third db which is the smallest takes more than 1 hour ? I\n>> restore the\n>> >dump with pg_restore with 5 jobs (-j).\n>> >\n>>\n>> Well, presumably the third database has complexity in other places,\n>> possibly spending a lot of time on CPU, while the other databases\n>> don't\n>> have such issue.\n>>\n>> What would help is a CPU profile, e.g. from perf.\n>>\n>>\n>> regards\n>>\n>> -- \n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n> -- \n> Angular momentum makes the world go 'round.\n>\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n\n Not looking for errors, but redirecting it to a file for posterity.\n\n Were/are the rules being created, but just slowly?\n\nOn 5/22/19 11:02 AM, Mariel Cherkassky\n wrote:\n\n\n\n\nBasically if I didnt redirect stderr i\n should be redirected to the screen ? I didnt see any errors, I\n used -v (verbose) when I run the restore, and I see that it\n restores the rules (but a lot of them..)\n\n\n\nבתאריך יום ד׳, 22 במאי 2019\n ב-19:01 מאת Ron <[email protected]>:\n\n\n I'd redirect stderr to a file and tail\n it to monitor progress.\n\nOn\n 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n\n\n\nBy rules I mean DB rules (simillar to\n triggers but different)\n\n\n\nבתאריך יום ד׳, 22\n במאי 2019 ב-18:41 מאת Tomas Vondra <[email protected]>:\n\nOn Wed, May 22,\n 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n >Hey,\n >I'm trying to restore a cluster (9.2) from 3\n binary dumps (pg_dump -Fc).\n >Each dump contains only one database.\n >The sizes :\n >A-10GB\n >B-20GB\n >C-5GB.\n >\n >For unclear reason the restore of the third\n database is taking alot of\n >time. It isnt stuck but it continues creating db\n rules. 
This database has\n >more then 400K rules.\n >\n\n What do you mean by \"rules\"?\n\n >I changed a few postgresql.conf parameters :\n >shared_buffers = 2GB\n >effective_cache_size = 65GB\n >checkpoint_segments =20\n >checkpoint_completion_target = 0.9\n >maintenance_work_mem = 10GB\n >checkpoint_timeout=30min\n >work_mem=64MB\n >autovacuum = off\n >full_page_writes=off\n >wal_buffers=50MB\n >\n >my machine has 31 cpu and 130GB of ram.\n >\n >Any idea why the restore of the two dbs takes\n about 15 minutes while the\n >third db which is the smallest takes more than 1\n hour ? I restore the\n >dump with pg_restore with 5 jobs (-j).\n >\n\n Well, presumably the third database has complexity in\n other places,\n possibly spending a lot of time on CPU, while the\n other databases don't\n have such issue.\n\n What would help is a CPU profile, e.g. from perf.\n\n\n regards\n\n -- \n Tomas Vondra http://www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Remote DBA,\n Training & Services\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Wed, 22 May 2019 11:27:40 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
{
"msg_contents": "Well, it isn't taking more than 1 sec to be created, but I guess that there\nare a lot of rules. Still, the DB sze is by far smaller than the other dbs.\n\nOn Wed, May 22, 2019, 7:27 PM Ron <[email protected]> wrote:\n\n>\n> Not looking for errors, but redirecting it to a file for posterity.\n>\n> Were/are the rules being created, but just slowly?\n>\n> On 5/22/19 11:02 AM, Mariel Cherkassky wrote:\n>\n> Basically if I didnt redirect stderr i should be redirected to the screen\n> ? I didnt see any errors, I used -v (verbose) when I run the restore, and I\n> see that it restores the rules (but a lot of them..)\n>\n> בתאריך יום ד׳, 22 במאי 2019 ב-19:01 מאת Ron <[email protected]\n> >:\n>\n>> I'd redirect stderr to a file and tail it to monitor progress.\n>>\n>> On 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n>>\n>> By rules I mean DB rules (simillar to triggers but different)\n>>\n>> בתאריך יום ד׳, 22 במאי 2019 ב-18:41 מאת Tomas Vondra <\n>> [email protected]>:\n>>\n>>> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n>>> >Hey,\n>>> >I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).\n>>> >Each dump contains only one database.\n>>> >The sizes :\n>>> >A-10GB\n>>> >B-20GB\n>>> >C-5GB.\n>>> >\n>>> >For unclear reason the restore of the third database is taking alot of\n>>> >time. It isnt stuck but it continues creating db rules. This database\n>>> has\n>>> >more then 400K rules.\n>>> >\n>>>\n>>> What do you mean by \"rules\"?\n>>>\n>>> >I changed a few postgresql.conf parameters :\n>>> >shared_buffers = 2GB\n>>> >effective_cache_size = 65GB\n>>> >checkpoint_segments =20\n>>> >checkpoint_completion_target = 0.9\n>>> >maintenance_work_mem = 10GB\n>>> >checkpoint_timeout=30min\n>>> >work_mem=64MB\n>>> >autovacuum = off\n>>> >full_page_writes=off\n>>> >wal_buffers=50MB\n>>> >\n>>> >my machine has 31 cpu and 130GB of ram.\n>>> >\n>>> >Any idea why the restore of the two dbs takes about 15 minutes while the\n>>> >third db which is the smallest takes more than 1 hour ? I restore the\n>>> >dump with pg_restore with 5 jobs (-j).\n>>> >\n>>>\n>>> Well, presumably the third database has complexity in other places,\n>>> possibly spending a lot of time on CPU, while the other databases don't\n>>> have such issue.\n>>>\n>>> What would help is a CPU profile, e.g. from perf.\n>>>\n>>>\n>>> regards\n>>>\n>>> --\n>>> Tomas Vondra http://www.2ndQuadrant.com\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>>\n>>>\n>> --\n>> Angular momentum makes the world go 'round.\n>>\n>\n> --\n> Angular momentum makes the world go 'round.\n>\n\nWell, it isn't taking more than 1 sec to be created, but I guess that there are a lot of rules. Still, the DB sze is by far smaller than the other dbs.On Wed, May 22, 2019, 7:27 PM Ron <[email protected]> wrote:\n\n\n Not looking for errors, but redirecting it to a file for posterity.\n\n Were/are the rules being created, but just slowly?\n\nOn 5/22/19 11:02 AM, Mariel Cherkassky\n wrote:\n\n\n\nBasically if I didnt redirect stderr i\n should be redirected to the screen ? 
I didnt see any errors, I\n used -v (verbose) when I run the restore, and I see that it\n restores the rules (but a lot of them..)\n\n\n\nבתאריך יום ד׳, 22 במאי 2019\n ב-19:01 מאת Ron <[email protected]>:\n\n\n I'd redirect stderr to a file and tail\n it to monitor progress.\n\nOn\n 5/22/19 10:44 AM, Mariel Cherkassky wrote:\n\n\n\nBy rules I mean DB rules (simillar to\n triggers but different)\n\n\n\nבתאריך יום ד׳, 22\n במאי 2019 ב-18:41 מאת Tomas Vondra <[email protected]>:\n\nOn Wed, May 22,\n 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:\n >Hey,\n >I'm trying to restore a cluster (9.2) from 3\n binary dumps (pg_dump -Fc).\n >Each dump contains only one database.\n >The sizes :\n >A-10GB\n >B-20GB\n >C-5GB.\n >\n >For unclear reason the restore of the third\n database is taking alot of\n >time. It isnt stuck but it continues creating db\n rules. This database has\n >more then 400K rules.\n >\n\n What do you mean by \"rules\"?\n\n >I changed a few postgresql.conf parameters :\n >shared_buffers = 2GB\n >effective_cache_size = 65GB\n >checkpoint_segments =20\n >checkpoint_completion_target = 0.9\n >maintenance_work_mem = 10GB\n >checkpoint_timeout=30min\n >work_mem=64MB\n >autovacuum = off\n >full_page_writes=off\n >wal_buffers=50MB\n >\n >my machine has 31 cpu and 130GB of ram.\n >\n >Any idea why the restore of the two dbs takes\n about 15 minutes while the\n >third db which is the smallest takes more than 1\n hour ? I restore the\n >dump with pg_restore with 5 jobs (-j).\n >\n\n Well, presumably the third database has complexity in\n other places,\n possibly spending a lot of time on CPU, while the\n other databases don't\n have such issue.\n\n What would help is a CPU profile, e.g. from perf.\n\n\n regards\n\n -- \n Tomas Vondra http://www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Remote DBA,\n Training & Services\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Wed, 22 May 2019 19:44:13 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
},
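For context, the "rules" discussed in this thread are PostgreSQL rewrite rules, which pg_restore recreates one CREATE RULE statement at a time. A minimal, generic example of such an object (table and rule names are illustrative, not taken from the thread):

    -- A rewrite rule is attached to a table and rewrites matching statements.
    CREATE TABLE audit_log (id bigint, note text);
    -- Silently discard any DELETE issued against the table.
    CREATE RULE audit_log_protect AS ON DELETE TO audit_log DO INSTEAD NOTHING;

With several hundred thousand of these objects the dump contains one such statement per rule, which is why the restore spends most of its time in this phase.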
{
"msg_contents": "On Wed, May 22, 2019 at 06:44:29PM +0300, Mariel Cherkassky wrote:\n>By rules I mean DB rules (simillar to triggers but different)\n>\n\nI very much doubt such high number of rules was expected during the\ndesign (especially if it's on a single table), so perhaps there's an\nO(N^2) piece of code somewhere. I suggest you do a bit of profiling, for\nexample using perf [1], which would show where the time is spent.\n\n[1] https://wiki.postgresql.org/wiki/Profiling_with_perf\n\nAnd please stop top-posting, it makes it much harder to follow the\ndiscussion.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 22 May 2019 19:00:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore takes more time on creation of rules"
}
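Alongside a CPU profile, it can help to see where the ~400K rules actually live; a quick sketch using the standard pg_rules system view (the 20-row limit is arbitrary):

    -- Count rewrite rules per table to see whether they concentrate on a few tables.
    SELECT schemaname, tablename, count(*) AS n_rules
    FROM pg_rules
    GROUP BY schemaname, tablename
    ORDER BY n_rules DESC
    LIMIT 20;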
] |
[
{
"msg_contents": "On Tue, 21 May 2019 21:28:07 -0700, Lev Kokotov <[email protected]> \nwrote:\n\n >Is it efficient to use Postgres as a column store by creating one \ntable per\n >column?\n >\n >I would query it with something like `[...] UNION SELECT value AS <table>\n >FROM <table> WHERE value = <value> UNION [...]` to build a row.\n\nI think you mean JOIN.\n\nYou'd need more than that: Postgresql uses MVCC for concurrency, so \nwhenever you update any row in a table, the ordering of the rows within \nthe table changes.� And the JOIN operation inherently is unordered - you \nneed to sort the result deliberately to control ordering.\n\nTo emulate a column-store, at the very least you need a way to associate \nvalues from different \"columns\" that belong to the same \"row\" of the \nvirtual table.� IOW, every value in every \"column\" needs an explicit \n\"row\" identifier.� E.g.,\n\n �� col1 = { rowid, value1 }, col2 = { rowid, value2 }, ...\n\nFor performance you would need to have indexes on at least the rowid in \neach of the \"column\" tables.\n\nThis is a bare minimum and can only work if the columns of your virtual \ntable and the queries against it are application controlled or \nstatically known.� If you want to do something more flexible that will \nsupport ad hoc table modifications, elastically sized values (strings, \nbytes, arrays, JSON, XML), etc. this example is not suffice and the \nimplementation can get very complicated very quickly\n\n\nJustin Pryzby was not joking when he said the performance could be awful \n... at least as compared to a more normal row-oriented structure.� \nPerformance of a query that involves more than a handful of \"columns\", \nin general, will be horrible.� It is up to you to decide whether some \n(maybe little) increase in performance in processing *single* columns \nwill offset likely MASSIVE loss of performance in processing multiple \ncolumns.\n\n\n >I'm thinking since Postgres stores tables in continuous blocks of 16MB \neach\n >(I think that's the default page size?)\n\nDefault page size is 8 KB.� You'd have to recompile to change that, and \nit might break something - a whole lot of code depends on the knowing \nthe size of storage pages.\n\n\nGeorge\n\n\n\n",
"msg_date": "Thu, 23 May 2019 01:08:42 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use Postgres as a column store by creating one table per column"
},
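A minimal sketch of the layout George describes, with illustrative names (none of this is from the original posts):

    -- Every per-column table carries an explicit row identifier.
    CREATE TABLE col1 (rowid bigint PRIMARY KEY, value1 text);
    CREATE TABLE col2 (rowid bigint PRIMARY KEY, value2 numeric);

    -- Reconstructing a single "row" of the virtual table takes one join per column.
    SELECT c1.rowid, c1.value1, c2.value2
    FROM col1 c1
    JOIN col2 c2 USING (rowid)
    WHERE c1.rowid = 42;

With more than a handful of columns the join chain grows accordingly, which is where the performance loss described above comes from.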
{
"msg_contents": "On Thu, May 23, 2019 at 01:08:42AM -0400, George Neuner wrote:\n>On Tue, 21 May 2019 21:28:07 -0700, Lev Kokotov \n><[email protected]> wrote:\n>\n>>Is it efficient to use Postgres as a column store by creating one \n>table per\n>>column?\n>>\n>>I would query it with something like `[...] UNION SELECT value AS <table>\n>>FROM <table> WHERE value = <value> UNION [...]` to build a row.\n>\n>I think you mean JOIN.\n>\n>You'd need more than that: Postgresql uses MVCC for concurrency, so \n>whenever you update any row in a table, the ordering of the rows \n>within the table changes.� And the JOIN operation inherently is \n>unordered - you need to sort the result deliberately to control \n>ordering.\n>\n>To emulate a column-store, at the very least you need a way to \n>associate values from different \"columns\" that belong to the same \n>\"row\" of the virtual table.� IOW, every value in every \"column\" needs \n>an explicit \"row\" identifier.� E.g.,\n>\n>�� col1 = { rowid, value1 }, col2 = { rowid, value2 }, ...\n>\n>For performance you would need to have indexes on at least the rowid \n>in each of the \"column\" tables.\n>\n>This is a bare minimum and can only work if the columns of your \n>virtual table and the queries against it are application controlled or \n>statically known.� If you want to do something more flexible that will \n>support ad hoc table modifications, elastically sized values (strings, \n>bytes, arrays, JSON, XML), etc. this example is not suffice and the \n>implementation can get very complicated very quickly\n>\n>\n>Justin Pryzby was not joking when he said the performance could be \n>awful ... at least as compared to a more normal row-oriented \n>structure.� Performance of a query that involves more than a handful \n>of \"columns\", in general, will be horrible.� It is up to you to decide \n>whether some (maybe little) increase in performance in processing \n>*single* columns will offset likely MASSIVE loss of performance in \n>processing multiple columns.\n>\n\nMaybe take a look at this paper:\n\n http://db.csail.mit.edu/projects/cstore/abadi-sigmod08.pdf\n\nwhich essentially compares this approach to a \"real\" column store.\n\nIt certainly won't give you performance comparable to column store, it\nadds quite a bit of overhead (disk space because of row headers, CPU\nbecause of extra joins, etc.).\n\nAnd it can't give you the column-store benefits - compression and/or\nmore efficient execution.\n\n>\n>>I'm thinking since Postgres stores tables in continuous blocks of \n>16MB each\n>>(I think that's the default page size?)\n>\n>Default page size is 8 KB.� You'd have to recompile to change that, \n>and it might break something - a whole lot of code depends on the \n>knowing the size of storage pages.\n>\n>\n\nRight. And the largest page size is 64kB. But 8kB is a pretty good\ntrade-off, in most cases.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 24 May 2019 20:06:29 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use Postgres as a column store by creating one table per column"
}
] |
[
{
"msg_contents": "Hey,\nI have 2 nodes that are configured with streaming replication (PG 9.6,\nrepmgr 4.3).\nI was trying to upgrade the nodes to PG11 with the doc -\nhttps://www.postgresql.org/docs/11/pgupgrade.html\nEverything goes well until I try to start the secondary and then it fails\non the next error :\n\n2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible\nwith server\n2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was\ninitialized with PG_CONTROL_VERSION 960, but the server was compiled with\nPG_CONTROL_VERSION 1100.\n2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.\n2019-05-23 04:17:02 EDT 23593 LOG: database system is shut down\n\nI upgraded the primary, then I run the rsync command in the primary :\nrsync --archive --delete --hard-links --size-only --no-inc-recursive\n/var/lib/pgsql/data /var/lib/pgsql/11/data/\nsecondary_ip:/var/lib/pgsql/data/\n\nin the secondary I checked the version file and it was 11 :\n[secondary]# cat PG_VERSION\n11\n\nAny idea how to handle it ?\n\nHey,I have 2 nodes that are configured with streaming replication (PG 9.6, repmgr 4.3).I was trying to upgrade the nodes to PG11 with the doc - https://www.postgresql.org/docs/11/pgupgrade.htmlEverything goes well until I try to start the secondary and then it fails on the next error : 2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible with server2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 960, but the server was compiled with PG_CONTROL_VERSION 1100.2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.2019-05-23 04:17:02 EDT 23593 LOG: database system is shut downI upgraded the primary, then I run the rsync command in the primary : rsync --archive --delete --hard-links --size-only --no-inc-recursive /var/lib/pgsql/data /var/lib/pgsql/11/data/ secondary_ip:/var/lib/pgsql/data/in the secondary I checked the version file and it was 11 : [secondary]# cat PG_VERSION11Any idea how to handle it ?",
"msg_date": "Thu, 23 May 2019 11:20:25 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "upgrade to PG11 on secondary fails (no initdb was launched)"
},
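The PG_CONTROL_VERSION from the error message, and the cluster's system identifier, can be read from SQL on a running server (PostgreSQL 9.6 and later) via pg_control_system(); a quick check, not from the thread, that is handy when comparing what primary and standby were actually built from:

    -- Available from PostgreSQL 9.6 onwards; compare the output on both nodes.
    SELECT pg_control_version, system_identifier
    FROM pg_control_system();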
{
"msg_contents": "Hey,\nI have 2 nodes that are configured with streaming replication (PG 9.6,\nrepmgr 4.3).\nI was trying to upgrade the nodes to PG11 with the doc -\nhttps://www.postgresql.org/docs/11/pgupgrade.html\nEverything goes well until I try to start the secondary and then it fails\non the next error :\n\n2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible\nwith server\n2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was\ninitialized with PG_CONTROL_VERSION 960, but the server was compiled with\nPG_CONTROL_VERSION 1100.\n2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.\n2019-05-23 04:17:02 EDT 23593 LOG: database system is shut down\n\nI upgraded the primary, then I run the rsync command in the primary :\nrsync --archive --delete --hard-links --size-only --no-inc-recursive\n/var/lib/pgsql/data /var/lib/pgsql/11/data/\nsecondary_ip:/var/lib/pgsql/data/\n\nin the secondary I checked the version file and it was 11 :\n[secondary]# cat PG_VERSION\n11\n\nAny idea how to handle it ?\n\nHey,I have 2 nodes that are configured with streaming replication (PG 9.6, repmgr 4.3).I was trying to upgrade the nodes to PG11 with the doc - https://www.postgresql.org/docs/11/pgupgrade.htmlEverything goes well until I try to start the secondary and then it fails on the next error : 2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible with server2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 960, but the server was compiled with PG_CONTROL_VERSION 1100.2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.2019-05-23 04:17:02 EDT 23593 LOG: database system is shut downI upgraded the primary, then I run the rsync command in the primary : rsync --archive --delete --hard-links --size-only --no-inc-recursive /var/lib/pgsql/data /var/lib/pgsql/11/data/ secondary_ip:/var/lib/pgsql/data/in the secondary I checked the version file and it was 11 : [secondary]# cat PG_VERSION11Any idea how to handle it ?",
"msg_date": "Thu, 23 May 2019 11:58:29 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Hey,\nI have 2 nodes that are configured with streaming replication (PG 9.6,\nrepmgr 4.3).\nI was trying to upgrade the nodes to PG11 with the doc -\nhttps://www.postgresql.org/docs/11/pgupgrade.html\nEverything goes well until I try to start the secondary and then it fails\non the next error :\n\n2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible\nwith server\n2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was\ninitialized with PG_CONTROL_VERSION 960, but the server was compiled with\nPG_CONTROL_VERSION 1100.\n2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.\n2019-05-23 04:17:02 EDT 23593 LOG: database system is shut down\n\nI upgraded the primary, then I run the rsync command in the primary :\nrsync --archive --delete --hard-links --size-only --no-inc-recursive\n/var/lib/pgsql/data /var/lib/pgsql/11/data/\nsecondary_ip:/var/lib/pgsql/data/\n\nin the secondary I checked the version file and it was 11 :\n[secondary]# cat PG_VERSION\n11\n\nAny idea how to handle it ? I'm sending it to the performance mail list\nbecause no one answered it in the admin list ..\n\nHey,I have 2 nodes that are configured with streaming replication (PG 9.6, repmgr 4.3).I was trying to upgrade the nodes to PG11 with the doc - https://www.postgresql.org/docs/11/pgupgrade.htmlEverything goes well until I try to start the secondary and then it fails on the next error : 2019-05-23 04:17:02 EDT 23593 FATAL: database files are incompatible with server2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 960, but the server was compiled with PG_CONTROL_VERSION 1100.2019-05-23 04:17:02 EDT 23593 HINT: It looks like you need to initdb.2019-05-23 04:17:02 EDT 23593 LOG: database system is shut downI upgraded the primary, then I run the rsync command in the primary : rsync --archive --delete --hard-links --size-only --no-inc-recursive /var/lib/pgsql/data /var/lib/pgsql/11/data/ secondary_ip:/var/lib/pgsql/data/in the secondary I checked the version file and it was 11 : [secondary]# cat PG_VERSION11Any idea how to handle it ? I'm sending it to the performance mail list because no one answered it in the admin list ..",
"msg_date": "Thu, 23 May 2019 14:07:30 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "> 2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 960, but the server was compiled with PG_CONTROL_VERSION 1100.\r\n\r\nIt appears that you have not upgraded the standby server, so either use \"rsync\" or simply destroy and rebuild it from scratch \"repmgr standby clone...\"\r\n\r\n https://www.postgresql.org/docs/11/pgupgrade.html\r\n\r\n Upgrade Streaming Replication and Log-Shipping standby servers\r\n\r\n If you used link mode and have Streaming Replication (see Section 26.2.5) or Log-Shipping (see Section 26.2) standby servers, you can follow these steps to quickly upgrade them. You will not be running pg_upgrade on the standby servers, but rather rsync on the primary. Do not start any servers yet.\r\n\r\n If you did not use link mode, do not have or do not want to use rsync, or want an easier solution, skip the instructions in this section and simply recreate the standby servers once pg_upgrade completes and the new primary is running.\r\n\r\nGreg.\r\n\r\n\r\n\r\n\r\n\r\n________________________________\r\n\r\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\r\n\r\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.\r\n",
"msg_date": "Thu, 23 May 2019 11:23:47 +0000",
"msg_from": "Greg Clough <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Hi Mariel,\n\n\nOn 5/23/19 1:07 PM, Mariel Cherkassky wrote:\n> \n> \n> Hey,\n> \n> I upgraded the primary, then I run the rsync command in the primary : \n> rsync --archive --delete --hard-links --size-only --no-inc-recursive\n> /var/lib/pgsql/data /var/lib/pgsql/11/data/\n> secondary_ip:/var/lib/pgsql/data/\n\nrsync needs only 2 arguments, not 3.\n\nYou are here passing /var/lib/pgsql/data /var/lib/pgsql/11/data/\nsecondary_ip:/var/lib/pgsql/data/\n\nand if you try to do that, you will end up copying the content of the\nfirst folder into the third.\n\nTherefore your secondary database will contain what on the primary is in\n/var/lib/pgsql/data/ (guess, 9.6.0)\n\nAlso, I do not think it best practice (or perhaps not correct at all) to\nuse '--size-only'\n\n> \n> in the secondary I checked the version file and it was 11 : \n> [secondary]# cat PG_VERSION\n> 11\n> \n\nfrom which folder are you running that? And what is the PGDATA of your\nstandby server?\n\n\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Thu, 23 May 2019 13:31:01 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Greetings,\n\nPlease don't post these kinds of questions to this list, it's not the\nright list.\n\nPick the correct list to use in the future, and don't cross-post to\nmultiple lists.\n\nThis list is specifically for performance issues and questions regarding\nPostgreSQL, not about how to upgrade. For that, I would suggest either\n-general OR -admin (not both).\n\n> Any idea how to handle it ? I'm sending it to the performance mail list\n> because no one answered it in the admin list ..\n\nThis isn't an acceptable reason to forward it to another list. These\nlists have specific purposes and should be used for those purposes.\nFurther, no one is under any obligation to respond to questions posed to\nthese lists and any help provided is entirely at the discretion of those\non the list as to if they wish to, and have time to, help, or not.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 23 May 2019 11:06:58 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Greetings (moved to -admin, where it should be...),\n\n* Greg Clough ([email protected]) wrote:\n> > 2019-05-23 04:17:02 EDT 23593 DETAIL: The database cluster was initialized with PG_CONTROL_VERSION 960, but the server was compiled with PG_CONTROL_VERSION 1100.\n> \n> It appears that you have not upgraded the standby server, so either use \"rsync\" or simply destroy and rebuild it from scratch \"repmgr standby clone...\"\n\nYou can't use the rsync method described in the pg_upgrade documentation\nafter the primary has been started, you'll end up with corruption.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 23 May 2019 11:08:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "On Thu, May 23, 2019 at 01:31:01PM +0200, Fabio Pardi wrote:\n> Hi Mariel,\n> \n> \n> On 5/23/19 1:07 PM, Mariel Cherkassky wrote:\n> > \n> > \n> > Hey,\n> > \n> > I upgraded the primary, then I run the rsync command in the primary :�\n> > rsync --archive --delete --hard-links --size-only --no-inc-recursive\n> > /var/lib/pgsql/data /var/lib/pgsql/11/data/\n> > secondary_ip:/var/lib/pgsql/data/\n> \n> rsync needs only 2 arguments, not 3.\n> \n> You are here passing /var/lib/pgsql/data /var/lib/pgsql/11/data/\n> secondary_ip:/var/lib/pgsql/data/\n> \n> and if you try to do that, you will end up copying the content of the\n> first folder into the third.\n> \n> Therefore your secondary database will contain what on the primary is in\n> /var/lib/pgsql/data/ (guess, 9.6.0)\n> \n> Also, I do not think it best practice (or perhaps not correct at all) to\n> use '--size-only'\n\n--size-only is correct, as far as I know.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 23:30:01 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Hi Bruce,\n\n\nOn 6/14/19 5:30 AM, Bruce Momjian wrote:\n\n>> Also, I do not think it best practice (or perhaps not correct at all) to\n>> use '--size-only'\n> \n> --size-only is correct, as far as I know.\n> \n\n\nMaybe I am missing something, but I am of the opinion that --size-only\nshould not be used when syncing database content (and probably in many\nother use cases where content can change over time).\n\nThe reason is that db allocates blocks, 8K by default regardless from\nthe content.\n\nUsing --size-only, tells rsync to only check the size of the blocks.\nThat is: if the block is present on the destination, and is the same\nsize as the origin, then skip.\n\nI understand that in this thread we are contextualizing in a step by\nstep procedure to create a new standby, but I have anyway a few remarks\nabout it (and the documentation where it has been copied from) and I\nwould be glad if you or somebody else could shed some light on it.\n\n\n*) It might happen in some corner cases that when syncing the standby,\nrsync dies and the DBA does not realize it. It will then start the\nmaster and some data gets modified. At the time the DBA realizes the\nissue on the standby, he will stop master and resume the sync.\nChanges happened on the master will then not be propagated to the\nstandby if they happened on files already present on the standby.\n\n\n*) It might be a long shot because I do not have time now to reproduce\nthe situation of the standby at that exact point in time, but I think\nthat --size-only option is there probably to speed up operations. In\nthat case I do not see a reason for it since the data folder on the\nstandby is assumed to be empty\n\n\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:12:29 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 03:12:29PM +0200, Fabio Pardi wrote:\n> Hi Bruce,\n> \n> \n> On 6/14/19 5:30 AM, Bruce Momjian wrote:\n> \n> >> Also, I do not think it best practice (or perhaps not correct at all) to\n> >> use '--size-only'\n> > \n> > --size-only is correct, as far as I know.\n> > \n> \n> \n> Maybe I am missing something, but I am of the opinion that --size-only\n> should not be used when syncing database content (and probably in many\n> other use cases where content can change over time).\n> \n> The reason is that db allocates blocks, 8K by default regardless from\n> the content.\n> \n> Using --size-only, tells rsync to only check the size of the blocks.\n> That is: if the block is present on the destination, and is the same\n> size as the origin, then skip.\n\nThe files are _exactly_ the same on primary and standby, so we don't\nneed to check anything. Frankly, it is really only doing hard linking\nof the files.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 14 Jun 2019 10:39:40 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 10:39:40AM -0400, Bruce Momjian wrote:\n> On Fri, Jun 14, 2019 at 03:12:29PM +0200, Fabio Pardi wrote:\n> > Using --size-only, tells rsync to only check the size of the blocks.\n> > That is: if the block is present on the destination, and is the same\n> > size as the origin, then skip.\n> \n> The files are _exactly_ the same on primary and standby, so we don't\n> need to check anything. Frankly, it is really only doing hard linking\n> of the files.\n\nHere is the description from our docs:\n\n What this does is to record the links created by pg_upgrade's\n link mode that connect files in the old and new clusters on the\n primary server. It then finds matching files in the standby's old\n cluster and creates links for them in the standby's new cluster.\n Files that were not linked on the primary are copied from the\n primary to the standby. (They are usually small.) This provides\n rapid standby upgrades. Unfortunately, rsync needlessly copies\n files associated with temporary and unlogged tables because these\n files don't normally exist on standby servers.\n\nThe primary and standby have to be binary the same or WAL replay would\nnot work on the standby. (Yes, I sometimes forgot how this worked so I\nwrote it down in the docs.) :-)\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 14 Jun 2019 10:44:09 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian ([email protected]) wrote:\n> On Fri, Jun 14, 2019 at 10:39:40AM -0400, Bruce Momjian wrote:\n> > On Fri, Jun 14, 2019 at 03:12:29PM +0200, Fabio Pardi wrote:\n> > > Using --size-only, tells rsync to only check the size of the blocks.\n> > > That is: if the block is present on the destination, and is the same\n> > > size as the origin, then skip.\n> > \n> > The files are _exactly_ the same on primary and standby, so we don't\n> > need to check anything. Frankly, it is really only doing hard linking\n> > of the files.\n> \n> Here is the description from our docs:\n> \n> What this does is to record the links created by pg_upgrade's\n> link mode that connect files in the old and new clusters on the\n> primary server. It then finds matching files in the standby's old\n> cluster and creates links for them in the standby's new cluster.\n> Files that were not linked on the primary are copied from the\n> primary to the standby. (They are usually small.) This provides\n> rapid standby upgrades. Unfortunately, rsync needlessly copies\n> files associated with temporary and unlogged tables because these\n> files don't normally exist on standby servers.\n> \n> The primary and standby have to be binary the same or WAL replay would\n> not work on the standby. (Yes, I sometimes forgot how this worked so I\n> wrote it down in the docs.) :-)\n\nRight- this is *not* a general process for building a replica, this is\nspecifically *only* for when doing a pg_upgrade and *everything* is shut\ndown when it runs, and every step is checked to ensure that there are no\nerrors during the process.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 14 Jun 2019 12:15:38 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
},
{
"msg_contents": "Greetings,\n\n* Fabio Pardi ([email protected]) wrote:\n> I understand that in this thread we are contextualizing in a step by\n> step procedure to create a new standby, but I have anyway a few remarks\n> about it (and the documentation where it has been copied from) and I\n> would be glad if you or somebody else could shed some light on it.\n\nThis is not a procedure for creating a new standby.\n\nTo create a *new* standby, with the primary online, use pg_basebackup or\nanother tool that knows how to issue the appropriate start/stop backup\nand takes care of the WAL.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 14 Jun 2019 12:16:52 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: upgrade to PG11 on secondary fails (no initdb was launched)"
}
] |
[
{
"msg_contents": "Hi all,\n\nSome time ago, I was having trouble with some rather high load OLTP\napplication (in Java, but that doesn't really matter) that was using v1\nUUID's for primary keys and after some time, the bloat of certain\nindexes went quite high.\n\nSo I investigated the PostgreSQL code to see how it is handling UUID's\nwith respect to storage, sorting, aso. but all I could find was that it\nbasically falls back to the 16-byte.\n\nAfter struggling to find a way to optimize things inside the database, I\nreverted to introduce a hack into the application by not shuffling the\ntimestamp bytes for the UUID's, which makes it look quite serial in\nterms of byte order.\n\nWith that, we were able to reduce bloat by magnitudes and finally VACUUM\nalso was able to reclaim index space.\n\nSo, my question now is: Would it make sense for you to handle these\ntime-based UUID's differently internally? Specifically un-shuffling the\ntimestamp before they are going to storage?\n\nA second question would be whether native support functions could be\nintroduced? Most interesting for me would be:\n* timestamp\n* version\n* variant\n\n\nHere are some numbers from the tests I have run (against a PostgreSQL 10\nserver):\n\n1. insert 5,000,000 rows\n2. delete 2,500,000 rows\n\nIndex Bloat:\n idxname | bloat_ratio | bloat_size | real_size\n---------------------+-------------+------------+-----------\n uuid_v1_pkey | 23.6 | 46 MB | 195 MB\n uuid_serial_pkey | 50.4 | 76 MB | 150 MB\n\nHigher ratio for \"serial\", but still lower total index size. :)\n\nNow, the performance of VACUUM is also very interesting here:\n\n# vacuum (verbose, analyze, freeze) uuid_serial;\nINFO: vacuuming \"public.uuid_serial\"\nINFO: index \"uuid_serial_pkey\" now contains 2500001 row versions in\n19253 pages\nDETAIL: 0 index row versions were removed.ce(toast.reltuples, 0) / 4 )\n* bs ) as expected_bytes\n9624 index pages have been deleted, 9624 are currently reusable.\nCPU: user: 0.03 s, system: 0.01 s, elapsed: 0.05 s.t.oid\nINFO: \"uuid_serial\": found 0 removable, 2500001 nonremovable row\nversions in 13515 out of 27028 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 270712\nThere were 94 unused item pointers.\nSkipped 0 pages due to buffer pins, 13513 frozen pages.\n0 pages are entirely empty.be reused\nCPU: user: 0.37 s, system: 0.16 s, elapsed: 2.83 s.\nINFO: analyzing \"public.uuid_serial\"e compressed\nINFO: \"uuid_serial\": scanned 27028 of 27028 pages, containing 2500001\nlive rows and 0 dead rows; 30000 rows in sample, 2500001 estimated total\nrows\nVACUUM schemaname, tablename, can_estimate,\nTime: 3969.812 ms (00:03.970)\n\n# vacuum (verbose, analyze, freeze) uuid_v1;\nINFO: vacuuming \"public.uuid_v1\"\nINFO: scanned index \"uuid_v1_pkey\" to remove 2499999 row versions\nDETAIL: CPU: user: 1.95 s, system: 0.13 s, elapsed: 5.09 s\nINFO: \"uuid_v1\": removed 2499999 row versions in 27028 pages\nDETAIL: CPU: user: 0.22 s, system: 0.26 s, elapsed: 3.93 s\nINFO: index \"uuid_v1_pkey\" now contains 2500001 row versions in 24991 pages\nDETAIL: 2499999 index row versions were removed.\n12111 index pages have been deleted, 0 are currently reusable.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"uuid_v1\": found 1791266 removable, 2500001 nonremovable row\nversions in 27028 out of 27028 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 270716\nThere were 0 unused item pointers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely 
empty.\nCPU: user: 2.90 s, system: 0.71 s, elapsed: 14.54 s.\nINFO: analyzing \"public.uuid_v1\"\nINFO: \"uuid_v1\": scanned 27028 of 27028 pages, containing 2500001 live\nrows and 0 dead rows; 30000 rows in sample, 2500001 estimated total rows\nVACUUM\nTime: 15702.803 ms (00:15.703)\n\n...almost 5x faster!\n\nNow insert another 20 million:\n\nCOPY uuid_serial FROM '...' WITH ( FORMAT text );\nCOPY 20000000\nTime: 76249.142 ms (01:16.249)\n\nCOPY uuid_v1 FROM '...' WITH ( FORMAT text );\nCOPY 20000000\nTime: 804291.611 ms (13:24.292)\n\n...more than 10x faster!\n\n...and the resulting bloat (no VACUUM in between):\n idxname | bloat_ratio | bloat_size | real_size\n---------------------+-------------+------------+-----------\n uuid_v1_pkey | 30.5 | 295 MB | 966 MB\n uuid_serial_pkey | 0.9 | 6056 kB | 677 MB\n\n...still 30% savings in space.\n\n\nCheers,\n\n\tAncoron\n\n\n\n",
"msg_date": "Sat, 25 May 2019 15:45:53 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "UUID v1 optimizations..."
},
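The post does not show the DDL behind these numbers; a hypothetical reconstruction of the setup, assuming the uuid-ossp extension for v1 generation (the "serial" variant was produced inside the application, so it is only indicated here):

    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    CREATE TABLE uuid_v1     (id uuid PRIMARY KEY);
    CREATE TABLE uuid_serial (id uuid PRIMARY KEY);

    -- Standard (byte-shuffled) v1 UUIDs:
    INSERT INTO uuid_v1 SELECT uuid_generate_v1() FROM generate_series(1, 5000000);
    -- uuid_serial was loaded with application-generated v1 UUIDs whose timestamp
    -- bytes were kept in big-endian order, so new values arrive in roughly ascending order.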
{
"msg_contents": "On 2019-05-25 15:45, Ancoron Luciferis wrote:\n> So, my question now is: Would it make sense for you to handle these\n> time-based UUID's differently internally? Specifically un-shuffling the\n> timestamp before they are going to storage?\n\nIt seems unlikely that we would do that, because that would break\nexisting stored UUIDs, and we'd also need to figure out a way to store\nnon-v1 UUIDs under this different scheme. The use case is pretty narrow\ncompared to the enormous effort.\n\nIt is well understood that using UUIDs or other somewhat-random keys\nperform worse than serial-like keys.\n\nBtw., it might be nice to rerun your tests with PostgreSQL 12beta1. The\nbtree storage has gotten some improvements. I don't think it's going to\nfundamentally solve your problem, but it would be useful feedback.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 25 May 2019 16:19:59 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "Ancoron Luciferis <[email protected]> writes:\n> So I investigated the PostgreSQL code to see how it is handling UUID's\n> with respect to storage, sorting, aso. but all I could find was that it\n> basically falls back to the 16-byte.\n\nYup, they're just blobs to us.\n\n> After struggling to find a way to optimize things inside the database, I\n> reverted to introduce a hack into the application by not shuffling the\n> timestamp bytes for the UUID's, which makes it look quite serial in\n> terms of byte order.\n\n> So, my question now is: Would it make sense for you to handle these\n> time-based UUID's differently internally? Specifically un-shuffling the\n> timestamp before they are going to storage?\n\nNo, because\n\n(1) UUID layout is standardized;\n\n(2) such a change would break on-disk compatibility for existing\ndatabases;\n\n(3) particularly for the case of application-generated UUIDs, we do\nnot have enough information to know that this would actually do anything\nuseful;\n\n(4) it in fact *wouldn't* do anything useful, because we'd still have\nto sort UUIDs in the same order as today, meaning that btree index behavior\nwould remain the same as before. Plus UUID comparison would get a lot\nmore complicated and slower than it is now.\n\n(5) even if we ignored all that and did it anyway, it would only help\nfor version-1 UUIDs. The index performance issue would still remain for\nversion-4 (random) UUIDs, which are if anything more common than v1.\n\n\nFWIW, I don't know what tool you're using to get those \"bloat\" numbers,\nbut just because somebody calls it bloat doesn't mean that it is.\nThe normal, steady-state load factor for a btree index is generally\nunderstood to be about 2/3rds, and that looks to be about what\nyou're getting for the regular-UUID-format index. The fact that the\nserially-loaded index has nearly no dead space is because we hack the\npage split logic to make that happen --- but that is a hack, and it's\nnot without downsides. It should *not* be taken to be an indication\nof what you can expect for any other insertion pattern.\n\nThe insertion-speed aspect is a real problem, but the core of that problem\nis that use of any sort of standard-format UUID turns applications that\nmight have had considerable locality of reference into applications that\nhave none. If you can manage to keep your whole index in RAM that would\nnot hurt too much, but as soon as it doesn't fit you have a problem.\nWhen your app has more or less predictable reference patterns it's best\nto design a unique key that matches that, instead of expecting that\nessentially-random keys will work well.\n\nOr in short, hacking up the way your app generates UUIDs is exactly\nthe right way to proceed here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 10:57:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
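Since the thread never names the tool behind the "bloat" numbers, one standard way to measure leaf-page density directly is the contrib pgstattuple extension; a sketch, reusing the index name from the earlier test:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    -- avg_leaf_density around 65-70% is the normal steady state Tom describes.
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('uuid_v1_pkey');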
{
"msg_contents": "On 25/05/2019 16:19, Peter Eisentraut wrote:\n> On 2019-05-25 15:45, Ancoron Luciferis wrote:\n>> So, my question now is: Would it make sense for you to handle these\n>> time-based UUID's differently internally? Specifically un-shuffling the\n>> timestamp before they are going to storage?\n> \n> It seems unlikely that we would do that, because that would break\n> existing stored UUIDs, and we'd also need to figure out a way to store\n> non-v1 UUIDs under this different scheme. The use case is pretty narrow\n> compared to the enormous effort.\n\nI agree, the backwards compatibility really would be a show stopper here.\n\nAbout the \"enormous effort\" I was thinking the simplest \"solution\" would\nbe to have the version being detected at he time when the internal byte\narray is created from the provided representation which then could\ndirectly provide the unshuffled byte order.\n\n> \n> It is well understood that using UUIDs or other somewhat-random keys\n> perform worse than serial-like keys.\n\nYes I know. We're using these UUID's for more than just some primary\nkey, because they also tell use the creation time of the entry and which\n\"node\" of the highly distributed application generated the entry. It is\nlike an audit-log for us without the need for extra columns and of fixed\nsize, which helps performance also at the application level.\n\n> \n> Btw., it might be nice to rerun your tests with PostgreSQL 12beta1. The\n> btree storage has gotten some improvements. I don't think it's going to\n> fundamentally solve your problem, but it would be useful feedback.\n> \n\nThank you for the pointer to 12beta1. I've just read about it and it\nmight help a bit. I'll give it a try, for sure and report back.\n\nI also have to rerun those tests against PG 11 anyway.\n\nCheers,\n\n\tAncoron\n\n\n",
"msg_date": "Sat, 25 May 2019 19:27:21 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On 25/05/2019 16:57, Tom Lane wrote:\n> Ancoron Luciferis <[email protected]> writes:\n>> So I investigated the PostgreSQL code to see how it is handling UUID's\n>> with respect to storage, sorting, aso. but all I could find was that it\n>> basically falls back to the 16-byte.\n> \n> Yup, they're just blobs to us.\n> \n>> After struggling to find a way to optimize things inside the database, I\n>> reverted to introduce a hack into the application by not shuffling the\n>> timestamp bytes for the UUID's, which makes it look quite serial in\n>> terms of byte order.\n> \n>> So, my question now is: Would it make sense for you to handle these\n>> time-based UUID's differently internally? Specifically un-shuffling the\n>> timestamp before they are going to storage?\n> \n> No, because\n> \n> (1) UUID layout is standardized;\n\nYou mean the presentation at the byte-level is. ;)\n\n> \n> (2) such a change would break on-disk compatibility for existing\n> databases;\n\nYes, that certainly is a show-stopper.\n\n> \n> (3) particularly for the case of application-generated UUIDs, we do\n> not have enough information to know that this would actually do anything\n> useful;\n\nWell, not only the layout is standardized, but also there is a semantic\nto it depending on the version. Specifically for version 1, it has:\n1. a timestamp\n2. a clock sequence\n3. a node id\n\nAns as PostgreSQL already provides this pretty concrete data type, it\ncould be a natural consequence to also support the semantic of it.\n\nE.g. the network types also come with a lot of additional operators and\nfunctions. So I don't see a reason not to respect the additional\ncapabilities of a UUID.\n\nFor other versions of UUID's, functions like timestamp would certainly\nnot be available (return NULL?), respecting the concrete semantic.\n\n> \n> (4) it in fact *wouldn't* do anything useful, because we'd still have\n> to sort UUIDs in the same order as today, meaning that btree index behavior\n> would remain the same as before. Plus UUID comparison would get a lot\n> more complicated and slower than it is now.\n\nI get your first sentence, but not your second. I know that when\nchanging the internal byte order we'd have to completed re-compute\neverything on-disk (from table to index data), but why would the sorting\nin the index have to be the same?\n\nAnd actually, comparison logic wouldn't need to be changed at all if the\nbyte order is \"changed\" when the UUID is read in when reading the\nrepresentation into the internal UUID's byte array.\n\nFunction:\nstring_to_uuid(const char *source, pg_uuid_t *uuid);\n\n^^ here I would apply the change. And of course, reverse it for\ngenerating the textual representation.\n\nThat would slow down writes a bit, but that shouldn't be the case\nbecause index insertions are speed up even more.\n\nBut still, on-disk change is still a show-stopper, I guess.\n\n> \n> (5) even if we ignored all that and did it anyway, it would only help\n> for version-1 UUIDs. The index performance issue would still remain for\n> version-4 (random) UUIDs, which are if anything more common than v1.\n\nYes, I am aware that the changes might be of very limited gain. V4\nUUID's are usually used for external identifiers. For internal ones,\nthey don't make sense to me (too long, issues with randomness/enthropie\nunder high load, ...). 
;)\n\nI just recently used these UUID's also together with a function for\nTimescaleDB auto-partitioning to provide the timestamp for the\npartitioning logic instead of the need for a separate timestamp column.\n\nThis is also one of the reasons why I was also asking for native support\nfunctions to extract the timestamp. I am apparently not very good at C\nso I am currently using Python and/or PgPLSQL for it, which is pretty slow.\n\n> \n> \n> FWIW, I don't know what tool you're using to get those \"bloat\" numbers,\n> but just because somebody calls it bloat doesn't mean that it is.\n> The normal, steady-state load factor for a btree index is generally\n> understood to be about 2/3rds, and that looks to be about what\n> you're getting for the regular-UUID-format index. The fact that the\n> serially-loaded index has nearly no dead space is because we hack the\n> page split logic to make that happen --- but that is a hack, and it's\n> not without downsides. It should *not* be taken to be an indication\n> of what you can expect for any other insertion pattern.\n\nOK, understood. I was actually a bit surprised by those numbers myself\nas these \"serial\" UUID's still only have the timestamp bytes in\nascending order, the clock sequence and node is still pretty random (but\nnot inside a single transaction, which might help the hack).\n\n> \n> The insertion-speed aspect is a real problem, but the core of that problem\n> is that use of any sort of standard-format UUID turns applications that\n> might have had considerable locality of reference into applications that\n> have none. If you can manage to keep your whole index in RAM that would\n> not hurt too much, but as soon as it doesn't fit you have a problem.\n> When your app has more or less predictable reference patterns it's best\n> to design a unique key that matches that, instead of expecting that\n> essentially-random keys will work well.\n\nThe system was configured to have more than enough space for the index\nand table data to fit into memory, but I am not sure. How can I verify\nthat? An EXPLAIN on the INSERT apparently doesn't include index insertion.\n\n> \n> Or in short, hacking up the way your app generates UUIDs is exactly\n> the right way to proceed here.\n\nOK. Glad to hear that.\n\nOne last question, though:\nWould it make sense to create a specialized UUID v1 type (e.g. with an\nextension) that does the transformation and delegates for all other\nthings to the existing UUID type support?\n\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n",
"msg_date": "Sat, 25 May 2019 20:20:58 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
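Nothing like this is built in, but both the version and the embedded v1 timestamp can be pulled out in plain SQL from the RFC 4122 byte layout; a sketch (uuid_v1_timestamp is a made-up name, and the arithmetic assumes well-formed version-1 UUIDs):

    -- Version nibble: the high 4 bits of octet 6.
    SELECT get_byte(uuid_send(id), 6) >> 4 AS uuid_version FROM uuid_v1 LIMIT 1;

    -- Embedded v1 timestamp: a 60-bit count of 100 ns ticks since 1582-10-15;
    -- subtracting 122192928000000000 shifts that to the Unix epoch.
    CREATE OR REPLACE FUNCTION uuid_v1_timestamp(u uuid) RETURNS timestamptz AS $$
      SELECT to_timestamp(
               (
                 (
                     ((get_byte(b, 6)::bigint & 15) << 56)
                   | (get_byte(b, 7)::bigint << 48)
                   | (get_byte(b, 4)::bigint << 40)
                   | (get_byte(b, 5)::bigint << 32)
                   | (get_byte(b, 0)::bigint << 24)
                   | (get_byte(b, 1)::bigint << 16)
                   | (get_byte(b, 2)::bigint << 8)
                   |  get_byte(b, 3)::bigint
                 ) - 122192928000000000
               )::double precision / 10000000)
      FROM (SELECT uuid_send(u) AS b) AS t;
    $$ LANGUAGE sql IMMUTABLE;

Used in a WHERE clause or an expression index, this is the same idea as the time_part_uuidv1() helper mentioned later in the thread, with the per-row evaluation cost that goes with it.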
{
"msg_contents": "I am not sure why do you want to change on-disk storage format? If we are\ntalking about indexes, it's more about comparison function (opclass) that\nis used in an index.\nAm I wrong?\n\nсб, 25 трав. 2019 о 11:21 Ancoron Luciferis <\[email protected]> пише:\n\n> On 25/05/2019 16:57, Tom Lane wrote:\n> > Ancoron Luciferis <[email protected]> writes:\n> >> So I investigated the PostgreSQL code to see how it is handling UUID's\n> >> with respect to storage, sorting, aso. but all I could find was that it\n> >> basically falls back to the 16-byte.\n> >\n> > Yup, they're just blobs to us.\n> >\n> >> After struggling to find a way to optimize things inside the database, I\n> >> reverted to introduce a hack into the application by not shuffling the\n> >> timestamp bytes for the UUID's, which makes it look quite serial in\n> >> terms of byte order.\n> >\n> >> So, my question now is: Would it make sense for you to handle these\n> >> time-based UUID's differently internally? Specifically un-shuffling the\n> >> timestamp before they are going to storage?\n> >\n> > No, because\n> >\n> > (1) UUID layout is standardized;\n>\n> You mean the presentation at the byte-level is. ;)\n>\n> >\n> > (2) such a change would break on-disk compatibility for existing\n> > databases;\n>\n> Yes, that certainly is a show-stopper.\n>\n> >\n> > (3) particularly for the case of application-generated UUIDs, we do\n> > not have enough information to know that this would actually do anything\n> > useful;\n>\n> Well, not only the layout is standardized, but also there is a semantic\n> to it depending on the version. Specifically for version 1, it has:\n> 1. a timestamp\n> 2. a clock sequence\n> 3. a node id\n>\n> Ans as PostgreSQL already provides this pretty concrete data type, it\n> could be a natural consequence to also support the semantic of it.\n>\n> E.g. the network types also come with a lot of additional operators and\n> functions. So I don't see a reason not to respect the additional\n> capabilities of a UUID.\n>\n> For other versions of UUID's, functions like timestamp would certainly\n> not be available (return NULL?), respecting the concrete semantic.\n>\n> >\n> > (4) it in fact *wouldn't* do anything useful, because we'd still have\n> > to sort UUIDs in the same order as today, meaning that btree index\n> behavior\n> > would remain the same as before. Plus UUID comparison would get a lot\n> > more complicated and slower than it is now.\n>\n> I get your first sentence, but not your second. I know that when\n> changing the internal byte order we'd have to completed re-compute\n> everything on-disk (from table to index data), but why would the sorting\n> in the index have to be the same?\n>\n> And actually, comparison logic wouldn't need to be changed at all if the\n> byte order is \"changed\" when the UUID is read in when reading the\n> representation into the internal UUID's byte array.\n>\n> Function:\n> string_to_uuid(const char *source, pg_uuid_t *uuid);\n>\n> ^^ here I would apply the change. And of course, reverse it for\n> generating the textual representation.\n>\n> That would slow down writes a bit, but that shouldn't be the case\n> because index insertions are speed up even more.\n>\n> But still, on-disk change is still a show-stopper, I guess.\n>\n> >\n> > (5) even if we ignored all that and did it anyway, it would only help\n> > for version-1 UUIDs. 
The index performance issue would still remain for\n> > version-4 (random) UUIDs, which are if anything more common than v1.\n>\n> Yes, I am aware that the changes might be of very limited gain. V4\n> UUID's are usually used for external identifiers. For internal ones,\n> they don't make sense to me (too long, issues with randomness/enthropie\n> under high load, ...). ;)\n>\n> I just recently used these UUID's also together with a function for\n> TimescaleDB auto-partitioning to provide the timestamp for the\n> partitioning logic instead of the need for a separate timestamp column.\n>\n> This is also one of the reasons why I was also asking for native support\n> functions to extract the timestamp. I am apparently not very good at C\n> so I am currently using Python and/or PgPLSQL for it, which is pretty slow.\n>\n> >\n> >\n> > FWIW, I don't know what tool you're using to get those \"bloat\" numbers,\n> > but just because somebody calls it bloat doesn't mean that it is.\n> > The normal, steady-state load factor for a btree index is generally\n> > understood to be about 2/3rds, and that looks to be about what\n> > you're getting for the regular-UUID-format index. The fact that the\n> > serially-loaded index has nearly no dead space is because we hack the\n> > page split logic to make that happen --- but that is a hack, and it's\n> > not without downsides. It should *not* be taken to be an indication\n> > of what you can expect for any other insertion pattern.\n>\n> OK, understood. I was actually a bit surprised by those numbers myself\n> as these \"serial\" UUID's still only have the timestamp bytes in\n> ascending order, the clock sequence and node is still pretty random (but\n> not inside a single transaction, which might help the hack).\n>\n> >\n> > The insertion-speed aspect is a real problem, but the core of that\n> problem\n> > is that use of any sort of standard-format UUID turns applications that\n> > might have had considerable locality of reference into applications that\n> > have none. If you can manage to keep your whole index in RAM that would\n> > not hurt too much, but as soon as it doesn't fit you have a problem.\n> > When your app has more or less predictable reference patterns it's best\n> > to design a unique key that matches that, instead of expecting that\n> > essentially-random keys will work well.\n>\n> The system was configured to have more than enough space for the index\n> and table data to fit into memory, but I am not sure. How can I verify\n> that? An EXPLAIN on the INSERT apparently doesn't include index insertion.\n>\n> >\n> > Or in short, hacking up the way your app generates UUIDs is exactly\n> > the right way to proceed here.\n>\n> OK. Glad to hear that.\n>\n> One last question, though:\n> Would it make sense to create a specialized UUID v1 type (e.g. with an\n> extension) that does the transformation and delegates for all other\n> things to the existing UUID type support?\n>\n> >\n> > regards, tom lane\n> >\n>\n>\n>\n\nI am not sure why do you want to change on-disk storage format? If we are talking about indexes, it's more about comparison function (opclass) that is used in an index. Am I wrong?сб, 25 трав. 2019 о 11:21 Ancoron Luciferis <[email protected]> пише:On 25/05/2019 16:57, Tom Lane wrote:\n> Ancoron Luciferis <[email protected]> writes:\n>> So I investigated the PostgreSQL code to see how it is handling UUID's\n>> with respect to storage, sorting, aso. 
but all I could find was that it\n>> basically falls back to the 16-byte.\n> \n> Yup, they're just blobs to us.\n> \n>> After struggling to find a way to optimize things inside the database, I\n>> reverted to introduce a hack into the application by not shuffling the\n>> timestamp bytes for the UUID's, which makes it look quite serial in\n>> terms of byte order.\n> \n>> So, my question now is: Would it make sense for you to handle these\n>> time-based UUID's differently internally? Specifically un-shuffling the\n>> timestamp before they are going to storage?\n> \n> No, because\n> \n> (1) UUID layout is standardized;\n\nYou mean the presentation at the byte-level is. ;)\n\n> \n> (2) such a change would break on-disk compatibility for existing\n> databases;\n\nYes, that certainly is a show-stopper.\n\n> \n> (3) particularly for the case of application-generated UUIDs, we do\n> not have enough information to know that this would actually do anything\n> useful;\n\nWell, not only the layout is standardized, but also there is a semantic\nto it depending on the version. Specifically for version 1, it has:\n1. a timestamp\n2. a clock sequence\n3. a node id\n\nAns as PostgreSQL already provides this pretty concrete data type, it\ncould be a natural consequence to also support the semantic of it.\n\nE.g. the network types also come with a lot of additional operators and\nfunctions. So I don't see a reason not to respect the additional\ncapabilities of a UUID.\n\nFor other versions of UUID's, functions like timestamp would certainly\nnot be available (return NULL?), respecting the concrete semantic.\n\n> \n> (4) it in fact *wouldn't* do anything useful, because we'd still have\n> to sort UUIDs in the same order as today, meaning that btree index behavior\n> would remain the same as before. Plus UUID comparison would get a lot\n> more complicated and slower than it is now.\n\nI get your first sentence, but not your second. I know that when\nchanging the internal byte order we'd have to completed re-compute\neverything on-disk (from table to index data), but why would the sorting\nin the index have to be the same?\n\nAnd actually, comparison logic wouldn't need to be changed at all if the\nbyte order is \"changed\" when the UUID is read in when reading the\nrepresentation into the internal UUID's byte array.\n\nFunction:\nstring_to_uuid(const char *source, pg_uuid_t *uuid);\n\n^^ here I would apply the change. And of course, reverse it for\ngenerating the textual representation.\n\nThat would slow down writes a bit, but that shouldn't be the case\nbecause index insertions are speed up even more.\n\nBut still, on-disk change is still a show-stopper, I guess.\n\n> \n> (5) even if we ignored all that and did it anyway, it would only help\n> for version-1 UUIDs. The index performance issue would still remain for\n> version-4 (random) UUIDs, which are if anything more common than v1.\n\nYes, I am aware that the changes might be of very limited gain. V4\nUUID's are usually used for external identifiers. For internal ones,\nthey don't make sense to me (too long, issues with randomness/enthropie\nunder high load, ...). ;)\n\nI just recently used these UUID's also together with a function for\nTimescaleDB auto-partitioning to provide the timestamp for the\npartitioning logic instead of the need for a separate timestamp column.\n\nThis is also one of the reasons why I was also asking for native support\nfunctions to extract the timestamp. 
I am apparently not very good at C\nso I am currently using Python and/or PgPLSQL for it, which is pretty slow.\n\n> \n> \n> FWIW, I don't know what tool you're using to get those \"bloat\" numbers,\n> but just because somebody calls it bloat doesn't mean that it is.\n> The normal, steady-state load factor for a btree index is generally\n> understood to be about 2/3rds, and that looks to be about what\n> you're getting for the regular-UUID-format index. The fact that the\n> serially-loaded index has nearly no dead space is because we hack the\n> page split logic to make that happen --- but that is a hack, and it's\n> not without downsides. It should *not* be taken to be an indication\n> of what you can expect for any other insertion pattern.\n\nOK, understood. I was actually a bit surprised by those numbers myself\nas these \"serial\" UUID's still only have the timestamp bytes in\nascending order, the clock sequence and node is still pretty random (but\nnot inside a single transaction, which might help the hack).\n\n> \n> The insertion-speed aspect is a real problem, but the core of that problem\n> is that use of any sort of standard-format UUID turns applications that\n> might have had considerable locality of reference into applications that\n> have none. If you can manage to keep your whole index in RAM that would\n> not hurt too much, but as soon as it doesn't fit you have a problem.\n> When your app has more or less predictable reference patterns it's best\n> to design a unique key that matches that, instead of expecting that\n> essentially-random keys will work well.\n\nThe system was configured to have more than enough space for the index\nand table data to fit into memory, but I am not sure. How can I verify\nthat? An EXPLAIN on the INSERT apparently doesn't include index insertion.\n\n> \n> Or in short, hacking up the way your app generates UUIDs is exactly\n> the right way to proceed here.\n\nOK. Glad to hear that.\n\nOne last question, though:\nWould it make sense to create a specialized UUID v1 type (e.g. with an\nextension) that does the transformation and delegates for all other\nthings to the existing UUID type support?\n\n> \n> regards, tom lane\n>",
"msg_date": "Sat, 25 May 2019 12:00:08 -0700",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
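The timestamp extraction mentioned above does not strictly require C or PL/pgSQL; a plain SQL function over the textual representation is usually fast enough. A minimal sketch, assuming the standard RFC 4122 v1 field layout (the function name is made up for illustration, nothing like it ships with PostgreSQL):

    CREATE OR REPLACE FUNCTION uuid_v1_timestamp(id uuid)
    RETURNS timestamptz
    LANGUAGE sql IMMUTABLE STRICT AS $$
        SELECT CASE WHEN substring(u FROM 13 FOR 1) = '1' THEN
            to_timestamp((
                ( 'x' || substring(u FROM 14 FOR 3)   -- time_hi (12 bits, version nibble stripped)
                      || substring(u FROM 9  FOR 4)   -- time_mid
                      || substring(u FROM 1  FOR 8)   -- time_low
                )::bit(60)::bigint
                - 122192928000000000                  -- 100 ns ticks between 1582-10-15 and 1970-01-01
            ) / 10000000.0)
        END                                           -- non-v1 UUIDs yield NULL
        FROM replace(id::text, '-', '') AS u;
    $$;

Declared IMMUTABLE, the same function can also back an expression index, which is what the range-query discussion below keeps circling around.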
{
"msg_contents": "On 25/05/2019 21:00, Vitalii Tymchyshyn wrote:\n> I am not sure why do you want to change on-disk storage format? If we\n> are talking about indexes, it's more about comparison function (opclass)\n> that is used in an index. \n> Am I wrong?\n\nI don't \"want\" to change the on-disk format of the v1 UUID's but that\nseems to be the most efficient approach to me (if anyone would ever want\nto implement that) as not changing it would mean to introduce the\nunshuffling of the timestamp bytes in a lot of other places where it\nwould decrease the performance instead of increasing it, not to speak of\nVACUUM, REINDEX, CLUSTER, ....\n\nOne of my use-cases would also benefit a lot here when trying to find\nentries that have been created within a given time range. To do that\n(without an additional timestamp column), I'd end up with a full index\nscan to determine the rows to read as there is no correlation between\nthose (timestamp and value sort order).\n\nFor example I would like to query the DB with:\n\nWHERE id >= '7d9c4835-5378-11ec-0000-000000000000' AND id <\n'5cdcac78-5a9a-11ec-0000-000000000000'\n\n...which is impossible atm. I have written myself a helper function that\nextracts the timestamp, but that is executed (of course) for each and\nevery entry while filtering or increases (if used as an index\nexpression) the write RTT by quite a lot (if you're dealing with\npeak-load style applications like we have):\n\nWHERE time_part_uuidv1(id) >= '2021-12-02 14:02:31.021778+00' AND\ntime_part_uuidv1(id) < '2021-12-11 15:52:37.107212+00'\n\nSo, although possible, it is not really an option for an application\nlike we have (and we don't just want to throw hardware at it).\n\nI don't know if all that makes any sense to you or if I'm just missing a\npiece of the puzzle inside the PostgreSQL code base. If I do, please\ntell me. :)\n\nCheers,\n\n\tAncoron\n\n> \n> сб, 25 трав. 2019 о 11:21 Ancoron Luciferis\n> <[email protected]\n> <mailto:[email protected]>> пише:\n> \n> On 25/05/2019 16:57, Tom Lane wrote:\n> > Ancoron Luciferis <[email protected]\n> <mailto:[email protected]>> writes:\n> >> So I investigated the PostgreSQL code to see how it is handling\n> UUID's\n> >> with respect to storage, sorting, aso. but all I could find was\n> that it\n> >> basically falls back to the 16-byte.\n> >\n> > Yup, they're just blobs to us.\n> >\n> >> After struggling to find a way to optimize things inside the\n> database, I\n> >> reverted to introduce a hack into the application by not\n> shuffling the\n> >> timestamp bytes for the UUID's, which makes it look quite serial in\n> >> terms of byte order.\n> >\n> >> So, my question now is: Would it make sense for you to handle these\n> >> time-based UUID's differently internally? Specifically\n> un-shuffling the\n> >> timestamp before they are going to storage?\n> >\n> > No, because\n> >\n> > (1) UUID layout is standardized;\n> \n> You mean the presentation at the byte-level is. ;)\n> \n> >\n> > (2) such a change would break on-disk compatibility for existing\n> > databases;\n> \n> Yes, that certainly is a show-stopper.\n> \n> >\n> > (3) particularly for the case of application-generated UUIDs, we do\n> > not have enough information to know that this would actually do\n> anything\n> > useful;\n> \n> Well, not only the layout is standardized, but also there is a semantic\n> to it depending on the version. Specifically for version 1, it has:\n> 1. a timestamp\n> 2. a clock sequence\n> 3. 
a node id\n> \n> Ans as PostgreSQL already provides this pretty concrete data type, it\n> could be a natural consequence to also support the semantic of it.\n> \n> E.g. the network types also come with a lot of additional operators and\n> functions. So I don't see a reason not to respect the additional\n> capabilities of a UUID.\n> \n> For other versions of UUID's, functions like timestamp would certainly\n> not be available (return NULL?), respecting the concrete semantic.\n> \n> >\n> > (4) it in fact *wouldn't* do anything useful, because we'd still have\n> > to sort UUIDs in the same order as today, meaning that btree index\n> behavior\n> > would remain the same as before. Plus UUID comparison would get a lot\n> > more complicated and slower than it is now.\n> \n> I get your first sentence, but not your second. I know that when\n> changing the internal byte order we'd have to completed re-compute\n> everything on-disk (from table to index data), but why would the sorting\n> in the index have to be the same?\n> \n> And actually, comparison logic wouldn't need to be changed at all if the\n> byte order is \"changed\" when the UUID is read in when reading the\n> representation into the internal UUID's byte array.\n> \n> Function:\n> string_to_uuid(const char *source, pg_uuid_t *uuid);\n> \n> ^^ here I would apply the change. And of course, reverse it for\n> generating the textual representation.\n> \n> That would slow down writes a bit, but that shouldn't be the case\n> because index insertions are speed up even more.\n> \n> But still, on-disk change is still a show-stopper, I guess.\n> \n> >\n> > (5) even if we ignored all that and did it anyway, it would only help\n> > for version-1 UUIDs. The index performance issue would still\n> remain for\n> > version-4 (random) UUIDs, which are if anything more common than v1.\n> \n> Yes, I am aware that the changes might be of very limited gain. V4\n> UUID's are usually used for external identifiers. For internal ones,\n> they don't make sense to me (too long, issues with randomness/enthropie\n> under high load, ...). ;)\n> \n> I just recently used these UUID's also together with a function for\n> TimescaleDB auto-partitioning to provide the timestamp for the\n> partitioning logic instead of the need for a separate timestamp column.\n> \n> This is also one of the reasons why I was also asking for native support\n> functions to extract the timestamp. I am apparently not very good at C\n> so I am currently using Python and/or PgPLSQL for it, which is\n> pretty slow.\n> \n> >\n> >\n> > FWIW, I don't know what tool you're using to get those \"bloat\"\n> numbers,\n> > but just because somebody calls it bloat doesn't mean that it is.\n> > The normal, steady-state load factor for a btree index is generally\n> > understood to be about 2/3rds, and that looks to be about what\n> > you're getting for the regular-UUID-format index. The fact that the\n> > serially-loaded index has nearly no dead space is because we hack the\n> > page split logic to make that happen --- but that is a hack, and it's\n> > not without downsides. It should *not* be taken to be an indication\n> > of what you can expect for any other insertion pattern.\n> \n> OK, understood. 
I was actually a bit surprised by those numbers myself\n> as these \"serial\" UUID's still only have the timestamp bytes in\n> ascending order, the clock sequence and node is still pretty random (but\n> not inside a single transaction, which might help the hack).\n> \n> >\n> > The insertion-speed aspect is a real problem, but the core of that\n> problem\n> > is that use of any sort of standard-format UUID turns applications\n> that\n> > might have had considerable locality of reference into\n> applications that\n> > have none. If you can manage to keep your whole index in RAM that\n> would\n> > not hurt too much, but as soon as it doesn't fit you have a problem.\n> > When your app has more or less predictable reference patterns it's\n> best\n> > to design a unique key that matches that, instead of expecting that\n> > essentially-random keys will work well.\n> \n> The system was configured to have more than enough space for the index\n> and table data to fit into memory, but I am not sure. How can I verify\n> that? An EXPLAIN on the INSERT apparently doesn't include index\n> insertion.\n> \n> >\n> > Or in short, hacking up the way your app generates UUIDs is exactly\n> > the right way to proceed here.\n> \n> OK. Glad to hear that.\n> \n> One last question, though:\n> Would it make sense to create a specialized UUID v1 type (e.g. with an\n> extension) that does the transformation and delegates for all other\n> things to the existing UUID type support?\n> \n> >\n> > regards, tom lane\n> >\n> \n> \n\n\n\n",
"msg_date": "Sat, 25 May 2019 23:02:01 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
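If the time_part_uuidv1() helper quoted above is declared IMMUTABLE, the per-row evaluation on reads can be avoided with an expression index; the write-side cost raised in the message of course remains, and the table and index names here are placeholders:

    CREATE INDEX events_uuid_ts_idx
        ON events (time_part_uuidv1(id));

    -- the timestamp-range predicate from above can then use an index scan:
    -- WHERE time_part_uuidv1(id) >= '2021-12-02 14:02:31.021778+00'
    --   AND time_part_uuidv1(id) <  '2021-12-11 15:52:37.107212+00'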
{
"msg_contents": "Ancoron Luciferis <[email protected]> writes:\n> On 25/05/2019 16:57, Tom Lane wrote:\n>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>> to sort UUIDs in the same order as today, meaning that btree index behavior\n>> would remain the same as before. Plus UUID comparison would get a lot\n>> more complicated and slower than it is now.\n\n> I get your first sentence, but not your second. I know that when\n> changing the internal byte order we'd have to completed re-compute\n> everything on-disk (from table to index data), but why would the sorting\n> in the index have to be the same?\n\nBecause we aren't going to change the existing sort order of UUIDs.\nWe have no idea what applications might be dependent on that.\n\nAs Vitalii correctly pointed out, your beef is not with the physical\nstorage of UUIDs anyway: you just wish they'd sort differently, since\nthat is what determines the behavior of a btree index. But we aren't\ngoing to change the sort ordering because that's an even bigger\ncompatibility break than changing the physical storage; it'd affect\napplication-visible semantics.\n\nWhat you might want to think about is creating a function that maps\nUUIDs into an ordering that makes sense to you, and then creating\na unique index over that function instead of the raw UUIDs. That\nwould give the results you want without having to negotiate with the\nrest of the world about whether it's okay to change the semantics\nof type uuid.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 17:54:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
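A minimal sketch of that last suggestion, assuming version-1 input only and placeholder object names (uuid_send() is the built-in binary output function, so the byte positions below follow the RFC wire layout):

    CREATE OR REPLACE FUNCTION uuid_v1_sortkey(id uuid)
    RETURNS bytea
    LANGUAGE sql IMMUTABLE STRICT AS $$
        SELECT substring(b FROM 7 FOR 2)   -- time_hi_and_version
            || substring(b FROM 5 FOR 2)   -- time_mid
            || substring(b FROM 1 FOR 4)   -- time_low
            || substring(b FROM 9 FOR 8)   -- clock_seq + node, as tie breaker
        FROM uuid_send(id) AS b;
    $$;

    CREATE UNIQUE INDEX events_id_timeordered_idx
        ON events (uuid_v1_sortkey(id));

The uuid column itself keeps its normal semantics; only the ordering of this one index changes.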
{
"msg_contents": "On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>Ancoron Luciferis <[email protected]> writes:\n>> On 25/05/2019 16:57, Tom Lane wrote:\n>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>> to sort UUIDs in the same order as today, meaning that btree index behavior\n>>> would remain the same as before. Plus UUID comparison would get a lot\n>>> more complicated and slower than it is now.\n>\n>> I get your first sentence, but not your second. I know that when\n>> changing the internal byte order we'd have to completed re-compute\n>> everything on-disk (from table to index data), but why would the sorting\n>> in the index have to be the same?\n>\n>Because we aren't going to change the existing sort order of UUIDs.\n>We have no idea what applications might be dependent on that.\n>\n>As Vitalii correctly pointed out, your beef is not with the physical\n>storage of UUIDs anyway: you just wish they'd sort differently, since\n>that is what determines the behavior of a btree index. But we aren't\n>going to change the sort ordering because that's an even bigger\n>compatibility break than changing the physical storage; it'd affect\n>application-visible semantics.\n>\n>What you might want to think about is creating a function that maps\n>UUIDs into an ordering that makes sense to you, and then creating\n>a unique index over that function instead of the raw UUIDs. That\n>would give the results you want without having to negotiate with the\n>rest of the world about whether it's okay to change the semantics\n>of type uuid.\n>\n\nFWIW that's essentially what I implemented as an extension some time\nago. See [1] for a more detailed explanation and some benchmarks.\n\nThe thing is - it's not really desirable to get perfectly ordered\nordering, because that would mean we never get back to older parts of\nthe index (so if you delete data, we'd never fill that space).\n\n[1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 26 May 2019 00:14:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>> What you might want to think about is creating a function that maps\n>> UUIDs into an ordering that makes sense to you, and then creating\n>> a unique index over that function instead of the raw UUIDs. That\n>> would give the results you want without having to negotiate with the\n>> rest of the world about whether it's okay to change the semantics\n>> of type uuid.\n\n> FWIW that's essentially what I implemented as an extension some time\n> ago. See [1] for a more detailed explanation and some benchmarks.\n\nAlso, another way to attack this is to create a new set of ordering\noperators for UUID and an associated non-default btree opclass.\nThen you declare your index using that opclass and you're done.\nThe key advantage of this way is that the regular UUID equality\noperator can still be a member of that opclass, meaning that searches\nof the form \"uuidcol = constant\" can still use this index, so you\ndon't have to change your queries (at least not for that common case).\nLook at the interrelationship of the regular text btree operators and\nthe \"pattern_ops\" btree operators for a precedent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 18:38:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
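Structurally, that opclass route would look something like the skeleton below. The timestamp-first operators (<#, <=#, >=#, >#) and the uuid_ts_cmp() support function do not exist anywhere and would have to be written first (realistically in C); the sketch only shows how the stock = operator stays a member of the class, as described above:

    CREATE OPERATOR CLASS uuid_v1_ts_ops
        FOR TYPE uuid USING btree AS
            OPERATOR 1 <#  ,
            OPERATOR 2 <=# ,
            OPERATOR 3 =   ,      -- regular uuid equality, so "uuidcol = constant" still matches
            OPERATOR 4 >=# ,
            OPERATOR 5 >#  ,
            FUNCTION 1 uuid_ts_cmp(uuid, uuid);

    CREATE INDEX events_id_ts_idx ON events (id uuid_v1_ts_ops);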
{
"msg_contents": "On 25/05/2019 23:54, Tom Lane wrote:\n> Ancoron Luciferis <[email protected]> writes:\n>> On 25/05/2019 16:57, Tom Lane wrote:\n>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>> to sort UUIDs in the same order as today, meaning that btree index behavior\n>>> would remain the same as before. Plus UUID comparison would get a lot\n>>> more complicated and slower than it is now.\n> \n>> I get your first sentence, but not your second. I know that when\n>> changing the internal byte order we'd have to completed re-compute\n>> everything on-disk (from table to index data), but why would the sorting\n>> in the index have to be the same?\n> \n> Because we aren't going to change the existing sort order of UUIDs.\n> We have no idea what applications might be dependent on that.\n> \n> As Vitalii correctly pointed out, your beef is not with the physical\n> storage of UUIDs anyway: you just wish they'd sort differently, since\n> that is what determines the behavior of a btree index. But we aren't\n> going to change the sort ordering because that's an even bigger\n> compatibility break than changing the physical storage; it'd affect\n> application-visible semantics.\n> \n> What you might want to think about is creating a function that maps\n> UUIDs into an ordering that makes sense to you, and then creating\n> a unique index over that function instead of the raw UUIDs. That\n> would give the results you want without having to negotiate with the\n> rest of the world about whether it's okay to change the semantics\n> of type uuid.\n> \n> \t\t\tregards, tom lane\n> \n\nI understand. Point taken, I really didn't think about someone could\ndepend on an index order of a (pretty random) UUID.\n\nThe whole point of me starting this discussion was about performance in\nmultiple areas, but INSERT performance was really becoming an issue for\nus apart from the index bloat, which was way higher than just the 30% at\nseveral occasions (including out-of-disk-space in the early days), apart\nfrom the fact that the index was regularly dismissed due to it not being\nin memory. In that sense, just creating additional indexes with\nfunctions doesn't really solve the core issues that we had.\n\nNot to mention the performance of VACUUM, among other things.\n\nSo, even we currently \"solved\" a lot of these issues at the application\nlevel, we now have UUID's that look like v1 UUID's but in fact will not\nbe readable (in the representation as returned by PostgreSQL) by any\nother application that doesn't know our specific implementation. This\nforces us to hack other tools written in other languages that would\notherwise understand and handle regular v1 UUID's as well.\n\nI should add that the tests I have made where all running on dedicated\nSSD's, one for the table data, one for the indexes and one for the WAL.\nIf those where running against the same disks the difference would\nprobably be much higher during writes.\n\nI'll think about creating an extension to provide a custom data type\ninstead. So nobody would be at risk and anyone would decide explicitly\nfor it with all consequences.\n\nThank you for your time and precious input. :)\n\n\nCheers,\n\n\tAncoron\n\n\n",
"msg_date": "Sun, 26 May 2019 01:00:54 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On Sat, May 25, 2019 at 06:38:08PM -0400, Tom Lane wrote:\n>Tomas Vondra <[email protected]> writes:\n>> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>>> What you might want to think about is creating a function that maps\n>>> UUIDs into an ordering that makes sense to you, and then creating\n>>> a unique index over that function instead of the raw UUIDs. That\n>>> would give the results you want without having to negotiate with the\n>>> rest of the world about whether it's okay to change the semantics\n>>> of type uuid.\n>\n>> FWIW that's essentially what I implemented as an extension some time\n>> ago. See [1] for a more detailed explanation and some benchmarks.\n>\n>Also, another way to attack this is to create a new set of ordering\n>operators for UUID and an associated non-default btree opclass.\n>Then you declare your index using that opclass and you're done.\n>The key advantage of this way is that the regular UUID equality\n>operator can still be a member of that opclass, meaning that searches\n>of the form \"uuidcol = constant\" can still use this index, so you\n>don't have to change your queries (at least not for that common case).\n>Look at the interrelationship of the regular text btree operators and\n>the \"pattern_ops\" btree operators for a precedent.\n>\n\nPerhaps. But it does not allow to tune how often the values \"wrap\" and,\nwhich I think is an useful capability.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 26 May 2019 01:44:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On 26/05/2019 00:14, Tomas Vondra wrote:\n> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>> Ancoron Luciferis <[email protected]> writes:\n>>> On 25/05/2019 16:57, Tom Lane wrote:\n>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>>> to sort UUIDs in the same order as today, meaning that btree index\n>>>> behavior\n>>>> would remain the same as before. Plus UUID comparison would get a lot\n>>>> more complicated and slower than it is now.\n>>\n>>> I get your first sentence, but not your second. I know that when\n>>> changing the internal byte order we'd have to completed re-compute\n>>> everything on-disk (from table to index data), but why would the sorting\n>>> in the index have to be the same?\n>>\n>> Because we aren't going to change the existing sort order of UUIDs.\n>> We have no idea what applications might be dependent on that.\n>>\n>> As Vitalii correctly pointed out, your beef is not with the physical\n>> storage of UUIDs anyway: you just wish they'd sort differently, since\n>> that is what determines the behavior of a btree index. But we aren't\n>> going to change the sort ordering because that's an even bigger\n>> compatibility break than changing the physical storage; it'd affect\n>> application-visible semantics.\n>>\n>> What you might want to think about is creating a function that maps\n>> UUIDs into an ordering that makes sense to you, and then creating\n>> a unique index over that function instead of the raw UUIDs. That\n>> would give the results you want without having to negotiate with the\n>> rest of the world about whether it's okay to change the semantics\n>> of type uuid.\n>>\n> \n> FWIW that's essentially what I implemented as an extension some time\n> ago. See [1] for a more detailed explanation and some benchmarks.\n\nYes, I've seen that before. Pretty nice work you but together there and\nI'll surely have a look at it but we certainly need the node id in\ncompliance with v1 UUID's so that's why we've been generating UUID's at\nthe application side from day 1.\n\n> \n> The thing is - it's not really desirable to get perfectly ordered\n> ordering, because that would mean we never get back to older parts of\n> the index (so if you delete data, we'd never fill that space).\n\nWouldn't this apply also to any sequential-looking index (e.g. on\nserial)? The main issue with the UUID's is that it almost instantly\nconsumes a big part of the total value space (e.g. first value is\n'01...' and second is coming as 'f3...') which I would assume not being\nvery efficient with btrees (space reclaim? - bloat).\n\nOne of our major concerns is to keep index size small (VACUUM can't be\nrun every minute) to fit into memory next to a lot of others.\n\nI've experimented with the rollover \"prefix\" myself but found that it\nmakes the index too big (same or larger index size than standard v1\nUUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\nalthough INSERT performance wasn't that bad, our sequential UUID's where\nway faster (at least pre-generated and imported with COPY to eliminate\nany value generation impact).\n\nCheers,\n\n\tAncoron\n\n> \n> [1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/\n> \n> \n> regards\n> \n\n\n\n",
"msg_date": "Sun, 26 May 2019 01:49:30 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n>On 26/05/2019 00:14, Tomas Vondra wrote:\n>> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>>> Ancoron Luciferis <[email protected]> writes:\n>>>> On 25/05/2019 16:57, Tom Lane wrote:\n>>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>>>> to sort UUIDs in the same order as today, meaning that btree index\n>>>>> behavior\n>>>>> would remain the same as before.� Plus UUID comparison would get a lot\n>>>>> more complicated and slower than it is now.\n>>>\n>>>> I get your first sentence, but not your second. I know that when\n>>>> changing the internal byte order we'd have to completed re-compute\n>>>> everything on-disk (from table to index data), but why would the sorting\n>>>> in the index have to be the same?\n>>>\n>>> Because we aren't going to change the existing sort order of UUIDs.\n>>> We have no idea what applications might be dependent on that.\n>>>\n>>> As Vitalii correctly pointed out, your beef is not with the physical\n>>> storage of UUIDs anyway: you just wish they'd sort differently, since\n>>> that is what determines the behavior of a btree index.� But we aren't\n>>> going to change the sort ordering because that's an even bigger\n>>> compatibility break than changing the physical storage; it'd affect\n>>> application-visible semantics.\n>>>\n>>> What you might want to think about is creating a function that maps\n>>> UUIDs into an ordering that makes sense to you, and then creating\n>>> a unique index over that function instead of the raw UUIDs.� That\n>>> would give the results you want without having to negotiate with the\n>>> rest of the world about whether it's okay to change the semantics\n>>> of type uuid.\n>>>\n>>\n>> FWIW that's essentially what I implemented as an extension some time\n>> ago. See [1] for a more detailed explanation and some benchmarks.\n>\n>Yes, I've seen that before. Pretty nice work you but together there and\n>I'll surely have a look at it but we certainly need the node id in\n>compliance with v1 UUID's so that's why we've been generating UUID's at\n>the application side from day 1.\n>\n>>\n>> The thing is - it's not really desirable to get perfectly ordered\n>> ordering, because that would mean we never get back to older parts of\n>> the index (so if you delete data, we'd never fill that space).\n>\n>Wouldn't this apply also to any sequential-looking index (e.g. on\n>serial)?\n\nYes, it does apply to any index on sequential (ordered) data. If you\ndelete data from the \"old\" part (but not all, so the pages don't get\ncompletely empty), that space is lost. It's available for new data, but\nif we only insert to \"new\" part of the index, that's useless.\n\n> The main issue with the UUID's is that it almost instantly\n>consumes a big part of the total value space (e.g. first value is\n>'01...' and second is coming as 'f3...') which I would assume not being\n>very efficient with btrees (space reclaim? - bloat).\n>\n\nI don't understand what you mean here. Perhaps you misunderstand how\nbtree indexes grow? It's not like we allocate separate pages for\ndifferent values/prefixes - we insert the data until a page gets full,\nthen it's split in half. 
There is some dependency on the order in which\nthe values are inserted, but AFAIK random order is generally fine.\n\n>One of our major concerns is to keep index size small (VACUUM can't be\n>run every minute) to fit into memory next to a lot of others.\n>\n\nI don't think this has much to do with vacuum - I don't see how it's\nrelated to the ordering of generated UUID values. And I don't see where\nthe \"can't run vacuum every minute\" comes from.\n\n>I've experimented with the rollover \"prefix\" myself but found that it\n>makes the index too big (same or larger index size than standard v1\n>UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n>although INSERT performance wasn't that bad, our sequential UUID's where\n>way faster (at least pre-generated and imported with COPY to eliminate\n>any value generation impact).\n>\n\nI very much doubt that has anything to do with the prefix. You'll need\nto share more details about how you did your tests.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 26 May 2019 03:09:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "I'm not worthy to post here, but a bit of a random thought.\n\nIf I've followed the conversation correctly, the reason for a V1 UUID is\npartly to order and partition rows by a timestamp value, but without the\ncost of a timestamp column. As I was told as a boy, \"Smart numbers aren't.\"\nIs it _absolutely_ the case that you can't afford another column? I don't\nknow the ins and outs of the Postgres row format, but my impression is that\nit's a fixed size, so you may be able to add the column without splitting\nrows? Anyway, even if that's not true and the extra column costs you disk\nspace, is it the index that concerns you? Have you considered a timestamp\ncolumn, or a numeric column with an epoch offset, and a BRIN index? If you\ninsert data is in pretty much chronological order, that might work well for\nyou.\n\nBest of luck, I've enjoyed following the commentary.\n\n\nOn Sun, May 26, 2019 at 11:09 AM Tomas Vondra <[email protected]>\nwrote:\n\n> On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n> >On 26/05/2019 00:14, Tomas Vondra wrote:\n> >> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n> >>> Ancoron Luciferis <[email protected]> writes:\n> >>>> On 25/05/2019 16:57, Tom Lane wrote:\n> >>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n> >>>>> to sort UUIDs in the same order as today, meaning that btree index\n> >>>>> behavior\n> >>>>> would remain the same as before. Plus UUID comparison would get a\n> lot\n> >>>>> more complicated and slower than it is now.\n> >>>\n> >>>> I get your first sentence, but not your second. I know that when\n> >>>> changing the internal byte order we'd have to completed re-compute\n> >>>> everything on-disk (from table to index data), but why would the\n> sorting\n> >>>> in the index have to be the same?\n> >>>\n> >>> Because we aren't going to change the existing sort order of UUIDs.\n> >>> We have no idea what applications might be dependent on that.\n> >>>\n> >>> As Vitalii correctly pointed out, your beef is not with the physical\n> >>> storage of UUIDs anyway: you just wish they'd sort differently, since\n> >>> that is what determines the behavior of a btree index. But we aren't\n> >>> going to change the sort ordering because that's an even bigger\n> >>> compatibility break than changing the physical storage; it'd affect\n> >>> application-visible semantics.\n> >>>\n> >>> What you might want to think about is creating a function that maps\n> >>> UUIDs into an ordering that makes sense to you, and then creating\n> >>> a unique index over that function instead of the raw UUIDs. That\n> >>> would give the results you want without having to negotiate with the\n> >>> rest of the world about whether it's okay to change the semantics\n> >>> of type uuid.\n> >>>\n> >>\n> >> FWIW that's essentially what I implemented as an extension some time\n> >> ago. See [1] for a more detailed explanation and some benchmarks.\n> >\n> >Yes, I've seen that before. Pretty nice work you but together there and\n> >I'll surely have a look at it but we certainly need the node id in\n> >compliance with v1 UUID's so that's why we've been generating UUID's at\n> >the application side from day 1.\n> >\n> >>\n> >> The thing is - it's not really desirable to get perfectly ordered\n> >> ordering, because that would mean we never get back to older parts of\n> >> the index (so if you delete data, we'd never fill that space).\n> >\n> >Wouldn't this apply also to any sequential-looking index (e.g. 
on\n> >serial)?\n>\n> Yes, it does apply to any index on sequential (ordered) data. If you\n> delete data from the \"old\" part (but not all, so the pages don't get\n> completely empty), that space is lost. It's available for new data, but\n> if we only insert to \"new\" part of the index, that's useless.\n>\n> > The main issue with the UUID's is that it almost instantly\n> >consumes a big part of the total value space (e.g. first value is\n> >'01...' and second is coming as 'f3...') which I would assume not being\n> >very efficient with btrees (space reclaim? - bloat).\n> >\n>\n> I don't understand what you mean here. Perhaps you misunderstand how\n> btree indexes grow? It's not like we allocate separate pages for\n> different values/prefixes - we insert the data until a page gets full,\n> then it's split in half. There is some dependency on the order in which\n> the values are inserted, but AFAIK random order is generally fine.\n>\n> >One of our major concerns is to keep index size small (VACUUM can't be\n> >run every minute) to fit into memory next to a lot of others.\n> >\n>\n> I don't think this has much to do with vacuum - I don't see how it's\n> related to the ordering of generated UUID values. And I don't see where\n> the \"can't run vacuum every minute\" comes from.\n>\n> >I've experimented with the rollover \"prefix\" myself but found that it\n> >makes the index too big (same or larger index size than standard v1\n> >UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n> >although INSERT performance wasn't that bad, our sequential UUID's where\n> >way faster (at least pre-generated and imported with COPY to eliminate\n> >any value generation impact).\n> >\n>\n> I very much doubt that has anything to do with the prefix. You'll need\n> to share more details about how you did your tests.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\nI'm not worthy to post here, but a bit of a random thought.If I've followed the conversation correctly, the reason for a V1 UUID is partly to order and partition rows by a timestamp value, but without the cost of a timestamp column. As I was told as a boy, \"Smart numbers aren't.\" Is it _absolutely_ the case that you can't afford another column? I don't know the ins and outs of the Postgres row format, but my impression is that it's a fixed size, so you may be able to add the column without splitting rows? Anyway, even if that's not true and the extra column costs you disk space, is it the index that concerns you? Have you considered a timestamp column, or a numeric column with an epoch offset, and a BRIN index? If you insert data is in pretty much chronological order, that might work well for you.Best of luck, I've enjoyed following the commentary.On Sun, May 26, 2019 at 11:09 AM Tomas Vondra <[email protected]> wrote:On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n>On 26/05/2019 00:14, Tomas Vondra wrote:\n>> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>>> Ancoron Luciferis <[email protected]> writes:\n>>>> On 25/05/2019 16:57, Tom Lane wrote:\n>>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>>>> to sort UUIDs in the same order as today, meaning that btree index\n>>>>> behavior\n>>>>> would remain the same as before. Plus UUID comparison would get a lot\n>>>>> more complicated and slower than it is now.\n>>>\n>>>> I get your first sentence, but not your second. 
I know that when\n>>>> changing the internal byte order we'd have to completed re-compute\n>>>> everything on-disk (from table to index data), but why would the sorting\n>>>> in the index have to be the same?\n>>>\n>>> Because we aren't going to change the existing sort order of UUIDs.\n>>> We have no idea what applications might be dependent on that.\n>>>\n>>> As Vitalii correctly pointed out, your beef is not with the physical\n>>> storage of UUIDs anyway: you just wish they'd sort differently, since\n>>> that is what determines the behavior of a btree index. But we aren't\n>>> going to change the sort ordering because that's an even bigger\n>>> compatibility break than changing the physical storage; it'd affect\n>>> application-visible semantics.\n>>>\n>>> What you might want to think about is creating a function that maps\n>>> UUIDs into an ordering that makes sense to you, and then creating\n>>> a unique index over that function instead of the raw UUIDs. That\n>>> would give the results you want without having to negotiate with the\n>>> rest of the world about whether it's okay to change the semantics\n>>> of type uuid.\n>>>\n>>\n>> FWIW that's essentially what I implemented as an extension some time\n>> ago. See [1] for a more detailed explanation and some benchmarks.\n>\n>Yes, I've seen that before. Pretty nice work you but together there and\n>I'll surely have a look at it but we certainly need the node id in\n>compliance with v1 UUID's so that's why we've been generating UUID's at\n>the application side from day 1.\n>\n>>\n>> The thing is - it's not really desirable to get perfectly ordered\n>> ordering, because that would mean we never get back to older parts of\n>> the index (so if you delete data, we'd never fill that space).\n>\n>Wouldn't this apply also to any sequential-looking index (e.g. on\n>serial)?\n\nYes, it does apply to any index on sequential (ordered) data. If you\ndelete data from the \"old\" part (but not all, so the pages don't get\ncompletely empty), that space is lost. It's available for new data, but\nif we only insert to \"new\" part of the index, that's useless.\n\n> The main issue with the UUID's is that it almost instantly\n>consumes a big part of the total value space (e.g. first value is\n>'01...' and second is coming as 'f3...') which I would assume not being\n>very efficient with btrees (space reclaim? - bloat).\n>\n\nI don't understand what you mean here. Perhaps you misunderstand how\nbtree indexes grow? It's not like we allocate separate pages for\ndifferent values/prefixes - we insert the data until a page gets full,\nthen it's split in half. There is some dependency on the order in which\nthe values are inserted, but AFAIK random order is generally fine.\n\n>One of our major concerns is to keep index size small (VACUUM can't be\n>run every minute) to fit into memory next to a lot of others.\n>\n\nI don't think this has much to do with vacuum - I don't see how it's\nrelated to the ordering of generated UUID values. And I don't see where\nthe \"can't run vacuum every minute\" comes from.\n\n>I've experimented with the rollover \"prefix\" myself but found that it\n>makes the index too big (same or larger index size than standard v1\n>UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n>although INSERT performance wasn't that bad, our sequential UUID's where\n>way faster (at least pre-generated and imported with COPY to eliminate\n>any value generation impact).\n>\n\nI very much doubt that has anything to do with the prefix. 
You'll need\nto share more details about how you did your tests.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 26 May 2019 14:27:05 +1000",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
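For completeness, the extra-column-plus-BRIN idea from the message above would look roughly like this (table and column names are placeholders). A BRIN index stores only a min/max summary per block range, so it stays tiny compared to a btree as long as rows arrive in roughly chronological order:

    ALTER TABLE events
        ADD COLUMN created_at timestamptz NOT NULL DEFAULT now();

    CREATE INDEX events_created_brin
        ON events USING brin (created_at);

    -- block ranges whose min/max cannot match are skipped entirely:
    SELECT * FROM events
    WHERE created_at >= '2021-12-02' AND created_at < '2021-12-11';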
{
"msg_contents": "On 26/05/2019 03:09, Tomas Vondra wrote:\n> On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n>> On 26/05/2019 00:14, Tomas Vondra wrote:\n>>> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>>>> Ancoron Luciferis <[email protected]> writes:\n>>>>> On 25/05/2019 16:57, Tom Lane wrote:\n>>>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>>>>> to sort UUIDs in the same order as today, meaning that btree index\n>>>>>> behavior\n>>>>>> would remain the same as before. Plus UUID comparison would get a\n>>>>>> lot\n>>>>>> more complicated and slower than it is now.\n>>>>\n>>>>> I get your first sentence, but not your second. I know that when\n>>>>> changing the internal byte order we'd have to completed re-compute\n>>>>> everything on-disk (from table to index data), but why would the\n>>>>> sorting\n>>>>> in the index have to be the same?\n>>>>\n>>>> Because we aren't going to change the existing sort order of UUIDs.\n>>>> We have no idea what applications might be dependent on that.\n>>>>\n>>>> As Vitalii correctly pointed out, your beef is not with the physical\n>>>> storage of UUIDs anyway: you just wish they'd sort differently, since\n>>>> that is what determines the behavior of a btree index. But we aren't\n>>>> going to change the sort ordering because that's an even bigger\n>>>> compatibility break than changing the physical storage; it'd affect\n>>>> application-visible semantics.\n>>>>\n>>>> What you might want to think about is creating a function that maps\n>>>> UUIDs into an ordering that makes sense to you, and then creating\n>>>> a unique index over that function instead of the raw UUIDs. That\n>>>> would give the results you want without having to negotiate with the\n>>>> rest of the world about whether it's okay to change the semantics\n>>>> of type uuid.\n>>>>\n>>>\n>>> FWIW that's essentially what I implemented as an extension some time\n>>> ago. See [1] for a more detailed explanation and some benchmarks.\n>>\n>> Yes, I've seen that before. Pretty nice work you but together there and\n>> I'll surely have a look at it but we certainly need the node id in\n>> compliance with v1 UUID's so that's why we've been generating UUID's at\n>> the application side from day 1.\n>>\n>>>\n>>> The thing is - it's not really desirable to get perfectly ordered\n>>> ordering, because that would mean we never get back to older parts of\n>>> the index (so if you delete data, we'd never fill that space).\n>>\n>> Wouldn't this apply also to any sequential-looking index (e.g. on\n>> serial)?\n> \n> Yes, it does apply to any index on sequential (ordered) data. If you\n> delete data from the \"old\" part (but not all, so the pages don't get\n> completely empty), that space is lost. It's available for new data, but\n> if we only insert to \"new\" part of the index, that's useless.\n\nOK, thanks for clearing that up for me.\n\n> \n>> The main issue with the UUID's is that it almost instantly\n>> consumes a big part of the total value space (e.g. first value is\n>> '01...' and second is coming as 'f3...') which I would assume not being\n>> very efficient with btrees (space reclaim? - bloat).\n>>\n> \n> I don't understand what you mean here. Perhaps you misunderstand how\n> btree indexes grow? It's not like we allocate separate pages for\n> different values/prefixes - we insert the data until a page gets full,\n> then it's split in half. 
There is some dependency on the order in which\n> the values are inserted, but AFAIK random order is generally fine.\n\nOK, I might not understand the basics of the btree implementation. Sorry\nfor that.\n\nHowever, one of our issues with standard v1 UUID's was bloat of the\nindexes, although we kept only a few months of data in it. I think, this\nwas due to the pages still containing at least one value and not\nreclaimed by vacuum. It just continued to grow.\n\nNow, as we have this different ever-increasing prefix, we still have\nsome constant growing but we see that whenever historic data get's\ndeleted (in a separate process), space get's reclaimed.\n\n\n> \n>> One of our major concerns is to keep index size small (VACUUM can't be\n>> run every minute) to fit into memory next to a lot of others.\n>>\n> \n> I don't think this has much to do with vacuum - I don't see how it's\n> related to the ordering of generated UUID values. And I don't see where\n> the \"can't run vacuum every minute\" comes from.\n\nOK, numbers (after VACUUM) which I really found strange using the query\nfrom pgexperts [1]:\n\n index_name | bloat_pct | bloat_mb | index_mb | table_mb\n---------------------+-----------+----------+----------+----------\n uuid_v1_pkey | 38 | 363 | 965.742 | 950.172\n uuid_serial_pkey | 11 | 74 | 676.844 | 950.172\n uuid_serial_8_pkey | 46 | 519 | 1122.031 | 950.172\n uuid_serial_16_pkey | 39 | 389 | 991.195 | 950.172\n\n...where the \"8\" and \"16\" is a \"shift\" of the timestamp value,\nimplemented with:\n\ntimestamp = (timestamp >>> (60 - shift)) | (timestamp << (shift + 4))\n\nIf someone could shed some light on why that is (the huge difference in\nindex sizes) I'd be happy.\n\n> \n>> I've experimented with the rollover \"prefix\" myself but found that it\n>> makes the index too big (same or larger index size than standard v1\n>> UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n>> although INSERT performance wasn't that bad, our sequential UUID's where\n>> way faster (at least pre-generated and imported with COPY to eliminate\n>> any value generation impact).\n>>\n> \n> I very much doubt that has anything to do with the prefix. You'll need\n> to share more details about how you did your tests.\n\nOK, I'll see if I can prepare something and publish it.\n\n> \n> \n> regards\n> \n\n\nRefs:\n[1]\nhttps://github.com/pgexperts/pgx_scripts/blob/master/bloat/index_bloat_check.sql\n\n\n",
"msg_date": "Sun, 26 May 2019 11:01:36 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
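The pgexperts query referenced above only estimates bloat. The pgstattuple extension shipped in contrib reports the actual leaf density per index, which may help narrow down where the difference between the four variants comes from (index name taken from the table above; repeat per index):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('uuid_serial_8_pkey');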
{
"msg_contents": "On 26/05/2019 06:27, Morris de Oryx wrote:\n> I'm not worthy to post here, but a bit of a random thought.\n> \n> If I've followed the conversation correctly, the reason for a V1 UUID is\n> partly to order and partition rows by a timestamp value, but without the\n> cost of a timestamp column. As I was told as a boy, \"Smart numbers\n> aren't.\" Is it _absolutely_ the case that you can't afford another\n> column? I don't know the ins and outs of the Postgres row format, but my\n> impression is that it's a fixed size, so you may be able to add the\n> column without splitting rows? Anyway, even if that's not true and the\n> extra column costs you disk space, is it the index that concerns you? \n> Have you considered a timestamp column, or a numeric column with an\n> epoch offset, and a BRIN index? If you insert data is in pretty much\n> chronological order, that might work well for you.\n\nExactly, we are using the actual information combined within a v1 UUID\nin multiple places and would like to avoid redundancy of information in\nthe database as we strive to keep as much of it in memory and\npartitioning as well as timestamp (range) searching and sorting is a\npretty common thing for us.\n\nFor us, it's not an absolute \"no-go\" to have an additional column but\nthe semantics of the v1 UUID already guarantees us uniqueness across all\npartitions, is the internal primary key and has additional information\nwe are using (creation time, node).\n\nIn addition, the extra column would need yet another index which brings\nour write performance down again. So, while it would improve reading,\nwe're currently (still) more concerned about the write performance.\n\nThe BRIN index is something I might need to test, though.\n\n> \n> Best of luck, I've enjoyed following the commentary.\n> \n> \n> On Sun, May 26, 2019 at 11:09 AM Tomas Vondra\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n> >On 26/05/2019 00:14, Tomas Vondra wrote:\n> >> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n> >>> Ancoron Luciferis <[email protected]\n> <mailto:[email protected]>> writes:\n> >>>> On 25/05/2019 16:57, Tom Lane wrote:\n> >>>>> (4) it in fact *wouldn't* do anything useful, because we'd\n> still have\n> >>>>> to sort UUIDs in the same order as today, meaning that btree index\n> >>>>> behavior\n> >>>>> would remain the same as before. Plus UUID comparison would\n> get a lot\n> >>>>> more complicated and slower than it is now.\n> >>>\n> >>>> I get your first sentence, but not your second. I know that when\n> >>>> changing the internal byte order we'd have to completed re-compute\n> >>>> everything on-disk (from table to index data), but why would\n> the sorting\n> >>>> in the index have to be the same?\n> >>>\n> >>> Because we aren't going to change the existing sort order of UUIDs.\n> >>> We have no idea what applications might be dependent on that.\n> >>>\n> >>> As Vitalii correctly pointed out, your beef is not with the physical\n> >>> storage of UUIDs anyway: you just wish they'd sort differently,\n> since\n> >>> that is what determines the behavior of a btree index. 
But we\n> aren't\n> >>> going to change the sort ordering because that's an even bigger\n> >>> compatibility break than changing the physical storage; it'd affect\n> >>> application-visible semantics.\n> >>>\n> >>> What you might want to think about is creating a function that maps\n> >>> UUIDs into an ordering that makes sense to you, and then creating\n> >>> a unique index over that function instead of the raw UUIDs. That\n> >>> would give the results you want without having to negotiate with the\n> >>> rest of the world about whether it's okay to change the semantics\n> >>> of type uuid.\n> >>>\n> >>\n> >> FWIW that's essentially what I implemented as an extension some time\n> >> ago. See [1] for a more detailed explanation and some benchmarks.\n> >\n> >Yes, I've seen that before. Pretty nice work you but together there and\n> >I'll surely have a look at it but we certainly need the node id in\n> >compliance with v1 UUID's so that's why we've been generating UUID's at\n> >the application side from day 1.\n> >\n> >>\n> >> The thing is - it's not really desirable to get perfectly ordered\n> >> ordering, because that would mean we never get back to older parts of\n> >> the index (so if you delete data, we'd never fill that space).\n> >\n> >Wouldn't this apply also to any sequential-looking index (e.g. on\n> >serial)?\n> \n> Yes, it does apply to any index on sequential (ordered) data. If you\n> delete data from the \"old\" part (but not all, so the pages don't get\n> completely empty), that space is lost. It's available for new data, but\n> if we only insert to \"new\" part of the index, that's useless.\n> \n> > The main issue with the UUID's is that it almost instantly\n> >consumes a big part of the total value space (e.g. first value is\n> >'01...' and second is coming as 'f3...') which I would assume not being\n> >very efficient with btrees (space reclaim? - bloat).\n> >\n> \n> I don't understand what you mean here. Perhaps you misunderstand how\n> btree indexes grow? It's not like we allocate separate pages for\n> different values/prefixes - we insert the data until a page gets full,\n> then it's split in half. There is some dependency on the order in which\n> the values are inserted, but AFAIK random order is generally fine.\n> \n> >One of our major concerns is to keep index size small (VACUUM can't be\n> >run every minute) to fit into memory next to a lot of others.\n> >\n> \n> I don't think this has much to do with vacuum - I don't see how it's\n> related to the ordering of generated UUID values. And I don't see where\n> the \"can't run vacuum every minute\" comes from.\n> \n> >I've experimented with the rollover \"prefix\" myself but found that it\n> >makes the index too big (same or larger index size than standard v1\n> >UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n> >although INSERT performance wasn't that bad, our sequential UUID's\n> where\n> >way faster (at least pre-generated and imported with COPY to eliminate\n> >any value generation impact).\n> >\n> \n> I very much doubt that has anything to do with the prefix. You'll need\n> to share more details about how you did your tests.\n> \n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n> \n\n\n\n",
"msg_date": "Sun, 26 May 2019 11:38:48 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On Sun, May 26, 2019 at 7:38 PM Ancoron Luciferis <\[email protected]> wrote:\n\nThe BRIN index is something I might need to test, though.\n>\n\nYes, check that out, it might give you some ideas. A B-tree (in whatever\nvariant) is *inherently *a large index type. They're ideal for finding\nunique values quickly, not ideal for storing redundant values, and pretty\ndecent at finding ranges. A BRIN (Block Range Index), as implemented in\nPostgres, is good for finding unique values and and ranges. But here's the\nthing, a BRIN index takes some absurdly small % of the space of a B-tree.\nYou have to blink and check again to be sure you've figured it right.\n\nHow can a BRIN index be so much smaller? By throwing virtually everything\nout. A BRIN index doesn't store all of the values in a page, it stores the\nmin/max value and that's it. So it's a probabilistic index (of sorts.) Is\nthe value you're seeking on such and so page? The answer is \"No\" or\n\"Maybe.\" That's a fast test on a very cheap data structure.\n\nWhen the answer is \"maybe\", the full has to be loaded and scanned to\ndetermine if a specific value is found. So, you have a very small index\nstructure, but have to do more sequential scanning to determine if a record\nis indexed in that page or not. In the real world, this can work out to be\na high-performance structure at very low cost. But for it to work, your\nrecords need to be physically ordered (CLUSTER) by the condition in that\nindex. And, going forward, you ought to be inserting in order\ntoo.(More-or-less.) So, a BRIN index is a great option *if *you have an\ninsertion pattern that allows it to remain efficient, and if you're goal is\nrange searching without a heavy B-tree index to maintain.\n\nI have no clue how BRIN indexes and partitioning interact.\n\nOn Sun, May 26, 2019 at 7:38 PM Ancoron Luciferis <[email protected]> wrote:The BRIN index is something I might need to test, though.Yes, check that out, it might give you some ideas. A B-tree (in whatever variant) is inherently a large index type. They're ideal for finding unique values quickly, not ideal for storing redundant values, and pretty decent at finding ranges. A BRIN (Block Range Index), as implemented in Postgres, is good for finding unique values and and ranges. But here's the thing, a BRIN index takes some absurdly small % of the space of a B-tree. You have to blink and check again to be sure you've figured it right.How can a BRIN index be so much smaller? By throwing virtually everything out. A BRIN index doesn't store all of the values in a page, it stores the min/max value and that's it. So it's a probabilistic index (of sorts.) Is the value you're seeking on such and so page? The answer is \"No\" or \"Maybe.\" That's a fast test on a very cheap data structure.When the answer is \"maybe\", the full has to be loaded and scanned to determine if a specific value is found. So, you have a very small index structure, but have to do more sequential scanning to determine if a record is indexed in that page or not. In the real world, this can work out to be a high-performance structure at very low cost. But for it to work, your records need to be physically ordered (CLUSTER) by the condition in that index. And, going forward, you ought to be inserting in order too.(More-or-less.) 
So, a BRIN index is a great option if you have an insertion pattern that allows it to remain efficient, and if you're goal is range searching without a heavy B-tree index to maintain.I have no clue how BRIN indexes and partitioning interact.",
"msg_date": "Sun, 26 May 2019 20:24:14 +1000",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
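A minimal sketch of the BRIN approach described above, assuming a hypothetical table named events with an explicit creation-timestamp column; the table, column and index names here are illustrative and do not come from the thread:

-- hypothetical table with a creation timestamp next to the UUID key
CREATE TABLE events (
    id         uuid PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- a BRIN index stores only the min/max of created_at per block range
-- (pages_per_range defaults to 128), so it stays tiny as long as rows
-- arrive in roughly chronological order
CREATE INDEX events_created_brin
    ON events USING brin (created_at)
    WITH (pages_per_range = 128);

-- a time-range query that can use the BRIN index
SELECT count(*)
  FROM events
 WHERE created_at >= now() - interval '7 days';

-- compare the on-disk size of the B-tree primary key and the BRIN index
SELECT pg_size_pretty(pg_relation_size('events_pkey'))          AS btree_size,
       pg_size_pretty(pg_relation_size('events_created_brin'))  AS brin_size;

The size comparison is the point made above: the BRIN index is orders of magnitude smaller than the B-tree, at the cost of re-checking whole block ranges that the index can only answer with "maybe".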
{
"msg_contents": "On Sun, May 26, 2019 at 02:27:05PM +1000, Morris de Oryx wrote:\n>I'm not worthy to post here, but a bit of a random thought.\n>\n>If I've followed the conversation correctly, the reason for a V1 UUID is\n>partly to order and partition rows by a timestamp value, but without the\n>cost of a timestamp column. As I was told as a boy, \"Smart numbers aren't.\"\n>Is it _absolutely_ the case that you can't afford another column? I don't\n>know the ins and outs of the Postgres row format, but my impression is that\n>it's a fixed size, so you may be able to add the column without splitting\n>rows? Anyway, even if that's not true and the extra column costs you disk\n>space, is it the index that concerns you? Have you considered a timestamp\n>column, or a numeric column with an epoch offset, and a BRIN index? If you\n>insert data is in pretty much chronological order, that might work well for\n>you.\n>\n>Best of luck, I've enjoyed following the commentary.\n>\n\nNo, an extra column is not a solution, because it has no impact on the\nindex on the UUID column. One of the problems with indexes on random\ndata is that the entries go to random parts of the index. In the extreme\ncase, each index insert goes to a different index page (since the last\ncheckpoint) and therefore has to write the whole page into the WAL.\nThat's what full-page writes do. This inflates the amount of WAL, may\ntrigger more frequent checkpoints and (of course) reduces the cache hit\nratio for index pages (because we have to touch many of them).\n\nThe point of generating UUIDs in a more sequential way is to limit this\nbehavior by \"concentrating\" the index inserts into a smaller part of the\nindex. That's why indexes on sequential data (say, generated from a\nSERIAL column) perform better.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 26 May 2019 12:37:07 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
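A rough way to put a number on the WAL-volume effect described above is to snapshot the WAL insert position before and after a bulk load and diff the two. A sketch only, assuming PostgreSQL 10 or later (pg_current_wal_lsn / pg_wal_lsn_diff) and the uuid-ossp extension for uuid_generate_v1(); the table uuid_v1 with an id uuid primary key is the one used in the tests later in this thread:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- remember the current WAL insert position
CREATE TEMP TABLE wal_mark AS
SELECT pg_current_wal_lsn() AS lsn;

-- load one million time-ordered v1 UUIDs (repeat with uuid_generate_v4()
-- to see how much extra WAL the random ordering produces)
INSERT INTO uuid_v1 (id)
SELECT uuid_generate_v1()
  FROM generate_series(1, 1000000);

-- WAL bytes written since the mark
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), lsn)) AS wal_written
  FROM wal_mark;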
{
"msg_contents": "Here's what I was thinking of regarding disk space:\n\nhttps://www.postgresql.org/docs/11/storage-page-layout.html\n\nThat's the kind of low-level detail I try *not* to worry about, but like to\nhave in the back of my mind at least a little. If I read it correctly, a\nfixed-length field is going to be marked as present with a bit, and then\ninlined in the row storage without any extra cost for an address. So not\nmuch in the way of extra overhead. Spending the space on a field, reduces\nthe compute needed to constantly perform extracts on the UUID field to\naccess the same information.\n\nBut that particular trade-off is an ancient discussion and judgement call,\nyou know your requirements and constraints better than anyone else. So,\nI'll leave it at that.\n\nOn Sun, May 26, 2019 at 8:24 PM Morris de Oryx <[email protected]>\nwrote:\n\n> On Sun, May 26, 2019 at 7:38 PM Ancoron Luciferis <\n> [email protected]> wrote:\n>\n> The BRIN index is something I might need to test, though.\n>>\n>\n> Yes, check that out, it might give you some ideas. A B-tree (in whatever\n> variant) is *inherently *a large index type. They're ideal for finding\n> unique values quickly, not ideal for storing redundant values, and pretty\n> decent at finding ranges. A BRIN (Block Range Index), as implemented in\n> Postgres, is good for finding unique values and and ranges. But here's the\n> thing, a BRIN index takes some absurdly small % of the space of a B-tree.\n> You have to blink and check again to be sure you've figured it right.\n>\n> How can a BRIN index be so much smaller? By throwing virtually everything\n> out. A BRIN index doesn't store all of the values in a page, it stores the\n> min/max value and that's it. So it's a probabilistic index (of sorts.) Is\n> the value you're seeking on such and so page? The answer is \"No\" or\n> \"Maybe.\" That's a fast test on a very cheap data structure.\n>\n> When the answer is \"maybe\", the full has to be loaded and scanned to\n> determine if a specific value is found. So, you have a very small index\n> structure, but have to do more sequential scanning to determine if a record\n> is indexed in that page or not. In the real world, this can work out to be\n> a high-performance structure at very low cost. But for it to work, your\n> records need to be physically ordered (CLUSTER) by the condition in that\n> index. And, going forward, you ought to be inserting in order\n> too.(More-or-less.) So, a BRIN index is a great option *if *you have an\n> insertion pattern that allows it to remain efficient, and if you're goal is\n> range searching without a heavy B-tree index to maintain.\n>\n> I have no clue how BRIN indexes and partitioning interact.\n>\n>\n\nHere's what I was thinking of regarding disk space:https://www.postgresql.org/docs/11/storage-page-layout.htmlThat's the kind of low-level detail I try not to worry about, but like to have in the back of my mind at least a little. If I read it correctly, a fixed-length field is going to be marked as present with a bit, and then inlined in the row storage without any extra cost for an address. So not much in the way of extra overhead. Spending the space on a field, reduces the compute needed to constantly perform extracts on the UUID field to access the same information.But that particular trade-off is an ancient discussion and judgement call, you know your requirements and constraints better than anyone else. 
So, I'll leave it at that.On Sun, May 26, 2019 at 8:24 PM Morris de Oryx <[email protected]> wrote:On Sun, May 26, 2019 at 7:38 PM Ancoron Luciferis <[email protected]> wrote:The BRIN index is something I might need to test, though.Yes, check that out, it might give you some ideas. A B-tree (in whatever variant) is inherently a large index type. They're ideal for finding unique values quickly, not ideal for storing redundant values, and pretty decent at finding ranges. A BRIN (Block Range Index), as implemented in Postgres, is good for finding unique values and and ranges. But here's the thing, a BRIN index takes some absurdly small % of the space of a B-tree. You have to blink and check again to be sure you've figured it right.How can a BRIN index be so much smaller? By throwing virtually everything out. A BRIN index doesn't store all of the values in a page, it stores the min/max value and that's it. So it's a probabilistic index (of sorts.) Is the value you're seeking on such and so page? The answer is \"No\" or \"Maybe.\" That's a fast test on a very cheap data structure.When the answer is \"maybe\", the full has to be loaded and scanned to determine if a specific value is found. So, you have a very small index structure, but have to do more sequential scanning to determine if a record is indexed in that page or not. In the real world, this can work out to be a high-performance structure at very low cost. But for it to work, your records need to be physically ordered (CLUSTER) by the condition in that index. And, going forward, you ought to be inserting in order too.(More-or-less.) So, a BRIN index is a great option if you have an insertion pattern that allows it to remain efficient, and if you're goal is range searching without a heavy B-tree index to maintain.I have no clue how BRIN indexes and partitioning interact.",
"msg_date": "Sun, 26 May 2019 20:38:31 +1000",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "On Sun, May 26, 2019 at 8:37 PM Tomas Vondra <[email protected]>\nwrote:\n\nNo, an extra column is not a solution, because it has no impact on the\n> index on the UUID column.\n\n\nPossibly talking at cross-purposes here. I was honing in on the OPs wish to\nsearch and sort by creation order. For which my first (and only) instinct\nwould be to have a timestamp. In fact, the OP wants to work with multiple\nsubcomponents encoded in their magic number, so I'm likely off base\nentirely. I have a long-standing allergy to concatenated key-like fields as\nthey're opaque, collapse multiple values into a single column (0NF), and\ninevitably (in my experience) get you into a bind when requirements change.\n\nBut everyone's got their own point of view on such judgement calls. I'm not\ncurrently dealing with anything where the cost of adding a few small,\nfixed-type columns would give me a moment's hesitation. I'm sure we all\nlike to save space, but when saving space costs you clarity, flexibility,\nand compute, the \"savings\" aren't free. So, it's a judgment call. The OP\nmay well have 1B rows and really quite good reasons for worrying about\ndisk-level optimizations.\n\nOn Sun, May 26, 2019 at 8:37 PM Tomas Vondra <[email protected]> wrote:No, an extra column is not a solution, because it has no impact on the\nindex on the UUID column. Possibly talking at cross-purposes here. I was honing in on the OPs wish to search and sort by creation order. For which my first (and only) instinct would be to have a timestamp. In fact, the OP wants to work with multiple subcomponents encoded in their magic number, so I'm likely off base entirely. I have a long-standing allergy to concatenated key-like fields as they're opaque, collapse multiple values into a single column (0NF), and inevitably (in my experience) get you into a bind when requirements change.But everyone's got their own point of view on such judgement calls. I'm not currently dealing with anything where the cost of adding a few small, fixed-type columns would give me a moment's hesitation. I'm sure we all like to save space, but when saving space costs you clarity, flexibility, and compute, the \"savings\" aren't free. So, it's a judgment call. The OP may well have 1B rows and really quite good reasons for worrying about disk-level optimizations.",
"msg_date": "Sun, 26 May 2019 20:46:21 +1000",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
{
"msg_contents": "Hi,\n\nI've finally found some time to redo some tests using PostgreSQL 11.3.\n\nScenario is the following:\n\n1.) add 10 M rows\n2.) add another 30 M rows\n3.) delete the first 10 M rows\n4.) VACUUM\n5.) REINDEX\n\nMy goal is to find the most efficient way for UUID values to somewhat\noptimize for index page/node reclaim (or least amount of \"bloat\")\nwithout the need to execute a REINDEX (although we're currently doing\nthat on partitions where we can be sure that no rows will be added/changed).\n\nI have tested the following:\n\nTable \"uuid_v1\": standard V1 UUID\n\nTable \"uuid_v1_timestamp\": standard V1 UUID with additional index using\ntimestamp extract function\n\nTable \"uuid_seq\": sequential UUID from Thomas Vondra\n\nTable \"uuid_serial\": \"serial\" UUID from my Java implementation\n\n\nFor the test, I created a little PostgreSQL extension for support\nfunctions using standard version 1 UUID's (mainly to extract the timestamp):\nhttps://github.com/ancoron/pg-uuid-ext\n\nMy C skills really suck, so if someone could review that and tell if\nthere are more efficient ways of doing certain things I'd be happy!\n\nAnd for generating the UUID values I've created another Java project:\nhttps://github.com/ancoron/java-uuid-serial\n\nNow some test results, which are somewhat in-line with what I've already\nseen in previous tests for PG 10, but less dramatic performance\ndifferences here:\n\n1.) add 10 M rows\nTable: uuid_v1 Time: 33183.639 ms (00:33.184)\nTable: uuid_v1_timestamp Time: 49869.763 ms (00:49.870)\nTable: uuid_seq Time: 35794.906 ms (00:35.795)\nTable: uuid_serial Time: 22835.071 ms (00:22.835)\n\nAs expected, the table with the additional function index is slowest but\nsurprisingly the sequential UUID was not faster than the standard V1\nUUID here. Still, serial version is the fastest.\n\nPrimary key indexes after an ANALYZE:\n table_name | bloat | index_mb | table_mb\n-------------------+----------------+----------+----------\n uuid_v1 | 128 MiB (32 %) | 395.570 | 422.305\n uuid_v1_timestamp | 128 MiB (32 %) | 395.570 | 422.305\n uuid_seq | 123 MiB (31 %) | 390.430 | 422.305\n uuid_serial | 108 MiB (29 %) | 376.023 | 422.305\n\n\n2.) add another 30 M rows\nTable: uuid_v1 Time: 136109.046 ms (02:16.109)\nTable: uuid_v1_timestamp Time: 193172.454 ms (03:13.172)\nTable: uuid_seq Time: 124713.530 ms (02:04.714)\nTable: uuid_serial Time: 78362.209 ms (01:18.362)\n\nNow the performance difference gets much more dramatic.\n\nPrimary key indexes after an ANALYZE:\n table_name | bloat | index_mb | table_mb\n-------------------+----------------+----------+----------\n uuid_v1 | 500 MiB (32 %) | 1571.039 | 1689.195\n uuid_v1_timestamp | 500 MiB (32 %) | 1571.039 | 1689.195\n uuid_seq | 492 MiB (31 %) | 1562.766 | 1689.195\n uuid_serial | 433 MiB (29 %) | 1504.047 | 1689.195\n\nStill no noticeable difference but I wonder why it looks like a\nfill-factor of 70 instead of the default 90.\n\n\n3.) 
delete the first 10 M rows\n\nImplementations differ for the tables due to difference capabilities:\nDELETE FROM uuid_v1 WHERE id IN (select id from uuid_v1 limit 10000000);\nDELETE FROM uuid_v1_timestamp WHERE uuid_v1_timestamp(id) <\n'2019-03-09T07:58:02.056';\nDELETE FROM uuid_seq WHERE id IN (select id from uuid_seq limit 10000000);\nDELETE FROM uuid_serial WHERE id < '1e942411-004c-1d50-0000-000000000000';\n\nTable: uuid_v1 Time: 38308.299 ms (00:38.308)\nTable: uuid_v1_timestamp Time: 11589.941 ms (00:11.590)\nTable: uuid_seq Time: 37171.331 ms (00:37.171)\nTable: uuid_serial Time: 11694.893 ms (00:11.695)\n\nAs expected, using the timestamp index directly of being able to compare\non the UUID in a time-wise manner produces the best results.\n\n\n4.) VACUUM\nTable: uuid_v1 Time: 69740.952 ms (01:09.741)\nTable: uuid_v1_timestamp Time: 67347.469 ms (01:07.347)\nTable: uuid_seq Time: 25832.355 ms (00:25.832)\nTable: uuid_serial Time: 12339.531 ms (00:12.340)\n\nThis is pretty important to us as it consumes system resources and we\nhave quite a lot of big tables. So my serial implementation seems to\nbeat even the sequential one which was pretty surprising for me to see.\n\nPrimary key indexes after an ANALYZE:\n table_name | bloat | index_mb | table_mb\n-------------------+----------------+----------+----------\n uuid_v1 | 767 MiB (49 %) | 1571.039 | 1689.195\n uuid_v1_timestamp | 768 MiB (49 %) | 1571.039 | 1689.195\n uuid_seq | 759 MiB (49 %) | 1562.766 | 1689.195\n uuid_serial | 700 MiB (47 %) | 1504.047 | 1689.195\n\nOK, sadly no reclaim in any of them.\n\n\n5.) REINDEX\nTable: uuid_v1 Time: 21549.860 ms (00:21.550)\nTable: uuid_v1_timestamp Time: 27367.817 ms (00:27.368)\nTable: uuid_seq Time: 19142.711 ms (00:19.143)\nTable: uuid_serial Time: 16889.807 ms (00:16.890)\n\nEven in this case it looks as if my implementation is faster than\nanything else - which I really don't get.\n\nOverall, as write performance is our major concern atm. without\nsacrificing read performance we'll continue using our \"serial\"\nimplementation for the time being.\n\nNevertheless, I am still puzzled by the index organization and how to\nkeep it within a certain size in cases where you have a rolling window\nof value space to avoid unnecessary disk I/O (even if it is SSD).\n\nSo I wonder if there is anything that can be done on the index side?\n\nI might implement a different opclass for the standard UUID to enable\ntime-wise index sort order. This will naturally be very close to\nphysical order but I doubt that this is something I can tell PostgreSQL, or?\n\nCheers,\n\n\tAncoron\n\n\nOn 26/05/2019 11:01, Ancoron Luciferis wrote:\n> On 26/05/2019 03:09, Tomas Vondra wrote:\n>> On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:\n>>> On 26/05/2019 00:14, Tomas Vondra wrote:\n>>>> On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:\n>>>>> Ancoron Luciferis <[email protected]> writes:\n>>>>>> On 25/05/2019 16:57, Tom Lane wrote:\n>>>>>>> (4) it in fact *wouldn't* do anything useful, because we'd still have\n>>>>>>> to sort UUIDs in the same order as today, meaning that btree index\n>>>>>>> behavior\n>>>>>>> would remain the same as before. Plus UUID comparison would get a\n>>>>>>> lot\n>>>>>>> more complicated and slower than it is now.\n>>>>>\n>>>>>> I get your first sentence, but not your second. 
I know that when\n>>>>>> changing the internal byte order we'd have to completed re-compute\n>>>>>> everything on-disk (from table to index data), but why would the\n>>>>>> sorting\n>>>>>> in the index have to be the same?\n>>>>>\n>>>>> Because we aren't going to change the existing sort order of UUIDs.\n>>>>> We have no idea what applications might be dependent on that.\n>>>>>\n>>>>> As Vitalii correctly pointed out, your beef is not with the physical\n>>>>> storage of UUIDs anyway: you just wish they'd sort differently, since\n>>>>> that is what determines the behavior of a btree index. But we aren't\n>>>>> going to change the sort ordering because that's an even bigger\n>>>>> compatibility break than changing the physical storage; it'd affect\n>>>>> application-visible semantics.\n>>>>>\n>>>>> What you might want to think about is creating a function that maps\n>>>>> UUIDs into an ordering that makes sense to you, and then creating\n>>>>> a unique index over that function instead of the raw UUIDs. That\n>>>>> would give the results you want without having to negotiate with the\n>>>>> rest of the world about whether it's okay to change the semantics\n>>>>> of type uuid.\n>>>>>\n>>>>\n>>>> FWIW that's essentially what I implemented as an extension some time\n>>>> ago. See [1] for a more detailed explanation and some benchmarks.\n>>>\n>>> Yes, I've seen that before. Pretty nice work you but together there and\n>>> I'll surely have a look at it but we certainly need the node id in\n>>> compliance with v1 UUID's so that's why we've been generating UUID's at\n>>> the application side from day 1.\n>>>\n>>>>\n>>>> The thing is - it's not really desirable to get perfectly ordered\n>>>> ordering, because that would mean we never get back to older parts of\n>>>> the index (so if you delete data, we'd never fill that space).\n>>>\n>>> Wouldn't this apply also to any sequential-looking index (e.g. on\n>>> serial)?\n>>\n>> Yes, it does apply to any index on sequential (ordered) data. If you\n>> delete data from the \"old\" part (but not all, so the pages don't get\n>> completely empty), that space is lost. It's available for new data, but\n>> if we only insert to \"new\" part of the index, that's useless.\n> \n> OK, thanks for clearing that up for me.\n> \n>>\n>>> The main issue with the UUID's is that it almost instantly\n>>> consumes a big part of the total value space (e.g. first value is\n>>> '01...' and second is coming as 'f3...') which I would assume not being\n>>> very efficient with btrees (space reclaim? - bloat).\n>>>\n>>\n>> I don't understand what you mean here. Perhaps you misunderstand how\n>> btree indexes grow? It's not like we allocate separate pages for\n>> different values/prefixes - we insert the data until a page gets full,\n>> then it's split in half. There is some dependency on the order in which\n>> the values are inserted, but AFAIK random order is generally fine.\n> \n> OK, I might not understand the basics of the btree implementation. Sorry\n> for that.\n> \n> However, one of our issues with standard v1 UUID's was bloat of the\n> indexes, although we kept only a few months of data in it. I think, this\n> was due to the pages still containing at least one value and not\n> reclaimed by vacuum. 
It just continued to grow.\n> \n> Now, as we have this different ever-increasing prefix, we still have\n> some constant growing but we see that whenever historic data get's\n> deleted (in a separate process), space get's reclaimed.\n> \n> \n>>\n>>> One of our major concerns is to keep index size small (VACUUM can't be\n>>> run every minute) to fit into memory next to a lot of others.\n>>>\n>>\n>> I don't think this has much to do with vacuum - I don't see how it's\n>> related to the ordering of generated UUID values. And I don't see where\n>> the \"can't run vacuum every minute\" comes from.\n> \n> OK, numbers (after VACUUM) which I really found strange using the query\n> from pgexperts [1]:\n> \n> index_name | bloat_pct | bloat_mb | index_mb | table_mb\n> ---------------------+-----------+----------+----------+----------\n> uuid_v1_pkey | 38 | 363 | 965.742 | 950.172\n> uuid_serial_pkey | 11 | 74 | 676.844 | 950.172\n> uuid_serial_8_pkey | 46 | 519 | 1122.031 | 950.172\n> uuid_serial_16_pkey | 39 | 389 | 991.195 | 950.172\n> \n> ...where the \"8\" and \"16\" is a \"shift\" of the timestamp value,\n> implemented with:\n> \n> timestamp = (timestamp >>> (60 - shift)) | (timestamp << (shift + 4))\n> \n> If someone could shed some light on why that is (the huge difference in\n> index sizes) I'd be happy.\n> \n>>\n>>> I've experimented with the rollover \"prefix\" myself but found that it\n>>> makes the index too big (same or larger index size than standard v1\n>>> UUIDs) and VACUUM too slow (almost as slow as a standard V1 UUID),\n>>> although INSERT performance wasn't that bad, our sequential UUID's where\n>>> way faster (at least pre-generated and imported with COPY to eliminate\n>>> any value generation impact).\n>>>\n>>\n>> I very much doubt that has anything to do with the prefix. You'll need\n>> to share more details about how you did your tests.\n> \n> OK, I'll see if I can prepare something and publish it.\n> \n>>\n>>\n>> regards\n>>\n> \n> \n> Refs:\n> [1]\n> https://github.com/pgexperts/pgx_scripts/blob/master/bloat/index_bloat_check.sql\n> \n\n\n\n",
"msg_date": "Tue, 11 Jun 2019 22:27:04 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
},
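For readers without the C extension linked above, a pure-SQL approximation of the timestamp extraction behind the uuid_v1_timestamp(id) index could look like the sketch below. It relies only on the RFC 4122 v1 field layout (time_low, time_mid, time_hi) and the Gregorian epoch of 1582-10-15; the function name is deliberately different so it is not mistaken for the extension's, and the result is only accurate to within a few microseconds because of the float multiplication:

-- reassemble the 60-bit v1 timestamp (100 ns ticks since 1582-10-15)
-- and turn it into a timestamptz; declared IMMUTABLE so it can be indexed
CREATE OR REPLACE FUNCTION uuid_v1_timestamp_sql(u uuid)
RETURNS timestamptz AS $$
  SELECT timestamptz '1582-10-15 00:00:00 UTC'
       + interval '1 microsecond'
         * ( ( ('x' || substr(u::text, 16, 3)    -- time_hi, version nibble stripped
                     || substr(u::text, 10, 4)   -- time_mid
                     || substr(u::text,  1, 8)   -- time_low
               )::bit(60)::bigint
             ) / 10 )
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX uuid_v1_ts_idx ON uuid_v1 (uuid_v1_timestamp_sql(id));

-- time-range delete, as in step 3 of the test above
DELETE FROM uuid_v1
 WHERE uuid_v1_timestamp_sql(id) < '2019-03-09T07:58:02.056';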
{
"msg_contents": "Please don't top post -- trim the your response down so that only\nstill-relevant text remains.\n\nOn Tue, Jun 11, 2019 at 1:27 PM Ancoron Luciferis\n<[email protected]> wrote:\n> Primary key indexes after an ANALYZE:\n> table_name | bloat | index_mb | table_mb\n> -------------------+----------------+----------+----------\n> uuid_v1 | 767 MiB (49 %) | 1571.039 | 1689.195\n> uuid_v1_timestamp | 768 MiB (49 %) | 1571.039 | 1689.195\n> uuid_seq | 759 MiB (49 %) | 1562.766 | 1689.195\n> uuid_serial | 700 MiB (47 %) | 1504.047 | 1689.195\n>\n> OK, sadly no reclaim in any of them.\n\nI don't know how you got these figures, but most likely they don't\ntake into account the fact that the FSM for the index has free blocks\navailable. You'll only notice that if you have additional page splits\nthat can recycle that space. Or, you could use pg_freespacemap to get\nsome idea.\n\n> 5.) REINDEX\n> Table: uuid_v1 Time: 21549.860 ms (00:21.550)\n> Table: uuid_v1_timestamp Time: 27367.817 ms (00:27.368)\n> Table: uuid_seq Time: 19142.711 ms (00:19.143)\n> Table: uuid_serial Time: 16889.807 ms (00:16.890)\n>\n> Even in this case it looks as if my implementation is faster than\n> anything else - which I really don't get.\n\nSorting already-sorted data is faster. CREATE INDEX is mostly a big\nsort operation in the case of B-Tree indexes.\n\n> I might implement a different opclass for the standard UUID to enable\n> time-wise index sort order. This will naturally be very close to\n> physical order but I doubt that this is something I can tell PostgreSQL, or?\n\nPostgreSQL only knows whether or not your page splits occur in the\nrightmost page in the index -- it fills the page differently according\nto whether or not that is the case.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 7 Jul 2019 17:26:23 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID v1 optimizations..."
},
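The free space map referred to above can be inspected directly with the pg_freespacemap contrib module — a small sketch, assuming the extension is installable and using one of the index names from the tests earlier in the thread:

CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- per-page free space the FSM has recorded for the primary-key index;
-- for a B-tree, the interesting entries are whole empty pages that
-- later page splits can recycle
SELECT count(*)                           AS fsm_pages,
       count(*) FILTER (WHERE avail > 0)  AS pages_with_free_space,
       pg_size_pretty(sum(avail)::bigint) AS total_free
  FROM pg_freespace('uuid_v1_pkey');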
{
"msg_contents": "On 08/07/2019 02:26, Peter Geoghegan wrote:\n> Please don't top post -- trim the your response down so that only\n> still-relevant text remains.\n> \n> On Tue, Jun 11, 2019 at 1:27 PM Ancoron Luciferis\n> <[email protected]> wrote:\n>> Primary key indexes after an ANALYZE:\n>> table_name | bloat | index_mb | table_mb\n>> -------------------+----------------+----------+----------\n>> uuid_v1 | 767 MiB (49 %) | 1571.039 | 1689.195\n>> uuid_v1_timestamp | 768 MiB (49 %) | 1571.039 | 1689.195\n>> uuid_seq | 759 MiB (49 %) | 1562.766 | 1689.195\n>> uuid_serial | 700 MiB (47 %) | 1504.047 | 1689.195\n>>\n>> OK, sadly no reclaim in any of them.\n> \n> I don't know how you got these figures, but most likely they don't\n> take into account the fact that the FSM for the index has free blocks\n> available. You'll only notice that if you have additional page splits\n> that can recycle that space. Or, you could use pg_freespacemap to get\n> some idea.\n\nHm, I think I've already read quite a bit about the internals of the PG\nb-tree index implementation but still cannot get to the answer how I\ncould influence that on my end as I want to stay compatible with the\nstandard UUID data storage but need time sorting support.\n\nAnyway, I've made a bit of progress in testing and now have the full\ntests executing unattended with the help of a script:\nhttps://github.com/ancoron/pg-uuid-test\n\nI've uploaded one of the test run results here:\nhttps://gist.github.com/ancoron/d5114b0907e8974b6808077e02f8d109\n\nAfter the first mass deletion, I can now see quite some savings for\nboth, serial and for my new time-sorted index:\n table_name | bloat | index_mb | table_mb\n-------------+-----------------+----------+----------\n uuid_v1 | 1500 MiB (48 %) | 3106.406 | 3378.383\n uuid_serial | 800 MiB (33 %) | 2406.453 | 3378.383\n uuid_v1_ext | 800 MiB (33 %) | 2406.453 | 3378.383\n\n...but in a second case (DELETE old + INSERT new), the savings are gone\nagain in both cases:\n table_name | bloat | index_mb | table_mb\n-------------+-----------------+----------+----------\n uuid_v1 | 1547 MiB (49 %) | 3153.859 | 3378.383\n uuid_serial | 1402 MiB (47 %) | 3008.055 | 3378.383\n uuid_v1_ext | 1403 MiB (47 %) | 3008.055 | 3378.383\n\nSo, the question for me would be: Is there any kind of data that plays\noptimal with space-savings in a rolling (e.g. last X rows) scenario?\n\n> \n>> 5.) REINDEX\n>> Table: uuid_v1 Time: 21549.860 ms (00:21.550)\n>> Table: uuid_v1_timestamp Time: 27367.817 ms (00:27.368)\n>> Table: uuid_seq Time: 19142.711 ms (00:19.143)\n>> Table: uuid_serial Time: 16889.807 ms (00:16.890)\n>>\n>> Even in this case it looks as if my implementation is faster than\n>> anything else - which I really don't get.\n> \n> Sorting already-sorted data is faster. CREATE INDEX is mostly a big\n> sort operation in the case of B-Tree indexes.\n\nUnderstood, this seems to be confirmed by my time-sorted index in the\nnew tests:\nuuid_v1: 27632.660 ms (00:27.633)\nuuid_serial: 20519.363 ms (00:20.519) x1.35\nuuid_v1_ext: 23846.474 ms (00:23.846) x1.16\n\n> \n>> I might implement a different opclass for the standard UUID to enable\n>> time-wise index sort order. 
This will naturally be very close to\n>> physical order but I doubt that this is something I can tell PostgreSQL, or?\n> \n> PostgreSQL only knows whether or not your page splits occur in the\n> rightmost page in the index -- it fills the page differently according\n> to whether or not that is the case.\n> \n\nAs I've implemented the new opclass and the new tests showing the\nresults now, I think I can say that the time-sorting behavior as opposed\nto rather random really benefits the overall performance, which is what\nI actually care about most.\n\n\nCheers,\n\n\tAncoron\n\n\n",
"msg_date": "Wed, 10 Jul 2019 00:08:34 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UUID v1 optimizations..."
}
] |
[
{
"msg_contents": "Hey,\nPG 9.6, I have a standalone configured. I tried to start up a secondary,\nrun standby clone (repmgr). The clone process took 3 hours and during that\ntime wals were generated(mostly because of the checkpoint_timeout). As a\nresult of that, when I start the secondary ,I see that the secondary keeps\ngetting the wals but I dont see any messages that indicate that the\nsecondary tried to replay the wals.\nmessages that i see :\nreceiving incremental file list\n000000010000377B000000DE\n\nsent 30 bytes received 4.11M bytes 8.22M bytes/sec\ntotal size is 4.15M speedup is 1.01\n2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n\"000000010000377B000000DE\" from archive\n2019-05-22 12:48:11 EEST db63311 FATAL: the database system is starting up\n2019-05-22 12:48:12 EEST db63313 FATAL: the database system is starting\nup\n\nI was hoping to see the following messages (taken from a different machine)\n:\n2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete: wrote 406\nbuffers (0.2%); 1 transaction log file(s) added, 0 removed, 0 recycled;\nwrite=41.390 s, sync=0.001 s, total=41.582 s; sync file\ns=128, longest=0.000 s, average=0.000 s; distance=2005 kB, estimate=2699 kB\n2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at 4/D096C4F8\n\nMy primary settings(wals settings) :\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 30min\n\nAny idea what can explain why the secondary doesnt replay the wals ?\n\nHey,PG 9.6, I have a standalone configured. I tried to start up a secondary, run standby clone (repmgr). The clone process took 3 hours and during that time wals were generated(mostly because of the checkpoint_timeout). As a result of that, when I start the secondary ,I see that the secondary keeps getting the wals but I dont see any messages that indicate that the secondary tried to replay the wals. messages that i see :receiving incremental file list000000010000377B000000DEsent 30 bytes received 4.11M bytes 8.22M bytes/sectotal size is 4.15M speedup is 1.012019-05-22 12:48:10 EEST 60942 LOG: restored log file \"000000010000377B000000DE\" from archive2019-05-22 12:48:11 EEST db63311 FATAL: the database system is starting up2019-05-22 12:48:12 EEST db63313 FATAL: the database system is starting up I was hoping to see the following messages (taken from a different machine) : 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete: wrote 406 buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0 recycled; write=41.390 s, sync=0.001 s, total=41.582 s; sync files=128, longest=0.000 s, average=0.000 s; distance=2005 kB, estimate=2699 kB2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at 4/D096C4F8My primary settings(wals settings) : wal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minAny idea what can explain why the secondary doesnt replay the wals ?",
"msg_date": "Mon, 27 May 2019 11:49:13 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "improve wals replay on secondary"
},
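Whether the standby is actually applying the restored WAL can be checked independently of the restartpoint log messages — a sketch using the 9.6 function names (they were renamed from xlog to wal in version 10):

-- run on the standby; the replay location should keep advancing while
-- segments are being restored (the receive location stays NULL when WAL
-- arrives only via restore_command rather than streaming)
SELECT pg_is_in_recovery()             AS in_recovery,
       pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed,
       pg_last_xact_replay_timestamp() AS last_replayed_commit;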
{
"msg_contents": "Hi Mariel,\n\nif i m not wrong, on the secondary you will see the messages you\nmentioned when a checkpoint happens.\n\nWhat are checkpoint_timeout and max_wal_size on your standby?\n\nDid you ever see this on your standby log?\n\n\"consistent recovery state reached at ..\"\n\n\nMaybe you can post your whole configuration of your standby for easier\ndebug.\n\nregards,\n\nfabio pardi\n\n\n\n\nOn 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> Hey,\n> PG 9.6, I have a standalone configured. I tried to start up a secondary,\n> run standby clone (repmgr). The clone process took 3 hours and during\n> that time wals were generated(mostly because of the checkpoint_timeout).\n> As a result of that, when I start the secondary ,I see that the\n> secondary keeps getting the wals but I dont see any messages that\n> indicate that the secondary tried to replay the wals. \n> messages that i see :\n> receiving incremental file list\n> 000000010000377B000000DE\n> \n> sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> total size is 4.15M speedup is 1.01\n> 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DE\" from archive\n> 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is starting up\n> 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> starting up \n> \n> I was hoping to see the following messages (taken from a different\n> machine) : \n> 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete: wrote 406\n> buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0 recycled;\n> write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> s=128, longest=0.000 s, average=0.000 s; distance=2005 kB, estimate=2699 kB\n> 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at 4/D096C4F8\n> \n> My primary settings(wals settings) : \n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30min\n> \n> Any idea what can explain why the secondary doesnt replay the wals ?\n\n\n",
"msg_date": "Mon, 27 May 2019 11:12:30 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
{
"msg_contents": "Hi Mariel,\n\nlet s keep the list in cc...\n\nsettings look ok.\n\nwhat's in the recovery.conf file then?\n\nregards,\n\nfabio pardi\n\nOn 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> Hey,\n> the configuration is the same as in the primary : \n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30min\n> \n> Regarding your question, I didnt see this message (consistent recovery\n> state reached at), I guess thats why the secondary isnt avaialble yet..\n> \n> Maybe I'm wrong, but what I understood from the documentation- restart\n> point is generated only after the secondary had a checkpoint wihch means\n> only after 30 minutes or after max_wal_size is reached ? But still, why\n> wont the secondary reach a consisteny recovery state (does it requires a\n> restart point to be generated ? )\n> \n> \n> בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> if i m not wrong, on the secondary you will see the messages you\n> mentioned when a checkpoint happens.\n> \n> What are checkpoint_timeout and max_wal_size on your standby?\n> \n> Did you ever see this on your standby log?\n> \n> \"consistent recovery state reached at ..\"\n> \n> \n> Maybe you can post your whole configuration of your standby for easier\n> debug.\n> \n> regards,\n> \n> fabio pardi\n> \n> \n> \n> \n> On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > Hey,\n> > PG 9.6, I have a standalone configured. I tried to start up a\n> secondary,\n> > run standby clone (repmgr). The clone process took 3 hours and during\n> > that time wals were generated(mostly because of the\n> checkpoint_timeout).\n> > As a result of that, when I start the secondary ,I see that the\n> > secondary keeps getting the wals but I dont see any messages that\n> > indicate that the secondary tried to replay the wals. \n> > messages that i see :\n> > receiving incremental file list\n> > 000000010000377B000000DE\n> >\n> > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > total size is 4.15M speedup is 1.01\n> > 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> > \"000000010000377B000000DE\" from archive\n> > 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is\n> starting up\n> > 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> > starting up \n> >\n> > I was hoping to see the following messages (taken from a different\n> > machine) : \n> > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete: wrote 406\n> > buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0\n> recycled;\n> > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > s=128, longest=0.000 s, average=0.000 s; distance=2005 kB,\n> estimate=2699 kB\n> > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at\n> 4/D096C4F8\n> >\n> > My primary settings(wals settings) : \n> > wal_buffers = 16MB\n> > checkpoint_completion_target = 0.9\n> > checkpoint_timeout = 30min\n> >\n> > Any idea what can explain why the secondary doesnt replay the wals ?\n> \n> \n\n\n",
"msg_date": "Mon, 27 May 2019 11:29:42 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
{
"msg_contents": "standby_mode = 'on'\nprimary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '\nrecovery_target_timeline = 'latest'\nprimary_slot_name = repmgr_slot_1\nrestore_command = 'rsync -avzhe ssh [email protected]:/var/lib/pgsql/archive/%f\n/var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p'\narchive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n/var/lib/pgsql/archive %r'\n\nבתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi <\[email protected]>:\n\n> Hi Mariel,\n>\n> let s keep the list in cc...\n>\n> settings look ok.\n>\n> what's in the recovery.conf file then?\n>\n> regards,\n>\n> fabio pardi\n>\n> On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > Hey,\n> > the configuration is the same as in the primary :\n> > max_wal_size = 2GB\n> > min_wal_size = 1GB\n> > wal_buffers = 16MB\n> > checkpoint_completion_target = 0.9\n> > checkpoint_timeout = 30min\n> >\n> > Regarding your question, I didnt see this message (consistent recovery\n> > state reached at), I guess thats why the secondary isnt avaialble yet..\n> >\n> > Maybe I'm wrong, but what I understood from the documentation- restart\n> > point is generated only after the secondary had a checkpoint wihch means\n> > only after 30 minutes or after max_wal_size is reached ? But still, why\n> > wont the secondary reach a consisteny recovery state (does it requires a\n> > restart point to be generated ? )\n> >\n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>>:\n> >\n> > Hi Mariel,\n> >\n> > if i m not wrong, on the secondary you will see the messages you\n> > mentioned when a checkpoint happens.\n> >\n> > What are checkpoint_timeout and max_wal_size on your standby?\n> >\n> > Did you ever see this on your standby log?\n> >\n> > \"consistent recovery state reached at ..\"\n> >\n> >\n> > Maybe you can post your whole configuration of your standby for\n> easier\n> > debug.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> >\n> >\n> >\n> > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > Hey,\n> > > PG 9.6, I have a standalone configured. I tried to start up a\n> > secondary,\n> > > run standby clone (repmgr). 
The clone process took 3 hours and\n> during\n> > > that time wals were generated(mostly because of the\n> checkpoint_timeout).\n> > > As a result of that, when I start the secondary ,I see that the\n> > > secondary keeps getting the wals but I dont see any messages that\n> > > indicate that the secondary tried to replay the wals.\n> > > messages that i see :\n> > > receiving incremental file list\n> > > 000000010000377B000000DE\n> > >\n> > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > total size is 4.15M speedup is 1.01\n> > > 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> > > \"000000010000377B000000DE\" from archive\n> > > 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is\n> starting up\n> > > 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> > > starting up\n> > >\n> > > I was hoping to see the following messages (taken from a different\n> > > machine) :\n> > > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> > > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete: wrote\n> 406\n> > > buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0\n> recycled;\n> > > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > > s=128, longest=0.000 s, average=0.000 s; distance=2005 kB,\n> estimate=2699 kB\n> > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at\n> 4/D096C4F8\n> > >\n> > > My primary settings(wals settings) :\n> > > wal_buffers = 16MB\n> > > checkpoint_completion_target = 0.9\n> > > checkpoint_timeout = 30min\n> > >\n> > > Any idea what can explain why the secondary doesnt replay the wals\n> ?\n> >\n> >\n>",
"msg_date": "Mon, 27 May 2019 13:17:49 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: improve wals replay on secondary"
},
{
"msg_contents": "If you did not even see this messages on your standby logs:\n\nrestartpoint starting: xlog\n\nthen it means that the checkpoint was even never started.\n\nIn that case, I have no clue.\n\nTry to describe step by step how to reproduce the problem together with\nyour setup and the version number of Postgres and repmgr, and i might be\nable to help you further.\n\nregards,\n\nfabio pardi\n\nOn 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> standby_mode = 'on'\n> primary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '\n> recovery_target_timeline = 'latest'\n> primary_slot_name = repmgr_slot_1\n> restore_command = 'rsync -avzhe ssh\n> [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> gunzip < /var/lib/pgsql/archive/%f > %p'\n> archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> /var/lib/pgsql/archive %r'\n> \n> בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> let s keep the list in cc...\n> \n> settings look ok.\n> \n> what's in the recovery.conf file then?\n> \n> regards,\n> \n> fabio pardi\n> \n> On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > Hey,\n> > the configuration is the same as in the primary : \n> > max_wal_size = 2GB\n> > min_wal_size = 1GB\n> > wal_buffers = 16MB\n> > checkpoint_completion_target = 0.9\n> > checkpoint_timeout = 30min\n> >\n> > Regarding your question, I didnt see this message (consistent recovery\n> > state reached at), I guess thats why the secondary isnt avaialble\n> yet..\n> >\n> > Maybe I'm wrong, but what I understood from the documentation- restart\n> > point is generated only after the secondary had a checkpoint wihch\n> means\n> > only after 30 minutes or after max_wal_size is reached ? But\n> still, why\n> > wont the secondary reach a consisteny recovery state (does it\n> requires a\n> > restart point to be generated ? )\n> >\n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > Hi Mariel,\n> >\n> > if i m not wrong, on the secondary you will see the messages you\n> > mentioned when a checkpoint happens.\n> >\n> > What are checkpoint_timeout and max_wal_size on your standby?\n> >\n> > Did you ever see this on your standby log?\n> >\n> > \"consistent recovery state reached at ..\"\n> >\n> >\n> > Maybe you can post your whole configuration of your standby\n> for easier\n> > debug.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> >\n> >\n> >\n> > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > Hey,\n> > > PG 9.6, I have a standalone configured. I tried to start up a\n> > secondary,\n> > > run standby clone (repmgr). The clone process took 3 hours\n> and during\n> > > that time wals were generated(mostly because of the\n> > checkpoint_timeout).\n> > > As a result of that, when I start the secondary ,I see that the\n> > > secondary keeps getting the wals but I dont see any messages\n> that\n> > > indicate that the secondary tried to replay the wals. 
\n> > > messages that i see :\n> > > receiving incremental file list\n> > > 000000010000377B000000DE\n> > >\n> > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > total size is 4.15M speedup is 1.01\n> > > 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> > > \"000000010000377B000000DE\" from archive\n> > > 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is\n> > starting up\n> > > 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> > > starting up \n> > >\n> > > I was hoping to see the following messages (taken from a\n> different\n> > > machine) : \n> > > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> > > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete:\n> wrote 406\n> > > buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0\n> > recycled;\n> > > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > > s=128, longest=0.000 s, average=0.000 s; distance=2005 kB,\n> > estimate=2699 kB\n> > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at\n> > 4/D096C4F8\n> > >\n> > > My primary settings(wals settings) : \n> > > wal_buffers = 16MB\n> > > checkpoint_completion_target = 0.9\n> > > checkpoint_timeout = 30min\n> > >\n> > > Any idea what can explain why the secondary doesnt replay\n> the wals ?\n> >\n> >\n> \n\n\n",
"msg_date": "Mon, 27 May 2019 13:04:51 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
{
"msg_contents": "Hi Mariel,\n\nplease keep the list posted. When you reply, use 'reply all'. That will maybe help others in the community and you might also get more help from others.\n\nanswers are in line here below\n\n\n\nOn 28/05/2019 10:54, Mariel Cherkassky wrote:\n> I have pg 9.6, repmgr version 4.3 .\n> I see in the logs that wal files are restored : \n> 2019-05-22 12:35:12 EEST 60942 LOG: restored log file \"000000010000377B000000DB\" from archive\n> that means that the restore_command worked right ? \n> \n\nright\n\n> According to the docs :\n> \"In standby mode, a restartpoint is also triggered if checkpoint_segments log segments have been replayed since last restartpoint and at least one checkpoint record has been replayed. Restartpoints can't be performed more frequently than checkpoints in the master because restartpoints can only be performed at checkpoint records\"\n> so maybe I should decrease max_wal_size or even checkpoint_timeout to force a restartpoint ? \n> During this gap (standby clone) 6-7 wals were generated on the primary\n> \n\n\n From what you posted earlier, you should in any case have hit a checkpoint every 30 minutes. (That was also the assumption in the previous messages. If that's not happening, then i would really investigate.)\n\nThat is, if during your cloning only a few WAL files were generated, then it is not enough to trigger a checkpoint and you fallback to the 30 minutes.\n\nI would not be bothered if i were you, but can always force a checkpoint on the master issuing:\n\nCHECKPOINT ;\n\nat that stage, on the standby logs you will see the messages:\n\nrestartpoint starting: ..\n\nrestartpoint complete: .. \n\n\n\nregards,\n\nfabio pardi\n\n> \n> בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi <[email protected] <mailto:[email protected]>>:\n> \n> If you did not even see this messages on your standby logs:\n> \n> restartpoint starting: xlog\n> \n> then it means that the checkpoint was even never started.\n> \n> In that case, I have no clue.\n> \n> Try to describe step by step how to reproduce the problem together with\n> your setup and the version number of Postgres and repmgr, and i might be\n> able to help you further.\n> \n> regards,\n> \n> fabio pardi\n> \n> On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > standby_mode = 'on'\n> > primary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '\n> > recovery_target_timeline = 'latest'\n> > primary_slot_name = repmgr_slot_1\n> > restore_command = 'rsync -avzhe ssh\n> > [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > /var/lib/pgsql/archive %r'\n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > Hi Mariel,\n> >\n> > let s keep the list in cc...\n> >\n> > settings look ok.\n> >\n> > what's in the recovery.conf file then?\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > Hey,\n> > > the configuration is the same as in the primary : \n> > > max_wal_size = 2GB\n> > > min_wal_size = 1GB\n> > > wal_buffers = 16MB\n> > > checkpoint_completion_target = 0.9\n> > > checkpoint_timeout = 30min\n> > >\n> > > Regarding your question, I didnt see this message (consistent recovery\n> > > state reached at), I guess thats why the secondary isnt avaialble\n> > yet..\n> > >\n> > > Maybe I'm 
wrong, but what I understood from the documentation- restart\n> > > point is generated only after the secondary had a checkpoint wihch\n> > means\n> > > only after 30 minutes or after max_wal_size is reached ? But\n> > still, why\n> > > wont the secondary reach a consisteny recovery state (does it\n> > requires a\n> > > restart point to be generated ? )\n> > >\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > if i m not wrong, on the secondary you will see the messages you\n> > > mentioned when a checkpoint happens.\n> > >\n> > > What are checkpoint_timeout and max_wal_size on your standby?\n> > >\n> > > Did you ever see this on your standby log?\n> > >\n> > > \"consistent recovery state reached at ..\"\n> > >\n> > >\n> > > Maybe you can post your whole configuration of your standby\n> > for easier\n> > > debug.\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > >\n> > >\n> > >\n> > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > PG 9.6, I have a standalone configured. I tried to start up a\n> > > secondary,\n> > > > run standby clone (repmgr). The clone process took 3 hours\n> > and during\n> > > > that time wals were generated(mostly because of the\n> > > checkpoint_timeout).\n> > > > As a result of that, when I start the secondary ,I see that the\n> > > > secondary keeps getting the wals but I dont see any messages\n> > that\n> > > > indicate that the secondary tried to replay the wals. \n> > > > messages that i see :\n> > > > receiving incremental file list\n> > > > 000000010000377B000000DE\n> > > >\n> > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > total size is 4.15M speedup is 1.01\n> > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> > > > \"000000010000377B000000DE\" from archive\n> > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is\n> > > starting up\n> > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> > > > starting up \n> > > >\n> > > > I was hoping to see the following messages (taken from a\n> > different\n> > > > machine) : \n> > > > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> > > > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete:\n> > wrote 406\n> > > > buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0\n> > > recycled;\n> > > > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > > > s=128, longest=0.000 s, average=0.000 s; distance=2005 kB,\n> > > estimate=2699 kB\n> > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at\n> > > 4/D096C4F8\n> > > >\n> > > > My primary settings(wals settings) : \n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Any idea what can explain why the secondary doesnt replay\n> > the wals ?\n> > >\n> > >\n> >\n> \n\n\n",
"msg_date": "Tue, 28 May 2019 12:54:51 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
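A minimal sketch of how the advice above can be checked, assuming a PostgreSQL 9.6 primary/standby pair and psql access to both (host names and paths are not from the thread):

-- on the primary: write a checkpoint record; once the standby has replayed it,
-- the standby becomes eligible to perform a restartpoint
CHECKPOINT;

-- on the standby: confirm it is in recovery and watch the replay position advance
SELECT pg_is_in_recovery();
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location(),
       pg_last_xact_replay_timestamp();

-- running CHECKPOINT on the standby itself forces a restartpoint immediately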
{
"msg_contents": "First of all thanks Fabio.\nI think that I'm missing something :\nIn the next questionI'm not talking about streaming replication,rather on\nrecovery :\n\n1.When the secondary get the wals from the primary it tries to replay them\ncorrect ?\n\n2. By replaying it just go over the wal records and run them in the\nsecondary ?\n\n3.All those changes are saved in the shared_buffer(secondary) or the\nchanged are immediately done on the data files blocks ?\n\n4.The secondary will need a checkpoint in order to flush those changes to\nthe data files and in order to reach a restart point ?\n\nSo, basically If I had a checkpoint during the clone, the secondary should\nalso have a checkpoint when I recover the secondary right ?\n\n\nבתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi <\[email protected]>:\n\n> Hi Mariel,\n>\n> please keep the list posted. When you reply, use 'reply all'. That will\n> maybe help others in the community and you might also get more help from\n> others.\n>\n> answers are in line here below\n>\n>\n>\n> On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > I have pg 9.6, repmgr version 4.3 .\n> > I see in the logs that wal files are restored :\n> > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DB\" from archive\n> > that means that the restore_command worked right ?\n> >\n>\n> right\n>\n> > According to the docs :\n> > \"In standby mode, a restartpoint is also triggered\n> if checkpoint_segments log segments have been replayed since last\n> restartpoint and at least one checkpoint record has been replayed.\n> Restartpoints can't be performed more frequently than checkpoints in the\n> master because restartpoints can only be performed at checkpoint records\"\n> > so maybe I should decrease max_wal_size or even checkpoint_timeout to\n> force a restartpoint ?\n> > During this gap (standby clone) 6-7 wals were generated on the primary\n> >\n>\n>\n> From what you posted earlier, you should in any case have hit a checkpoint\n> every 30 minutes. 
(That was also the assumption in the previous messages.\n> If that's not happening, then i would really investigate.)\n>\n> That is, if during your cloning only a few WAL files were generated, then\n> it is not enough to trigger a checkpoint and you fallback to the 30 minutes.\n>\n> I would not be bothered if i were you, but can always force a checkpoint\n> on the master issuing:\n>\n> CHECKPOINT ;\n>\n> at that stage, on the standby logs you will see the messages:\n>\n> restartpoint starting: ..\n>\n> restartpoint complete: ..\n>\n>\n>\n> regards,\n>\n> fabio pardi\n>\n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi <\n> [email protected] <mailto:[email protected]>>:\n> >\n> > If you did not even see this messages on your standby logs:\n> >\n> > restartpoint starting: xlog\n> >\n> > then it means that the checkpoint was even never started.\n> >\n> > In that case, I have no clue.\n> >\n> > Try to describe step by step how to reproduce the problem together\n> with\n> > your setup and the version number of Postgres and repmgr, and i\n> might be\n> > able to help you further.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > standby_mode = 'on'\n> > > primary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '\n> > > recovery_target_timeline = 'latest'\n> > > primary_slot_name = repmgr_slot_1\n> > > restore_command = 'rsync -avzhe ssh\n> > > [email protected]:/var/lib/pgsql/archive/%f\n> /var/lib/pgsql/archive/%f ;\n> > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > /var/lib/pgsql/archive %r'\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]> <mailto:\n> [email protected] <mailto:[email protected]>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > let s keep the list in cc...\n> > >\n> > > settings look ok.\n> > >\n> > > what's in the recovery.conf file then?\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > the configuration is the same as in the primary :\n> > > > max_wal_size = 2GB\n> > > > min_wal_size = 1GB\n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Regarding your question, I didnt see this message\n> (consistent recovery\n> > > > state reached at), I guess thats why the secondary isnt\n> avaialble\n> > > yet..\n> > > >\n> > > > Maybe I'm wrong, but what I understood from the\n> documentation- restart\n> > > > point is generated only after the secondary had a checkpoint\n> wihch\n> > > means\n> > > > only after 30 minutes or after max_wal_size is reached ? But\n> > > still, why\n> > > > wont the secondary reach a consisteny recovery state (does it\n> > > requires a\n> > > > restart point to be generated ? 
)\n> > > >\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> > > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > if i m not wrong, on the secondary you will see the\n> messages you\n> > > > mentioned when a checkpoint happens.\n> > > >\n> > > > What are checkpoint_timeout and max_wal_size on your\n> standby?\n> > > >\n> > > > Did you ever see this on your standby log?\n> > > >\n> > > > \"consistent recovery state reached at ..\"\n> > > >\n> > > >\n> > > > Maybe you can post your whole configuration of your\n> standby\n> > > for easier\n> > > > debug.\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > PG 9.6, I have a standalone configured. I tried to\n> start up a\n> > > > secondary,\n> > > > > run standby clone (repmgr). The clone process took 3\n> hours\n> > > and during\n> > > > > that time wals were generated(mostly because of the\n> > > > checkpoint_timeout).\n> > > > > As a result of that, when I start the secondary ,I see\n> that the\n> > > > > secondary keeps getting the wals but I dont see any\n> messages\n> > > that\n> > > > > indicate that the secondary tried to replay the wals.\n> > > > > messages that i see :\n> > > > > receiving incremental file list\n> > > > > 000000010000377B000000DE\n> > > > >\n> > > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > > total size is 4.15M speedup is 1.01\n> > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored log\n> file\n> > > > > \"000000010000377B000000DE\" from archive\n> > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the database\n> system is\n> > > > starting up\n> > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the database\n> system is\n> > > > > starting up\n> > > > >\n> > > > > I was hoping to see the following messages (taken from\n> a\n> > > different\n> > > > > machine) :\n> > > > > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint\n> starting: time\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint\n> complete:\n> > > wrote 406\n> > > > > buffers (0.2%); 1 transaction log file(s) added, 0\n> removed, 0\n> > > > recycled;\n> > > > > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > > > > s=128, longest=0.000 s, average=0.000 s; distance=2005\n> kB,\n> > > > estimate=2699 kB\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart\n> point at\n> > > > 4/D096C4F8\n> > > > >\n> > > > > My primary settings(wals settings) :\n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Any idea what can explain why the secondary doesnt\n> replay\n> > > the wals ?\n> > > >\n> > > >\n> > >\n> >\n>\n\nFirst of all thanks Fabio.I think that I'm missing something : In the next questionI'm not talking about streaming replication,rather on recovery : 1.When the secondary get the wals from the primary it tries to replay them correct ? 2. 
By replaying it just go over the wal records and run them in the secondary ?3.All those changes are saved in the shared_buffer(secondary) or the changed are immediately done on the data files blocks ?4.The secondary will need a checkpoint in order to flush those changes to the data files and in order to reach a restart point ?So, basically If I had a checkpoint during the clone, the secondary should also have a checkpoint when I recover the secondary right ?בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi <[email protected]>:Hi Mariel,\n\nplease keep the list posted. When you reply, use 'reply all'. That will maybe help others in the community and you might also get more help from others.\n\nanswers are in line here below\n\n\n\nOn 28/05/2019 10:54, Mariel Cherkassky wrote:\n> I have pg 9.6, repmgr version 4.3 .\n> I see in the logs that wal files are restored : \n> 2019-05-22 12:35:12 EEST 60942 LOG: restored log file \"000000010000377B000000DB\" from archive\n> that means that the restore_command worked right ? \n> \n\nright\n\n> According to the docs :\n> \"In standby mode, a restartpoint is also triggered if checkpoint_segments log segments have been replayed since last restartpoint and at least one checkpoint record has been replayed. Restartpoints can't be performed more frequently than checkpoints in the master because restartpoints can only be performed at checkpoint records\"\n> so maybe I should decrease max_wal_size or even checkpoint_timeout to force a restartpoint ? \n> During this gap (standby clone) 6-7 wals were generated on the primary\n> \n\n\n From what you posted earlier, you should in any case have hit a checkpoint every 30 minutes. (That was also the assumption in the previous messages. If that's not happening, then i would really investigate.)\n\nThat is, if during your cloning only a few WAL files were generated, then it is not enough to trigger a checkpoint and you fallback to the 30 minutes.\n\nI would not be bothered if i were you, but can always force a checkpoint on the master issuing:\n\nCHECKPOINT ;\n\nat that stage, on the standby logs you will see the messages:\n\nrestartpoint starting: ..\n\nrestartpoint complete: .. 
\n\n\n\nregards,\n\nfabio pardi\n\n> \n> בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi <[email protected] <mailto:[email protected]>>:\n> \n> If you did not even see this messages on your standby logs:\n> \n> restartpoint starting: xlog\n> \n> then it means that the checkpoint was even never started.\n> \n> In that case, I have no clue.\n> \n> Try to describe step by step how to reproduce the problem together with\n> your setup and the version number of Postgres and repmgr, and i might be\n> able to help you further.\n> \n> regards,\n> \n> fabio pardi\n> \n> On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > standby_mode = 'on'\n> > primary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '\n> > recovery_target_timeline = 'latest'\n> > primary_slot_name = repmgr_slot_1\n> > restore_command = 'rsync -avzhe ssh\n> > [email protected]:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ;\n> > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > /var/lib/pgsql/archive %r'\n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > Hi Mariel,\n> >\n> > let s keep the list in cc...\n> >\n> > settings look ok.\n> >\n> > what's in the recovery.conf file then?\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > Hey,\n> > > the configuration is the same as in the primary : \n> > > max_wal_size = 2GB\n> > > min_wal_size = 1GB\n> > > wal_buffers = 16MB\n> > > checkpoint_completion_target = 0.9\n> > > checkpoint_timeout = 30min\n> > >\n> > > Regarding your question, I didnt see this message (consistent recovery\n> > > state reached at), I guess thats why the secondary isnt avaialble\n> > yet..\n> > >\n> > > Maybe I'm wrong, but what I understood from the documentation- restart\n> > > point is generated only after the secondary had a checkpoint wihch\n> > means\n> > > only after 30 minutes or after max_wal_size is reached ? But\n> > still, why\n> > > wont the secondary reach a consisteny recovery state (does it\n> > requires a\n> > > restart point to be generated ? )\n> > >\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > if i m not wrong, on the secondary you will see the messages you\n> > > mentioned when a checkpoint happens.\n> > >\n> > > What are checkpoint_timeout and max_wal_size on your standby?\n> > >\n> > > Did you ever see this on your standby log?\n> > >\n> > > \"consistent recovery state reached at ..\"\n> > >\n> > >\n> > > Maybe you can post your whole configuration of your standby\n> > for easier\n> > > debug.\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > >\n> > >\n> > >\n> > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > PG 9.6, I have a standalone configured. I tried to start up a\n> > > secondary,\n> > > > run standby clone (repmgr). 
The clone process took 3 hours\n> > and during\n> > > > that time wals were generated(mostly because of the\n> > > checkpoint_timeout).\n> > > > As a result of that, when I start the secondary ,I see that the\n> > > > secondary keeps getting the wals but I dont see any messages\n> > that\n> > > > indicate that the secondary tried to replay the wals. \n> > > > messages that i see :\n> > > > receiving incremental file list\n> > > > 000000010000377B000000DE\n> > > >\n> > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > total size is 4.15M speedup is 1.01\n> > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored log file\n> > > > \"000000010000377B000000DE\" from archive\n> > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the database system is\n> > > starting up\n> > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the database system is\n> > > > starting up \n> > > >\n> > > > I was hoping to see the following messages (taken from a\n> > different\n> > > > machine) : \n> > > > 2019-05-27 01:15:37 EDT 7428 LOG: restartpoint starting: time\n> > > > 2019-05-27 01:16:18 EDT 7428 LOG: restartpoint complete:\n> > wrote 406\n> > > > buffers (0.2%); 1 transaction log file(s) added, 0 removed, 0\n> > > recycled;\n> > > > write=41.390 s, sync=0.001 s, total=41.582 s; sync file\n> > > > s=128, longest=0.000 s, average=0.000 s; distance=2005 kB,\n> > > estimate=2699 kB\n> > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery restart point at\n> > > 4/D096C4F8\n> > > >\n> > > > My primary settings(wals settings) : \n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Any idea what can explain why the secondary doesnt replay\n> > the wals ?\n> > >\n> > >\n> >\n>",
"msg_date": "Wed, 29 May 2019 10:20:17 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: improve wals replay on secondary"
},
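If the aim is to get restartpoints sooner than every 30 minutes, one hedged illustration of standby settings follows; the values are examples only, not the thread's configuration, and restartpoints still cannot occur more often than checkpoints happen on the primary:

# postgresql.conf on the standby -- example values, require a restart/reload
checkpoint_timeout = 5min
max_wal_size = 1GB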
{
"msg_contents": "\n\nOn 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> First of all thanks Fabio.\n> I think that I'm missing something : \n> In the next questionI'm not talking about streaming replication,rather\n> on recovery : \n> \n> 1.When the secondary get the wals from the primary it tries to replay\n> them correct ? \n\n\ncorrect\n\n> \n> 2. By replaying it just go over the wal records and run them in the\n> secondary ?\n> \n\ncorrect\n\n> 3.All those changes are saved in the shared_buffer(secondary) or the\n> changed are immediately done on the data files blocks ?\n> \n\nthe changes are not saved to your datafile yet. That happens at\ncheckpoint time.\n\n> 4.The secondary will need a checkpoint in order to flush those changes\n> to the data files and in order to reach a restart point ?\n> \n\nyes\n\n> So, basically If I had a checkpoint during the clone, the secondary\n> should also have a checkpoint when I recover the secondary right ?\n> \n\ncorrect. Even after being in sync with master, if you restart Postgres\non standby, it will then re-apply the WAL files from the last checkpoint.\n\nIn the logfile of the standby, you will see as many messages reporting\n\"restored log file\" as many WAL files were produced since the last\ncheckpoint\n\nHope it helps to clarify.\n\nregards,\n\nfabio pardi\n> \n> בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> please keep the list posted. When you reply, use 'reply all'. That\n> will maybe help others in the community and you might also get more\n> help from others.\n> \n> answers are in line here below\n> \n> \n> \n> On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > I have pg 9.6, repmgr version 4.3 .\n> > I see in the logs that wal files are restored : \n> > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DB\" from archive\n> > that means that the restore_command worked right ? \n> >\n> \n> right\n> \n> > According to the docs :\n> > \"In standby mode, a restartpoint is also triggered\n> if checkpoint_segments log segments have been replayed since last\n> restartpoint and at least one checkpoint record has been replayed.\n> Restartpoints can't be performed more frequently than checkpoints in\n> the master because restartpoints can only be performed at checkpoint\n> records\"\n> > so maybe I should decrease max_wal_size or even checkpoint_timeout\n> to force a restartpoint ? \n> > During this gap (standby clone) 6-7 wals were generated on the primary\n> >\n> \n> \n> From what you posted earlier, you should in any case have hit a\n> checkpoint every 30 minutes. (That was also the assumption in the\n> previous messages. 
If that's not happening, then i would really\n> investigate.)\n> \n> That is, if during your cloning only a few WAL files were generated,\n> then it is not enough to trigger a checkpoint and you fallback to\n> the 30 minutes.\n> \n> I would not be bothered if i were you, but can always force a\n> checkpoint on the master issuing:\n> \n> CHECKPOINT ;\n> \n> at that stage, on the standby logs you will see the messages:\n> \n> restartpoint starting: ..\n> \n> restartpoint complete: ..\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > If you did not even see this messages on your standby logs:\n> >\n> > restartpoint starting: xlog\n> >\n> > then it means that the checkpoint was even never started.\n> >\n> > In that case, I have no clue.\n> >\n> > Try to describe step by step how to reproduce the problem\n> together with\n> > your setup and the version number of Postgres and repmgr, and\n> i might be\n> > able to help you further.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > standby_mode = 'on'\n> > > primary_conninfo = 'host=X.X.X.X user=repmgr \n> connect_timeout=10 '\n> > > recovery_target_timeline = 'latest'\n> > > primary_slot_name = repmgr_slot_1\n> > > restore_command = 'rsync -avzhe ssh\n> > > [email protected]:/var/lib/pgsql/archive/%f\n> /var/lib/pgsql/archive/%f ;\n> > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > /var/lib/pgsql/archive %r'\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > let s keep the list in cc...\n> > >\n> > > settings look ok.\n> > >\n> > > what's in the recovery.conf file then?\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > the configuration is the same as in the primary : \n> > > > max_wal_size = 2GB\n> > > > min_wal_size = 1GB\n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Regarding your question, I didnt see this message\n> (consistent recovery\n> > > > state reached at), I guess thats why the secondary\n> isnt avaialble\n> > > yet..\n> > > >\n> > > > Maybe I'm wrong, but what I understood from the\n> documentation- restart\n> > > > point is generated only after the secondary had a\n> checkpoint wihch\n> > > means\n> > > > only after 30 minutes or after max_wal_size is reached\n> ? But\n> > > still, why\n> > > > wont the secondary reach a consisteny recovery state\n> (does it\n> > > requires a\n> > > > restart point to be generated ? 
)\n> > > >\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > > <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>> <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > if i m not wrong, on the secondary you will see\n> the messages you\n> > > > mentioned when a checkpoint happens.\n> > > >\n> > > > What are checkpoint_timeout and max_wal_size on\n> your standby?\n> > > >\n> > > > Did you ever see this on your standby log?\n> > > >\n> > > > \"consistent recovery state reached at ..\"\n> > > >\n> > > >\n> > > > Maybe you can post your whole configuration of\n> your standby\n> > > for easier\n> > > > debug.\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > PG 9.6, I have a standalone configured. I tried\n> to start up a\n> > > > secondary,\n> > > > > run standby clone (repmgr). The clone process\n> took 3 hours\n> > > and during\n> > > > > that time wals were generated(mostly because of the\n> > > > checkpoint_timeout).\n> > > > > As a result of that, when I start the secondary\n> ,I see that the\n> > > > > secondary keeps getting the wals but I dont see\n> any messages\n> > > that\n> > > > > indicate that the secondary tried to replay the\n> wals. \n> > > > > messages that i see :\n> > > > > receiving incremental file list\n> > > > > 000000010000377B000000DE\n> > > > >\n> > > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > > total size is 4.15M speedup is 1.01\n> > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n> log file\n> > > > > \"000000010000377B000000DE\" from archive\n> > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> database system is\n> > > > starting up\n> > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> database system is\n> > > > > starting up \n> > > > >\n> > > > > I was hoping to see the following messages\n> (taken from a\n> > > different\n> > > > > machine) : \n> > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> restartpoint starting: time\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> restartpoint complete:\n> > > wrote 406\n> > > > > buffers (0.2%); 1 transaction log file(s) added,\n> 0 removed, 0\n> > > > recycled;\n> > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> sync file\n> > > > > s=128, longest=0.000 s, average=0.000 s;\n> distance=2005 kB,\n> > > > estimate=2699 kB\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> restart point at\n> > > > 4/D096C4F8\n> > > > >\n> > > > > My primary settings(wals settings) : \n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Any idea what can explain why the secondary\n> doesnt replay\n> > > the wals ?\n> > > >\n> > > >\n> > >\n> >\n> \n\n\n",
"msg_date": "Wed, 29 May 2019 10:20:11 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
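A rough way to see when the standby last completed a restartpoint is to read its control file, since restartpoints update the "latest checkpoint" fields there. This assumes the RPM layout used elsewhere in the thread; the data directory path is a guess:

/usr/pgsql-9.6/bin/pg_controldata /var/lib/pgsql/9.6/data | grep -iE 'checkpoint|redo'

The "Latest checkpoint location", "Latest checkpoint's REDO location" and "Time of latest checkpoint" lines should move forward after each restartpoint.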
{
"msg_contents": "Is there any messages that indicates that the secondary replayed a specific\nwal ? \"restored 00000...\" means that the restore_command succeeded but\nthere isnt any proof that it replayed the wal.\n\nMy theory regarding the issue :\n It seems, that my customer stopped the db 20 minutes after the clone have\nfinished. During those 20 minutes the secondary didnt get enough wal\nrecords (6 wal files) so it didnt reach the max_wal_size. My\ncheckpoint_timeout is set to 30minutes, therefore there wasnt any\ncheckpoint. As a result of that the secondary didnt reach a restart point.\nDoes that sounds reasonable ?\n\nSo basically, if I clone a small primary db, the secondary would reach a\nrestart point only if it reached a checkpoint (checkpoint_timeout or\nmax_wal_size). However, I have cloned many small dbs and saw the it takes a\nsec to start the secondary (which means that restartpoint was reached). So\nwhat am I missing ?\n\nבתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <\[email protected]>:\n\n>\n>\n> On 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> > First of all thanks Fabio.\n> > I think that I'm missing something :\n> > In the next questionI'm not talking about streaming replication,rather\n> > on recovery :\n> >\n> > 1.When the secondary get the wals from the primary it tries to replay\n> > them correct ?\n>\n>\n> correct\n>\n> >\n> > 2. By replaying it just go over the wal records and run them in the\n> > secondary ?\n> >\n>\n> correct\n>\n> > 3.All those changes are saved in the shared_buffer(secondary) or the\n> > changed are immediately done on the data files blocks ?\n> >\n>\n> the changes are not saved to your datafile yet. That happens at\n> checkpoint time.\n>\n> > 4.The secondary will need a checkpoint in order to flush those changes\n> > to the data files and in order to reach a restart point ?\n> >\n>\n> yes\n>\n> > So, basically If I had a checkpoint during the clone, the secondary\n> > should also have a checkpoint when I recover the secondary right ?\n> >\n>\n> correct. Even after being in sync with master, if you restart Postgres\n> on standby, it will then re-apply the WAL files from the last checkpoint.\n>\n> In the logfile of the standby, you will see as many messages reporting\n> \"restored log file\" as many WAL files were produced since the last\n> checkpoint\n>\n> Hope it helps to clarify.\n>\n> regards,\n>\n> fabio pardi\n> >\n> > בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>>:\n> >\n> > Hi Mariel,\n> >\n> > please keep the list posted. When you reply, use 'reply all'. 
That\n> > will maybe help others in the community and you might also get more\n> > help from others.\n> >\n> > answers are in line here below\n> >\n> >\n> >\n> > On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > > I have pg 9.6, repmgr version 4.3 .\n> > > I see in the logs that wal files are restored :\n> > > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> > \"000000010000377B000000DB\" from archive\n> > > that means that the restore_command worked right ?\n> > >\n> >\n> > right\n> >\n> > > According to the docs :\n> > > \"In standby mode, a restartpoint is also triggered\n> > if checkpoint_segments log segments have been replayed since last\n> > restartpoint and at least one checkpoint record has been replayed.\n> > Restartpoints can't be performed more frequently than checkpoints in\n> > the master because restartpoints can only be performed at checkpoint\n> > records\"\n> > > so maybe I should decrease max_wal_size or even checkpoint_timeout\n> > to force a restartpoint ?\n> > > During this gap (standby clone) 6-7 wals were generated on the\n> primary\n> > >\n> >\n> >\n> > From what you posted earlier, you should in any case have hit a\n> > checkpoint every 30 minutes. (That was also the assumption in the\n> > previous messages. If that's not happening, then i would really\n> > investigate.)\n> >\n> > That is, if during your cloning only a few WAL files were generated,\n> > then it is not enough to trigger a checkpoint and you fallback to\n> > the 30 minutes.\n> >\n> > I would not be bothered if i were you, but can always force a\n> > checkpoint on the master issuing:\n> >\n> > CHECKPOINT ;\n> >\n> > at that stage, on the standby logs you will see the messages:\n> >\n> > restartpoint starting: ..\n> >\n> > restartpoint complete: ..\n> >\n> >\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>>:\n> > >\n> > > If you did not even see this messages on your standby logs:\n> > >\n> > > restartpoint starting: xlog\n> > >\n> > > then it means that the checkpoint was even never started.\n> > >\n> > > In that case, I have no clue.\n> > >\n> > > Try to describe step by step how to reproduce the problem\n> > together with\n> > > your setup and the version number of Postgres and repmgr, and\n> > i might be\n> > > able to help you further.\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > > standby_mode = 'on'\n> > > > primary_conninfo = 'host=X.X.X.X user=repmgr\n> > connect_timeout=10 '\n> > > > recovery_target_timeline = 'latest'\n> > > > primary_slot_name = repmgr_slot_1\n> > > > restore_command = 'rsync -avzhe ssh\n> > > > [email protected]:/var/lib/pgsql/archive/%f\n> > /var/lib/pgsql/archive/%f ;\n> > > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > > archive_cleanup_command =\n> '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > > /var/lib/pgsql/archive %r'\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > let s keep the list in cc...\n> > > >\n> > > > settings look ok.\n> > > >\n> > > > what's in the recovery.conf file then?\n> > > >\n> > > > regards,\n> > 
> >\n> > > > fabio pardi\n> > > >\n> > > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > the configuration is the same as in the primary :\n> > > > > max_wal_size = 2GB\n> > > > > min_wal_size = 1GB\n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Regarding your question, I didnt see this message\n> > (consistent recovery\n> > > > > state reached at), I guess thats why the secondary\n> > isnt avaialble\n> > > > yet..\n> > > > >\n> > > > > Maybe I'm wrong, but what I understood from the\n> > documentation- restart\n> > > > > point is generated only after the secondary had a\n> > checkpoint wihch\n> > > > means\n> > > > > only after 30 minutes or after max_wal_size is reached\n> > ? But\n> > > > still, why\n> > > > > wont the secondary reach a consisteny recovery state\n> > (does it\n> > > > requires a\n> > > > > restart point to be generated ? )\n> > > > >\n> > > > >\n> > > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > > <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>>\n> > > > <mailto:[email protected]\n> > <mailto:[email protected]> <mailto:[email protected]\n> > <mailto:[email protected]>> <mailto:[email protected]\n> > <mailto:[email protected]> <mailto:[email protected]\n> > <mailto:[email protected]>>>>>:\n> > > > >\n> > > > > Hi Mariel,\n> > > > >\n> > > > > if i m not wrong, on the secondary you will see\n> > the messages you\n> > > > > mentioned when a checkpoint happens.\n> > > > >\n> > > > > What are checkpoint_timeout and max_wal_size on\n> > your standby?\n> > > > >\n> > > > > Did you ever see this on your standby log?\n> > > > >\n> > > > > \"consistent recovery state reached at ..\"\n> > > > >\n> > > > >\n> > > > > Maybe you can post your whole configuration of\n> > your standby\n> > > > for easier\n> > > > > debug.\n> > > > >\n> > > > > regards,\n> > > > >\n> > > > > fabio pardi\n> > > > >\n> > > > >\n> > > > >\n> > > > >\n> > > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > > Hey,\n> > > > > > PG 9.6, I have a standalone configured. I tried\n> > to start up a\n> > > > > secondary,\n> > > > > > run standby clone (repmgr). 
The clone process\n> > took 3 hours\n> > > > and during\n> > > > > > that time wals were generated(mostly because of\n> the\n> > > > > checkpoint_timeout).\n> > > > > > As a result of that, when I start the secondary\n> > ,I see that the\n> > > > > > secondary keeps getting the wals but I dont see\n> > any messages\n> > > > that\n> > > > > > indicate that the secondary tried to replay the\n> > wals.\n> > > > > > messages that i see :\n> > > > > > receiving incremental file list\n> > > > > > 000000010000377B000000DE\n> > > > > >\n> > > > > > sent 30 bytes received 4.11M bytes 8.22M\n> bytes/sec\n> > > > > > total size is 4.15M speedup is 1.01\n> > > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n> > log file\n> > > > > > \"000000010000377B000000DE\" from archive\n> > > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> > database system is\n> > > > > starting up\n> > > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> > database system is\n> > > > > > starting up\n> > > > > >\n> > > > > > I was hoping to see the following messages\n> > (taken from a\n> > > > different\n> > > > > > machine) :\n> > > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> > restartpoint starting: time\n> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> > restartpoint complete:\n> > > > wrote 406\n> > > > > > buffers (0.2%); 1 transaction log file(s) added,\n> > 0 removed, 0\n> > > > > recycled;\n> > > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> > sync file\n> > > > > > s=128, longest=0.000 s, average=0.000 s;\n> > distance=2005 kB,\n> > > > > estimate=2699 kB\n> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> > restart point at\n> > > > > 4/D096C4F8\n> > > > > >\n> > > > > > My primary settings(wals settings) :\n> > > > > > wal_buffers = 16MB\n> > > > > > checkpoint_completion_target = 0.9\n> > > > > > checkpoint_timeout = 30min\n> > > > > >\n> > > > > > Any idea what can explain why the secondary\n> > doesnt replay\n> > > > the wals ?\n> > > > >\n> > > > >\n> > > >\n> > >\n> >\n>\n\nIs there any messages that indicates that the secondary replayed a specific wal ? \"restored 00000...\" means that the restore_command succeeded but there isnt any proof that it replayed the wal.My theory regarding the issue : It seems, that my customer stopped the db 20 minutes after the clone have finished. During those 20 minutes the secondary didnt get enough wal records (6 wal files) so it didnt reach the max_wal_size. My checkpoint_timeout is set to 30minutes, therefore there wasnt any checkpoint. As a result of that the secondary didnt reach a restart point. Does that sounds reasonable ?So basically, if I clone a small primary db, the secondary would reach a restart point only if it reached a checkpoint (checkpoint_timeout or max_wal_size). However, I have cloned many small dbs and saw the it takes a sec to start the secondary (which means that restartpoint was reached). So what am I missing ?בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <[email protected]>:\n\nOn 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> First of all thanks Fabio.\n> I think that I'm missing something : \n> In the next questionI'm not talking about streaming replication,rather\n> on recovery : \n> \n> 1.When the secondary get the wals from the primary it tries to replay\n> them correct ? \n\n\ncorrect\n\n> \n> 2. 
By replaying it just go over the wal records and run them in the\n> secondary ?\n> \n\ncorrect\n\n> 3.All those changes are saved in the shared_buffer(secondary) or the\n> changed are immediately done on the data files blocks ?\n> \n\nthe changes are not saved to your datafile yet. That happens at\ncheckpoint time.\n\n> 4.The secondary will need a checkpoint in order to flush those changes\n> to the data files and in order to reach a restart point ?\n> \n\nyes\n\n> So, basically If I had a checkpoint during the clone, the secondary\n> should also have a checkpoint when I recover the secondary right ?\n> \n\ncorrect. Even after being in sync with master, if you restart Postgres\non standby, it will then re-apply the WAL files from the last checkpoint.\n\nIn the logfile of the standby, you will see as many messages reporting\n\"restored log file\" as many WAL files were produced since the last\ncheckpoint\n\nHope it helps to clarify.\n\nregards,\n\nfabio pardi\n> \n> בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> please keep the list posted. When you reply, use 'reply all'. That\n> will maybe help others in the community and you might also get more\n> help from others.\n> \n> answers are in line here below\n> \n> \n> \n> On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > I have pg 9.6, repmgr version 4.3 .\n> > I see in the logs that wal files are restored : \n> > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DB\" from archive\n> > that means that the restore_command worked right ? \n> >\n> \n> right\n> \n> > According to the docs :\n> > \"In standby mode, a restartpoint is also triggered\n> if checkpoint_segments log segments have been replayed since last\n> restartpoint and at least one checkpoint record has been replayed.\n> Restartpoints can't be performed more frequently than checkpoints in\n> the master because restartpoints can only be performed at checkpoint\n> records\"\n> > so maybe I should decrease max_wal_size or even checkpoint_timeout\n> to force a restartpoint ? \n> > During this gap (standby clone) 6-7 wals were generated on the primary\n> >\n> \n> \n> From what you posted earlier, you should in any case have hit a\n> checkpoint every 30 minutes. (That was also the assumption in the\n> previous messages. 
If that's not happening, then i would really\n> investigate.)\n> \n> That is, if during your cloning only a few WAL files were generated,\n> then it is not enough to trigger a checkpoint and you fallback to\n> the 30 minutes.\n> \n> I would not be bothered if i were you, but can always force a\n> checkpoint on the master issuing:\n> \n> CHECKPOINT ;\n> \n> at that stage, on the standby logs you will see the messages:\n> \n> restartpoint starting: ..\n> \n> restartpoint complete: ..\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > If you did not even see this messages on your standby logs:\n> >\n> > restartpoint starting: xlog\n> >\n> > then it means that the checkpoint was even never started.\n> >\n> > In that case, I have no clue.\n> >\n> > Try to describe step by step how to reproduce the problem\n> together with\n> > your setup and the version number of Postgres and repmgr, and\n> i might be\n> > able to help you further.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > standby_mode = 'on'\n> > > primary_conninfo = 'host=X.X.X.X user=repmgr \n> connect_timeout=10 '\n> > > recovery_target_timeline = 'latest'\n> > > primary_slot_name = repmgr_slot_1\n> > > restore_command = 'rsync -avzhe ssh\n> > > [email protected]:/var/lib/pgsql/archive/%f\n> /var/lib/pgsql/archive/%f ;\n> > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > /var/lib/pgsql/archive %r'\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > let s keep the list in cc...\n> > >\n> > > settings look ok.\n> > >\n> > > what's in the recovery.conf file then?\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > the configuration is the same as in the primary : \n> > > > max_wal_size = 2GB\n> > > > min_wal_size = 1GB\n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Regarding your question, I didnt see this message\n> (consistent recovery\n> > > > state reached at), I guess thats why the secondary\n> isnt avaialble\n> > > yet..\n> > > >\n> > > > Maybe I'm wrong, but what I understood from the\n> documentation- restart\n> > > > point is generated only after the secondary had a\n> checkpoint wihch\n> > > means\n> > > > only after 30 minutes or after max_wal_size is reached\n> ? But\n> > > still, why\n> > > > wont the secondary reach a consisteny recovery state\n> (does it\n> > > requires a\n> > > > restart point to be generated ? 
)\n> > > >\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > > <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>> <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > if i m not wrong, on the secondary you will see\n> the messages you\n> > > > mentioned when a checkpoint happens.\n> > > >\n> > > > What are checkpoint_timeout and max_wal_size on\n> your standby?\n> > > >\n> > > > Did you ever see this on your standby log?\n> > > >\n> > > > \"consistent recovery state reached at ..\"\n> > > >\n> > > >\n> > > > Maybe you can post your whole configuration of\n> your standby\n> > > for easier\n> > > > debug.\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > PG 9.6, I have a standalone configured. I tried\n> to start up a\n> > > > secondary,\n> > > > > run standby clone (repmgr). The clone process\n> took 3 hours\n> > > and during\n> > > > > that time wals were generated(mostly because of the\n> > > > checkpoint_timeout).\n> > > > > As a result of that, when I start the secondary\n> ,I see that the\n> > > > > secondary keeps getting the wals but I dont see\n> any messages\n> > > that\n> > > > > indicate that the secondary tried to replay the\n> wals. \n> > > > > messages that i see :\n> > > > > receiving incremental file list\n> > > > > 000000010000377B000000DE\n> > > > >\n> > > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > > total size is 4.15M speedup is 1.01\n> > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n> log file\n> > > > > \"000000010000377B000000DE\" from archive\n> > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> database system is\n> > > > starting up\n> > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> database system is\n> > > > > starting up \n> > > > >\n> > > > > I was hoping to see the following messages\n> (taken from a\n> > > different\n> > > > > machine) : \n> > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> restartpoint starting: time\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> restartpoint complete:\n> > > wrote 406\n> > > > > buffers (0.2%); 1 transaction log file(s) added,\n> 0 removed, 0\n> > > > recycled;\n> > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> sync file\n> > > > > s=128, longest=0.000 s, average=0.000 s;\n> distance=2005 kB,\n> > > > estimate=2699 kB\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> restart point at\n> > > > 4/D096C4F8\n> > > > >\n> > > > > My primary settings(wals settings) : \n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Any idea what can explain why the secondary\n> doesnt replay\n> > > the wals ?\n> > > >\n> > > >\n> > >\n> >\n>",
"msg_date": "Wed, 29 May 2019 11:39:33 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: improve wals replay on secondary"
},
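For explicit evidence of how far the standby has replayed (rather than only the "restored log file" messages), a 9.6 sketch follows. The standby-side query works during archive recovery; the primary-side query assumes streaming replication is established:

-- on the standby
SELECT pg_last_xlog_replay_location(), pg_last_xact_replay_timestamp();

-- on the primary, once the standby is streaming
SELECT application_name, state, sent_location, replay_location,
       pg_xlogfile_name(replay_location) AS replaying_segment
FROM pg_stat_replication;

If replay_location keeps up with sent_location, the standby is applying WAL as fast as it receives it.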
{
"msg_contents": "Hello Mariel,\n\n1) Have you tried to run “CHECKPOINT;” on the standby to perform\nrestartpoint explicitly? It is possible.\n\n2) If we talk about streaming replication, do you use replication slots? If\nso, have you checked pg_replication_slots and pg_stat_replication? They can\nhelp to troubleshoot streaming replication delays — see columns\nsent_location, write_location, flush_location, and replay_location in\npg_stat_replication and restart_lsn in pg_replication_slots. If you have\ndelay in replaying, it should be seen there.\n\nOn Wed, May 29, 2019 at 11:39 Mariel Cherkassky <[email protected]>\nwrote:\n\n> Is there any messages that indicates that the secondary replayed a\n> specific wal ? \"restored 00000...\" means that the restore_command succeeded\n> but there isnt any proof that it replayed the wal.\n>\n> My theory regarding the issue :\n> It seems, that my customer stopped the db 20 minutes after the clone have\n> finished. During those 20 minutes the secondary didnt get enough wal\n> records (6 wal files) so it didnt reach the max_wal_size. My\n> checkpoint_timeout is set to 30minutes, therefore there wasnt any\n> checkpoint. As a result of that the secondary didnt reach a restart point.\n> Does that sounds reasonable ?\n>\n> So basically, if I clone a small primary db, the secondary would reach a\n> restart point only if it reached a checkpoint (checkpoint_timeout or\n> max_wal_size). However, I have cloned many small dbs and saw the it takes a\n> sec to start the secondary (which means that restartpoint was reached). So\n> what am I missing ?\n>\n> בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <\n> [email protected]>:\n>\n>>\n>>\n>> On 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n>> > First of all thanks Fabio.\n>> > I think that I'm missing something :\n>> > In the next questionI'm not talking about streaming replication,rather\n>> > on recovery :\n>> >\n>> > 1.When the secondary get the wals from the primary it tries to replay\n>> > them correct ?\n>>\n>>\n>> correct\n>>\n>> >\n>> > 2. By replaying it just go over the wal records and run them in the\n>> > secondary ?\n>> >\n>>\n>> correct\n>>\n>> > 3.All those changes are saved in the shared_buffer(secondary) or the\n>> > changed are immediately done on the data files blocks ?\n>> >\n>>\n>> the changes are not saved to your datafile yet. That happens at\n>> checkpoint time.\n>>\n>> > 4.The secondary will need a checkpoint in order to flush those changes\n>> > to the data files and in order to reach a restart point ?\n>> >\n>>\n>> yes\n>>\n>> > So, basically If I had a checkpoint during the clone, the secondary\n>> > should also have a checkpoint when I recover the secondary right ?\n>> >\n>>\n>> correct. Even after being in sync with master, if you restart Postgres\n>> on standby, it will then re-apply the WAL files from the last checkpoint.\n>>\n>> In the logfile of the standby, you will see as many messages reporting\n>> \"restored log file\" as many WAL files were produced since the last\n>> checkpoint\n>>\n>> Hope it helps to clarify.\n>>\n>> regards,\n>>\n>> fabio pardi\n>> >\n>> > בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n>> > <[email protected] <mailto:[email protected]>>:\n>> >\n>> > Hi Mariel,\n>> >\n>> > please keep the list posted. When you reply, use 'reply all'. 
That\n>> > will maybe help others in the community and you might also get more\n>> > help from others.\n>> >\n>> > answers are in line here below\n>> >\n>> >\n>> >\n>> > On 28/05/2019 10:54, Mariel Cherkassky wrote:\n>> > > I have pg 9.6, repmgr version 4.3 .\n>> > > I see in the logs that wal files are restored :\n>> > > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n>> > \"000000010000377B000000DB\" from archive\n>> > > that means that the restore_command worked right ?\n>> > >\n>> >\n>> > right\n>> >\n>> > > According to the docs :\n>> > > \"In standby mode, a restartpoint is also triggered\n>> > if checkpoint_segments log segments have been replayed since last\n>> > restartpoint and at least one checkpoint record has been replayed.\n>> > Restartpoints can't be performed more frequently than checkpoints in\n>> > the master because restartpoints can only be performed at checkpoint\n>> > records\"\n>> > > so maybe I should decrease max_wal_size or even checkpoint_timeout\n>> > to force a restartpoint ?\n>> > > During this gap (standby clone) 6-7 wals were generated on the\n>> primary\n>> > >\n>> >\n>> >\n>> > From what you posted earlier, you should in any case have hit a\n>> > checkpoint every 30 minutes. (That was also the assumption in the\n>> > previous messages. If that's not happening, then i would really\n>> > investigate.)\n>> >\n>> > That is, if during your cloning only a few WAL files were generated,\n>> > then it is not enough to trigger a checkpoint and you fallback to\n>> > the 30 minutes.\n>> >\n>> > I would not be bothered if i were you, but can always force a\n>> > checkpoint on the master issuing:\n>> >\n>> > CHECKPOINT ;\n>> >\n>> > at that stage, on the standby logs you will see the messages:\n>> >\n>> > restartpoint starting: ..\n>> >\n>> > restartpoint complete: ..\n>> >\n>> >\n>> >\n>> > regards,\n>> >\n>> > fabio pardi\n>> >\n>> > >\n>> > > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n>> > <[email protected] <mailto:[email protected]>\n>> > <mailto:[email protected] <mailto:[email protected]>>>:\n>> > >\n>> > > If you did not even see this messages on your standby logs:\n>> > >\n>> > > restartpoint starting: xlog\n>> > >\n>> > > then it means that the checkpoint was even never started.\n>> > >\n>> > > In that case, I have no clue.\n>> > >\n>> > > Try to describe step by step how to reproduce the problem\n>> > together with\n>> > > your setup and the version number of Postgres and repmgr, and\n>> > i might be\n>> > > able to help you further.\n>> > >\n>> > > regards,\n>> > >\n>> > > fabio pardi\n>> > >\n>> > > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n>> > > > standby_mode = 'on'\n>> > > > primary_conninfo = 'host=X.X.X.X user=repmgr\n>> > connect_timeout=10 '\n>> > > > recovery_target_timeline = 'latest'\n>> > > > primary_slot_name = repmgr_slot_1\n>> > > > restore_command = 'rsync -avzhe ssh\n>> > > > [email protected]:/var/lib/pgsql/archive/%f\n>> > /var/lib/pgsql/archive/%f ;\n>> > > > gunzip < /var/lib/pgsql/archive/%f > %p'\n>> > > > archive_cleanup_command =\n>> '/usr/pgsql-9.6/bin/pg_archivecleanup\n>> > > > /var/lib/pgsql/archive %r'\n>> > > >\n>> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n>> > > > <[email protected] <mailto:[email protected]>\n>> > <mailto:[email protected] <mailto:[email protected]>>\n>> > <mailto:[email protected] <mailto:[email protected]>\n>> > <mailto:[email protected] <mailto:[email protected]>>>>:\n>> > > >\n>> > > > Hi Mariel,\n>> > > >\n>> > > > let s keep the list in cc...\n>> > > >\n>> > > > 
settings look ok.\n>> > > >\n>> > > > what's in the recovery.conf file then?\n>> > > >\n>> > > > regards,\n>> > > >\n>> > > > fabio pardi\n>> > > >\n>> > > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n>> > > > > Hey,\n>> > > > > the configuration is the same as in the primary :\n>> > > > > max_wal_size = 2GB\n>> > > > > min_wal_size = 1GB\n>> > > > > wal_buffers = 16MB\n>> > > > > checkpoint_completion_target = 0.9\n>> > > > > checkpoint_timeout = 30min\n>> > > > >\n>> > > > > Regarding your question, I didnt see this message\n>> > (consistent recovery\n>> > > > > state reached at), I guess thats why the secondary\n>> > isnt avaialble\n>> > > > yet..\n>> > > > >\n>> > > > > Maybe I'm wrong, but what I understood from the\n>> > documentation- restart\n>> > > > > point is generated only after the secondary had a\n>> > checkpoint wihch\n>> > > > means\n>> > > > > only after 30 minutes or after max_wal_size is reached\n>> > ? But\n>> > > > still, why\n>> > > > > wont the secondary reach a consisteny recovery state\n>> > (does it\n>> > > > requires a\n>> > > > > restart point to be generated ? )\n>> > > > >\n>> > > > >\n>> > > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio\n>> Pardi\n>> > > > > <[email protected] <mailto:[email protected]>\n>> > <mailto:[email protected] <mailto:[email protected]>>\n>> > <mailto:[email protected] <mailto:[email protected]>\n>> > <mailto:[email protected] <mailto:[email protected]>>>\n>> > > > <mailto:[email protected]\n>> > <mailto:[email protected]> <mailto:[email protected]\n>> > <mailto:[email protected]>> <mailto:[email protected]\n>> > <mailto:[email protected]> <mailto:[email protected]\n>> > <mailto:[email protected]>>>>>:\n>> > > > >\n>> > > > > Hi Mariel,\n>> > > > >\n>> > > > > if i m not wrong, on the secondary you will see\n>> > the messages you\n>> > > > > mentioned when a checkpoint happens.\n>> > > > >\n>> > > > > What are checkpoint_timeout and max_wal_size on\n>> > your standby?\n>> > > > >\n>> > > > > Did you ever see this on your standby log?\n>> > > > >\n>> > > > > \"consistent recovery state reached at ..\"\n>> > > > >\n>> > > > >\n>> > > > > Maybe you can post your whole configuration of\n>> > your standby\n>> > > > for easier\n>> > > > > debug.\n>> > > > >\n>> > > > > regards,\n>> > > > >\n>> > > > > fabio pardi\n>> > > > >\n>> > > > >\n>> > > > >\n>> > > > >\n>> > > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n>> > > > > > Hey,\n>> > > > > > PG 9.6, I have a standalone configured. I tried\n>> > to start up a\n>> > > > > secondary,\n>> > > > > > run standby clone (repmgr). 
The clone process\n>> > took 3 hours\n>> > > > and during\n>> > > > > > that time wals were generated(mostly because of\n>> the\n>> > > > > checkpoint_timeout).\n>> > > > > > As a result of that, when I start the secondary\n>> > ,I see that the\n>> > > > > > secondary keeps getting the wals but I dont see\n>> > any messages\n>> > > > that\n>> > > > > > indicate that the secondary tried to replay the\n>> > wals.\n>> > > > > > messages that i see :\n>> > > > > > receiving incremental file list\n>> > > > > > 000000010000377B000000DE\n>> > > > > >\n>> > > > > > sent 30 bytes received 4.11M bytes 8.22M\n>> bytes/sec\n>> > > > > > total size is 4.15M speedup is 1.01\n>> > > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n>> > log file\n>> > > > > > \"000000010000377B000000DE\" from archive\n>> > > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n>> > database system is\n>> > > > > starting up\n>> > > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n>> > database system is\n>> > > > > > starting up\n>> > > > > >\n>> > > > > > I was hoping to see the following messages\n>> > (taken from a\n>> > > > different\n>> > > > > > machine) :\n>> > > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n>> > restartpoint starting: time\n>> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n>> > restartpoint complete:\n>> > > > wrote 406\n>> > > > > > buffers (0.2%); 1 transaction log file(s) added,\n>> > 0 removed, 0\n>> > > > > recycled;\n>> > > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n>> > sync file\n>> > > > > > s=128, longest=0.000 s, average=0.000 s;\n>> > distance=2005 kB,\n>> > > > > estimate=2699 kB\n>> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n>> > restart point at\n>> > > > > 4/D096C4F8\n>> > > > > >\n>> > > > > > My primary settings(wals settings) :\n>> > > > > > wal_buffers = 16MB\n>> > > > > > checkpoint_completion_target = 0.9\n>> > > > > > checkpoint_timeout = 30min\n>> > > > > >\n>> > > > > > Any idea what can explain why the secondary\n>> > doesnt replay\n>> > > > the wals ?\n>> > > > >\n>> > > > >\n>> > > >\n>> > >\n>> >\n>>\n>\n\nHello Mariel,1) Have you tried to run “CHECKPOINT;” on the standby to perform restartpoint explicitly? It is possible.2) If we talk about streaming replication, do you use replication slots? If so, have you checked pg_replication_slots and pg_stat_replication? They can help to troubleshoot streaming replication delays — see columns sent_location, write_location, flush_location, and replay_location in pg_stat_replication and restart_lsn in pg_replication_slots. If you have delay in replaying, it should be seen there.On Wed, May 29, 2019 at 11:39 Mariel Cherkassky <[email protected]> wrote:Is there any messages that indicates that the secondary replayed a specific wal ? \"restored 00000...\" means that the restore_command succeeded but there isnt any proof that it replayed the wal.My theory regarding the issue : It seems, that my customer stopped the db 20 minutes after the clone have finished. During those 20 minutes the secondary didnt get enough wal records (6 wal files) so it didnt reach the max_wal_size. My checkpoint_timeout is set to 30minutes, therefore there wasnt any checkpoint. As a result of that the secondary didnt reach a restart point. Does that sounds reasonable ?So basically, if I clone a small primary db, the secondary would reach a restart point only if it reached a checkpoint (checkpoint_timeout or max_wal_size). 
However, I have cloned many small dbs and saw the it takes a sec to start the secondary (which means that restartpoint was reached). So what am I missing ?בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <[email protected]>:\n\nOn 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> First of all thanks Fabio.\n> I think that I'm missing something : \n> In the next questionI'm not talking about streaming replication,rather\n> on recovery : \n> \n> 1.When the secondary get the wals from the primary it tries to replay\n> them correct ? \n\n\ncorrect\n\n> \n> 2. By replaying it just go over the wal records and run them in the\n> secondary ?\n> \n\ncorrect\n\n> 3.All those changes are saved in the shared_buffer(secondary) or the\n> changed are immediately done on the data files blocks ?\n> \n\nthe changes are not saved to your datafile yet. That happens at\ncheckpoint time.\n\n> 4.The secondary will need a checkpoint in order to flush those changes\n> to the data files and in order to reach a restart point ?\n> \n\nyes\n\n> So, basically If I had a checkpoint during the clone, the secondary\n> should also have a checkpoint when I recover the secondary right ?\n> \n\ncorrect. Even after being in sync with master, if you restart Postgres\non standby, it will then re-apply the WAL files from the last checkpoint.\n\nIn the logfile of the standby, you will see as many messages reporting\n\"restored log file\" as many WAL files were produced since the last\ncheckpoint\n\nHope it helps to clarify.\n\nregards,\n\nfabio pardi\n> \n> בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> please keep the list posted. When you reply, use 'reply all'. That\n> will maybe help others in the community and you might also get more\n> help from others.\n> \n> answers are in line here below\n> \n> \n> \n> On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > I have pg 9.6, repmgr version 4.3 .\n> > I see in the logs that wal files are restored : \n> > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DB\" from archive\n> > that means that the restore_command worked right ? \n> >\n> \n> right\n> \n> > According to the docs :\n> > \"In standby mode, a restartpoint is also triggered\n> if checkpoint_segments log segments have been replayed since last\n> restartpoint and at least one checkpoint record has been replayed.\n> Restartpoints can't be performed more frequently than checkpoints in\n> the master because restartpoints can only be performed at checkpoint\n> records\"\n> > so maybe I should decrease max_wal_size or even checkpoint_timeout\n> to force a restartpoint ? \n> > During this gap (standby clone) 6-7 wals were generated on the primary\n> >\n> \n> \n> From what you posted earlier, you should in any case have hit a\n> checkpoint every 30 minutes. (That was also the assumption in the\n> previous messages. 
If that's not happening, then i would really\n> investigate.)\n> \n> That is, if during your cloning only a few WAL files were generated,\n> then it is not enough to trigger a checkpoint and you fallback to\n> the 30 minutes.\n> \n> I would not be bothered if i were you, but can always force a\n> checkpoint on the master issuing:\n> \n> CHECKPOINT ;\n> \n> at that stage, on the standby logs you will see the messages:\n> \n> restartpoint starting: ..\n> \n> restartpoint complete: ..\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > If you did not even see this messages on your standby logs:\n> >\n> > restartpoint starting: xlog\n> >\n> > then it means that the checkpoint was even never started.\n> >\n> > In that case, I have no clue.\n> >\n> > Try to describe step by step how to reproduce the problem\n> together with\n> > your setup and the version number of Postgres and repmgr, and\n> i might be\n> > able to help you further.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > standby_mode = 'on'\n> > > primary_conninfo = 'host=X.X.X.X user=repmgr \n> connect_timeout=10 '\n> > > recovery_target_timeline = 'latest'\n> > > primary_slot_name = repmgr_slot_1\n> > > restore_command = 'rsync -avzhe ssh\n> > > [email protected]:/var/lib/pgsql/archive/%f\n> /var/lib/pgsql/archive/%f ;\n> > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > /var/lib/pgsql/archive %r'\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > let s keep the list in cc...\n> > >\n> > > settings look ok.\n> > >\n> > > what's in the recovery.conf file then?\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > the configuration is the same as in the primary : \n> > > > max_wal_size = 2GB\n> > > > min_wal_size = 1GB\n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Regarding your question, I didnt see this message\n> (consistent recovery\n> > > > state reached at), I guess thats why the secondary\n> isnt avaialble\n> > > yet..\n> > > >\n> > > > Maybe I'm wrong, but what I understood from the\n> documentation- restart\n> > > > point is generated only after the secondary had a\n> checkpoint wihch\n> > > means\n> > > > only after 30 minutes or after max_wal_size is reached\n> ? But\n> > > still, why\n> > > > wont the secondary reach a consisteny recovery state\n> (does it\n> > > requires a\n> > > > restart point to be generated ? 
)\n> > > >\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > > <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>> <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > if i m not wrong, on the secondary you will see\n> the messages you\n> > > > mentioned when a checkpoint happens.\n> > > >\n> > > > What are checkpoint_timeout and max_wal_size on\n> your standby?\n> > > >\n> > > > Did you ever see this on your standby log?\n> > > >\n> > > > \"consistent recovery state reached at ..\"\n> > > >\n> > > >\n> > > > Maybe you can post your whole configuration of\n> your standby\n> > > for easier\n> > > > debug.\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > PG 9.6, I have a standalone configured. I tried\n> to start up a\n> > > > secondary,\n> > > > > run standby clone (repmgr). The clone process\n> took 3 hours\n> > > and during\n> > > > > that time wals were generated(mostly because of the\n> > > > checkpoint_timeout).\n> > > > > As a result of that, when I start the secondary\n> ,I see that the\n> > > > > secondary keeps getting the wals but I dont see\n> any messages\n> > > that\n> > > > > indicate that the secondary tried to replay the\n> wals. \n> > > > > messages that i see :\n> > > > > receiving incremental file list\n> > > > > 000000010000377B000000DE\n> > > > >\n> > > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > > total size is 4.15M speedup is 1.01\n> > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n> log file\n> > > > > \"000000010000377B000000DE\" from archive\n> > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> database system is\n> > > > starting up\n> > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> database system is\n> > > > > starting up \n> > > > >\n> > > > > I was hoping to see the following messages\n> (taken from a\n> > > different\n> > > > > machine) : \n> > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> restartpoint starting: time\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> restartpoint complete:\n> > > wrote 406\n> > > > > buffers (0.2%); 1 transaction log file(s) added,\n> 0 removed, 0\n> > > > recycled;\n> > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> sync file\n> > > > > s=128, longest=0.000 s, average=0.000 s;\n> distance=2005 kB,\n> > > > estimate=2699 kB\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> restart point at\n> > > > 4/D096C4F8\n> > > > >\n> > > > > My primary settings(wals settings) : \n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Any idea what can explain why the secondary\n> doesnt replay\n> > > the wals ?\n> > > >\n> > > >\n> > >\n> >\n>",
"msg_date": "Wed, 29 May 2019 13:54:03 +0300",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
},
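For the monitoring Nikolay mentions, a minimal sketch of the queries involved (run on the primary; on 9.6 the columns carry the *_location names shown here, renamed to *_lsn in PostgreSQL 10 and later):

    -- per-standby progress of sent / written / flushed / replayed WAL
    SELECT application_name, state,
           sent_location, write_location, flush_location, replay_location
    FROM pg_stat_replication;

    -- replication slots and the oldest WAL position each slot still needs
    SELECT slot_name, active, restart_lsn
    FROM pg_replication_slots;

A gap that keeps growing between sent_location and replay_location would point at the standby lagging on replay rather than on transport.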
{
"msg_contents": "In my case, I'm talking about creating a standby -> standby clone and then\nrecover the wals . How can I run checkpoint in the secondary if it isnt up ?\n\nבתאריך יום ד׳, 29 במאי 2019 ב-13:54 מאת Nikolay Samokhvalov <\[email protected]>:\n\n> Hello Mariel,\n>\n> 1) Have you tried to run “CHECKPOINT;” on the standby to perform\n> restartpoint explicitly? It is possible.\n>\n> 2) If we talk about streaming replication, do you use replication slots?\n> If so, have you checked pg_replication_slots and pg_stat_replication? They\n> can help to troubleshoot streaming replication delays — see columns\n> sent_location, write_location, flush_location, and replay_location in\n> pg_stat_replication and restart_lsn in pg_replication_slots. If you have\n> delay in replaying, it should be seen there.\n>\n> On Wed, May 29, 2019 at 11:39 Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Is there any messages that indicates that the secondary replayed a\n>> specific wal ? \"restored 00000...\" means that the restore_command succeeded\n>> but there isnt any proof that it replayed the wal.\n>>\n>> My theory regarding the issue :\n>> It seems, that my customer stopped the db 20 minutes after the clone\n>> have finished. During those 20 minutes the secondary didnt get enough wal\n>> records (6 wal files) so it didnt reach the max_wal_size. My\n>> checkpoint_timeout is set to 30minutes, therefore there wasnt any\n>> checkpoint. As a result of that the secondary didnt reach a restart point.\n>> Does that sounds reasonable ?\n>>\n>> So basically, if I clone a small primary db, the secondary would reach a\n>> restart point only if it reached a checkpoint (checkpoint_timeout or\n>> max_wal_size). However, I have cloned many small dbs and saw the it takes a\n>> sec to start the secondary (which means that restartpoint was reached). So\n>> what am I missing ?\n>>\n>> בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <\n>> [email protected]>:\n>>\n>>>\n>>>\n>>> On 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n>>> > First of all thanks Fabio.\n>>> > I think that I'm missing something :\n>>> > In the next questionI'm not talking about streaming replication,rather\n>>> > on recovery :\n>>> >\n>>> > 1.When the secondary get the wals from the primary it tries to replay\n>>> > them correct ?\n>>>\n>>>\n>>> correct\n>>>\n>>> >\n>>> > 2. By replaying it just go over the wal records and run them in the\n>>> > secondary ?\n>>> >\n>>>\n>>> correct\n>>>\n>>> > 3.All those changes are saved in the shared_buffer(secondary) or the\n>>> > changed are immediately done on the data files blocks ?\n>>> >\n>>>\n>>> the changes are not saved to your datafile yet. That happens at\n>>> checkpoint time.\n>>>\n>>> > 4.The secondary will need a checkpoint in order to flush those changes\n>>> > to the data files and in order to reach a restart point ?\n>>> >\n>>>\n>>> yes\n>>>\n>>> > So, basically If I had a checkpoint during the clone, the secondary\n>>> > should also have a checkpoint when I recover the secondary right ?\n>>> >\n>>>\n>>> correct. 
Even after being in sync with master, if you restart Postgres\n>>> on standby, it will then re-apply the WAL files from the last checkpoint.\n>>>\n>>> In the logfile of the standby, you will see as many messages reporting\n>>> \"restored log file\" as many WAL files were produced since the last\n>>> checkpoint\n>>>\n>>> Hope it helps to clarify.\n>>>\n>>> regards,\n>>>\n>>> fabio pardi\n>>> >\n>>> > בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n>>> > <[email protected] <mailto:[email protected]>>:\n>>> >\n>>> > Hi Mariel,\n>>> >\n>>> > please keep the list posted. When you reply, use 'reply all'. That\n>>> > will maybe help others in the community and you might also get more\n>>> > help from others.\n>>> >\n>>> > answers are in line here below\n>>> >\n>>> >\n>>> >\n>>> > On 28/05/2019 10:54, Mariel Cherkassky wrote:\n>>> > > I have pg 9.6, repmgr version 4.3 .\n>>> > > I see in the logs that wal files are restored :\n>>> > > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n>>> > \"000000010000377B000000DB\" from archive\n>>> > > that means that the restore_command worked right ?\n>>> > >\n>>> >\n>>> > right\n>>> >\n>>> > > According to the docs :\n>>> > > \"In standby mode, a restartpoint is also triggered\n>>> > if checkpoint_segments log segments have been replayed since last\n>>> > restartpoint and at least one checkpoint record has been replayed.\n>>> > Restartpoints can't be performed more frequently than checkpoints\n>>> in\n>>> > the master because restartpoints can only be performed at\n>>> checkpoint\n>>> > records\"\n>>> > > so maybe I should decrease max_wal_size or even\n>>> checkpoint_timeout\n>>> > to force a restartpoint ?\n>>> > > During this gap (standby clone) 6-7 wals were generated on the\n>>> primary\n>>> > >\n>>> >\n>>> >\n>>> > From what you posted earlier, you should in any case have hit a\n>>> > checkpoint every 30 minutes. (That was also the assumption in the\n>>> > previous messages. 
If that's not happening, then i would really\n>>> > investigate.)\n>>> >\n>>> > That is, if during your cloning only a few WAL files were\n>>> generated,\n>>> > then it is not enough to trigger a checkpoint and you fallback to\n>>> > the 30 minutes.\n>>> >\n>>> > I would not be bothered if i were you, but can always force a\n>>> > checkpoint on the master issuing:\n>>> >\n>>> > CHECKPOINT ;\n>>> >\n>>> > at that stage, on the standby logs you will see the messages:\n>>> >\n>>> > restartpoint starting: ..\n>>> >\n>>> > restartpoint complete: ..\n>>> >\n>>> >\n>>> >\n>>> > regards,\n>>> >\n>>> > fabio pardi\n>>> >\n>>> > >\n>>> > > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n>>> > <[email protected] <mailto:[email protected]>\n>>> > <mailto:[email protected] <mailto:[email protected]>>>:\n>>> > >\n>>> > > If you did not even see this messages on your standby logs:\n>>> > >\n>>> > > restartpoint starting: xlog\n>>> > >\n>>> > > then it means that the checkpoint was even never started.\n>>> > >\n>>> > > In that case, I have no clue.\n>>> > >\n>>> > > Try to describe step by step how to reproduce the problem\n>>> > together with\n>>> > > your setup and the version number of Postgres and repmgr, and\n>>> > i might be\n>>> > > able to help you further.\n>>> > >\n>>> > > regards,\n>>> > >\n>>> > > fabio pardi\n>>> > >\n>>> > > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n>>> > > > standby_mode = 'on'\n>>> > > > primary_conninfo = 'host=X.X.X.X user=repmgr\n>>> > connect_timeout=10 '\n>>> > > > recovery_target_timeline = 'latest'\n>>> > > > primary_slot_name = repmgr_slot_1\n>>> > > > restore_command = 'rsync -avzhe ssh\n>>> > > > [email protected]:/var/lib/pgsql/archive/%f\n>>> > /var/lib/pgsql/archive/%f ;\n>>> > > > gunzip < /var/lib/pgsql/archive/%f > %p'\n>>> > > > archive_cleanup_command =\n>>> '/usr/pgsql-9.6/bin/pg_archivecleanup\n>>> > > > /var/lib/pgsql/archive %r'\n>>> > > >\n>>> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n>>> > > > <[email protected] <mailto:[email protected]>\n>>> > <mailto:[email protected] <mailto:[email protected]>>\n>>> > <mailto:[email protected] <mailto:[email protected]>\n>>> > <mailto:[email protected] <mailto:[email protected]>>>>:\n>>> > > >\n>>> > > > Hi Mariel,\n>>> > > >\n>>> > > > let s keep the list in cc...\n>>> > > >\n>>> > > > settings look ok.\n>>> > > >\n>>> > > > what's in the recovery.conf file then?\n>>> > > >\n>>> > > > regards,\n>>> > > >\n>>> > > > fabio pardi\n>>> > > >\n>>> > > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n>>> > > > > Hey,\n>>> > > > > the configuration is the same as in the primary :\n>>> > > > > max_wal_size = 2GB\n>>> > > > > min_wal_size = 1GB\n>>> > > > > wal_buffers = 16MB\n>>> > > > > checkpoint_completion_target = 0.9\n>>> > > > > checkpoint_timeout = 30min\n>>> > > > >\n>>> > > > > Regarding your question, I didnt see this message\n>>> > (consistent recovery\n>>> > > > > state reached at), I guess thats why the secondary\n>>> > isnt avaialble\n>>> > > > yet..\n>>> > > > >\n>>> > > > > Maybe I'm wrong, but what I understood from the\n>>> > documentation- restart\n>>> > > > > point is generated only after the secondary had a\n>>> > checkpoint wihch\n>>> > > > means\n>>> > > > > only after 30 minutes or after max_wal_size is\n>>> reached\n>>> > ? But\n>>> > > > still, why\n>>> > > > > wont the secondary reach a consisteny recovery state\n>>> > (does it\n>>> > > > requires a\n>>> > > > > restart point to be generated ? 
)\n>>> > > > >\n>>> > > > >\n>>> > > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio\n>>> Pardi\n>>> > > > > <[email protected] <mailto:[email protected]>\n>>> > <mailto:[email protected] <mailto:[email protected]>>\n>>> > <mailto:[email protected] <mailto:[email protected]>\n>>> > <mailto:[email protected] <mailto:[email protected]>>>\n>>> > > > <mailto:[email protected]\n>>> > <mailto:[email protected]> <mailto:[email protected]\n>>> > <mailto:[email protected]>> <mailto:[email protected]\n>>> > <mailto:[email protected]> <mailto:[email protected]\n>>> > <mailto:[email protected]>>>>>:\n>>> > > > >\n>>> > > > > Hi Mariel,\n>>> > > > >\n>>> > > > > if i m not wrong, on the secondary you will see\n>>> > the messages you\n>>> > > > > mentioned when a checkpoint happens.\n>>> > > > >\n>>> > > > > What are checkpoint_timeout and max_wal_size on\n>>> > your standby?\n>>> > > > >\n>>> > > > > Did you ever see this on your standby log?\n>>> > > > >\n>>> > > > > \"consistent recovery state reached at ..\"\n>>> > > > >\n>>> > > > >\n>>> > > > > Maybe you can post your whole configuration of\n>>> > your standby\n>>> > > > for easier\n>>> > > > > debug.\n>>> > > > >\n>>> > > > > regards,\n>>> > > > >\n>>> > > > > fabio pardi\n>>> > > > >\n>>> > > > >\n>>> > > > >\n>>> > > > >\n>>> > > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n>>> > > > > > Hey,\n>>> > > > > > PG 9.6, I have a standalone configured. I tried\n>>> > to start up a\n>>> > > > > secondary,\n>>> > > > > > run standby clone (repmgr). The clone process\n>>> > took 3 hours\n>>> > > > and during\n>>> > > > > > that time wals were generated(mostly because\n>>> of the\n>>> > > > > checkpoint_timeout).\n>>> > > > > > As a result of that, when I start the secondary\n>>> > ,I see that the\n>>> > > > > > secondary keeps getting the wals but I dont see\n>>> > any messages\n>>> > > > that\n>>> > > > > > indicate that the secondary tried to replay the\n>>> > wals.\n>>> > > > > > messages that i see :\n>>> > > > > > receiving incremental file list\n>>> > > > > > 000000010000377B000000DE\n>>> > > > > >\n>>> > > > > > sent 30 bytes received 4.11M bytes 8.22M\n>>> bytes/sec\n>>> > > > > > total size is 4.15M speedup is 1.01\n>>> > > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n>>> > log file\n>>> > > > > > \"000000010000377B000000DE\" from archive\n>>> > > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n>>> > database system is\n>>> > > > > starting up\n>>> > > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n>>> > database system is\n>>> > > > > > starting up\n>>> > > > > >\n>>> > > > > > I was hoping to see the following messages\n>>> > (taken from a\n>>> > > > different\n>>> > > > > > machine) :\n>>> > > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n>>> > restartpoint starting: time\n>>> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n>>> > restartpoint complete:\n>>> > > > wrote 406\n>>> > > > > > buffers (0.2%); 1 transaction log file(s)\n>>> added,\n>>> > 0 removed, 0\n>>> > > > > recycled;\n>>> > > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n>>> > sync file\n>>> > > > > > s=128, longest=0.000 s, average=0.000 s;\n>>> > distance=2005 kB,\n>>> > > > > estimate=2699 kB\n>>> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n>>> > restart point at\n>>> > > > > 4/D096C4F8\n>>> > > > > >\n>>> > > > > > My primary settings(wals settings) :\n>>> > > > > > wal_buffers = 16MB\n>>> > > > > > checkpoint_completion_target = 0.9\n>>> > > > > > checkpoint_timeout = 30min\n>>> > > > > >\n>>> > > > > > Any idea what can explain why the 
secondary\n>>> > doesnt replay\n>>> > > > the wals ?\n>>> > > > >\n>>> > > > >\n>>> > > >\n>>> > >\n>>> >\n>>>\n>>\n\nIn my case, I'm talking about creating a standby -> standby clone and then recover the wals . How can I run checkpoint in the secondary if it isnt up ?בתאריך יום ד׳, 29 במאי 2019 ב-13:54 מאת Nikolay Samokhvalov <[email protected]>:Hello Mariel,1) Have you tried to run “CHECKPOINT;” on the standby to perform restartpoint explicitly? It is possible.2) If we talk about streaming replication, do you use replication slots? If so, have you checked pg_replication_slots and pg_stat_replication? They can help to troubleshoot streaming replication delays — see columns sent_location, write_location, flush_location, and replay_location in pg_stat_replication and restart_lsn in pg_replication_slots. If you have delay in replaying, it should be seen there.On Wed, May 29, 2019 at 11:39 Mariel Cherkassky <[email protected]> wrote:Is there any messages that indicates that the secondary replayed a specific wal ? \"restored 00000...\" means that the restore_command succeeded but there isnt any proof that it replayed the wal.My theory regarding the issue : It seems, that my customer stopped the db 20 minutes after the clone have finished. During those 20 minutes the secondary didnt get enough wal records (6 wal files) so it didnt reach the max_wal_size. My checkpoint_timeout is set to 30minutes, therefore there wasnt any checkpoint. As a result of that the secondary didnt reach a restart point. Does that sounds reasonable ?So basically, if I clone a small primary db, the secondary would reach a restart point only if it reached a checkpoint (checkpoint_timeout or max_wal_size). However, I have cloned many small dbs and saw the it takes a sec to start the secondary (which means that restartpoint was reached). So what am I missing ?בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi <[email protected]>:\n\nOn 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> First of all thanks Fabio.\n> I think that I'm missing something : \n> In the next questionI'm not talking about streaming replication,rather\n> on recovery : \n> \n> 1.When the secondary get the wals from the primary it tries to replay\n> them correct ? \n\n\ncorrect\n\n> \n> 2. By replaying it just go over the wal records and run them in the\n> secondary ?\n> \n\ncorrect\n\n> 3.All those changes are saved in the shared_buffer(secondary) or the\n> changed are immediately done on the data files blocks ?\n> \n\nthe changes are not saved to your datafile yet. That happens at\ncheckpoint time.\n\n> 4.The secondary will need a checkpoint in order to flush those changes\n> to the data files and in order to reach a restart point ?\n> \n\nyes\n\n> So, basically If I had a checkpoint during the clone, the secondary\n> should also have a checkpoint when I recover the secondary right ?\n> \n\ncorrect. Even after being in sync with master, if you restart Postgres\non standby, it will then re-apply the WAL files from the last checkpoint.\n\nIn the logfile of the standby, you will see as many messages reporting\n\"restored log file\" as many WAL files were produced since the last\ncheckpoint\n\nHope it helps to clarify.\n\nregards,\n\nfabio pardi\n> \n> בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hi Mariel,\n> \n> please keep the list posted. When you reply, use 'reply all'. 
That\n> will maybe help others in the community and you might also get more\n> help from others.\n> \n> answers are in line here below\n> \n> \n> \n> On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > I have pg 9.6, repmgr version 4.3 .\n> > I see in the logs that wal files are restored : \n> > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> \"000000010000377B000000DB\" from archive\n> > that means that the restore_command worked right ? \n> >\n> \n> right\n> \n> > According to the docs :\n> > \"In standby mode, a restartpoint is also triggered\n> if checkpoint_segments log segments have been replayed since last\n> restartpoint and at least one checkpoint record has been replayed.\n> Restartpoints can't be performed more frequently than checkpoints in\n> the master because restartpoints can only be performed at checkpoint\n> records\"\n> > so maybe I should decrease max_wal_size or even checkpoint_timeout\n> to force a restartpoint ? \n> > During this gap (standby clone) 6-7 wals were generated on the primary\n> >\n> \n> \n> From what you posted earlier, you should in any case have hit a\n> checkpoint every 30 minutes. (That was also the assumption in the\n> previous messages. If that's not happening, then i would really\n> investigate.)\n> \n> That is, if during your cloning only a few WAL files were generated,\n> then it is not enough to trigger a checkpoint and you fallback to\n> the 30 minutes.\n> \n> I would not be bothered if i were you, but can always force a\n> checkpoint on the master issuing:\n> \n> CHECKPOINT ;\n> \n> at that stage, on the standby logs you will see the messages:\n> \n> restartpoint starting: ..\n> \n> restartpoint complete: ..\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> >\n> > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > If you did not even see this messages on your standby logs:\n> >\n> > restartpoint starting: xlog\n> >\n> > then it means that the checkpoint was even never started.\n> >\n> > In that case, I have no clue.\n> >\n> > Try to describe step by step how to reproduce the problem\n> together with\n> > your setup and the version number of Postgres and repmgr, and\n> i might be\n> > able to help you further.\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > standby_mode = 'on'\n> > > primary_conninfo = 'host=X.X.X.X user=repmgr \n> connect_timeout=10 '\n> > > recovery_target_timeline = 'latest'\n> > > primary_slot_name = repmgr_slot_1\n> > > restore_command = 'rsync -avzhe ssh\n> > > [email protected]:/var/lib/pgsql/archive/%f\n> /var/lib/pgsql/archive/%f ;\n> > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > /var/lib/pgsql/archive %r'\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > Hi Mariel,\n> > >\n> > > let s keep the list in cc...\n> > >\n> > > settings look ok.\n> > >\n> > > what's in the recovery.conf file then?\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > Hey,\n> > > > the configuration is the same as in the primary : \n> > > > max_wal_size = 2GB\n> > > > 
min_wal_size = 1GB\n> > > > wal_buffers = 16MB\n> > > > checkpoint_completion_target = 0.9\n> > > > checkpoint_timeout = 30min\n> > > >\n> > > > Regarding your question, I didnt see this message\n> (consistent recovery\n> > > > state reached at), I guess thats why the secondary\n> isnt avaialble\n> > > yet..\n> > > >\n> > > > Maybe I'm wrong, but what I understood from the\n> documentation- restart\n> > > > point is generated only after the secondary had a\n> checkpoint wihch\n> > > means\n> > > > only after 30 minutes or after max_wal_size is reached\n> ? But\n> > > still, why\n> > > > wont the secondary reach a consisteny recovery state\n> (does it\n> > > requires a\n> > > > restart point to be generated ? )\n> > > >\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > > <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>> <mailto:[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > if i m not wrong, on the secondary you will see\n> the messages you\n> > > > mentioned when a checkpoint happens.\n> > > >\n> > > > What are checkpoint_timeout and max_wal_size on\n> your standby?\n> > > >\n> > > > Did you ever see this on your standby log?\n> > > >\n> > > > \"consistent recovery state reached at ..\"\n> > > >\n> > > >\n> > > > Maybe you can post your whole configuration of\n> your standby\n> > > for easier\n> > > > debug.\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > PG 9.6, I have a standalone configured. I tried\n> to start up a\n> > > > secondary,\n> > > > > run standby clone (repmgr). The clone process\n> took 3 hours\n> > > and during\n> > > > > that time wals were generated(mostly because of the\n> > > > checkpoint_timeout).\n> > > > > As a result of that, when I start the secondary\n> ,I see that the\n> > > > > secondary keeps getting the wals but I dont see\n> any messages\n> > > that\n> > > > > indicate that the secondary tried to replay the\n> wals. 
\n> > > > > messages that i see :\n> > > > > receiving incremental file list\n> > > > > 000000010000377B000000DE\n> > > > >\n> > > > > sent 30 bytes received 4.11M bytes 8.22M bytes/sec\n> > > > > total size is 4.15M speedup is 1.01\n> > > > > 2019-05-22 12:48:10 EEST 60942 LOG: restored\n> log file\n> > > > > \"000000010000377B000000DE\" from archive\n> > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> database system is\n> > > > starting up\n> > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> database system is\n> > > > > starting up \n> > > > >\n> > > > > I was hoping to see the following messages\n> (taken from a\n> > > different\n> > > > > machine) : \n> > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> restartpoint starting: time\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> restartpoint complete:\n> > > wrote 406\n> > > > > buffers (0.2%); 1 transaction log file(s) added,\n> 0 removed, 0\n> > > > recycled;\n> > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> sync file\n> > > > > s=128, longest=0.000 s, average=0.000 s;\n> distance=2005 kB,\n> > > > estimate=2699 kB\n> > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> restart point at\n> > > > 4/D096C4F8\n> > > > >\n> > > > > My primary settings(wals settings) : \n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Any idea what can explain why the secondary\n> doesnt replay\n> > > the wals ?\n> > > >\n> > > >\n> > >\n> >\n>",
"msg_date": "Wed, 29 May 2019 14:10:59 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: improve wals replay on secondary"
},
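While the standby is still in archive recovery it refuses connections ("the database system is starting up"), so at that stage the log is the only visible progress indicator. Once it has reached a consistent state and hot_standby is on, replay progress can be checked directly on the standby; a sketch using the 9.6 function names (these became pg_last_wal_* in PostgreSQL 10):

    -- on the standby, once it accepts read-only connections
    SELECT pg_is_in_recovery(),
           pg_last_xlog_receive_location(),
           pg_last_xlog_replay_location(),
           pg_last_xact_replay_timestamp();

    -- on the primary, to force a checkpoint record that the standby can
    -- later use as a restartpoint (as suggested earlier in the thread)
    CHECKPOINT;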
{
"msg_contents": "\n\nOn 5/29/19 10:39 AM, Mariel Cherkassky wrote:\n> Is there any messages that indicates that the secondary replayed a\n> specific wal ? \"restored 00000...\" means that the restore_command\n> succeeded but there isnt any proof that it replayed the wal.\n\nI believe that the message \"restored log file..' is an acknowledgement\nthat the file has been restored and the WAL file applied. At least\nthat's what i know and understand from the source code.\n\n\n\n> \n> My theory regarding the issue : \n> It seems, that my customer stopped the db 20 minutes after the clone\n> have finished. During those 20 minutes the secondary didnt get enough\n> wal records (6 wal files) so it didnt reach the max_wal_size. My\n> checkpoint_timeout is set to 30minutes, therefore there wasnt any\n> checkpoint. As a result of that the secondary didnt reach a restart\n> point. Does that sounds reasonable ?\n> \n\nIt could be, but i do not see the point here.\n\n> So basically, if I clone a small primary db, the secondary would reach a\n> restart point only if it reached a checkpoint (checkpoint_timeout or\n> max_wal_size). However, I have cloned many small dbs and saw the it\n> takes a sec to start the secondary (which means that restartpoint was\n> reached). So what am I missing ?\n> \n\nThe restart point is reached in relation to checkpoints.\n\nBut when all available WAL files have been applied, you should be able\nto connect to your standby, regardless to when the last checkpoint\noccurred, if your standby allows that (hot_standby = on).\n\n\n\n\nregards,\n\nfabio pardi\n\n> בתאריך יום ד׳, 29 במאי 2019 ב-11:20 מאת Fabio Pardi\n> <[email protected] <mailto:[email protected]>>:\n> \n> \n> \n> On 5/29/19 9:20 AM, Mariel Cherkassky wrote:\n> > First of all thanks Fabio.\n> > I think that I'm missing something : \n> > In the next questionI'm not talking about streaming replication,rather\n> > on recovery : \n> >\n> > 1.When the secondary get the wals from the primary it tries to replay\n> > them correct ? \n> \n> \n> correct\n> \n> >\n> > 2. By replaying it just go over the wal records and run them in the\n> > secondary ?\n> >\n> \n> correct\n> \n> > 3.All those changes are saved in the shared_buffer(secondary) or the\n> > changed are immediately done on the data files blocks ?\n> >\n> \n> the changes are not saved to your datafile yet. That happens at\n> checkpoint time.\n> \n> > 4.The secondary will need a checkpoint in order to flush those changes\n> > to the data files and in order to reach a restart point ?\n> >\n> \n> yes\n> \n> > So, basically If I had a checkpoint during the clone, the secondary\n> > should also have a checkpoint when I recover the secondary right ?\n> >\n> \n> correct. Even after being in sync with master, if you restart Postgres\n> on standby, it will then re-apply the WAL files from the last\n> checkpoint.\n> \n> In the logfile of the standby, you will see as many messages reporting\n> \"restored log file\" as many WAL files were produced since the last\n> checkpoint\n> \n> Hope it helps to clarify.\n> \n> regards,\n> \n> fabio pardi\n> >\n> > בתאריך יום ג׳, 28 במאי 2019 ב-13:54 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>:\n> >\n> > Hi Mariel,\n> >\n> > please keep the list posted. When you reply, use 'reply all'. 
That\n> > will maybe help others in the community and you might also get\n> more\n> > help from others.\n> >\n> > answers are in line here below\n> >\n> >\n> >\n> > On 28/05/2019 10:54, Mariel Cherkassky wrote:\n> > > I have pg 9.6, repmgr version 4.3 .\n> > > I see in the logs that wal files are restored : \n> > > 2019-05-22 12:35:12 EEST 60942 LOG: restored log file\n> > \"000000010000377B000000DB\" from archive\n> > > that means that the restore_command worked right ? \n> > >\n> >\n> > right\n> >\n> > > According to the docs :\n> > > \"In standby mode, a restartpoint is also triggered\n> > if checkpoint_segments log segments have been replayed since last\n> > restartpoint and at least one checkpoint record has been replayed.\n> > Restartpoints can't be performed more frequently than\n> checkpoints in\n> > the master because restartpoints can only be performed at\n> checkpoint\n> > records\"\n> > > so maybe I should decrease max_wal_size or even\n> checkpoint_timeout\n> > to force a restartpoint ? \n> > > During this gap (standby clone) 6-7 wals were generated on\n> the primary\n> > >\n> >\n> >\n> > From what you posted earlier, you should in any case have hit a\n> > checkpoint every 30 minutes. (That was also the assumption in the\n> > previous messages. If that's not happening, then i would really\n> > investigate.)\n> >\n> > That is, if during your cloning only a few WAL files were\n> generated,\n> > then it is not enough to trigger a checkpoint and you fallback to\n> > the 30 minutes.\n> >\n> > I would not be bothered if i were you, but can always force a\n> > checkpoint on the master issuing:\n> >\n> > CHECKPOINT ;\n> >\n> > at that stage, on the standby logs you will see the messages:\n> >\n> > restartpoint starting: ..\n> >\n> > restartpoint complete: ..\n> >\n> >\n> >\n> > regards,\n> >\n> > fabio pardi\n> >\n> > >\n> > > בתאריך יום ב׳, 27 במאי 2019 ב-14:04 מאת Fabio Pardi\n> > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>:\n> > >\n> > > If you did not even see this messages on your standby logs:\n> > >\n> > > restartpoint starting: xlog\n> > >\n> > > then it means that the checkpoint was even never started.\n> > >\n> > > In that case, I have no clue.\n> > >\n> > > Try to describe step by step how to reproduce the problem\n> > together with\n> > > your setup and the version number of Postgres and\n> repmgr, and\n> > i might be\n> > > able to help you further.\n> > >\n> > > regards,\n> > >\n> > > fabio pardi\n> > >\n> > > On 5/27/19 12:17 PM, Mariel Cherkassky wrote:\n> > > > standby_mode = 'on'\n> > > > primary_conninfo = 'host=X.X.X.X user=repmgr \n> > connect_timeout=10 '\n> > > > recovery_target_timeline = 'latest'\n> > > > primary_slot_name = repmgr_slot_1\n> > > > restore_command = 'rsync -avzhe ssh\n> > > > [email protected]:/var/lib/pgsql/archive/%f\n> > /var/lib/pgsql/archive/%f ;\n> > > > gunzip < /var/lib/pgsql/archive/%f > %p'\n> > > > archive_cleanup_command =\n> '/usr/pgsql-9.6/bin/pg_archivecleanup\n> > > > /var/lib/pgsql/archive %r'\n> > > >\n> > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:29 מאת Fabio Pardi\n> > > > <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> 
<mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>>:\n> > > >\n> > > > Hi Mariel,\n> > > >\n> > > > let s keep the list in cc...\n> > > >\n> > > > settings look ok.\n> > > >\n> > > > what's in the recovery.conf file then?\n> > > >\n> > > > regards,\n> > > >\n> > > > fabio pardi\n> > > >\n> > > > On 5/27/19 11:23 AM, Mariel Cherkassky wrote:\n> > > > > Hey,\n> > > > > the configuration is the same as in the primary : \n> > > > > max_wal_size = 2GB\n> > > > > min_wal_size = 1GB\n> > > > > wal_buffers = 16MB\n> > > > > checkpoint_completion_target = 0.9\n> > > > > checkpoint_timeout = 30min\n> > > > >\n> > > > > Regarding your question, I didnt see this message\n> > (consistent recovery\n> > > > > state reached at), I guess thats why the secondary\n> > isnt avaialble\n> > > > yet..\n> > > > >\n> > > > > Maybe I'm wrong, but what I understood from the\n> > documentation- restart\n> > > > > point is generated only after the secondary had a\n> > checkpoint wihch\n> > > > means\n> > > > > only after 30 minutes or after max_wal_size is\n> reached\n> > ? But\n> > > > still, why\n> > > > > wont the secondary reach a consisteny recovery state\n> > (does it\n> > > > requires a\n> > > > > restart point to be generated ? )\n> > > > >\n> > > > >\n> > > > > בתאריך יום ב׳, 27 במאי 2019 ב-12:12 מאת Fabio\n> Pardi\n> > > > > <[email protected]\n> <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> > <mailto:[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>>\n> > > > <mailto:[email protected]\n> <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>>\n> <mailto:[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>\n> <mailto:[email protected] <mailto:[email protected]>\n> > <mailto:[email protected]\n> <mailto:[email protected]>>>>>>:\n> > > > >\n> > > > > Hi Mariel,\n> > > > >\n> > > > > if i m not wrong, on the secondary you will see\n> > the messages you\n> > > > > mentioned when a checkpoint happens.\n> > > > >\n> > > > > What are checkpoint_timeout and max_wal_size on\n> > your standby?\n> > > > >\n> > > > > Did you ever see this on your standby log?\n> > > > >\n> > > > > \"consistent recovery state reached at ..\"\n> > > > >\n> > > > >\n> > > > > Maybe you can post your whole configuration of\n> > your standby\n> > > > for easier\n> > > > > debug.\n> > > > >\n> > > > > regards,\n> > > > >\n> > > > > fabio pardi\n> > > > >\n> > > > >\n> > > > >\n> > > > >\n> > > > > On 5/27/19 10:49 AM, Mariel Cherkassky wrote:\n> > > > > > Hey,\n> > > > > > PG 9.6, I have a standalone configured. I\n> tried\n> > to start up a\n> > > > > secondary,\n> > > > > > run standby clone (repmgr). 
The clone process\n> > took 3 hours\n> > > > and during\n> > > > > > that time wals were generated(mostly\n> because of the\n> > > > > checkpoint_timeout).\n> > > > > > As a result of that, when I start the\n> secondary\n> > ,I see that the\n> > > > > > secondary keeps getting the wals but I\n> dont see\n> > any messages\n> > > > that\n> > > > > > indicate that the secondary tried to\n> replay the\n> > wals. \n> > > > > > messages that i see :\n> > > > > > receiving incremental file list\n> > > > > > 000000010000377B000000DE\n> > > > > >\n> > > > > > sent 30 bytes received 4.11M bytes 8.22M\n> bytes/sec\n> > > > > > total size is 4.15M speedup is 1.01\n> > > > > > 2019-05-22 12:48:10 EEST 60942 LOG:\n> restored\n> > log file\n> > > > > > \"000000010000377B000000DE\" from archive\n> > > > > > 2019-05-22 12:48:11 EEST db63311 FATAL: the\n> > database system is\n> > > > > starting up\n> > > > > > 2019-05-22 12:48:12 EEST db63313 FATAL: the\n> > database system is\n> > > > > > starting up \n> > > > > >\n> > > > > > I was hoping to see the following messages\n> > (taken from a\n> > > > different\n> > > > > > machine) : \n> > > > > > 2019-05-27 01:15:37 EDT 7428 LOG:\n> > restartpoint starting: time\n> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG:\n> > restartpoint complete:\n> > > > wrote 406\n> > > > > > buffers (0.2%); 1 transaction log file(s)\n> added,\n> > 0 removed, 0\n> > > > > recycled;\n> > > > > > write=41.390 s, sync=0.001 s, total=41.582 s;\n> > sync file\n> > > > > > s=128, longest=0.000 s, average=0.000 s;\n> > distance=2005 kB,\n> > > > > estimate=2699 kB\n> > > > > > 2019-05-27 01:16:18 EDT 7428 LOG: recovery\n> > restart point at\n> > > > > 4/D096C4F8\n> > > > > >\n> > > > > > My primary settings(wals settings) : \n> > > > > > wal_buffers = 16MB\n> > > > > > checkpoint_completion_target = 0.9\n> > > > > > checkpoint_timeout = 30min\n> > > > > >\n> > > > > > Any idea what can explain why the secondary\n> > doesnt replay\n> > > > the wals ?\n> > > > >\n> > > > >\n> > > >\n> > >\n> >\n> \n\n\n",
"msg_date": "Fri, 31 May 2019 13:54:26 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improve wals replay on secondary"
}
] |
[
{
"msg_contents": "Hi,\n\nBrowsing the PostgreSQL 12 release notes I noticed that JIT is now \nenabled by default. Having not followed PostgreSQL development closely - \ndoes this mean that compilation results are now getting cached and \ncompilation is no longer executed separately for each worker thread in a \nparallel query ?\n\nCheers,\nTobias\n\n\n\n",
"msg_date": "Wed, 29 May 2019 10:02:50 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "JIT in PostgreSQL 12 ?"
},
{
"msg_contents": "På onsdag 29. mai 2019 kl. 10:02:50, skrev Tobias Gierke <\[email protected] <mailto:[email protected]>>: Hi,\n\n Browsing the PostgreSQL 12 release notes I noticed that JIT is now\n enabled by default. Having not followed PostgreSQL development closely -\n does this mean that compilation results are now getting cached and\n compilation is no longer executed separately for each worker thread in a\n parallel query ? I don't know, but just want to chime in with my experience \nwith PG-12 and JIT: Execution-time is still way worse then JIT=off for your \nqueries so we'll turn JIT=off until we can mesure performance-gain. -- Andreas \nJoseph Krogh CTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected]\n <mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Wed, 29 May 2019 10:48:19 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Sv: JIT in PostgreSQL 12 ?"
},
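For anyone wanting to verify the effect on their own workload: jit can be toggled per session or in postgresql.conf, and when JIT was actually used EXPLAIN (ANALYZE) prints a separate "JIT:" section with the time spent on code generation, inlining, optimization and emission. A minimal sketch (the table name is only a placeholder):

    SET jit = off;                -- per session
    -- or, cluster-wide in postgresql.conf:  jit = off

    EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;
    -- with jit = on, look for a trailing "JIT:" block in the output
    -- (it only appears when the plan cost exceeds jit_above_cost)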
{
"msg_contents": "Hi,\n> På onsdag 29. mai 2019 kl. 10:02:50, skrev Tobias Gierke \n> <[email protected] <mailto:[email protected]>>:\n>\n> Hi,\n>\n> Browsing the PostgreSQL 12 release notes I noticed that JIT is now\n> enabled by default. Having not followed PostgreSQL development\n> closely -\n> does this mean that compilation results are now getting cached and\n> compilation is no longer executed separately for each worker\n> thread in a\n> parallel query ?\n>\n> I don't know, but just want to chime in with my experience with PG-12 \n> and JIT: Execution-time is still way worse then JIT=off for your \n> queries so we'll turn JIT=off until we can mesure performance-gain.\n\nHm, that's a bummer. So I guess we'll also have to make sure JIT is \nturned off when upgrading.\n\nThanks,\nTobias\n\n\n\n\n\n\n\n\n Hi,\n \n\nPå onsdag 29. mai 2019 kl. 10:02:50, skrev Tobias Gierke <[email protected]>:\n\nHi,\n\n Browsing the PostgreSQL 12 release notes I noticed that JIT is\n now\n enabled by default. Having not followed PostgreSQL development\n closely -\n does this mean that compilation results are now getting cached\n and\n compilation is no longer executed separately for each worker\n thread in a\n parallel query ?\n\n \nI don't know, but just want to chime in with my experience\n with PG-12 and JIT: Execution-time is still way worse then\n JIT=off for your queries so we'll turn JIT=off until we can\n mesure performance-gain.\n\nHm, that's a bummer. So I guess we'll also have to make sure JIT\n is turned off when upgrading.\n\n Thanks,\n Tobias",
"msg_date": "Wed, 29 May 2019 11:13:10 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sv: JIT in PostgreSQL 12 ?"
},
{
"msg_contents": "På onsdag 29. mai 2019 kl. 11:13:10, skrev Tobias Gierke <\[email protected] <mailto:[email protected]>>: Hi, På \nonsdag 29. mai 2019 kl. 10:02:50, skrev Tobias Gierke <\[email protected] <mailto:[email protected]>>: Hi,\n\n Browsing the PostgreSQL 12 release notes I noticed that JIT is now\n enabled by default. Having not followed PostgreSQL development closely -\n does this mean that compilation results are now getting cached and\n compilation is no longer executed separately for each worker thread in a\n parallel query ? I don't know, but just want to chime in with my experience \nwith PG-12 and JIT: Execution-time is still way worse then JIT=off for your \nqueries so we'll turn JIT=off until we can mesure performance-gain. \nHm, that's a bummer. So I guess we'll also have to make sure JIT is turned off \nwhen upgrading.\nThat's what we'll do, unfortunately:-( -- Andreas Joseph Krogh CTO / Partner - \nVisena AS Mobile: +47 909 56 963 [email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> <https://www.visena.com>",
"msg_date": "Wed, 29 May 2019 11:16:06 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sv: JIT in PostgreSQL 12 ?"
},
{
"msg_contents": "On 5/29/19 10:02 AM, Tobias Gierke wrote:\n> Browsing the PostgreSQL 12 release notes I noticed that JIT is now \n> enabled by default. Having not followed PostgreSQL development closely - \n> does this mean that compilation results are now getting cached and \n> compilation is no longer executed separately for each worker thread in a \n> parallel query ?\n\nNo, the compilation still happens once for each worker. PostgreSQL 12 \nincludes some smaller performance improvements for the JIT but nothing \nbig like that.\n\nAndreas\n\n\n",
"msg_date": "Wed, 29 May 2019 11:37:04 +0200",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JIT in PostgreSQL 12 ?"
},
{
"msg_contents": "On Wed, May 29, 2019 at 10:48:19AM +0200, Andreas Joseph Krogh wrote:\n> P� onsdag 29. mai 2019 kl. 10:02:50, skrev Tobias Gierke <[email protected]>\n> Hi,\n> \n> Browsing the PostgreSQL 12 release notes I noticed that JIT is now\n> enabled by default. Having not followed PostgreSQL development closely -\n> does this mean that compilation results are now getting cached and\n> compilation is no longer executed separately for each worker thread in a\n> parallel query ?\n\nThanks for starting the conversation.\n\n> I don't know, but just want to chime in with my experience \n> with PG-12 and JIT: Execution-time is still way worse then JIT=off for your \n> queries so we'll turn JIT=off until we can mesure performance-gain.\n\nThat's also been my consistent experience and conclusion.\n\nI gather the presumption has been that JIT will be enabled by default..\n..but perhaps this is a \"Decision to Recheck Mid-Beta\" (?)\n\nJustin\n\n\n",
"msg_date": "Fri, 31 May 2019 18:38:23 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sv: JIT in PostgreSQL 12 ?"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are migrating our PostgreSQL 9.6.10 database (with streaming replication\nactive) to a faster disk array.\nWe are using this opportunity to enable checksums, so we will have to do a\nfull backup-restore.\nThe database size is about 500GB, it takes about 2h:30min for a full\nbackup, and then about 1h to fully restore it with checksum enabled on the\nnew array, plus 2h to recreate the replica on the old array.\n\nAlthough all synthetic tests (pgbench) indicate the new disk array is\nfaster, we will only be 100% confident once we see its performance on\nproduction, so our backup plan is using our replica database on the older\narray. If the new array performance is poor during production ramp up, we\ncan switch to the replica with little impact to our customers.\n\nProblem is the offline window for backup, restore the full database with\nchecksum and recreate the replica is about 5h:30m.\n\nOne thing that occurred to us to shorten the offline window was restoring\nthe database to both the master and replica in parallel (of course we would\nconfigure the replica as master do restore the database), that would shave\n1h of the total time. Although this is not documented we thought that\nrestoring the same database to identical servers would result in binary\nidentical data files.\n\nWe tried this in lab. As this is not a kosher way to create a replica, we\nran a checksum comparison of all data files, and we ended up having a lot\nof differences. Bummer. Both master and replica worked (no errors on logs),\nbut we ended up insecure about this path because of the binary differences\non data files.\nBut in principle it should work, right?\nHas anyone been through this type of problem?\n\n\nRegards,\nHaroldo Kerry\n\nHello,We are migrating our PostgreSQL 9.6.10 database (with streaming replication active) to a faster disk array.We are using this opportunity to enable checksums, so we will have to do a full backup-restore.The database size is about 500GB, it takes about 2h:30min for a full backup, and then about 1h to fully restore it with checksum enabled on the new array, plus 2h to recreate the replica on the old array.Although all synthetic tests (pgbench) indicate the new disk array is faster, we will only be 100% confident once we see its performance on production, so our backup plan is using our replica database on the older array. If the new array performance is poor during production ramp up, we can switch to the replica with little impact to our customers.Problem is the offline window for backup, restore the full database with checksum and recreate the replica is about 5h:30m.One thing that occurred to us to shorten the offline window was restoring the database to both the master and replica in parallel (of course we would configure the replica as master do restore the database), that would shave 1h of the total time. Although this is not documented we thought that restoring the same database to identical servers would result in binary identical data files.We tried this in lab. As this is not a kosher way to create a replica, we ran a checksum comparison of all data files, and we ended up having a lot of differences. Bummer. Both master and replica worked (no errors on logs), but we ended up insecure about this path because of the binary differences on data files.But in principle it should work, right?Has anyone been through this type of problem?Regards,Haroldo Kerry",
"msg_date": "Thu, 30 May 2019 12:08:04 -0300",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shortest offline window on database migration"
},
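Since the checksum switch forces a dump/restore anyway, the usual way to compress such a window is to run both sides in parallel; a rough sketch, with placeholder paths, database name and job counts (tune -j to the hardware):

    # initialise the new cluster with data checksums
    initdb --data-checksums -D /new_array/pgdata

    # directory-format dump allows parallel workers on both dump and restore
    pg_dump -Fd -j 8 -f /backup/mydb.dir mydb
    pg_restore -j 8 -d mydb /backup/mydb.dir

Whether this shortens the 2h30/1h figures depends on where the time goes: pg_dump -j parallelises across tables, not within a single large table, and index rebuilds usually dominate the restore.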
{
"msg_contents": "On Thu, May 30, 2019 at 12:08:04PM -0300, Haroldo Kerry wrote:\n>Hello,\n>\n>We are migrating our PostgreSQL 9.6.10 database (with streaming replication\n>active) to a faster disk array.\n>We are using this opportunity to enable checksums, so we will have to do a\n>full backup-restore.\n>The database size is about 500GB, it takes about 2h:30min for a full\n>backup, and then about 1h to fully restore it with checksum enabled on the\n>new array, plus 2h to recreate the replica on the old array.\n>\n>Although all synthetic tests (pgbench) indicate the new disk array is\n>faster, we will only be 100% confident once we see its performance on\n>production, so our backup plan is using our replica database on the older\n>array. If the new array performance is poor during production ramp up, we\n>can switch to the replica with little impact to our customers.\n>\n>Problem is the offline window for backup, restore the full database with\n>checksum and recreate the replica is about 5h:30m.\n>\n>One thing that occurred to us to shorten the offline window was restoring\n>the database to both the master and replica in parallel (of course we would\n>configure the replica as master do restore the database), that would shave\n>1h of the total time. Although this is not documented we thought that\n>restoring the same database to identical servers would result in binary\n>identical data files.\n>\n>We tried this in lab. As this is not a kosher way to create a replica, we\n>ran a checksum comparison of all data files, and we ended up having a lot\n>of differences. Bummer. Both master and replica worked (no errors on logs),\n>but we ended up insecure about this path because of the binary differences\n>on data files.\n>But in principle it should work, right?\n\nWhat should work? Backup using pg_dump and restore certainly won't give\nyou the same binary files - the commit timestamps will be different,\noperations may happen in a different order (esp. with parallel restore),\nand so on. And the instances don't start as a copy anyway, so there will\nbe different system IDs, etc.\n\nSo no, this is not a valid way to provision master/standby cluster.\n\n>Has anyone been through this type of problem?\n>\n\nUnfortunately, I don't think there's a much better solution that what you\ninitially described - dump/restore, and then creating a replica.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 30 May 2019 17:31:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortest offline window on database migration"
},
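A quick way to confirm that two independently restored clusters are not binary twins is to compare their system identifiers (the data directory paths below are placeholders):

    pg_controldata /path/to/master/pgdata  | grep 'system identifier'
    pg_controldata /path/to/replica/pgdata | grep 'system identifier'

Clusters created by separate initdb/restore runs report different identifiers, which is one of the reasons streaming replication between them is not valid.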
{
"msg_contents": ">Has anyone been through this type of problem?\n>\n\nYou could set up a new, empty db (with checksums enabled, etc.) on the new hardware and then use logical replication to sync across all the data from the existing cluster.\n(This logical replica could be doing binary replication to hot standbys too, if you like).\n\nWhen the sync has finished you could perhaps gradually shift read-only load over to the new db, and finally switch write load too - your downtime would then be limited to how long this final cut-over takes.\n\nSteve.\n\n\n\n",
"msg_date": "Thu, 30 May 2019 15:54:52 +0000",
"msg_from": "Steven Winfield <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Shortest offline window on database migration"
},
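On PostgreSQL 10 and later, the built-in form of this approach looks roughly like the following (names and connection strings are placeholders, and the table definitions must already exist on the new cluster):

    -- on the existing (source) cluster
    CREATE PUBLICATION migration_pub FOR ALL TABLES;

    -- on the new, checksum-enabled cluster, after restoring the schema only
    CREATE SUBSCRIPTION migration_sub
        CONNECTION 'host=old-server dbname=production_db user=repl_user'
        PUBLICATION migration_pub;

As noted later in the thread, this built-in variant is not available on 9.6, where an extension such as pglogical would be needed instead.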
{
"msg_contents": "Hello Steven,\nThanks a lot for the idea, it had not thought about it.\n@Joshua @Tomas, thanks for clarifying why it doesn't work!\n\nBest regards,\nHaroldo Kerry\n\nOn Thu, May 30, 2019 at 12:54 PM Steven Winfield <\[email protected]> wrote:\n\n> >Has anyone been through this type of problem?\n> >\n>\n> You could set up a new, empty db (with checksums enabled, etc.) on the new\n> hardware and then use logical replication to sync across all the data from\n> the existing cluster.\n> (This logical replica could be doing binary replication to hot standbys\n> too, if you like).\n>\n> When the sync has finished you could perhaps gradually shift read-only\n> load over to the new db, and finally switch write load too - your downtime\n> would then be limited to how long this final cut-over takes.\n>\n> Steve.\n>\n>\n>\n> ------------------------------\n>\n>\n> *This email is confidential. If you are not the intended recipient, please\n> advise us immediately and delete this message. The registered name of\n> Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See -\n> http://www.gam.com/en/Legal/Email+disclosures+EU\n> <http://www.gam.com/en/Legal/Email+disclosures+EU> for further information\n> on confidentiality, the risks of non-secure electronic communication, and\n> certain disclosures which we are required to make in accordance with\n> applicable legislation and regulations. If you cannot access this link,\n> please notify us by reply message and we will send the contents to you.GAM\n> Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and\n> use information about you in the course of your interactions with us. Full\n> details about the data types we collect and what we use this for and your\n> related rights is set out in our online privacy policy at\n> https://www.gam.com/en/legal/privacy-policy\n> <https://www.gam.com/en/legal/privacy-policy>. Please familiarise yourself\n> with this policy and check it from time to time for updates as it\n> supplements this notice------------------------------ *\n>\n\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nHello Steven,Thanks a lot for the idea, it had not thought about it.@Joshua @Tomas, thanks for clarifying why it doesn't work!Best regards,Haroldo KerryOn Thu, May 30, 2019 at 12:54 PM Steven Winfield <[email protected]> wrote:>Has anyone been through this type of problem?\n>\n\nYou could set up a new, empty db (with checksums enabled, etc.) on the new hardware and then use logical replication to sync across all the data from the existing cluster.\n(This logical replica could be doing binary replication to hot standbys too, if you like).\n\nWhen the sync has finished you could perhaps gradually shift read-only load over to the new db, and finally switch write load too - your downtime would then be limited to how long this final cut-over takes.\n\nSteve.\n\n\n\n This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. 
If you cannot access this link, please notify us by reply message and we will send the contents to you.GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice \n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Thu, 30 May 2019 13:23:33 -0300",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shortest offline window on database migration"
},
{
"msg_contents": "Hello Steven,\nUnfortunately logical replication is a pg10+ feature. One more reason for\nupgrading from 9.6.10 :-)\n\nRegards,\nHaroldo Kerry\n\nOn Thu, May 30, 2019 at 1:23 PM Haroldo Kerry <[email protected]> wrote:\n\n> Hello Steven,\n> Thanks a lot for the idea, it had not thought about it.\n> @Joshua @Tomas, thanks for clarifying why it doesn't work!\n>\n> Best regards,\n> Haroldo Kerry\n>\n> On Thu, May 30, 2019 at 12:54 PM Steven Winfield <\n> [email protected]> wrote:\n>\n>> >Has anyone been through this type of problem?\n>> >\n>>\n>> You could set up a new, empty db (with checksums enabled, etc.) on the\n>> new hardware and then use logical replication to sync across all the data\n>> from the existing cluster.\n>> (This logical replica could be doing binary replication to hot standbys\n>> too, if you like).\n>>\n>> When the sync has finished you could perhaps gradually shift read-only\n>> load over to the new db, and finally switch write load too - your downtime\n>> would then be limited to how long this final cut-over takes.\n>>\n>> Steve.\n>>\n>>\n>>\n>> ------------------------------\n>>\n>>\n>> *This email is confidential. If you are not the intended recipient,\n>> please advise us immediately and delete this message. The registered name\n>> of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See -\n>> http://www.gam.com/en/Legal/Email+disclosures+EU\n>> <http://www.gam.com/en/Legal/Email+disclosures+EU> for further information\n>> on confidentiality, the risks of non-secure electronic communication, and\n>> certain disclosures which we are required to make in accordance with\n>> applicable legislation and regulations. If you cannot access this link,\n>> please notify us by reply message and we will send the contents to you.GAM\n>> Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and\n>> use information about you in the course of your interactions with us. Full\n>> details about the data types we collect and what we use this for and your\n>> related rights is set out in our online privacy policy at\n>> https://www.gam.com/en/legal/privacy-policy\n>> <https://www.gam.com/en/legal/privacy-policy>. Please familiarise yourself\n>> with this policy and check it from time to time for updates as it\n>> supplements this notice------------------------------ *\n>>\n>\n>\n> --\n>\n> Haroldo Kerry\n>\n> CTO/COO\n>\n> Rua do Rócio, 220, 7° andar, conjunto 72\n>\n> São Paulo – SP / CEP 04552-000\n>\n> [email protected]\n>\n> www.callix.com.br\n>\n\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nHello Steven,Unfortunately logical replication is a pg10+ feature. One more reason for upgrading from 9.6.10 :-)Regards,Haroldo KerryOn Thu, May 30, 2019 at 1:23 PM Haroldo Kerry <[email protected]> wrote:Hello Steven,Thanks a lot for the idea, it had not thought about it.@Joshua @Tomas, thanks for clarifying why it doesn't work!Best regards,Haroldo KerryOn Thu, May 30, 2019 at 12:54 PM Steven Winfield <[email protected]> wrote:>Has anyone been through this type of problem?\n>\n\nYou could set up a new, empty db (with checksums enabled, etc.) 
on the new hardware and then use logical replication to sync across all the data from the existing cluster.\n(This logical replica could be doing binary replication to hot standbys too, if you like).\n\nWhen the sync has finished you could perhaps gradually shift read-only load over to the new db, and finally switch write load too - your downtime would then be limited to how long this final cut-over takes.\n\nSteve.\n\n\n\n This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you.GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice \n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Thu, 30 May 2019 18:52:41 -0300",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shortest offline window on database migration"
},
{
"msg_contents": "On Thu, May 30, 2019 at 11:08 AM Haroldo Kerry <[email protected]> wrote:\n\n> Hello,\n>\n> We are migrating our PostgreSQL 9.6.10 database (with streaming\n> replication active) to a faster disk array.\n> We are using this opportunity to enable checksums, so we will have to do a\n> full backup-restore.\n> The database size is about 500GB, it takes about 2h:30min for a full\n> backup, and then about 1h to fully restore it with checksum enabled on the\n> new array, plus 2h to recreate the replica on the old array.\n>\n\nAs others have noticed, your \"trick\" won't work. So back to basics. Are\nyou using the best degree of parallelization on each one of these tasks?\nWhat is the bottleneck of each one (CPU, disk, network)? how are you\ncreating the replica? Can you share the actual command lines for each\none? It seems odd that the dump (which only needs to dump the index and\nconstraint definitions) is so much slower than the restore (which actually\nneeds to build those indexes and validate the constraints). Is that because\nthe dump is happening from the old slow disk and restore a new fast ones?\nSame with creating the replica, why is that slower than actually doing the\nrestore?\n\nIt sounds like you are planning on blowing away the old master server on\nthe old array as soon as the upgrade is complete, so you can re-use that\nspace to build the new replica? That doesn't seem very safe to me--what if\nduring the rebuilding of the replica you run into a major problem and have\nto roll the whole thing back? What will the old array which is holding the\ncurrent replica server be doing in all of this?\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, May 30, 2019 at 11:08 AM Haroldo Kerry <[email protected]> wrote:Hello,We are migrating our PostgreSQL 9.6.10 database (with streaming replication active) to a faster disk array.We are using this opportunity to enable checksums, so we will have to do a full backup-restore.The database size is about 500GB, it takes about 2h:30min for a full backup, and then about 1h to fully restore it with checksum enabled on the new array, plus 2h to recreate the replica on the old array.As others have noticed, your \"trick\" won't work. So back to basics. Are you using the best degree of parallelization on each one of these tasks? What is the bottleneck of each one (CPU, disk, network)? how are you creating the replica? Can you share the actual command lines for each one? It seems odd that the dump (which only needs to dump the index and constraint definitions) is so much slower than the restore (which actually needs to build those indexes and validate the constraints). Is that because the dump is happening from the old slow disk and restore a new fast ones? Same with creating the replica, why is that slower than actually doing the restore?It sounds like you are planning on blowing away the old master server on the old array as soon as the upgrade is complete, so you can re-use that space to build the new replica? That doesn't seem very safe to me--what if during the rebuilding of the replica you run into a major problem and have to roll the whole thing back? What will the old array which is holding the current replica server be doing in all of this?Cheers,Jeff",
"msg_date": "Thu, 30 May 2019 21:41:50 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortest offline window on database migration"
},
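For the parallelization question, the usual pattern is the directory output format, which allows both the dump and the restore to run with multiple jobs (job counts and paths here are illustrative):

    pg_dump -Fd -j 8 -f /backup/prod.dir production_db
    pg_restore -d production_db -j 8 /backup/prod.dir

Whether this beats a single-process -Fc dump depends on where the bottleneck is, which is exactly what the questions above are trying to establish.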
{
"msg_contents": "> 2019年5月31日(金) 6:53 Haroldo Kerry <[email protected]>:\n>>> On Thu, May 30, 2019 at 12:54 PM Steven Winfield <[email protected]> wrote:\n>>\n>> >Has anyone been through this type of problem?\n>> >\n>>\n>> You could set up a new, empty db (with checksums enabled, etc.) on the new hardware and then use logical replication to sync across all the data from the existing cluster.\n>> (This logical replica could be doing binary replication to hot standbys too, if you like).\n>>\n>> When the sync has finished you could perhaps gradually shift read-only load over to the new db, and finally switch write load too - your downtime would then be limited to how long this final cut-over takes.\n>>\n> Steve.\n> Hello Steven,\n> Unfortunately logical replication is a pg10+ feature. One more reason for upgrading from 9.6.10 :-)\n\nHave you looked at the pglogical extension from 2ndQuadrant?\n\n https://github.com/2ndQuadrant/pglogical/tree/REL2_x_STABLE\n\nRegards\n\nIan Barwick\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 31 May 2019 11:49:44 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortest offline window on database migration"
},
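A rough sketch of what a pglogical setup involves, based on the extension's documented functions; node names and DSNs are placeholders, and the exact arguments should be checked against the pglogical documentation for the version in use:

    -- on the 9.6 provider
    CREATE EXTENSION pglogical;
    SELECT pglogical.create_node(node_name := 'provider',
                                 dsn := 'host=old-server dbname=production_db');
    SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

    -- on the new subscriber cluster
    CREATE EXTENSION pglogical;
    SELECT pglogical.create_node(node_name := 'subscriber',
                                 dsn := 'host=new-server dbname=production_db');
    SELECT pglogical.create_subscription(subscription_name := 'migration',
                                         provider_dsn := 'host=old-server dbname=production_db');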
{
"msg_contents": "On 5/30/19 5:08 PM, Haroldo Kerry wrote:\n> Hello,\n> \n> We are migrating our PostgreSQL 9.6.10 database (with streaming\n> replication active) to a faster disk array.\n> We are using this opportunity to enable checksums, \n\n\nI would stay away from performing 2 big changes in one go.\n\n\nregards,\n\nfabio pardi\n\n\n\n\n",
"msg_date": "Fri, 31 May 2019 11:40:54 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortest offline window on database migration"
},
{
"msg_contents": "Jeff,\n\nWe are using the following command to dump the database:\n\ndocker exec pg-1 bash -c 'pg_dump -v -U postgres -Fc\n--file=/var/lib/postgresql/backup/sc2-ssd.bkp smartcenter2_prod' 2>>\n/var/log/sc2-bkp-ssd.log &\n\nThe bottleneck at dump is CPU (a single one, on a 44 thread server), as we\nare using the -Fc option, that does not allow multiple jobs.\nWe tried some time ago to use the --jobs option of pg_dump but it was\nslower, even with more threads. Our guess is the sheer volume of files\noutweighs the processing gains of using a compressed file output. Also\npg_dump even with multiple jobs spends a lot of time (1h+) on the \"reading\ndependency data\" section that seems to be single threaded (our application\nis multi-tenant and we use schemas to separate tenant data, hence we have a\nlot of tables).\nWe are dumping the backup to the old array.\n\nWe are creating the replica using :\ndocker exec pg-2 pg_basebackup -h 192.168.0.107 -U replication -P --xlog -D\n/var/lib/postgresql/data_9.6\nand it is taking 1h10m , instead of the 2h I reported initially, because we\nwere using rsync with checksums to do it, after experimenting with\npg_basebackup we found out it is faster, rsync was taking 1h just to\ncalculate all checksums. Thanks for your insight on this taking too long.\n\nRegarding blowing the old array after the upgrade is complete:\nOur plan keeps the old array data volume. We restore the backup to the new\narray with checksums, delete the old replica (space issues), and use the\nspace for the new replica with checksums.\nIf the new array does not work as expected, we switch to the replica with\nthe old array. If all things go wrong, we can just switch back to the old\narray data volume (without a replica for some time).\n\nI'm glad to report that we executed the plan over dawn (4h downtime in the\nend) and everything worked, the new array is performing as expected.\nNew array database volume:\n4 x Intel 960GB SATA SSD D3-S4610 on a RAID 10 configuration, on a local\nPERC H730 Controller.\nSpecs:\n\n - Mfr part number: SSDSC2KG960G801\n - Form Factor: 2.5 inch\n - Latency: read - 36 µs; write - 37 µs\n - Random Read (100% span): 96, 000 IOPS\n - Random Write (100% span): 51, 000 IOPS\n\n\nOld array database volume (now used for the replica server):\n7 x Samsung 480GB SAS SSD PM-1633 on a Dell SC2020 Compellent iSCSI\nstorage, with dual 1 Gbps interfaces (max bandwidth 2 Gbps)\nSpecs: Sequential Read Up to 1,400 MB/s Sequential Write Up to 930 MB/s\nRandom Read Up to 200,000 IOPS Random Write Up to 37,000 IOPS\n\nKind of surprisingly to us the local array outperforms the older (and more\nexpensive) by more than 2x.\n\nThanks for the help.\n\nRegards,\nHaroldo Kerry\n\nOn Thu, May 30, 2019 at 10:42 PM Jeff Janes <[email protected]> wrote:\n\n> On Thu, May 30, 2019 at 11:08 AM Haroldo Kerry <[email protected]>\n> wrote:\n>\n>> Hello,\n>>\n>> We are migrating our PostgreSQL 9.6.10 database (with streaming\n>> replication active) to a faster disk array.\n>> We are using this opportunity to enable checksums, so we will have to do\n>> a full backup-restore.\n>> The database size is about 500GB, it takes about 2h:30min for a full\n>> backup, and then about 1h to fully restore it with checksum enabled on the\n>> new array, plus 2h to recreate the replica on the old array.\n>>\n>\n> As others have noticed, your \"trick\" won't work. So back to basics. 
Are\n> you using the best degree of parallelization on each one of these tasks?\n> What is the bottleneck of each one (CPU, disk, network)? how are you\n> creating the replica? Can you share the actual command lines for each\n> one? It seems odd that the dump (which only needs to dump the index and\n> constraint definitions) is so much slower than the restore (which actually\n> needs to build those indexes and validate the constraints). Is that because\n> the dump is happening from the old slow disk and restore a new fast ones?\n> Same with creating the replica, why is that slower than actually doing the\n> restore?\n>\n> It sounds like you are planning on blowing away the old master server on\n> the old array as soon as the upgrade is complete, so you can re-use that\n> space to build the new replica? That doesn't seem very safe to me--what if\n> during the rebuilding of the replica you run into a major problem and have\n> to roll the whole thing back? What will the old array which is holding the\n> current replica server be doing in all of this?\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nJeff,We are using the following command to dump the database:docker exec pg-1 bash -c 'pg_dump -v -U postgres -Fc --file=/var/lib/postgresql/backup/sc2-ssd.bkp smartcenter2_prod' 2>> /var/log/sc2-bkp-ssd.log &The bottleneck at dump is CPU (a single one, on a 44 thread server), as we are using the -Fc option, that does not allow multiple jobs.We tried some time ago to use the --jobs option of pg_dump but it was slower, even with more threads. Our guess is the sheer volume of files outweighs the processing gains of using a compressed file output. Also pg_dump even with multiple jobs spends a lot of time (1h+) on the \"reading dependency data\" section that seems to be single threaded (our application is multi-tenant and we use schemas to separate tenant data, hence we have a lot of tables).We are dumping the backup to the old array.We are creating the replica using :docker exec pg-2 pg_basebackup -h 192.168.0.107 -U replication -P --xlog -D /var/lib/postgresql/data_9.6 and it is taking 1h10m , instead of the 2h I reported initially, because we were using rsync with checksums to do it, after experimenting with pg_basebackup we found out it is faster, rsync was taking 1h just to calculate all checksums. Thanks for your insight on this taking too long.Regarding blowing the old array after the upgrade is complete:Our plan keeps the old array data volume. We restore the backup to the new array with checksums, delete the old replica (space issues), and use the space for the new replica with checksums.If the new array does not work as expected, we switch to the replica with the old array. If all things go wrong, we can just switch back to the old array data volume (without a replica for some time). 
I'm glad to report that we executed the plan over dawn (4h downtime in the end) and everything worked, the new array is performing as expected.New array database volume:4 x Intel 960GB SATA SSD D3-S4610 on a RAID 10 configuration, on a local PERC H730 Controller.Specs:Mfr part number: SSDSC2KG960G801Form Factor: 2.5 inchLatency: read - 36 µs; write - 37 µsRandom Read (100% span): 96, 000 IOPSRandom Write (100% span): 51, 000 IOPSOld array database volume (now used for the replica server):7 x Samsung 480GB SAS SSD PM-1633 on a Dell SC2020 Compellent iSCSI storage, with dual 1 Gbps interfaces (max bandwidth 2 Gbps)Specs: Sequential Read Up to 1,400 MB/s\nSequential Write Up to 930 MB/s\nRandom Read Up to 200,000 IOPS\nRandom Write Up to 37,000 IOPS Kind of surprisingly to us the local array outperforms the older (and more expensive) by more than 2x. Thanks for the help.Regards,Haroldo KerryOn Thu, May 30, 2019 at 10:42 PM Jeff Janes <[email protected]> wrote:On Thu, May 30, 2019 at 11:08 AM Haroldo Kerry <[email protected]> wrote:Hello,We are migrating our PostgreSQL 9.6.10 database (with streaming replication active) to a faster disk array.We are using this opportunity to enable checksums, so we will have to do a full backup-restore.The database size is about 500GB, it takes about 2h:30min for a full backup, and then about 1h to fully restore it with checksum enabled on the new array, plus 2h to recreate the replica on the old array.As others have noticed, your \"trick\" won't work. So back to basics. Are you using the best degree of parallelization on each one of these tasks? What is the bottleneck of each one (CPU, disk, network)? how are you creating the replica? Can you share the actual command lines for each one? It seems odd that the dump (which only needs to dump the index and constraint definitions) is so much slower than the restore (which actually needs to build those indexes and validate the constraints). Is that because the dump is happening from the old slow disk and restore a new fast ones? Same with creating the replica, why is that slower than actually doing the restore?It sounds like you are planning on blowing away the old master server on the old array as soon as the upgrade is complete, so you can re-use that space to build the new replica? That doesn't seem very safe to me--what if during the rebuilding of the replica you run into a major problem and have to roll the whole thing back? What will the old array which is holding the current replica server be doing in all of this?Cheers,Jeff\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Sat, 1 Jun 2019 10:17:13 -0300",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shortest offline window on database migration"
},
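One note on the pg_basebackup invocation above: on 9.6 the WAL needed by the new standby can also be streamed while the base backup runs, and -R writes a matching recovery.conf, e.g. (host and data directory taken from the message, the rest is illustrative):

    pg_basebackup -h 192.168.0.107 -U replication -D /var/lib/postgresql/data_9.6 -X stream -R -P

Streaming the WAL avoids depending on wal_keep_segments being large enough for the duration of the copy.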
{
"msg_contents": "Greetings,\n\n* Haroldo Kerry ([email protected]) wrote:\n> The bottleneck at dump is CPU (a single one, on a 44 thread server), as we\n> are using the -Fc option, that does not allow multiple jobs.\n> We tried some time ago to use the --jobs option of pg_dump but it was\n> slower, even with more threads. Our guess is the sheer volume of files\n> outweighs the processing gains of using a compressed file output. Also\n> pg_dump even with multiple jobs spends a lot of time (1h+) on the \"reading\n> dependency data\" section that seems to be single threaded (our application\n> is multi-tenant and we use schemas to separate tenant data, hence we have a\n> lot of tables).\n\nYou might want to reconsider using the separate-schemas-for-tenants\napproach. This isn't the only annoyance you can run into with lots and\nlots of tables. That said, are you using the newer version of pg_dump\n(which is what you should be doing when migrating to a newer version of\nPG, always)? We've improved it over time, though I can't recall off-hand\nif this particular issue was improved of in-between the releases being\ndiscussed here. Of course, lots of little files and dealing with them\ncould drag down performance when working in parallel. Still a bit\nsurprised that it's ending up slower than -Fc.\n\n> We are creating the replica using :\n> docker exec pg-2 pg_basebackup -h 192.168.0.107 -U replication -P --xlog -D\n> /var/lib/postgresql/data_9.6\n> and it is taking 1h10m , instead of the 2h I reported initially, because we\n> were using rsync with checksums to do it, after experimenting with\n> pg_basebackup we found out it is faster, rsync was taking 1h just to\n> calculate all checksums. Thanks for your insight on this taking too long.\n\nSo, it's a bit awkward still, unfortunately, but you can use pgbackrest\nto effectively give you a parallel-replica-build. The steps are\nsomething like:\n\nGet pgbackrest WAL archiving up and going, with the repo on the\ndestination server/filesystem, but have 'compress=n' in the\npgbackrest.conf for the repo.\n\nRun: pgbackrest --stanza=mydb --type=full --process-max=8 backup\n\nOnce that's done, just do:\n\nmv /path/to/repo/backup/mydb/20190605-120000F/pg_data /new/pgdata\nchmod -R g-rwx /new/pgdata\n\nThen in /new/pgdata, create a recovery.conf file like:\n\nrestore_command = 'pgbackrest --stanza=mydb archive-get %f \"%p\"'\n\nAnd start up the DB server.\n\nWe have some ideas about how make that whole thing cleaner but the\nrewrite into C has delayed our efforts, perhaps once that's done (this\nfall), we can look at it.\n\nOf course, you won't have an actual backup of the new database server at\nthat point yet, so you'll want to clean things up and make that happen\nASAP. Another option, which is what I usually recommend, is just to\ntake a new backup (properly) and then do a restore from it, but that'll\nobviously take longer since there's two copies being done instead of one\n(though you can parallelize to your heart's content, so it can still be\nquite fast if you have enough CPU and I/O).\n\nThanks,\n\nStephen",
"msg_date": "Tue, 4 Jun 2019 16:18:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortest offline window on database migration"
}
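The "WAL archiving up and going" step is driven by a small configuration on both sides; a sketch only, with the stanza name, paths and option names to be checked against the pgbackrest documentation for the installed version:

    # pgbackrest.conf on the destination
    [mydb]
    pg1-path=/path/to/source/pgdata

    [global]
    repo1-path=/new/filesystem/pgbackrest
    compress=n
    process-max=8

    # postgresql.conf on the source cluster
    archive_mode = on
    archive_command = 'pgbackrest --stanza=mydb archive-push %p'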
] |
[
{
"msg_contents": "Hello everyone, \nI started comparing performance between postgres 9.4 and 12beta1 more specifically comparing the new (materialized) CTE.\nThe statements i use are application statements that i have little control over,\nHardware is identical as both clusters are running on the same server, on the same disks, with the same data.\nAlso, cluster settings are almost identical and both clusters have been analyzed.\nIn all my tests 12 is faster , sometimes much faster, apart from one single query that takes ~12 seconds on 9.4 and nearly 300 seconds on 12.\n\nPlans for both :\nPlan for 12 <https://explain.depesz.com/s/wRXO>\nplan for 9.4 <https://explain.depesz.com/s/njtH>\n\nThe plans are obfuscated , apologies for that but what stands out is the following :\n\n\nHash Left Join <http://www.depesz.com/2013/05/09/explaining-the-unexplainable-part-3/#join-modifiers> (cost=200,673.150..203,301.940 rows=153,121 width=64) (actual time=1,485.883..284,536.440 rows=467,123 loops=1)\nHash Cond: (lima_sierra(six_lima_november2.victor_romeo, 1) = foxtrot_hotel.victor_romeo)\nJoin Filter: (whiskey_uniform1.six_sierra = foxtrot_hotel.uniform_juliet)\nRows Removed by Join Filter: 4549925366\n\n\nI really can't understand what these 4.5bil rows have been removed from, there is nothing suggesting that this dataset was ever created (eg. temp)\nand these numbers definitely don't match what i was expecting, which is more or less what i'm seeing in 9.4 plan.\n\nObviously i've tested this more than once and this behaviour consists.\n\n\nBest Regards,\nVasilis Ventirozos\n\n\nHello everyone, I started comparing performance between postgres 9.4 and 12beta1 more specifically comparing the new (materialized) CTE.The statements i use are application statements that i have little control over,Hardware is identical as both clusters are running on the same server, on the same disks, with the same data.Also, cluster settings are almost identical and both clusters have been analyzed.In all my tests 12 is faster , sometimes much faster, apart from one single query that takes ~12 seconds on 9.4 and nearly 300 seconds on 12.Plans for both :Plan for 12plan for 9.4The plans are obfuscated , apologies for that but what stands out is the following :Hash Left Join (cost=200,673.150..203,301.940 rows=153,121 width=64) (actual time=1,485.883..284,536.440 rows=467,123 loops=1)Hash Cond: (lima_sierra(six_lima_november2.victor_romeo, 1) = foxtrot_hotel.victor_romeo)Join Filter: (whiskey_uniform1.six_sierra = foxtrot_hotel.uniform_juliet)Rows Removed by Join Filter: 4549925366I really can't understand what these 4.5bil rows have been removed from, there is nothing suggesting that this dataset was ever created (eg. temp)and these numbers definitely don't match what i was expecting, which is more or less what i'm seeing in 9.4 plan.Obviously i've tested this more than once and this behaviour consists.Best Regards,Vasilis Ventirozos",
"msg_date": "Tue, 4 Jun 2019 17:34:07 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange query behaviour between 9.4 and 12beta1"
},
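For reference, version 12 lets the materialization behaviour be pinned per CTE, which is useful when comparing against the pre-12 always-materialize behaviour. A small illustrative example, not one of the obfuscated production queries:

    CREATE TABLE t (id int, val int);

    -- force the old 9.4-style behaviour: the CTE is evaluated and stored once
    WITH w AS MATERIALIZED (SELECT id FROM t WHERE val = 1)
    SELECT count(*) FROM w;

    -- allow the planner to fold the CTE into the outer query (the v12 default
    -- when the CTE is referenced only once and has no side effects)
    WITH w AS NOT MATERIALIZED (SELECT id FROM t WHERE val = 1)
    SELECT count(*) FROM w;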
{
"msg_contents": "On Tue, Jun 04, 2019 at 05:34:07PM +0300, Vasilis Ventirozos wrote:\n>Hello everyone,\n>I started comparing performance between postgres 9.4 and 12beta1 more specifically comparing the new (materialized) CTE.\n\nAre you saying the CTE is specified as MATERIALIZED in the query on 12?\nBecause I don't see it in the explain plan (it's mentioned in the 9.4\nplan, though).\n\n>The statements i use are application statements that i have little control over,\n>Hardware is identical as both clusters are running on the same server, on the same disks, with the same data.\n>Also, cluster settings are almost identical and both clusters have been analyzed.\n>In all my tests 12 is faster , sometimes much faster, apart from one single query that takes ~12 seconds on 9.4 and nearly 300 seconds on 12.\n>\n>Plans for both :\n>Plan for 12 <https://explain.depesz.com/s/wRXO>\n>plan for 9.4 <https://explain.depesz.com/s/njtH>\n>\n>The plans are obfuscated , apologies for that but what stands out is the following :\n>\n\nMeh.\n\n>\n>Hash Left Join <http://www.depesz.com/2013/05/09/explaining-the-unexplainable-part-3/#join-modifiers> (cost=200,673.150..203,301.940 rows=153,121 width=64) (actual time=1,485.883..284,536.440 rows=467,123 loops=1)\n>Hash Cond: (lima_sierra(six_lima_november2.victor_romeo, 1) = foxtrot_hotel.victor_romeo)\n>Join Filter: (whiskey_uniform1.six_sierra = foxtrot_hotel.uniform_juliet)\n>Rows Removed by Join Filter: 4549925366\n>\n\nYou have two equality conditions for the join. The first one is used to\nmatch rows by the hash join itself - it's used to compute the hash\nvalue and lookups. But there may be multiple rows that match on either\nside, generating additional \"virtual rows\". Those are then removed by\nthe second condition.\n\nConsider for example simple cross-join on this table:\n\n a | b\n -------------\n 1 | a\n 1 | b\n 2 | a\n 2 | b\n\nand the query is\n\n SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b)\n\nNow, in the first phase, the hash join might only do (t1.a = t2.a),\nwhich will generate 8 rows\n\n a | t1.b | t2.b\n ----------------\n 1 | a | a\n 1 | a | b\n 1 | b | a\n 1 | b | b\n 2 | a | a\n 2 | a | b\n 2 | b | a\n 2 | b | b\n\nAnd then it will apply the second condition (t1.b = t2.b) as a \"filter\"\nwhich will remove some of the rows. In your case the first step\ngenerates 4.5B rows the second step discards.\n\nI'm not sure why we don't use both conditions as a hash condition.\nPerhaps it's a data type that does not support hashing, and on 9.4 that\ndoes not matter because we end up using merge join.\n\n\n>\n>I really can't understand what these 4.5bil rows have been removed from, there is nothing suggesting that this dataset was ever created (eg. temp)\n>and these numbers definitely don't match what i was expecting, which is more or less what i'm seeing in 9.4 plan.\n>\n>Obviously i've tested this more than once and this behaviour consists.\n>\n>\n>Best Regards,\n>Vasilis Ventirozos\n>\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 5 Jun 2019 01:01:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query behaviour between 9.4 and 12beta1"
}
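The mechanism Tomas describes is easy to see with a join where only one of the conditions can be used for hashing; in this contrived example the inequality cannot, so it shows up as a Join Filter with a "Rows Removed by Join Filter" line (not the original query, and the exact plan depends on costs):

    CREATE TABLE t (a int, b int);
    INSERT INTO t VALUES (1, 1), (1, 2), (2, 1), (2, 2);
    ANALYZE t;
    SET enable_nestloop = off;    -- nudge the tiny demo towards a hash join
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE
    SELECT * FROM t t1 JOIN t t2 ON t1.a = t2.a AND t1.b > t2.b;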
] |
[
{
"msg_contents": "Hi,\n\n\nMaybe of some interest for the past, present and future community, I\nbenchmarked the impact of wal_log_hints with and without wal_compression\nenabled.\n\n\nhttps://portavita.github.io/2019-06-14-blog_PostgreSQL_wal_log_hints_benchmarked/\n\n\ncomments are welcome.\n\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:46:30 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "wal_log_hints benchmarks"
},
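For anyone wanting to repeat this kind of test, the two settings involved are toggled along these lines (wal_log_hints needs a server restart, wal_compression only a reload):

    ALTER SYSTEM SET wal_log_hints = on;
    ALTER SYSTEM SET wal_compression = on;
    SELECT pg_reload_conf();   -- and restart the server for wal_log_hints to take effect
    SHOW wal_log_hints;
    SHOW wal_compression;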
{
"msg_contents": "On Fri, Jun 14, 2019 at 03:46:30PM +0200, Fabio Pardi wrote:\n> Maybe of some interest for the past, present and future community, I\n> benchmarked the impact of wal_log_hints with and without wal_compression\n> enabled.\n\npgbench data is rather compressible per the format of its attributes,\nhence I am ready to bet that the compressibility would much much less\nif you use random text data for example.\n\nThe amount of WAL generated also depends on the time it takes to run a\ntransactions, and on the checkpoint timing, so I think that would you\nget better results by using a fixed number of transactions if you use\npgbench, but that won't compare as much as a given workload in one\nsession so as you can make sure that the same amount of WAL and full\npages get generated.\n--\nMichael",
"msg_date": "Mon, 17 Jun 2019 11:01:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_log_hints benchmarks"
},
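A fixed amount of work per run, as suggested above, looks roughly like this with pgbench; -t sets transactions per client instead of a time limit (scale factor, client and job counts are placeholders):

    pgbench -i -s 100 bench
    pgbench -c 8 -j 8 -t 10000 bench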
{
"msg_contents": "Hi Michael,\n\nthanks for taking the time of reading it and for your feedback.\n\nOn 17/06/2019 04:01, Michael Paquier wrote:\n\n> \n> pgbench data is rather compressible per the format of its attributes,\n> hence I am ready to bet that the compressibility would much much less\n> if you use random text data for example.\n\nHaving compression enabled on production on several dbs, I can tell you that WAL production goes down around 50% in my case. \n\nIn my post, compression brings down WAL files to 1/2 when wal_log_hints is not enabled, and 1/3 when it is. \n\nSo, about compressibility, I think that pgbench data behaves similarly to production data, orat least the prod data I have in my databases.\n\nI am curious to know other people's experience in this ML.\n\n> \n> The amount of WAL generated also depends on the time it takes to run a\n> transactions, and on the checkpoint timing, so I think that would you\n> get better results by using a fixed number of transactions if you use\n> pgbench, but that won't compare as much as a given workload in one\n> session so as you can make sure that the same amount of WAL and full\n> pages get generated.\n\nThat's a good remark, thanks. I did not think about it and I will keep it in mind next time. I instead averaged the results over multiple runs, but setting an explicit number of transactions is the way to go.\n\nResults, by the way, were quite stable over all the runs (in terms of generated WAL files and TPS).\n\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Mon, 17 Jun 2019 09:53:13 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_log_hints benchmarks"
}
] |
[
{
"msg_contents": "Hi\n\nWe recently upgraded one of the instances from 9.6.3 to 9.6.12 and seeing\nfollowing issue that occurs for few cases.\n\nI have tried running analyze on the table with different values from 1000 -\n5000 but it doesn't seem to help the issue. There is some skew in a_id\nbut the combination index i_tc_adid_tid btree (a_id, id) makes the index\nunique as it includes primary key.\n\nIs there an explanation why it is using incorrect index?\n\nSQL:\nSELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND\n((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))\n\nIndexes on the table:\n i_tc_adid_tid btree (a_id, id)\n pk_id PRIMARY KEY, btree (id)\n i_agi_tc_tcn btree (ag_id, tname) ---> index that gets used\n\n\nduration: 49455.649 ms execute S_10: SELECT count(*) FROM tc WHERE\n((tc.a_id = $1)) AND ((tc.m_id = $2)) AND ((tc.ag_id is not null)) AND ((\ntc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))\nDETAIL: parameters: $1 = '11786959222', $2 = '6', $3 = '54460816501', $4 =\n'3', $5 = '6', $6 = '103'\nLOG: duration: 49455.639 ms plan:\n Query Text: SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND\n((tc.m_id = $2)) AND ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((\ntc.pt in ($4, $5, $6)))\n Aggregate (cost=5009342.34..5009342.35 rows=1 width=8) (actual\ntime=49455.626..49455.626 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=56288997\n -> Index Scan using i_agi_tc_tcn on b.tc (cost=0.57..5009342.34\nrows=1 width=0) (actual time=46452.555..49455.616 rows=1 loops=1)\n Output: id, tname, ...\n Index Cond: (tc.ag_id IS NOT NULL)\n Filter: ((tc.a_id = '11786959222'::numeric) AND (tc.m_id =\n'6'::numeric) AND (tc.id = '54460816501'::numeric) AND (tc.pt = ANY\n('{3,6,103}'::numeric[])))\n Rows Removed by Filter: 70996637\n Buffers: shared hit=56288997\n\nThanks\n\nHiWe recently upgraded one of the instances from 9.6.3 to 9.6.12 and seeing following issue that occurs for few cases. I have tried running analyze on the table with different values from 1000 - 5000 but it doesn't seem to help the issue. There is some skew in a_id but the combination index i_tc_adid_tid btree (a_id, id) makes the index unique as it includes primary key. Is there an explanation why it is using incorrect index?SQL:SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))Indexes on the table: i_tc_adid_tid btree (a_id, id) pk_id PRIMARY KEY, btree (id) i_agi_tc_tcn btree (ag_id, tname) ---> index that gets usedduration: 49455.649 ms execute S_10: SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))DETAIL: parameters: $1 = '11786959222', $2 = '6', $3 = '54460816501', $4 = '3', $5 = '6', $6 = '103'LOG: duration: 49455.639 ms plan: Query Text: SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6))) Aggregate (cost=5009342.34..5009342.35 rows=1 width=8) (actual time=49455.626..49455.626 rows=1 loops=1) Output: count(*) Buffers: shared hit=56288997 -> Index Scan using i_agi_tc_tcn on b.tc (cost=0.57..5009342.34 rows=1 width=0) (actual time=46452.555..49455.616 rows=1 loops=1) Output: id, tname, ... 
Index Cond: (tc.ag_id IS NOT NULL) Filter: ((tc.a_id = '11786959222'::numeric) AND (tc.m_id = '6'::numeric) AND (tc.id = '54460816501'::numeric) AND (tc.pt = ANY ('{3,6,103}'::numeric[]))) Rows Removed by Filter: 70996637 Buffers: shared hit=56288997Thanks",
"msg_date": "Tue, 18 Jun 2019 06:11:54 -0700",
"msg_from": "AminPG Jaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incorrect index used in few cases.."
},
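The "different values from 1000 - 5000" presumably refer to the statistics target; for a single skewed column that is applied along these lines (the column choice and value are only an illustration):

    ALTER TABLE tc ALTER COLUMN a_id SET STATISTICS 5000;
    ANALYZE tc;
    SELECT attname, n_distinct, most_common_vals IS NOT NULL AS has_mcv
    FROM pg_stats
    WHERE tablename = 'tc' AND attname IN ('a_id', 'id');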
{
"msg_contents": "AminPG Jaffer <[email protected]> writes:\n> Is there an explanation why it is using incorrect index?\n\n> SQL:\n> SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND\n> ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))\n\nWhat data types are these columns? For that matter, could we see the\nwhole schema for the table (psql \\d+ output or equivalent)?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jun 2019 09:35:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect index used in few cases.."
},
{
"msg_contents": "Here is the table structure.\n\n Column | Type |\n Modifiers\n-----------------------+-----------------------------+-----------------------------------------------------------\n id | numeric(38,0) | not null\n tname | character varying(255) | not null\n ag_id | numeric(38,0) |\n tc | character varying(255) | not null\n status | numeric(10,0) | not null\n internal_status | numeric(10,0) | not null\n create_date | timestamp(6) with time zone | not null\n version | numeric(38,0) | not null\n match_type | numeric(10,0) | not null default 0\n c_id | numeric(38,0) | not null\n m_id | numeric(38,0) | not null\n a_id | numeric(38,0) | not null\n maxb | numeric(18,6) |\n b_cc | character varying(10) |\n ui_status | numeric(10,0) | not null default 0\n destination_url | character varying(2084) |\n created_by | character varying(64) | not null\n creation_date | timestamp(0) with time zone | not null default\ntimezone('UTC'::text, clock_timestamp())\n last_updated_by | character varying(64) | not null\n last_updated_date | timestamp(0) with time zone | not null\n pr | numeric(5,0) | not null default 0\n ts | numeric(1,0) | not null default 0\n uniqueness_hash_v2 | numeric(29,0) | not null\n pt | numeric(5,0) |\n history | bigint |\n t_secondary | text |\n\nIndexes:\n \"pk_id\" PRIMARY KEY, btree (id)\n \"i_agi_tc_tcn\" btree (ag_id, tname)\n \"i_cid_agid_tcn\" btree (c_id, ag_id, tname)\n \"i_tc_adid_tid\" btree (a_id, id)\n \"i_tc_advertiser_id\" btree (a_id)\n \"i_tc_campaign_id\" btree (c_id)\n \"i_tc_lud_agi\" btree (last_updated_date, ag_id)\n \"i_tc_uniqueness_hash_v2\" btree (uniqueness_hash_v2)\nCheck constraints:\n \"tc_secondary\" CHECK (length(t_secondary) <= 4500)\n\nOn Tue, Jun 18, 2019 at 6:35 AM Tom Lane <[email protected]> wrote:\n\n> AminPG Jaffer <[email protected]> writes:\n> > Is there an explanation why it is using incorrect index?\n>\n> > SQL:\n> > SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND\n> > ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5,\n> $6)))\n>\n> What data types are these columns? For that matter, could we see the\n> whole schema for the table (psql \\d+ output or equivalent)?\n>\n> regards, tom lane\n>\n\nHere is the table structure. 
Column | Type | Modifiers-----------------------+-----------------------------+----------------------------------------------------------- id | numeric(38,0) | not null tname | character varying(255) | not null ag_id | numeric(38,0) | tc | character varying(255) | not null status | numeric(10,0) | not null internal_status | numeric(10,0) | not null create_date | timestamp(6) with time zone | not null version | numeric(38,0) | not null match_type | numeric(10,0) | not null default 0 c_id | numeric(38,0) | not null m_id | numeric(38,0) | not null a_id | numeric(38,0) | not null maxb | numeric(18,6) | b_cc | character varying(10) | ui_status | numeric(10,0) | not null default 0 destination_url | character varying(2084) | created_by | character varying(64) | not null creation_date | timestamp(0) with time zone | not null default timezone('UTC'::text, clock_timestamp()) last_updated_by | character varying(64) | not null last_updated_date | timestamp(0) with time zone | not null pr | numeric(5,0) | not null default 0 ts | numeric(1,0) | not null default 0 uniqueness_hash_v2 | numeric(29,0) | not null pt | numeric(5,0) | history | bigint | t_secondary | text |Indexes: \"pk_id\" PRIMARY KEY, btree (id) \"i_agi_tc_tcn\" btree (ag_id, tname) \"i_cid_agid_tcn\" btree (c_id, ag_id, tname) \"i_tc_adid_tid\" btree (a_id, id) \"i_tc_advertiser_id\" btree (a_id) \"i_tc_campaign_id\" btree (c_id) \"i_tc_lud_agi\" btree (last_updated_date, ag_id) \"i_tc_uniqueness_hash_v2\" btree (uniqueness_hash_v2)Check constraints: \"tc_secondary\" CHECK (length(t_secondary) <= 4500)On Tue, Jun 18, 2019 at 6:35 AM Tom Lane <[email protected]> wrote:AminPG Jaffer <[email protected]> writes:\r\n> Is there an explanation why it is using incorrect index?\n\r\n> SQL:\r\n> SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND\r\n> ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))\n\r\nWhat data types are these columns? For that matter, could we see the\r\nwhole schema for the table (psql \\d+ output or equivalent)?\n\r\n regards, tom lane",
"msg_date": "Tue, 18 Jun 2019 11:55:47 -0700",
"msg_from": "AminPG Jaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect index used in few cases.."
},
{
"msg_contents": "AminPG Jaffer <[email protected]> writes:\n> Here is the table structure.\n\nHpmh. I thought it was just barely possible that you had a datatype\nmismatch between the columns and the parameters, but nope, the columns\nare \"numeric\" just like the parameters.\n\nI'm pretty baffled. I tried to duplicate the problem with some dummy\ndata (as attached) and could not. In my hands, it wants to use the\ni_tc_adid_tid index, or if I drop that then the pkey index, and any\nother possible plan is orders of magnitude more expensive than those.\n\nAnother far-fetched theory is that the theoretically-better indexes\nare so badly bloated as to discourage the planner from using them.\nYou could eliminate that one by checking the index sizes with \"\\di+\".\n\nAre you perhaps running with non-default values for any planner cost\nparameters? Or it's not a stock build of Postgres?\n\nIf you could find a way to adjust the attached example so that it\nproduces the same misbehavior you see with live data, that would be\nvery interesting ...\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 18 Jun 2019 17:07:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect index used in few cases.."
},
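The planner cost parameters Tom asks about can be checked directly, including where each value came from:

    SELECT name, setting, source
    FROM pg_settings
    WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
                   'cpu_index_tuple_cost', 'effective_cache_size');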
{
"msg_contents": "On Tue, Jun 18, 2019 at 2:08 PM Tom Lane <[email protected]> wrote:\r\n> Are you perhaps running with non-default values for any planner cost\r\n> parameters? Or it's not a stock build of Postgres?\r\n>\r\n> If you could find a way to adjust the attached example so that it\r\n> produces the same misbehavior you see with live data, that would be\r\n> very interesting ...\r\n\r\nFWIW, if you move the CREATE INDEX statements before the INSERT, and\r\ncompared earlier versions of Postgres to 12, you'll see that the size\r\nof some of the indexes are a lot smaller on 12.\r\n\r\nv11 (representative of 9.6):\r\n\r\npg@tc:5411 [1067]=# \\di+ i_*\r\n List of relations\r\n Schema │ Name │ Type │ Owner │ Table │ Size │ Description\r\n────────┼─────────────────────────┼───────┼───────┼───────┼───────┼─────────────\r\n public │ i_agi_tc_tcn │ index │ pg │ tc │ 74 MB │\r\n public │ i_cid_agid_tcn │ index │ pg │ tc │ 82 MB │\r\n public │ i_tc_adid_tid │ index │ pg │ tc │ 57 MB │\r\n public │ i_tc_advertiser_id │ index │ pg │ tc │ 27 MB │\r\n public │ i_tc_campaign_id │ index │ pg │ tc │ 28 MB │\r\n public │ i_tc_lud_agi │ index │ pg │ tc │ 57 MB │\r\n public │ i_tc_uniqueness_hash_v2 │ index │ pg │ tc │ 21 MB │\r\n(7 rows)\r\n\r\nv12/master:\r\n\r\npg@regression:5432 [1022]=# \\di+ i_*\r\n List of relations\r\n Schema │ Name │ Type │ Owner │ Table │ Size │ Description\r\n────────┼─────────────────────────┼───────┼───────┼───────┼───────┼─────────────\r\n public │ i_agi_tc_tcn │ index │ pg │ tc │ 69 MB │\r\n public │ i_cid_agid_tcn │ index │ pg │ tc │ 78 MB │\r\n public │ i_tc_adid_tid │ index │ pg │ tc │ 36 MB │\r\n public │ i_tc_advertiser_id │ index │ pg │ tc │ 20 MB │\r\n public │ i_tc_campaign_id │ index │ pg │ tc │ 24 MB │\r\n public │ i_tc_lud_agi │ index │ pg │ tc │ 30 MB │\r\n public │ i_tc_uniqueness_hash_v2 │ index │ pg │ tc │ 21 MB │\r\n(7 rows)\r\n\r\nNote that i_tc_lud_agi is 30 MB, not 57MB, and that i_tc_adid_tid is\r\n36 MB, not 57 MB.\r\n\r\nI can see that both i_tc_lud_agi and i_tc_adid_tid consistently use\r\nthe \"split after new tuple\" optimization on v12.\r\n\r\n-- \r\nPeter Geoghegan\r\n",
"msg_date": "Tue, 18 Jun 2019 14:49:40 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect index used in few cases.."
},
{
"msg_contents": "Hi,\n\nOn 2019-06-18 06:11:54 -0700, AminPG Jaffer wrote:\n> We recently upgraded one of the instances from 9.6.3 to 9.6.12 and seeing\n> following issue that occurs for few cases.\n> \n> We recently upgraded one of the instances from 9.6.3 to 9.6.12 and seeing\n> following issue that occurs for few cases.\n> \n> I have tried running analyze on the table with different values from 1000 -\n> 5000 but it doesn't seem to help the issue. There is some skew in a_id\n> but the combination index i_tc_adid_tid btree (a_id, id) makes the index\n> unique as it includes primary key.\n> \n> Is there an explanation why it is using incorrect index?\n> \n> SQL:\n> SELECT count(*) FROM tc WHERE ((tc.a_id = $1)) AND ((tc.m_id = $2)) AND\n> ((tc.ag_id is not null)) AND ((tc.id in ($3))) AND ((tc.pt in ($4, $5, $6)))\n> \n> Indexes on the table:\n> i_tc_adid_tid btree (a_id, id)\n> pk_id PRIMARY KEY, btree (id)\n> i_agi_tc_tcn btree (ag_id, tname) ---> index that gets used\n\nAre those indexes used for other queries? Any chance they've been\nrecently created?\n\nSELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\nindislive, txid_current(), txid_current_snapshot()\nFROM pg_index WHERE indrelid = 'tc'::regclass;\n\nmight tell us.\n\n\nOn 2019-06-18 17:07:55 -0400, Tom Lane wrote:\n> I'm pretty baffled. I tried to duplicate the problem with some dummy\n> data (as attached) and could not. In my hands, it wants to use the\n> i_tc_adid_tid index, or if I drop that then the pkey index, and any\n> other possible plan is orders of magnitude more expensive than those.\n\n> Another far-fetched theory is that the theoretically-better indexes\n> are so badly bloated as to discourage the planner from using them.\n> You could eliminate that one by checking the index sizes with \"\\di+\".\n> \n> Are you perhaps running with non-default values for any planner cost\n> parameters? Or it's not a stock build of Postgres?\n> \n> If you could find a way to adjust the attached example so that it\n> produces the same misbehavior you see with live data, that would be\n> very interesting ...\n\nAmin, might be worth to see what the query plan is if you disable that\nindex. I assume it's too big to quickly drop (based on the ?\n\nSomething like:\n\nBEGIN;\nLOCK tc;\nUPDATE pg_index SET indisvalid = false WHERE indexrelid = 'name_of_index'::regclass AND indisvalid;\nEXPLAIN yourquery;\nROLLBACK;\n\nmight allow to test that without actually dropping the index. But that\nof course requires superuser access.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jun 2019 15:13:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect index used in few cases.."
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Are those indexes used for other queries? Any chance they've been\n> recently created?\n\n> SELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\n> indislive, txid_current(), txid_current_snapshot()\n> FROM pg_index WHERE indrelid = 'tc'::regclass;\n\n> might tell us.\n\nOh, that's a good idea.\n\n> Amin, might be worth to see what the query plan is if you disable that\n> index. I assume it's too big to quickly drop (based on the ?\n\nConsidering that the \"right\" query plan would have a cost estimate in\nthe single digits or close to it, I have to suppose that the planner is\nrejecting that index as unusable, not making a cost-based decision not\nto use it. (Well, maybe if it's bloated by three orders of magnitude\ncompared to the other indexes, it'd lose on cost. Doesn't seem likely\nthough.)\n\nSo I think we're looking for a hard \"can't use the index\" reason, and\nnow we've eliminated datatype mismatch which'd be the most obvious\nsuch reason. But index-isnt-valid or index-isnt-ready might do the\ntrick.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jun 2019 18:23:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect index used in few cases.."
},
{
"msg_contents": "Sorry for late reply.\n\nThe initial values before upgrade for seq_page_cost=1, random_page_cost=4\nand after upgrading when we started to see the issues as we were seeing\n\"Seq Scan\" we change them seq_page_cost=1, random_page_cost=1\n\nThe issue happens only in production so making the index invalid would\naffect service so it isn't something we can do.\nI have tried to rebuild the PK index to see it helps or not but it doesn't\nseem help.\n\nRelated to the same issue we sometimes see following Seq Scan on update\nwhen querying by PK alone which appears to be related.\n\n update tc set...where id = $1 and version <$2\n Update on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848)\n -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1\nwidth=1848)\n Filter: ((version < '38'::numeric) AND (id =\n'53670604704'::numeric))\n\nI was trying to find where the cost=10000000000 is set in the source code\nbut wasn't able to find it, do anyone where it is set?\nAnd if you someone can point me to the code where it goes through the\nexecution plans when SQL is sent i can try to go through the code to see if\ncan figure out what it is doing behind to scene in it's calculation?\n\nThanks\n\nOn Tue, Jun 18, 2019 at 3:23 PM Tom Lane <[email protected]> wrote:\n\n> Andres Freund <[email protected]> writes:\n> > Are those indexes used for other queries? Any chance they've been\n> > recently created?\n>\n> > SELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\n> > indislive, txid_current(), txid_current_snapshot()\n> > FROM pg_index WHERE indrelid = 'tc'::regclass;\n>\n> > might tell us.\n>\n> Oh, that's a good idea.\n>\n> > Amin, might be worth to see what the query plan is if you disable that\n> > index. I assume it's too big to quickly drop (based on the ?\n>\n> Considering that the \"right\" query plan would have a cost estimate in\n> the single digits or close to it, I have to suppose that the planner is\n> rejecting that index as unusable, not making a cost-based decision not\n> to use it. (Well, maybe if it's bloated by three orders of magnitude\n> compared to the other indexes, it'd lose on cost. Doesn't seem likely\n> though.)\n>\n> So I think we're looking for a hard \"can't use the index\" reason, and\n> now we've eliminated datatype mismatch which'd be the most obvious\n> such reason. But index-isnt-valid or index-isnt-ready might do the\n> trick.\n>\n> regards, tom lane\n>\n\nSorry for late reply.The initial values before upgrade for seq_page_cost=1, random_page_cost=4 and after upgrading when we started to see the issues as we were seeing \"Seq Scan\" we change them \n\nseq_page_cost=1, random_page_cost=1 The issue happens only in production so making the index invalid would affect service so it isn't something we can do. I have tried to rebuild the PK index to see it helps or not but it doesn't seem help.Related to the same issue we sometimes see following Seq Scan on update when querying by PK alone which appears to be related. 
update tc set...where id = $1 and version <$2 Update on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) Filter: ((version < '38'::numeric) AND (id = '53670604704'::numeric))I was trying to find where the cost=10000000000 is set in the source code but wasn't able to find it, do anyone where it is set?And if you someone can point me to the code where it goes through the execution plans when SQL is sent i can try to go through the code to see if can figure out what it is doing behind to scene in it's calculation?ThanksOn Tue, Jun 18, 2019 at 3:23 PM Tom Lane <[email protected]> wrote:Andres Freund <[email protected]> writes:\n> Are those indexes used for other queries? Any chance they've been\n> recently created?\n\n> SELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\n> indislive, txid_current(), txid_current_snapshot()\n> FROM pg_index WHERE indrelid = 'tc'::regclass;\n\n> might tell us.\n\nOh, that's a good idea.\n\n> Amin, might be worth to see what the query plan is if you disable that\n> index. I assume it's too big to quickly drop (based on the ?\n\nConsidering that the \"right\" query plan would have a cost estimate in\nthe single digits or close to it, I have to suppose that the planner is\nrejecting that index as unusable, not making a cost-based decision not\nto use it. (Well, maybe if it's bloated by three orders of magnitude\ncompared to the other indexes, it'd lose on cost. Doesn't seem likely\nthough.)\n\nSo I think we're looking for a hard \"can't use the index\" reason, and\nnow we've eliminated datatype mismatch which'd be the most obvious\nsuch reason. But index-isnt-valid or index-isnt-ready might do the\ntrick.\n\n regards, tom lane",
"msg_date": "Sun, 23 Jun 2019 08:07:56 -0700",
"msg_from": "AminPG Jaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect index used in few cases.."
},
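The cost=10000000000.00 being asked about is almost certainly the planner's disable_cost constant, declared as 1.0e10 in src/backend/optimizer/path/costsize.c. It is added to the startup cost of any node type that has been switched off with one of the enable_* settings but that the planner cannot actually avoid, so seeing it on a Seq Scan strongly suggests enable_seqscan is off on that server while no usable index path exists. The plan-selection code asked about in the second question lives under src/backend/optimizer/, with standard_planner() in plan/planner.c as the entry point. A minimal sketch that reproduces the penalty on a throwaway table (table and column names here are invented for illustration, not taken from the thread):

    CREATE TEMP TABLE cost_demo AS SELECT g AS id FROM generate_series(1, 1000) g;
    SET enable_seqscan = off;      -- planner now adds disable_cost (1.0e10) to every seq scan
    EXPLAIN SELECT * FROM cost_demo WHERE id = 1;
    -- with no index available, a Seq Scan with cost=10000000000.00.. still appears
    RESET enable_seqscan;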
{
"msg_contents": "Hi\n\nI didn't see my following response got posted on the mailing list so not\nsure if it is duplicate.\n\nSorry for late reply.\n\nThe initial values before upgrade for seq_page_cost=1, random_page_cost=4\nand after upgrading when we started to see the issues as we were seeing\n\"Seq Scan\" we change them seq_page_cost=1, random_page_cost=1\n\nThe issue happens only in production so making the index invalid would\naffect service so it isn't something we can do.\nI have tried to rebuild the PK index to see it helps or not but it doesn't\nseem help.\n\nRelated to the same issue we sometimes see following Seq Scan on update\nwhen querying by PK alone which appears to be related.\n\n update tc set...where id = $1 and version <$2\n Update on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848)\n -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1\nwidth=1848)\n Filter: ((version < '38'::numeric) AND (id =\n'53670604704'::numeric))\n\nI was trying to find where the cost=10000000000 is set in the source code\nbut wasn't able to find it, do anyone where it is set?\nAnd if you someone can point me to the code where it goes through the\nexecution plans when SQL is sent i can try to go through the code to see if\ncan figure out what it is doing behind to scene in it's calculation?\n\nOn Sun, Jun 23, 2019 at 8:07 AM AminPG Jaffer <[email protected]>\nwrote:\n\n>\n> Sorry for late reply.\n>\n> The initial values before upgrade for seq_page_cost=1, random_page_cost=4\n> and after upgrading when we started to see the issues as we were seeing\n> \"Seq Scan\" we change them seq_page_cost=1, random_page_cost=1\n>\n> The issue happens only in production so making the index invalid would\n> affect service so it isn't something we can do.\n> I have tried to rebuild the PK index to see it helps or not but it doesn't\n> seem help.\n>\n> Related to the same issue we sometimes see following Seq Scan on update\n> when querying by PK alone which appears to be related.\n>\n> update tc set...where id = $1 and version <$2\n> Update on tc (cost=10000000000.00..10003184001.52 rows=1\n> width=1848)\n> -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1\n> width=1848)\n> Filter: ((version < '38'::numeric) AND (id =\n> '53670604704'::numeric))\n>\n> I was trying to find where the cost=10000000000 is set in the source code\n> but wasn't able to find it, do anyone where it is set?\n> And if you someone can point me to the code where it goes through the\n> execution plans when SQL is sent i can try to go through the code to see if\n> can figure out what it is doing behind to scene in it's calculation?\n>\n> Thanks\n>\n> On Tue, Jun 18, 2019 at 3:23 PM Tom Lane <[email protected]> wrote:\n>\n>> Andres Freund <[email protected]> writes:\n>> > Are those indexes used for other queries? Any chance they've been\n>> > recently created?\n>>\n>> > SELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\n>> > indislive, txid_current(), txid_current_snapshot()\n>> > FROM pg_index WHERE indrelid = 'tc'::regclass;\n>>\n>> > might tell us.\n>>\n>> Oh, that's a good idea.\n>>\n>> > Amin, might be worth to see what the query plan is if you disable that\n>> > index. I assume it's too big to quickly drop (based on the ?\n>>\n>> Considering that the \"right\" query plan would have a cost estimate in\n>> the single digits or close to it, I have to suppose that the planner is\n>> rejecting that index as unusable, not making a cost-based decision not\n>> to use it. 
(Well, maybe if it's bloated by three orders of magnitude\n>> compared to the other indexes, it'd lose on cost. Doesn't seem likely\n>> though.)\n>>\n>> So I think we're looking for a hard \"can't use the index\" reason, and\n>> now we've eliminated datatype mismatch which'd be the most obvious\n>> such reason. But index-isnt-valid or index-isnt-ready might do the\n>> trick.\n>>\n>> regards, tom lane\n>>\n>\n\nHiI didn't see my following response got posted on the mailing list so not sure if it is duplicate.Sorry for late reply.The initial values before upgrade for seq_page_cost=1, random_page_cost=4 and after upgrading when we started to see the issues as we were seeing \"Seq Scan\" we change them seq_page_cost=1, random_page_cost=1 The issue happens only in production so making the index invalid would affect service so it isn't something we can do. I have tried to rebuild the PK index to see it helps or not but it doesn't seem help.Related to the same issue we sometimes see following Seq Scan on update when querying by PK alone which appears to be related. update tc set...where id = $1 and version <$2 Update on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) Filter: ((version < '38'::numeric) AND (id = '53670604704'::numeric))I was trying to find where the cost=10000000000 is set in the source code but wasn't able to find it, do anyone where it is set?And if you someone can point me to the code where it goes through the execution plans when SQL is sent i can try to go through the code to see if can figure out what it is doing behind to scene in it's calculation?On Sun, Jun 23, 2019 at 8:07 AM AminPG Jaffer <[email protected]> wrote:Sorry for late reply.The initial values before upgrade for seq_page_cost=1, random_page_cost=4 and after upgrading when we started to see the issues as we were seeing \"Seq Scan\" we change them \n\nseq_page_cost=1, random_page_cost=1 The issue happens only in production so making the index invalid would affect service so it isn't something we can do. I have tried to rebuild the PK index to see it helps or not but it doesn't seem help.Related to the same issue we sometimes see following Seq Scan on update when querying by PK alone which appears to be related. update tc set...where id = $1 and version <$2 Update on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) -> Seq Scan on tc (cost=10000000000.00..10003184001.52 rows=1 width=1848) Filter: ((version < '38'::numeric) AND (id = '53670604704'::numeric))I was trying to find where the cost=10000000000 is set in the source code but wasn't able to find it, do anyone where it is set?And if you someone can point me to the code where it goes through the execution plans when SQL is sent i can try to go through the code to see if can figure out what it is doing behind to scene in it's calculation?ThanksOn Tue, Jun 18, 2019 at 3:23 PM Tom Lane <[email protected]> wrote:Andres Freund <[email protected]> writes:\n> Are those indexes used for other queries? Any chance they've been\n> recently created?\n\n> SELECT indexrelid::regclass, xmin, indcheckxmin, indisvalid, indisready,\n> indislive, txid_current(), txid_current_snapshot()\n> FROM pg_index WHERE indrelid = 'tc'::regclass;\n\n> might tell us.\n\nOh, that's a good idea.\n\n> Amin, might be worth to see what the query plan is if you disable that\n> index. 
I assume it's too big to quickly drop (based on the ?\n\nConsidering that the \"right\" query plan would have a cost estimate in\nthe single digits or close to it, I have to suppose that the planner is\nrejecting that index as unusable, not making a cost-based decision not\nto use it. (Well, maybe if it's bloated by three orders of magnitude\ncompared to the other indexes, it'd lose on cost. Doesn't seem likely\nthough.)\n\nSo I think we're looking for a hard \"can't use the index\" reason, and\nnow we've eliminated datatype mismatch which'd be the most obvious\nsuch reason. But index-isnt-valid or index-isnt-ready might do the\ntrick.\n\n regards, tom lane",
"msg_date": "Wed, 26 Jun 2019 07:24:25 -0700",
"msg_from": "AminPG Jaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect index used in few cases.."
}
] |
[
{
"msg_contents": "Dear Postgres performance experts,\n\nI noticed that when I added a BRIN index to a very large table, attempting to make a particular query faster, it became much slower instead. While trying to understand this, I noticed that the actual number of rows in the EXPLAIN ANALYZE output was much higher than I expected. I was able to produce a repeatable test case for this. I'm not sure if this is actually a bug, or simply that the \"number of rows\" means something different than I expected.\n\nThis reproducible test case is not especially slow, because I wanted to make it easy and fast to run and understand. Right now I'd just like to understand why it behaves this way.\n\nThe SQL is to create the test case is:\n\ndrop table brin_test;\ncreate table brin_test AS SELECT generate_series as id, generate_series % 100 as r from generate_series(1,100000);\ncreate index idx_brin_test_brin on brin_test using brin (id, r) with (pages_per_range = 32);\nvacuum analyze brin_test;\n\nAnd here are two queries to compare:\n\nexplain analyze select * from brin_test where id >= 90000;\nexplain analyze select * from brin_test where id >= 90000 and r in (1,3);\n\nWith the following results:\n\ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck: 3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time: 2.133 ms\n(8 rows)\n\ntesting=# explain analyze select * from brin_test where id >= 90000 and r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck: 13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time: 23.954 ms\n(8 rows)\n\nNote that introducing a disjunction (set of possible values) into the query doubles the number of actual rows returned, and increases the number removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to see what I mean). The execution time also increases massively.\n\nCould anyone help me to understand what's going on here, and whether there's a bug or limitation of BRIN indexes? If it's a limitation, then the query planner does not seem to account for it, and chooses this plan even when it's a bad one (much worse than removing result rows using a filter).\n\nThanks, Chris.\n\n\n\n\n\n\n\n\n\nDear Postgres performance experts,\n \nI noticed that when I added a BRIN index to a very large table, attempting to make a particular query faster, it became much slower instead. 
While trying to understand this, I noticed that the actual number of rows in the EXPLAIN ANALYZE\n output was much higher than I expected. I was able to produce a repeatable test case for this. I’m not sure if this is actually a bug, or simply that the “number of rows” means something different than I expected.\n \nThis reproducible test case is not especially slow, because I wanted to make it easy and fast to run and understand. Right now I’d just like to understand why it behaves this way.\n \nThe SQL is to create the test case is:\n \ndrop\ntable brin_test;\ncreate\ntable brin_test\nAS\nSELECT\ngenerate_series\nas id,\ngenerate_series %\n100\nas r\nfrom\ngenerate_series(1,100000);\ncreate\nindex idx_brin_test_brin\non brin_test\nusing brin (id, r)\nwith (pages_per_range =\n32);\nvacuum\nanalyze brin_test;\n \nAnd here are two queries to compare:\n \nexplain\nanalyze\nselect *\nfrom brin_test\nwhere id >=\n90000;\nexplain\nanalyze\nselect *\nfrom brin_test\nwhere id >=\n90000\nand r\nin (1,3);\n \nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\ndoubles the number of actual rows returned, and increases the\nnumber removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding\n more values of R to see what I mean). The execution time also \nincreases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one\n (much worse than removing result rows using a filter).\n \nThanks, Chris.",
"msg_date": "Thu, 20 Jun 2019 15:12:18 +0000",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
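One way to see why the extra r condition cannot narrow the scan is to look at the per-range summaries the BRIN index actually stores, for instance with the pageinspect extension. This is a sketch, assuming superuser access and that page 2 is the first regular data page of the index (page 0 is the metapage and page 1 the revmap, the usual layout for a small BRIN index):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT blknum, attnum, value
    FROM   brin_page_items(get_raw_page('idx_brin_test_brin', 2),
                           'idx_brin_test_brin')
    ORDER  BY blknum, attnum;

With this test data every 32-page range holds the full cycle of r values, so the stored summary for attnum 2 is {0 .. 99} in every range and no value of r can ever exclude a range; only the id summaries do any filtering.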
{
"msg_contents": "On Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]>\nwrote:\n\n> Dear Postgres performance experts,\n>\n>\n>\n> I noticed that when I added a BRIN index to a very large table, attempting\n> to make a particular query faster, it became much slower instead. While\n> trying to understand this, I noticed that the actual number of rows in the\n> EXPLAIN ANALYZE output was much higher than I expected. I was able to\n> produce a repeatable test case for this. I’m not sure if this is actually a\n> bug, or simply that the “number of rows” means something different than I\n> expected.\n>\n>\n>\n> This reproducible test case is not especially slow, because I wanted to\n> make it easy and fast to run and understand. Right now I’d just like to\n> understand why it behaves this way.\n>\n>\n>\n> The SQL is to create the test case is:\n>\n>\n>\n> *drop* *table* brin_test;\n>\n> *create* *table* brin_test *AS* *SELECT* *generate_series* *as* id,\n> *generate_series* % 100 *as* r *from* *generate_series*(1,100000);\n>\n> *create* *index* idx_brin_test_brin *on* brin_test *using* brin (id, r)\n> *with* (pages_per_range = 32);\n>\n\nYou've created the index on (id,r) rather than just (id)\n\n\n> *vacuum* *analyze* brin_test;\n>\n>\n>\n> And here are two queries to compare:\n>\n>\n>\n> *explain* *analyze* *select* * *from* brin_test *where* id >= 90000;\n>\n> *explain* *analyze* *select* * *from* brin_test *where* id >= 90000 *and*\n> r *in* (1,3);\n>\n>\n>\n> With the following results:\n>\n>\n>\n> testing=# explain analyze select * from brin_test where id >= 90000;\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n>\n> Bitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8)\n> (actual time=0.474..1.796 rows=10001 loops=1)\n>\n> Recheck Cond: (id >= 90000)\n>\n> Rows Removed by Index Recheck: 3215\n>\n> Heap Blocks: lossy=59\n>\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02\n> rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n>\n> Index Cond: (id >= 90000)\n>\n> Planning Time: 0.155 ms\n>\n> Execution Time: 2.133 ms\n>\n> (8 rows)\n>\n>\n>\n> testing=# explain analyze select * from brin_test where id >= 90000 and r\n> in (1,3);\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n>\n> Bitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8)\n> (actual time=6.101..23.927 rows=200 loops=1)\n>\n> Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n>\n> Rows Removed by Index Recheck: 13016\n>\n> Heap Blocks: lossy=59\n>\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143\n> width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n>\n> Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n>\n> Planning Time: 0.071 ms\n>\n> Execution Time: 23.954 ms\n>\n> (8 rows)\n>\n>\n>\n> Note that introducing a disjunction (set of possible values) into the\n> query doubles the number of actual rows returned, and increases the number\n> removed by the index recheck.\n>\n\nStrange, yes.\n\n\n> It looks to me as though perhaps the BRIN index does not completely\n> support queries with a set of possible values, and executes the query\n> multiple times (try adding more values of R to see what I mean).\n>\n\nThat doesn't appear to be happening.\n\n\n> The execution time also increases 
massively.\n>\n>\n>\n> Could anyone help me to understand what’s going on here, and whether\n> there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the\n> query planner does not seem to account for it, and chooses this plan even\n> when it’s a bad one (much worse than removing result rows using a filter).\n>\n\n The second column changes the way the index is defined. It appears there\nis very little locality for the r column, so try removing it.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]> wrote:\n\n\nDear Postgres performance experts,\n \nI noticed that when I added a BRIN index to a very large table, attempting to make a particular query faster, it became much slower instead. While trying to understand this, I noticed that the actual number of rows in the EXPLAIN ANALYZE\n output was much higher than I expected. I was able to produce a repeatable test case for this. I’m not sure if this is actually a bug, or simply that the “number of rows” means something different than I expected.\n \nThis reproducible test case is not especially slow, because I wanted to make it easy and fast to run and understand. Right now I’d just like to understand why it behaves this way.\n \nThe SQL is to create the test case is:\n \ndrop\ntable brin_test;\ncreate\ntable brin_test\nAS\nSELECT\ngenerate_series\nas id,\ngenerate_series %\n100\nas r\nfrom\ngenerate_series(1,100000);\ncreate\nindex idx_brin_test_brin\non brin_test\nusing brin (id, r)\nwith (pages_per_range =\n32);You've created the index on (id,r) rather than just (id) \nvacuum\nanalyze brin_test;\n \nAnd here are two queries to compare:\n \nexplain\nanalyze\nselect *\nfrom brin_test\nwhere id >=\n90000;\nexplain\nanalyze\nselect *\nfrom brin_test\nwhere id >=\n90000\nand r\nin (1,3);\n \nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\ndoubles the number of actual rows returned, and increases the\nnumber removed by the index recheck. Strange, yes. 
It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding\n more values of R to see what I mean).That doesn't appear to be happening. The execution time also \nincreases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one\n (much worse than removing result rows using a filter). The second column changes the way the index is defined. It appears there is very little locality for the r column, so try removing it.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 20 Jun 2019 16:56:47 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "Hi Simon,\r\n\r\nI deliberately included r in the index, to demonstrate the issue that I’m seeing. I know that there is very little locality in this particular, dummy, arbitrary test case. I can try to produce a test case that has some locality, but I expect it to show exactly the same results, i.e. that the BRIN index performs much worse when we try to query on this column as well.\r\n\r\nThanks, Chris.\r\n\r\nFrom: Simon Riggs <[email protected]>\r\nSent: 20 June 2019 16:57\r\nTo: Chris Wilson <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\r\n\r\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]<mailto:[email protected]>> wrote:\r\nDear Postgres performance experts,\r\n\r\nI noticed that when I added a BRIN index to a very large table, attempting to make a particular query faster, it became much slower instead. While trying to understand this, I noticed that the actual number of rows in the EXPLAIN ANALYZE output was much higher than I expected. I was able to produce a repeatable test case for this. I’m not sure if this is actually a bug, or simply that the “number of rows” means something different than I expected.\r\n\r\nThis reproducible test case is not especially slow, because I wanted to make it easy and fast to run and understand. Right now I’d just like to understand why it behaves this way.\r\n\r\nThe SQL is to create the test case is:\r\n\r\ndrop table brin_test;\r\ncreate table brin_test AS SELECT generate_series as id, generate_series % 100 as r from generate_series(1,100000);\r\ncreate index idx_brin_test_brin on brin_test using brin (id, r) with (pages_per_range = 32);\r\n\r\nYou've created the index on (id,r) rather than just (id)\r\n\r\nvacuum analyze brin_test;\r\n\r\nAnd here are two queries to compare:\r\n\r\nexplain analyze select * from brin_test where id >= 90000;\r\nexplain analyze select * from brin_test where id >= 90000 and r in (1,3);\r\n\r\nWith the following results:\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\r\n Recheck Cond: (id >= 90000)\r\n Rows Removed by Index Recheck: 3215\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\r\n Index Cond: (id >= 90000)\r\nPlanning Time: 0.155 ms\r\nExecution Time: 2.133 ms\r\n(8 rows)\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000 and r in (1,3);\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\r\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\n Rows Removed by Index Recheck: 13016\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\r\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\nPlanning Time: 0.071 ms\r\nExecution Time: 23.954 ms\r\n(8 rows)\r\n\r\nNote that introducing a disjunction (set of possible values) into the query doubles the number 
of actual rows returned, and increases the number removed by the index recheck.\r\n\r\nStrange, yes.\r\n\r\nIt looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to see what I mean).\r\n\r\nThat doesn't appear to be happening.\r\n\r\nThe execution time also increases massively.\r\n\r\nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\r\n\r\n The second column changes the way the index is defined. It appears there is very little locality for the r column, so try removing it.\r\n\r\n--\r\nSimon Riggs http://www.2ndQuadrant.com/<http://www.2ndquadrant.com/>\r\nPostgreSQL Solutions for the Enterprise\r\n\n\n\n\n\n\n\n\n\nHi Simon,\n \nI deliberately included r in the index, to demonstrate the issue that I’m seeing. I know that there is very little locality in this\r\n particular, dummy, arbitrary test case. I can try to produce a test case that has some locality, but I expect it to show exactly the same results, i.e. that the BRIN index performs much worse when we try to query on this column as well.\n \nThanks, Chris.\n \nFrom: Simon Riggs <[email protected]>\r\n\nSent: 20 June 2019 16:57\nTo: Chris Wilson <[email protected]>\nCc: [email protected]\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\n \n\n\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]> wrote:\n\n\n\n\n\nDear Postgres performance experts,\n \nI noticed that when I added a BRIN index to a very large table, attempting to make a particular query faster, it became much slower instead. While trying to understand this, I noticed\r\n that the actual number of rows in the EXPLAIN ANALYZE output was much higher than I expected. I was able to produce a repeatable test case for this. I’m not sure if this is actually a bug, or simply that the “number of rows” means something different than\r\n I expected.\n \nThis reproducible test case is not especially slow, because I wanted to make it easy and fast to run and understand. 
Right now I’d just like to understand why it behaves this way.\n \nThe SQL is to create the test case is:\n \ndrop\ntable brin_test;\ncreate\ntable brin_test\r\nAS\nSELECT\ngenerate_series\nas id,\r\ngenerate_series %\r\n100\nas r\r\nfrom\ngenerate_series(1,100000);\ncreate\nindex idx_brin_test_brin\r\non brin_test\r\nusing brin (id, r)\r\nwith (pages_per_range =\r\n32);\n\n\n\n\n \n\n\nYou've created the index on (id,r) rather than just (id)\n\n\n \n\n\n\n\nvacuum\nanalyze brin_test;\n \nAnd here are two queries to compare:\n \nexplain\nanalyze\nselect *\r\nfrom brin_test\r\nwhere id >=\r\n90000;\nexplain\nanalyze\nselect *\r\nfrom brin_test\r\nwhere id >=\r\n90000\nand r\r\nin (1,3);\n \nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\r\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640\r\n loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\r\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\r\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\r\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280\r\n loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\r\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\r\ndoubles the number of actual rows returned, and increases the\r\nnumber removed by the index recheck. \n\n\n\n\n \n\n\nStrange, yes.\n\n\n \n\n\n\n\nIt looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values\r\n of R to see what I mean).\n\n\n\n\n \n\n\nThat doesn't appear to be happening.\n\n\n \n\n\n\n\nThe execution time also\r\nincreases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account\r\n for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\n\n\n\n\n \n\n\n The second column changes the way the index is defined. It appears there is very little locality for the r column, so try removing it.\n\n\n\n \n\n-- \n\n\n\n\n\n\nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 20 Jun 2019 16:00:43 +0000",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "On Thu, 20 Jun 2019 at 17:01, Chris Wilson <[email protected]>\nwrote:\n\n\n> I deliberately included r in the index, to demonstrate the issue that I’m\n> seeing. I know that there is very little locality in this particular,\n> dummy, arbitrary test case. I can try to produce a test case that has some\n> locality, but I expect it to show exactly the same results, i.e. that the\n> BRIN index performs much worse when we try to query on this column as well.\n>\n\nI'm suggesting that adding the second column to the index is the source of\nyour problem, not adding the column to the query.\n\nHow does it perform with just the id column in the index?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Thu, 20 Jun 2019 at 17:01, Chris Wilson <[email protected]> wrote: I deliberately included r in the index, to demonstrate the issue that I’m seeing. I know that there is very little locality in this\n particular, dummy, arbitrary test case. I can try to produce a test case that has some locality, but I expect it to show exactly the same results, i.e. that the BRIN index performs much worse when we try to query on this column as well.I'm suggesting that adding the second column to the index is the source of your problem, not adding the column to the query.How does it perform with just the id column in the index? -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 20 Jun 2019 17:18:33 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
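For anyone who wants to try Simon's suggestion on the same test data, a sketch of the comparison (index names are arbitrary):

    DROP INDEX IF EXISTS idx_brin_test_brin;
    CREATE INDEX idx_brin_test_id ON brin_test USING brin (id) WITH (pages_per_range = 32);
    VACUUM ANALYZE brin_test;
    EXPLAIN ANALYZE SELECT * FROM brin_test WHERE id >= 90000 AND r IN (1, 3);

With the single-column index the r IN (1,3) part is applied as an ordinary filter on the rows fetched from the candidate pages, so the runtime should land close to that of the id-only query rather than the slow two-condition one.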
{
"msg_contents": "On Thu, Jun 20, 2019 at 05:18:33PM +0100, Simon Riggs wrote:\n> On Thu, 20 Jun 2019 at 17:01, Chris Wilson <[email protected]>\n> wrote:\n> \n> \n> > I deliberately included r in the index, to demonstrate the issue that I’m\n> > seeing. I know that there is very little locality in this particular,\n> > dummy, arbitrary test case. I can try to produce a test case that has some\n> > locality, but I expect it to show exactly the same results, i.e. that the\n> > BRIN index performs much worse when we try to query on this column as well.\n> >\n> \n> I'm suggesting that adding the second column to the index is the source of\n> your problem, not adding the column to the query.\n\nBut it *is* odd that the index returns more rows with a strictly tighter\nconditions, right ?\n\nNote, it's not an issue of rowcount estimate being confused by redundant\nconditions, but real rowcount, and it returns more rows even when the\nconditions are duplicative. Compare:\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.136..0.137 rows=37120 loops=1)\n Index Cond: ((id >= 90000) AND (r = 1))\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1,1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.263..0.263 rows=74240 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,1}'::integer[])))\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1,1,1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.387..0.387 rows=111360 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,1,1}'::integer[])))\n\nNote, the docs say:\nhttps://www.postgresql.org/docs/devel/indexes-multicolumn.html\n|A multicolumn BRIN index can be used with query conditions that involve any\n|subset of the index's columns. Like GIN and unlike B-tree or GiST, index search\n|effectiveness is the same regardless of which index column(s) the query\n|conditions use. The only reason to have multiple BRIN indexes instead of one\n|multicolumn BRIN index on a single table is to have a different pages_per_range\n|storage parameter.\n\nJustin\n\n\n",
"msg_date": "Thu, 20 Jun 2019 11:30:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "On Thu, 20 Jun 2019 at 17:30, Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Jun 20, 2019 at 05:18:33PM +0100, Simon Riggs wrote:\n> > On Thu, 20 Jun 2019 at 17:01, Chris Wilson <\n> [email protected]>\n> > wrote:\n> >\n> >\n> > > I deliberately included r in the index, to demonstrate the issue that\n> I’m\n> > > seeing. I know that there is very little locality in this particular,\n> > > dummy, arbitrary test case. I can try to produce a test case that has\n> some\n> > > locality, but I expect it to show exactly the same results, i.e. that\n> the\n> > > BRIN index performs much worse when we try to query on this column as\n> well.\n> > >\n> >\n> > I'm suggesting that adding the second column to the index is the source\n> of\n> > your problem, not adding the column to the query.\n>\n> But it *is* odd that the index returns more rows with a strictly tighter\n> conditions, right ?\n>\n\nOh, very. I was seeing this as an optimization issue rather than a bug\nreport.\n\n\n> Note, it's not an issue of rowcount estimate being confused by redundant\n> conditions, but real rowcount, and it returns more rows even when the\n> conditions are duplicative. Compare:\n>\n> postgres=# explain analyze select * from brin_test where id >= 90000 and r\n> in (1);\n> ...\n> -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03\n> rows=28125 width=0) (actual time=0.136..0.137 rows=37120 loops=1)\n> Index Cond: ((id >= 90000) AND (r = 1))\n>\n> postgres=# explain analyze select * from brin_test where id >= 90000 and r\n> in (1,1);\n> ...\n> -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03\n> rows=28125 width=0) (actual time=0.263..0.263 rows=74240 loops=1)\n> Index Cond: ((id >= 90000) AND (r = ANY ('{1,1}'::integer[])))\n>\n> postgres=# explain analyze select * from brin_test where id >= 90000 and r\n> in (1,1,1);\n> ...\n> -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03\n> rows=28125 width=0) (actual time=0.387..0.387 rows=111360 loops=1)\n> Index Cond: ((id >= 90000) AND (r = ANY ('{1,1,1}'::integer[])))\n>\n> Note, the docs say:\n> https://www.postgresql.org/docs/devel/indexes-multicolumn.html\n> |A multicolumn BRIN index can be used with query conditions that involve\n> any\n> |subset of the index's columns. Like GIN and unlike B-tree or GiST, index\n> search\n> |effectiveness is the same regardless of which index column(s) the query\n> |conditions use. The only reason to have multiple BRIN indexes instead of\n> one\n> |multicolumn BRIN index on a single table is to have a different\n> pages_per_range\n> |storage parameter.\n>\n\nThe min/max values of each column are held for each block range.\n\nIf it scans using the \"r\" column it will identify more block ranges to scan\nthan if it used the id column and hence would scan more real rows, so that\npart is understandable.\n\nThe only question is why it chooses to scan on \"r\" and not \"id\", which\nneeds some investigation.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Thu, 20 Jun 2019 at 17:30, Justin Pryzby <[email protected]> wrote:On Thu, Jun 20, 2019 at 05:18:33PM +0100, Simon Riggs wrote:\n> On Thu, 20 Jun 2019 at 17:01, Chris Wilson <[email protected]>\n> wrote:\n> \n> \n> > I deliberately included r in the index, to demonstrate the issue that I’m\n> > seeing. I know that there is very little locality in this particular,\n> > dummy, arbitrary test case. 
I can try to produce a test case that has some\n> > locality, but I expect it to show exactly the same results, i.e. that the\n> > BRIN index performs much worse when we try to query on this column as well.\n> >\n> \n> I'm suggesting that adding the second column to the index is the source of\n> your problem, not adding the column to the query.\n\nBut it *is* odd that the index returns more rows with a strictly tighter\nconditions, right ?Oh, very. I was seeing this as an optimization issue rather than a bug report. \nNote, it's not an issue of rowcount estimate being confused by redundant\nconditions, but real rowcount, and it returns more rows even when the\nconditions are duplicative. Compare:\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.136..0.137 rows=37120 loops=1)\n Index Cond: ((id >= 90000) AND (r = 1))\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1,1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.263..0.263 rows=74240 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,1}'::integer[])))\n\npostgres=# explain analyze select * from brin_test where id >= 90000 and r in (1,1,1);\n...\n -> Bitmap Index Scan on brin_test_id_r_idx (cost=0.00..12.03 rows=28125 width=0) (actual time=0.387..0.387 rows=111360 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,1,1}'::integer[])))\n\nNote, the docs say:\nhttps://www.postgresql.org/docs/devel/indexes-multicolumn.html\n|A multicolumn BRIN index can be used with query conditions that involve any\n|subset of the index's columns. Like GIN and unlike B-tree or GiST, index search\n|effectiveness is the same regardless of which index column(s) the query\n|conditions use. The only reason to have multiple BRIN indexes instead of one\n|multicolumn BRIN index on a single table is to have a different pages_per_range\n|storage parameter.The min/max values of each column are held for each block range.If it scans using the \"r\" column it will identify more block ranges to scan than if it used the id column and hence would scan more real rows, so that part is understandable.The only question is why it chooses to scan on \"r\" and not \"id\", which needs some investigation.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 20 Jun 2019 17:51:26 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
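As far as one can tell from the executor and BRIN source, the doubling has a mechanical explanation rather than a planning one: for an = ANY(...) condition the bitmap index scan is executed once per array element and the resulting bitmaps are OR-ed together, and because a BRIN scan only returns a lossy page bitmap it reports a rough estimate of about ten rows per returned page on each call (brin.c returns totalpages * 10). Two matching 32-page ranges therefore show up as 640 "rows" for one element and 1280 for two, even though the same 59 heap pages are visited. If that reading is right, adding more elements should scale the reported figure linearly without changing the heap work, e.g.:

    EXPLAIN ANALYZE SELECT * FROM brin_test WHERE id >= 90000 AND r IN (1, 3, 5, 7);
    -- expect the Bitmap Index Scan to report roughly rows=2560 while Heap Blocks stays at lossy=59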
{
"msg_contents": "For kicks I tried the example given and got the below which seems more\nexpected.\n\n\nexplain analyze select * from brin_test where id >= 90000;\n\nBitmap Heap Scan on brin_test (cost=5.78..627.36 rows=9861 width=8)\n(actual time=0.373..7.309 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck: 3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.32 rows=14286\nwidth=0) (actual time=0.018..0.019 rows=640 loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.101 ms\nExecution Time: *13.485 ms*\n\n\nexplain analyze select * from brin_test where id >= 90000 and r in (1,3);\n\nBitmap Heap Scan on brin_test (cost=3.36..553.50 rows=197 width=8) (actual\ntime=0.390..1.829 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck: 13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.31 rows=7143\nwidth=0) (actual time=0.026..0.027 rows=1280 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.089 ms\nExecution Time: *1.978 ms*\n\nFor kicks I tried the example given and got the below which seems more expected.explain analyze select * from brin_test where id >= 90000;Bitmap Heap Scan on brin_test (cost=5.78..627.36 rows=9861 width=8) (actual time=0.373..7.309 rows=10001 loops=1) Recheck Cond: (id >= 90000) Rows Removed by Index Recheck: 3215 Heap Blocks: lossy=59 -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.32 rows=14286 width=0) (actual time=0.018..0.019 rows=640 loops=1) Index Cond: (id >= 90000)Planning Time: 0.101 msExecution Time: 13.485 msexplain analyze select * from brin_test where id >= 90000 and r in (1,3);Bitmap Heap Scan on brin_test (cost=3.36..553.50 rows=197 width=8) (actual time=0.390..1.829 rows=200 loops=1) Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[]))) Rows Removed by Index Recheck: 13016 Heap Blocks: lossy=59 -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.31 rows=7143 width=0) (actual time=0.026..0.027 rows=1280 loops=1) Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))Planning Time: 0.089 msExecution Time: 1.978 ms",
"msg_date": "Thu, 20 Jun 2019 12:09:13 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "On Thu, Jun 20, 2019 at 12:09:13PM -0600, Michael Lewis wrote:\n> For kicks I tried the example given and got the below which seems more\n> expected.\n\nDo you just mean that it ran faster the 2nd time ? Isn't that just due just to\ncache effects ? Rerun them both back to back.\n\nSee that the \"actual\" rowcount of the Index Scan is higher with tigher\ncondition:\n\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.32 rows=14286 width=0) (actual time=0.018..0.019 rows=640 loops=1)\n> Index Cond: (id >= 90000)\n\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..3.31 rows=7143 width=0) (actual time=0.026..0.027 rows=1280 loops=1)\n> Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n\n\n",
"msg_date": "Thu, 20 Jun 2019 13:13:35 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
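The BUFFERS option is a convenient way to settle the cache question, since it shows how many blocks each run found already in shared_buffers versus had to read in. A sketch against the thread's test table:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM brin_test WHERE id >= 90000 AND r IN (1, 3);

Two timings are only really comparable when the shared hit and read counts are similar; a first run that reads everything from disk (or the OS cache) will look much slower than a warm rerun of the same plan.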
{
"msg_contents": "I ran both many times and got the same result. ::shrug::\n\nI ran both many times and got the same result. ::shrug::",
"msg_date": "Thu, 20 Jun 2019 12:16:23 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "On Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]>\nwrote:\n\n> With the following results:\n>\n>\n>\n> testing=# explain analyze select * from brin_test where id >= 90000;\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n>\n> Bitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8)\n> (actual time=0.474..1.796 rows=10001 loops=1)\n>\n> Recheck Cond: (id >= 90000)\n>\n> Rows Removed by Index Recheck: 3215\n>\n> Heap Blocks: lossy=59\n>\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02\n> rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n>\n> Index Cond: (id >= 90000)\n>\n> Planning Time: 0.155 ms\n>\n> Execution Time: 2.133 ms\n>\n> (8 rows)\n>\n>\n>\n> testing=# explain analyze select * from brin_test where id >= 90000 and r\n> in (1,3);\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n>\n> Bitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8)\n> (actual time=6.101..23.927 rows=200 loops=1)\n>\n> Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n>\n> Rows Removed by Index Recheck: 13016\n>\n> Heap Blocks: lossy=59\n>\n> -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143\n> width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n>\n> Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n>\n> Planning Time: 0.071 ms\n>\n> Execution Time: 23.954 ms\n>\n> (8 rows)\n>\n>\n>\n> Note that introducing a disjunction (set of possible values) into the\n> query doubles the number of actual rows returned, and increases the number\n> removed by the index recheck. It looks to me as though perhaps the BRIN\n> index does not completely support queries with a set of possible values,\n> and executes the query multiple times (try adding more values of R to see\n> what I mean). The execution time also increases massively.\n>\n>\n>\n> Could anyone help me to understand what’s going on here, and whether\n> there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the\n> query planner does not seem to account for it, and chooses this plan even\n> when it’s a bad one (much worse than removing result rows using a filter).\n>\n\nIn both cases the index is returning a lossy bitmap of 59 heap blocks. The\nsecond query is more restrictive, so the number removed by index recheck is\nhigher. The total of number rows returned plus the number of rows removed\nby index recheck is the same in both cases.\n\nThe only weirdness is why the index reports it has returned 640 rows in one\nquery and 1280 in second query. Since a lossy bitmap is returned, that\nfigure can only be an estimate. 
The estimate differs between queries, but\nis wrong in both cases.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]> wrote:\n\n\nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\ndoubles the number of actual rows returned, and increases the\nnumber removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding\n more values of R to see what I mean). The execution time also \nincreases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one\n (much worse than removing result rows using a filter).In both cases the index is returning a lossy bitmap of 59 heap blocks. The second query is more restrictive, so the number removed by index recheck is higher. The total of number rows returned plus the number of rows removed by index recheck is the same in both cases.The only weirdness is why the index reports it has returned 640 rows in one query and 1280 in second query. Since a lossy bitmap is returned, that figure can only be an estimate. The estimate differs between queries, but is wrong in both cases. -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 21 Jun 2019 10:17:14 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
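Since a BRIN bitmap is always lossy at page granularity, the only remaining knobs for the recheck overhead are which columns are summarised and how wide each range is. A sketch of the range-size trade-off on the same test table (index name invented):

    DROP INDEX IF EXISTS idx_brin_test_brin;
    CREATE INDEX idx_brin_test_pp1 ON brin_test USING brin (id, r) WITH (pages_per_range = 1);
    VACUUM ANALYZE brin_test;
    EXPLAIN ANALYZE SELECT * FROM brin_test WHERE id >= 90000 AND r IN (1, 3);

Smaller ranges mean more summary tuples to scan but fewer irrelevant heap pages per match, so the id condition becomes precise to the page; the r column still cannot exclude anything here because every single page contains the full spread of r values, which is why dropping it from the index (or indexing it with a btree instead) is the more effective fix.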
{
"msg_contents": "That makes perfect sense, thanks Simon!\r\n\r\nChris.\r\n\r\nFrom: Simon Riggs <[email protected]>\r\nSent: 21 June 2019 10:17\r\nTo: Chris Wilson <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\r\n\r\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]<mailto:[email protected]>> wrote:\r\nWith the following results:\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\r\n Recheck Cond: (id >= 90000)\r\n Rows Removed by Index Recheck: 3215\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\r\n Index Cond: (id >= 90000)\r\nPlanning Time: 0.155 ms\r\nExecution Time: 2.133 ms\r\n(8 rows)\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000 and r in (1,3);\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\r\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\n Rows Removed by Index Recheck: 13016\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\r\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\nPlanning Time: 0.071 ms\r\nExecution Time: 23.954 ms\r\n(8 rows)\r\n\r\nNote that introducing a disjunction (set of possible values) into the query doubles the number of actual rows returned, and increases the number removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to see what I mean). The execution time also increases massively.\r\n\r\nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\r\n\r\nIn both cases the index is returning a lossy bitmap of 59 heap blocks. The second query is more restrictive, so the number removed by index recheck is higher. The total of number rows returned plus the number of rows removed by index recheck is the same in both cases.\r\n\r\nThe only weirdness is why the index reports it has returned 640 rows in one query and 1280 in second query. Since a lossy bitmap is returned, that figure can only be an estimate. 
The estimate differs between queries, but is wrong in both cases.\r\n\r\n--\r\nSimon Riggs http://www.2ndQuadrant.com/<http://www.2ndquadrant.com/>\r\nPostgreSQL Solutions for the Enterprise\r\n\n\n\n\n\n\n\n\n\nThat makes perfect sense, thanks Simon!\n \nChris.\n \nFrom: Simon Riggs <[email protected]>\r\n\nSent: 21 June 2019 10:17\nTo: Chris Wilson <[email protected]>\nCc: [email protected]\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\n \n\n\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]> wrote:\n\n\n\n\n\nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\r\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640\r\n loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\r\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\r\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\r\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280\r\n loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\r\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\r\ndoubles the number of actual rows returned, and increases the\r\nnumber removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to\r\n see what I mean). The execution time also increases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account\r\n for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\n\n\n\n\n \n\n\nIn both cases the index is returning a lossy bitmap of 59 heap blocks. The second query is more restrictive, so the number removed by index recheck is higher. The total of number rows returned plus the number of rows removed by index recheck\r\n is the same in both cases.\n\n\n \n\n\nThe only weirdness is why the index reports it has returned 640 rows in one query and 1280 in second query. Since a lossy bitmap is returned, that figure can only be an estimate. The estimate differs between queries, but is wrong in both\r\n cases. \n\n\n\n \n\n-- \n\n\n\n\n\n\nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 21 Jun 2019 09:20:11 +0000",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
},
{
"msg_contents": "One possible optimisation occurred to me (that I guess we can’t currently do). If we use a larger example with some correlation in r, we can see that the time to execute the bitmap index scan is proportional to the number of items in the IN/ANY disjunction:\r\n\r\ndrop table brin_test;\r\ncreate table brin_test AS SELECT g as id, ((g / 100) + (g % 100)) as r\r\n from generate_series(1,10000000) as g;\r\ncreate index idx_brin_test_brin on brin_test using brin (id, r) with (pages_per_range = 32);\r\nvacuum analyze brin_test;\r\nset max_parallel_workers_per_gather = 0;\r\n\r\n/* Note that these queries return no results, so there is no time spent in bitmap heap scan, rechecking the conditions: */\r\nexplain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[100]));\r\nexplain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[1000]));\r\n\r\nWith the following results (long arrays elided):\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[100]));\r\nBitmap Heap Scan on brin_test (cost=15.27..11781.13 rows=1031 width=8) (actual time=23.830..23.830 rows=0 loops=1)\r\n Recheck Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.01 rows=7231 width=0) (actual time=23.829..23.829 rows=0 loops=1)\r\n Index Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\r\nPlanning Time: 0.092 ms\r\nExecution Time: 23.853 ms\r\n(6 rows)\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[1000]));\r\nBitmap Heap Scan on brin_test (cost=17.59..36546.51 rows=10308 width=8) (actual time=237.748..237.748 rows=0 loops=1)\r\n Recheck Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.02 rows=14461 width=0) (actual time=237.747..237.747 rows=0 loops=1)\r\n Index Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\r\nPlanning Time: 0.354 ms\r\nExecution Time: 237.817 ms\r\n(6 rows)\r\n\r\nWe can see that scanning 10x as many values takes 10x as long. It seems that we are checking each value in the array individually. However, since the BRIN index stores ranges of values in each block, all we care about (for the index scan) is whether the ranges overlap with the query. So we could compute the minimum and maximum in the array, and check whether each block contains any values in that range, and if so (and the other conditions are met) then emit the block for heap scanning. 
Does that make sense?\r\n\r\nIf I manually add those conditions to the query, it uses them and speeds up by about 1000 times:\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 9000000 and r >= 1 and r <= 1 and r = any(array_fill(1, ARRAY[1000]));\r\nBitmap Heap Scan on brin_test (cost=15.01..19951.91 rows=1 width=8) (actual time=0.263..0.263 rows=0 loops=1)\r\n Recheck Cond: ((id >= 9000000) AND (r >= 1) AND (r <= 1))\r\n Filter: (r = ANY ('{1,…,1}'::integer[]))\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.01 rows=7231 width=0) (actual time=0.261..0.262 rows=0 loops=1)\r\n Index Cond: ((id >= 9000000) AND (r >= 1) AND (r <= 1))\r\nPlanning Time: 0.393 ms\r\nExecution Time: 0.280 ms\r\n(7 rows)\r\n\r\nFrom: Chris Wilson\r\nSent: 21 June 2019 10:20\r\nTo: 'Simon Riggs' <[email protected]>\r\nCc: [email protected]\r\nSubject: RE: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\r\n\r\nThat makes perfect sense, thanks Simon!\r\n\r\nChris.\r\n\r\nFrom: Simon Riggs <[email protected]<mailto:[email protected]>>\r\nSent: 21 June 2019 10:17\r\nTo: Chris Wilson <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\r\n\r\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]<mailto:[email protected]>> wrote:\r\nWith the following results:\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\r\n Recheck Cond: (id >= 90000)\r\n Rows Removed by Index Recheck: 3215\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640 loops=1)\r\n Index Cond: (id >= 90000)\r\nPlanning Time: 0.155 ms\r\nExecution Time: 2.133 ms\r\n(8 rows)\r\n\r\ntesting=# explain analyze select * from brin_test where id >= 90000 and r in (1,3);\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------\r\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\r\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\n Rows Removed by Index Recheck: 13016\r\n Heap Blocks: lossy=59\r\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280 loops=1)\r\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\r\nPlanning Time: 0.071 ms\r\nExecution Time: 23.954 ms\r\n(8 rows)\r\n\r\nNote that introducing a disjunction (set of possible values) into the query doubles the number of actual rows returned, and increases the number removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to see what I mean). The execution time also increases massively.\r\n\r\nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? 
If it’s a limitation, then the query planner does not seem to account for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\r\n\r\nIn both cases the index is returning a lossy bitmap of 59 heap blocks. The second query is more restrictive, so the number removed by index recheck is higher. The total of number rows returned plus the number of rows removed by index recheck is the same in both cases.\r\n\r\nThe only weirdness is why the index reports it has returned 640 rows in one query and 1280 in second query. Since a lossy bitmap is returned, that figure can only be an estimate. The estimate differs between queries, but is wrong in both cases.\r\n\r\n--\r\nSimon Riggs http://www.2ndQuadrant.com/<http://www.2ndquadrant.com/>\r\nPostgreSQL Solutions for the Enterprise\r\n\n\n\n\n\n\n\n\n\nOne possible optimisation occurred to me (that I guess we can’t currently do). If we use a larger example with some correlation in\r\n r, we can see that the time to execute the bitmap index scan is proportional to the number of items in the IN/ANY disjunction:\n \ndrop table brin_test;\ncreate table brin_test AS SELECT g as id, ((g / 100) + (g % 100)) as r\r\n\n from generate_series(1,10000000) as g;\ncreate index idx_brin_test_brin on brin_test using brin (id, r) with (pages_per_range = 32);\nvacuum analyze brin_test;\n\nset max_parallel_workers_per_gather = 0;\n \n/* Note that these queries return no results, so there is no time spent in bitmap heap scan, rechecking the conditions:\r\n */\nexplain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[100]));\nexplain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[1000]));\n \nWith the following results (long arrays elided):\n \ntesting=# explain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[100]));\nBitmap Heap Scan on brin_test (cost=15.27..11781.13 rows=1031 width=8) (actual time=23.830..23.830 rows=0 loops=1)\n Recheck Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.01 rows=7231 width=0) (actual time=23.829..23.829 rows=0\r\n loops=1)\n Index Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\nPlanning Time: 0.092 ms\nExecution Time: 23.853 ms\n(6 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 9000000 and r = any(array_fill(1, ARRAY[1000]));\nBitmap Heap Scan on brin_test (cost=17.59..36546.51 rows=10308 width=8) (actual time=237.748..237.748 rows=0 loops=1)\n Recheck Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.02 rows=14461 width=0) (actual time=237.747..237.747 rows=0\r\n loops=1)\n Index Cond: ((id >= 9000000) AND (r = ANY ('{1,…,1}'::integer[])))\nPlanning Time: 0.354 ms\nExecution Time: 237.817 ms\n(6 rows)\n \nWe can see that scanning 10x as many values takes 10x as long. It seems that we are checking each value in the array individually.\r\n However, since the BRIN index stores ranges of values in each block, all we care about (for the index scan) is whether the ranges overlap with the query. So we could compute the minimum and maximum in the array, and check whether each block contains any values\r\n in that range, and if so (and the other conditions are met) then emit the block for heap scanning. 
Does that make sense?\n \nIf I manually add those conditions to the query, it uses them and speeds up by about 1000 times:\n \ntesting=# explain analyze select * from brin_test where id >= 9000000 and r >= 1 and r <= 1 and r = any(array_fill(1,\r\n ARRAY[1000]));\nBitmap Heap Scan on brin_test (cost=15.01..19951.91 rows=1 width=8) (actual time=0.263..0.263 rows=0 loops=1)\n Recheck Cond: ((id >= 9000000) AND (r >= 1) AND (r <= 1))\n Filter: (r = ANY ('{1,…,1}'::integer[]))\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..15.01 rows=7231 width=0) (actual time=0.261..0.262 rows=0\r\n loops=1)\n Index Cond: ((id >= 9000000) AND (r >= 1) AND (r <= 1))\nPlanning Time: 0.393 ms\nExecution Time: 0.280 ms\n(7 rows)\n \n\n\nFrom: Chris Wilson\r\n\nSent: 21 June 2019 10:20\nTo: 'Simon Riggs' <[email protected]>\nCc: [email protected]\nSubject: RE: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\n\n\n \nThat makes perfect sense, thanks Simon!\n \nChris.\n \nFrom: Simon Riggs <[email protected]>\r\n\nSent: 21 June 2019 10:17\nTo: Chris Wilson <[email protected]>\nCc: [email protected]\nSubject: Re: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction\n \n\n\nOn Thu, 20 Jun 2019 at 16:13, Chris Wilson <[email protected]> wrote:\n\n\n\n\n\nWith the following results:\n \ntesting=# explain analyze select * from brin_test where id >= 90000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=8.55..630.13 rows=10146 width=8) (actual time=0.474..1.796 rows=10001 loops=1)\n Recheck Cond: (id >= 90000)\n Rows Removed by Index Recheck:\r\n3215\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.02 rows=14286 width=0) (actual time=0.026..0.026 rows=640\r\n loops=1)\n Index Cond: (id >= 90000)\nPlanning Time: 0.155 ms\nExecution Time:\r\n2.133 ms\n(8 rows)\n \ntesting=# explain analyze select * from brin_test where id >= 90000\r\nand r in (1,3);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on brin_test (cost=6.06..556.21 rows=219 width=8) (actual time=6.101..23.927 rows=200 loops=1)\n Recheck Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\n Rows Removed by Index Recheck:\r\n13016\n Heap Blocks: lossy=59\n -> Bitmap Index Scan on idx_brin_test_brin (cost=0.00..6.01 rows=7143 width=0) (actual time=0.038..0.038 rows=1280\r\n loops=1)\n Index Cond: ((id >= 90000) AND (r = ANY ('{1,3}'::integer[])))\nPlanning Time: 0.071 ms\nExecution Time:\r\n23.954 ms\n(8 rows)\n \nNote that introducing a disjunction (set of possible values) into the query\r\ndoubles the number of actual rows returned, and increases the\r\nnumber removed by the index recheck. It looks to me as though perhaps the BRIN index does not completely support queries with a set of possible values, and executes the query multiple times (try adding more values of R to\r\n see what I mean). The execution time also increases massively.\n \nCould anyone help me to understand what’s going on here, and whether there’s a bug or limitation of BRIN indexes? If it’s a limitation, then the query planner does not seem to account\r\n for it, and chooses this plan even when it’s a bad one (much worse than removing result rows using a filter).\n\n\n\n\n \n\n\nIn both cases the index is returning a lossy bitmap of 59 heap blocks. 
The second query is more restrictive, so the number removed by index recheck is higher. The total of number rows returned plus the number of rows removed by index recheck\r\n is the same in both cases.\n\n\n \n\n\nThe only weirdness is why the index reports it has returned 640 rows in one query and 1280 in second query. Since a lossy bitmap is returned, that figure can only be an estimate. The estimate differs between queries, but is wrong in both\r\n cases. \n\n\n\n \n\n-- \n\n\n\n\n\n\nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 21 Jun 2019 09:43:33 +0000",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: EXPLAIN ANALYZE of BRIN bitmap index scan with disjunction"
}
] |
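A minimal sketch of the manual workaround described in the thread above: bracketing the = ANY(...) list with explicit range bounds so the BRIN bitmap index scan only has to test the minimum and maximum of the set. It reuses the brin_test table and idx_brin_test_brin index defined in the messages; the literal bounds are illustrative and would normally be computed by the application (or a wrapper) from the value list before the query is built.

-- Assumes the brin_test table and BRIN index created earlier in the thread.
-- The ANY() list is still applied as a recheck/filter; the added range
-- bounds are what the BRIN bitmap index scan can actually use.
EXPLAIN ANALYZE
SELECT *
FROM   brin_test
WHERE  id >= 9000000
  AND  r >= 1                      -- min of the value list (caller-supplied)
  AND  r <= 3                      -- max of the value list (caller-supplied)
  AND  r =  ANY (ARRAY[1, 3]);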
[
{
"msg_contents": "Hello everyone,\n\nI am attempting to set up a row level security policy based on geo-location\n(and the PostGIS extension). I am struggling to have it make use of column\nindexes.\n\nThe following example defines a table with geography points and aims to\nrestrict access to it based on distance to another set of points in a\nsecondary table. It has been tested on 11.2.\n\n\nCREATE EXTENSION postgis;\n\n-- This is the table we want to secure with RLS\nDROP TABLE IF EXISTS example1;\nCREATE TABLE example1 (\n id serial NOT NULL,\n geo geography NULL,\n CONSTRAINT example1_pk PRIMARY KEY (id)\n) with ( OIDS=FALSE );\n\n-- Seed the table with 100k random points\nINSERT INTO example1(geo)\nSELECT ST_SetSRID(\n ST_MakePoint(\n (random()*360.0) - 180.0,\n (random()*180.0) - 90.0),\n 4326) as geom\nFROM generate_series(1, 100000);\n\nCREATE INDEX example1_spx ON example1 USING GIST (geo);\n\n-- This table will hold points for the row level policy\nDROP TABLE IF EXISTS example_acl;\nCREATE TABLE example_acl (\n geo geography NULL\n) with ( OIDS=FALSE );\n\nINSERT INTO example_acl(geo)\nSELECT ST_SetSRID(\n ST_MakePoint(\n (random()*360.0) - 180.0,\n (random()*180.0) - 90.0),\n 4326) as geom\nFROM generate_series(1, 100);\n\n\n-- Simple query that performs an index scan\nEXPLAIN ANALYZE VERBOSE SELECT count(*) from example1\nINNER JOIN example_acl on st_dwithin(example_acl.geo, example1.geo, 1000)\n\n\nAggregate (cost=12364.11..12364.12 rows=1 width=8) (actual\ntime=4.802..4.802 rows=1 loops=1)\n Output: count(*)\n -> Nested Loop (cost=0.41..12364.00 rows=45 width=0) (actual\ntime=4.797..4.797 rows=0 loops=1)\n -> Seq Scan on public.example_acl (cost=0.00..23.60 rows=1360\nwidth=32) (actual time=0.034..0.066 rows=100 loops=1)\n Output: example_acl.geo\n -> Index Scan using example1_spx on public.example1\n (cost=0.41..9.06 rows=1 width=32) (actual time=0.044..0.044 rows=0\nloops=100)\n Output: example1.id, example1.geo\n Index Cond: (example1.geo && _st_expand(example_acl.geo,\n'1000'::double precision))\n Filter: ((example_acl.geo && _st_expand(example1.geo,\n'1000'::double precision)) AND _st_dwithin(example_acl.geo, example1.geo,\n'1000'::double precision, true))\nPlanning time: 60.690 ms\nExecution time: 5.006 ms\n\n\n-- Setting up the policy\nCREATE ROLE example_role;\nGRANT SELECT ON TABLE example1 to example_role;\nGRANT SELECT ON TABLE example_acl to example_role;\nALTER TABLE example1 ENABLE ROW LEVEL SECURITY;\n\nCREATE POLICY example_location_policy ON example1\n AS permissive\n FOR SELECT\n TO example_role\n USING (\n EXISTS (\n SELECT 1\n FROM example_acl\n WHERE (\n st_dwithin(example_acl.geo, example1.geo, 1000)\n )\n )\n );\n\n\nSET ROLE example_role;\nEXPLAIN ANALYZE VERBOSE SELECT count(*) from example1;\n\nAggregate (cost=5251959.00..5251959.01 rows=1 width=8) (actual\ntime=9256.606..9256.606 rows=1 loops=1)\n Output: count(*)\n -> Seq Scan on public.example1 (cost=0.00..5251834.00 rows=50000\nwidth=0) (actual time=9256.601..9256.601 rows=0 loops=1)\n Output: example1.id, example1.geo\n Filter: (SubPlan 1)\n Rows Removed by Filter: 100000\n SubPlan 1\n -> Seq Scan on public.example_acl (cost=0.00..52.50 rows=1\nwidth=0) (actual time=0.089..0.089 rows=0 loops=100000)\n Filter: ((example_acl.geo && _st_expand(example1.geo,\n'1000'::double precision)) AND (example1.geo && _st_expand(example_acl.geo,\n'1000'::double precision)) AND _st_dwithin(example_acl.geo, example1.geo,\n'1000'::double precision, true))\n Rows Removed by Filter: 100\nPlanning time: 67.601 
ms\nExecution time: 9256.812 ms\n\nAs you can see, the policy does not use the index example1_spx on the\ngeography column.\nIs there a way to rewrite that policy so that it would make use of the\nindex?\n\nThank you in advance.\n\nBest regards,\nGrégory El Majjouti\n\nHello everyone,I am attempting to set up a row level security policy based on geo-location (and the PostGIS extension). I am struggling to have it make use of column indexes.The following example defines a table with geography points and aims to restrict access to it based on distance to another set of points in a secondary table. It has been tested on 11.2. CREATE EXTENSION postgis; -- This is the table we want to secure with RLSDROP TABLE IF EXISTS example1;CREATE TABLE example1 ( id serial NOT NULL, geo geography NULL, CONSTRAINT example1_pk PRIMARY KEY (id)) with ( OIDS=FALSE );-- Seed the table with 100k random points INSERT INTO example1(geo)SELECT ST_SetSRID( ST_MakePoint( (random()*360.0) - 180.0, (random()*180.0) - 90.0), 4326) as geomFROM generate_series(1, 100000);CREATE INDEX example1_spx ON example1 USING GIST (geo);-- This table will hold points for the row level policyDROP TABLE IF EXISTS example_acl;CREATE TABLE example_acl ( geo geography NULL) with ( OIDS=FALSE );INSERT INTO example_acl(geo)SELECT ST_SetSRID( ST_MakePoint( (random()*360.0) - 180.0, (random()*180.0) - 90.0), 4326) as geomFROM generate_series(1, 100);-- Simple query that performs an index scanEXPLAIN ANALYZE VERBOSE SELECT count(*) from example1 INNER JOIN example_acl on st_dwithin(example_acl.geo, example1.geo, 1000)Aggregate (cost=12364.11..12364.12 rows=1 width=8) (actual time=4.802..4.802 rows=1 loops=1) Output: count(*) -> Nested Loop (cost=0.41..12364.00 rows=45 width=0) (actual time=4.797..4.797 rows=0 loops=1) -> Seq Scan on public.example_acl (cost=0.00..23.60 rows=1360 width=32) (actual time=0.034..0.066 rows=100 loops=1) Output: example_acl.geo -> Index Scan using example1_spx on public.example1 (cost=0.41..9.06 rows=1 width=32) (actual time=0.044..0.044 rows=0 loops=100) Output: example1.id, example1.geo Index Cond: (example1.geo && _st_expand(example_acl.geo, '1000'::double precision)) Filter: ((example_acl.geo && _st_expand(example1.geo, '1000'::double precision)) AND _st_dwithin(example_acl.geo, example1.geo, '1000'::double precision, true))Planning time: 60.690 msExecution time: 5.006 ms-- Setting up the policyCREATE ROLE example_role; GRANT SELECT ON TABLE example1 to example_role;GRANT SELECT ON TABLE example_acl to example_role;ALTER TABLE example1 ENABLE ROW LEVEL SECURITY;CREATE POLICY example_location_policy ON example1 AS permissive FOR SELECT TO example_role USING ( EXISTS ( SELECT 1 FROM example_acl WHERE ( st_dwithin(example_acl.geo, example1.geo, 1000) ) ) );SET ROLE example_role;EXPLAIN ANALYZE VERBOSE SELECT count(*) from example1;Aggregate (cost=5251959.00..5251959.01 rows=1 width=8) (actual time=9256.606..9256.606 rows=1 loops=1) Output: count(*) -> Seq Scan on public.example1 (cost=0.00..5251834.00 rows=50000 width=0) (actual time=9256.601..9256.601 rows=0 loops=1) Output: example1.id, example1.geo Filter: (SubPlan 1) Rows Removed by Filter: 100000 SubPlan 1 -> Seq Scan on public.example_acl (cost=0.00..52.50 rows=1 width=0) (actual time=0.089..0.089 rows=0 loops=100000) Filter: ((example_acl.geo && _st_expand(example1.geo, '1000'::double precision)) AND (example1.geo && _st_expand(example_acl.geo, '1000'::double precision)) AND _st_dwithin(example_acl.geo, example1.geo, '1000'::double precision, true)) Rows 
Removed by Filter: 100Planning time: 67.601 msExecution time: 9256.812 msAs you can see, the policy does not use the index example1_spx on the geography column.Is there a way to rewrite that policy so that it would make use of the index?Thank you in advance.Best regards,Grégory El Majjouti",
"msg_date": "Fri, 21 Jun 2019 14:22:17 +0200",
"msg_from": "=?UTF-8?Q?Gr=C3=A9gory_EL_MAJJOUTI?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using indexes in RLS policies (sub)queries"
}
] |
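One avenue that might be worth testing, sketched here purely as an assumption rather than something taken from the thread: expressing the restriction as a security_barrier view instead of (or alongside) the row level security policy, so the spatial check is planned as a semi-join rather than as a per-row subplan filter. Whether the planner then drives the scan from example_acl through the GiST index, and whether a view satisfies the original security requirements, both need to be verified.

-- Hypothetical sketch; object names reuse the example schema above.
CREATE VIEW example1_visible WITH (security_barrier) AS
SELECT e.*
FROM   example1 e
WHERE  EXISTS (SELECT 1
               FROM   example_acl a
               WHERE  st_dwithin(a.geo, e.geo, 1000));

GRANT SELECT ON example1_visible TO example_role;
-- SET ROLE example_role;
-- EXPLAIN ANALYZE VERBOSE SELECT count(*) FROM example1_visible;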
[
{
"msg_contents": "I'm not sure where else to look, so I'm asking here for tips.\n\nI have a table in a remote (Heroku-managed) postgresql database (PG 10.7).\n\nOn the other end, (PG 11.3) I have a foreign table configured with a\nmaterialized view in front of it.\n\nUp until Thursday evening, it was taking about 12 - 15 seconds to refresh,\nit is only 15,000 rows with 41 columns. Since Thursday evening it has\nbeen taking 15 _minutes_ or more to refresh. Nothing changed on my end\nthat I'm aware of. It completes, it just takes forever.\n\nHere is a summary of what I've tried:\n\n1) Refreshing the materialized views of other tables from that same source\ndatabase, some much bigger, still perform within seconds as they always\nhave.\n2) Dropping the foreign table and the materialized view and recreating them\ndidn't help.\n3) It doesn't matter whether I refresh concurrently or not.\n4) Configuring the foreign table and materialized view on my laptop's\npostgresql instance exhibited the same behavior for just this one table.\n5) Querying the foreign table directly for a specific row was fast.\n6) Reindex and vacuum full analyze on the source table didn't help.\n7) Bumping the database on my end to 11.4, didn't help.\n8) There are no locks on either database that I can see while the query\nappears to be stalled.\n9) Running the materialized view select directly against the source table\ncompletes within seconds.\n10) Running the materialized view select directly against the foreign table\nalso completes within a few seconds.\n11) Dropping all of the indexes on the materialized view, including the\nunique one and running the refresh (without 'concurrently') does not help.\n\nI feel like I'm missing something obvious here, but I'm just not seeing\nit. Any thoughts about where else to look?\n\nI'm not sure where else to look, so I'm asking here for tips.I have a table in a remote (Heroku-managed) postgresql database (PG 10.7).On the other end, (PG 11.3) I have a foreign table configured with a materialized view in front of it.Up until Thursday evening, it was taking about 12 - 15 seconds to refresh, it is only 15,000 rows with 41 columns. Since Thursday evening it has been taking 15 _minutes_ or more to refresh. Nothing changed on my end that I'm aware of. It completes, it just takes forever.Here is a summary of what I've tried:1) Refreshing the materialized views of other tables from that same source database, some much bigger, still perform within seconds as they always have.2) Dropping the foreign table and the materialized view and recreating them didn't help.3) It doesn't matter whether I refresh concurrently or not.4) Configuring the foreign table and materialized view on my laptop's postgresql instance exhibited the same behavior for just this one table.5) Querying the foreign table directly for a specific row was fast.6) Reindex and vacuum full analyze on the source table didn't help.7) Bumping the database on my end to 11.4, didn't help.8) There are no locks on either database that I can see while the query appears to be stalled.9) Running the materialized view select directly against the source table completes within seconds.10) Running the materialized view select directly against the foreign table also completes within a few seconds.11) Dropping all of the indexes on the materialized view, including the unique one and running the refresh (without 'concurrently') does not help.I feel like I'm missing something obvious here, but I'm just not seeing it. 
Any thoughts about where else to look?",
"msg_date": "Sun, 23 Jun 2019 10:21:40 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "materialized view refresh of a foreign table"
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 10:21 AM Rick Otten <[email protected]>\nwrote:\n\n> I'm not sure where else to look, so I'm asking here for tips.\n>\n> I have a table in a remote (Heroku-managed) postgresql database (PG 10.7).\n>\n> On the other end, (PG 11.3) I have a foreign table configured with a\n> materialized view in front of it.\n>\n> Up until Thursday evening, it was taking about 12 - 15 seconds to refresh,\n> it is only 15,000 rows with 41 columns. Since Thursday evening it has\n> been taking 15 _minutes_ or more to refresh. Nothing changed on my end\n> that I'm aware of. It completes, it just takes forever.\n>\n>\nI believe I've solved this mystery. Thanks for hearing me out. Just the\nopportunity to summarize everything I'd tried helped me discover the root\ncause.\n\nIn the middle of the table there is a 'text' column. Since last Thursday\nthere were a number of rows that were populated with very long strings.\n(lots of text in that column). This appears to have completely bogged\ndown the materialized view refresh. Since we weren't using that column in\nour analytics database at this time, I simply removed it from the\nmaterialized view. If we do end up needing it, I'll give it its own\nmaterialized view and/or look at chopping up the text into just the bits we\nneed.\n\nOn Sun, Jun 23, 2019 at 10:21 AM Rick Otten <[email protected]> wrote:I'm not sure where else to look, so I'm asking here for tips.I have a table in a remote (Heroku-managed) postgresql database (PG 10.7).On the other end, (PG 11.3) I have a foreign table configured with a materialized view in front of it.Up until Thursday evening, it was taking about 12 - 15 seconds to refresh, it is only 15,000 rows with 41 columns. Since Thursday evening it has been taking 15 _minutes_ or more to refresh. Nothing changed on my end that I'm aware of. It completes, it just takes forever.I believe I've solved this mystery. Thanks for hearing me out. Just the opportunity to summarize everything I'd tried helped me discover the root cause.In the middle of the table there is a 'text' column. Since last Thursday there were a number of rows that were populated with very long strings. (lots of text in that column). This appears to have completely bogged down the materialized view refresh. Since we weren't using that column in our analytics database at this time, I simply removed it from the materialized view. If we do end up needing it, I'll give it its own materialized view and/or look at chopping up the text into just the bits we need.",
"msg_date": "Tue, 25 Jun 2019 07:03:31 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: materialized view refresh of a foreign table"
}
] |
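A sketch of the fix described above, with hypothetical object names: keep the wide text column out of the analytics materialized view (and, if it is ever needed, give it a materialized view of its own) so the refresh does not have to pull the large values across the foreign server connection.

-- Hypothetical names; remote_orders_fdw stands in for the postgres_fdw foreign table.
CREATE MATERIALIZED VIEW remote_orders_mv AS
SELECT id, created_at, status          -- the large text column is deliberately omitted
FROM   remote_orders_fdw;

CREATE UNIQUE INDEX ON remote_orders_mv (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY remote_orders_mv;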
[
{
"msg_contents": "Hi,\nI wrote a script that monitored the size of a specific table of mine(dead\ntuples_mb vs live tuples_mb). The script run a query on pg_stattuple every\n15 minutes : select * from pg_stattuple('table_name'). I know that every\nnight there is a huge delete query that deletes most of the table`s\ncontent. In addition, I set the following parameters for the table :\ntoast.autovacuum_vacuum_scale_factor=0,\n toast.autovacuum_vacuum_threshold=10000,\ntoast.autovacuum_vacuum_cost_limit=10000,\ntoast.autovacuum_vacuum_cost_delay=5\n\nAfter a week of monitoring I generates a csv of the results and I created a\ngraph from that data. However, the graph that I created confused me very\nmuch.\nA small sample of all the data that I gathered :\ndate toasted_live_tup_size_MB toasted_dead_tup_size_mb\n6/16/19 0:00 58.8537941 25.68760395\n6/16/19 0:15 8.725102425 25.02167416\n6/16/19 0:30 8.668716431 25.08410168\n6/16/19 0:45 8.810066223 24.94327927\n6/16/19 1:00 8.732183456 25.02435684\n6/16/19 1:15 8.67656517 20.01097107\n6/16/19 1:30 9.573832512 20.76298809\n6/16/19 1:45 9.562319756 20.7739706\n6/16/19 2:00 9.567030907 21.01560402\n6/16/19 2:15 9.576253891 70.62042999\n6/16/19 2:30 9.715950966 492.2445602\n6/16/19 2:45 9.59837532 801.455843\n6/16/19 3:00 9.599774361 1110.201434\n6/16/19 3:15 9.606671333 1402.255548\n6/16/19 3:30 9.601698875 1698.487226\n6/16/19 3:45 9.606934547 2003.051514\n6/16/19 4:00 9.600641251 2307.625901\n6/16/19 4:15 9.61320591 2612.196963\n6/16/19 4:30 9.606646538 2916.773588\n6/16/19 4:45 9.61294651 3221.337314\n6/16/19 5:00 9.607636452 3525.914713\n6/16/19 5:15 5.447218895 3826.313025\n6/16/19 5:30 9.621054649 4130.883012\n6/16/19 5:45 11.48730659 4433.29188\n6/16/19 6:00 7.311745644 4742.039024\n6/16/19 6:15 12.31321144 5135.994677\n6/16/19 6:30 12.12382507 5671.512811\n6/16/19 6:45 8.029448509 6171.677253\n6/16/19 7:00 7.955677986 6666.846472\n6/16/19 7:15 12.21173954 7161.934807\n6/16/19 7:30 7.96325779 7661.273341\n6/16/19 7:45 12.20623493 8156.362462\n6/16/19 8:00 7.960205078 8655.704986\n6/16/19 8:15 12.13819695 33.60424519\n6/16/19 8:30 12.21746635 57.87192154\n6/16/19 8:45 12.2179966 33.52415848\n6/16/19 9:00 12.14417744 33.60204792\n6/16/19 9:15 12.21954441 26.85134888\n\n\nAs you can see in this example, The size of the dead rows from 2am until\n8am increased while there isnt any change in the size of the live rows.\nDuring that time I know that there were a delete query that run and deleted\na lot of rows. That is why I'm confused here, if more dead rows are\ngenerated because of a delete, it means that number of live_tuples should\nbe decreased but it doesnt happen here. Any idea why ?\n\nHi,I wrote a script that monitored the size of a specific table of mine(dead tuples_mb vs live tuples_mb). The script run a query on pg_stattuple every 15 minutes : select * from pg_stattuple('table_name'). I know that every night there is a huge delete query that deletes most of the table`s content. In addition, I set the following parameters for the table : toast.autovacuum_vacuum_scale_factor=0, toast.autovacuum_vacuum_threshold=10000, toast.autovacuum_vacuum_cost_limit=10000, toast.autovacuum_vacuum_cost_delay=5 After a week of monitoring I generates a csv of the results and I created a graph from that data. 
However, the graph that I created confused me very much.A small sample of all the data that I gathered : \n\n\n\n\ndate\ntoasted_live_tup_size_MB\ntoasted_dead_tup_size_mb\n\n\n6/16/19 0:00\n58.8537941\n25.68760395\n\n\n6/16/19 0:15\n8.725102425\n25.02167416\n\n\n6/16/19 0:30\n8.668716431\n25.08410168\n\n\n6/16/19 0:45\n8.810066223\n24.94327927\n\n\n6/16/19 1:00\n8.732183456\n25.02435684\n\n\n6/16/19 1:15\n8.67656517\n20.01097107\n\n\n6/16/19 1:30\n9.573832512\n20.76298809\n\n\n6/16/19 1:45\n9.562319756\n20.7739706\n\n\n6/16/19 2:00\n9.567030907\n21.01560402\n\n\n6/16/19 2:15\n9.576253891\n70.62042999\n\n\n6/16/19 2:30\n9.715950966\n492.2445602\n\n\n6/16/19 2:45\n9.59837532\n801.455843\n\n\n6/16/19 3:00\n9.599774361\n1110.201434\n\n\n6/16/19 3:15\n9.606671333\n1402.255548\n\n\n6/16/19 3:30\n9.601698875\n1698.487226\n\n\n6/16/19 3:45\n9.606934547\n2003.051514\n\n\n6/16/19 4:00\n9.600641251\n2307.625901\n\n\n6/16/19 4:15\n9.61320591\n2612.196963\n\n\n6/16/19 4:30\n9.606646538\n2916.773588\n\n\n6/16/19 4:45\n9.61294651\n3221.337314\n\n\n6/16/19 5:00\n9.607636452\n3525.914713\n\n\n6/16/19 5:15\n5.447218895\n3826.313025\n\n\n6/16/19 5:30\n9.621054649\n4130.883012\n\n\n6/16/19 5:45\n11.48730659\n4433.29188\n\n\n6/16/19 6:00\n7.311745644\n4742.039024\n\n\n6/16/19 6:15\n12.31321144\n5135.994677\n\n\n6/16/19 6:30\n12.12382507\n5671.512811\n\n\n6/16/19 6:45\n8.029448509\n6171.677253\n\n\n6/16/19 7:00\n7.955677986\n6666.846472\n\n\n6/16/19 7:15\n12.21173954\n7161.934807\n\n\n6/16/19 7:30\n7.96325779\n7661.273341\n\n\n6/16/19 7:45\n12.20623493\n8156.362462\n\n\n6/16/19 8:00\n7.960205078\n8655.704986\n\n\n6/16/19 8:15\n12.13819695\n33.60424519\n\n\n6/16/19 8:30\n12.21746635\n57.87192154\n\n\n6/16/19 8:45\n12.2179966\n33.52415848\n\n\n6/16/19 9:00\n12.14417744\n33.60204792\n\n\n6/16/19 9:15\n12.21954441\n26.85134888\n\n\nAs you can see in this example, The size of the dead rows from 2am until 8am increased while there isnt any change in the size of the live rows. During that time I know that there were a delete query that run and deleted a lot of rows. That is why I'm confused here, if more dead rows are generated because of a delete, it means that number of live_tuples should be decreased but it doesnt happen here. Any idea why ?",
"msg_date": "Sun, 23 Jun 2019 17:24:53 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "monitoring tuple_count vs dead_tuple_count"
}
] |
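For reference, a sketch of the kind of sampling query described above, using the pgstattuple extension (the function itself is spelled pgstattuple; whether the original script ran it against the main table or its TOAST relation is an assumption):

-- Requires: CREATE EXTENSION pgstattuple;
-- Reports live vs. dead tuple sizes in MB; run e.g. every 15 minutes from cron.
SELECT now()                                      AS sample_time,
       round(tuple_len      / 1024.0 / 1024.0, 2) AS live_tup_size_mb,
       round(dead_tuple_len / 1024.0 / 1024.0, 2) AS dead_tup_size_mb
FROM   pgstattuple('table_name');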
[
{
"msg_contents": "I came across a poorly performing report with a subplan like this:\n\nts=# explain SELECT * FROM eric_enodeb_cell_metrics WHERE start_time BETWEEN '2019-01-01 04:00' AND '2019-01-01 05:00' OR start_time BETWEEN '2019-01-02 04:00' AND '2019-01-02 05:00';\n Append (cost=36.04..39668.56 rows=12817 width=2730)\n -> Bitmap Heap Scan on eric_enodeb_cell_20190101 (cost=36.04..19504.14 rows=6398 width=2730)\n Recheck Cond: (((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone)) OR ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone)))\n -> BitmapOr (cost=36.04..36.04 rows=6723 width=0)\n -> Bitmap Index Scan on eric_enodeb_cell_20190101_idx (cost=0.00..16.81 rows=6465 width=0)\n Index Cond: ((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone))\n -> Bitmap Index Scan on eric_enodeb_cell_20190101_idx (cost=0.00..16.03 rows=259 width=0)\n Index Cond: ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone))\n -> Bitmap Heap Scan on eric_enodeb_cell_20190102 (cost=36.08..20100.34 rows=6419 width=2730)\n Recheck Cond: (((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone)) OR ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone)))\n -> BitmapOr (cost=36.08..36.08 rows=6982 width=0)\n -> Bitmap Index Scan on eric_enodeb_cell_20190102_idx (cost=0.00..16.03 rows=259 width=0)\n Index Cond: ((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone))\n -> Bitmap Index Scan on eric_enodeb_cell_20190102_idx (cost=0.00..16.84 rows=6723 width=0)\n Index Cond: ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone))\n\nIs there some reason why the partition constraints aren't excluding any of the\nindex scans ? In the actual problem case, there's a longer list of \"OR\"\nconditions and it's even worse.\n\nThe partitions looks like this:\n\nPartition of: eric_enodeb_cell_metrics FOR VALUES FROM ('2019-01-02 00:00:00-05') TO ('2019-01-03 00:00:00-05')\nIndexes:\n \"eric_enodeb_cell_20190102_idx\" brin (start_time) WITH (autosummarize='true'), tablespace \"oldindex\"\n \"eric_enodeb_cell_20190102_site_idx\" btree (site_id) WITH (fillfactor='100'), tablespace \"oldindex\"\nCheck constraints:\n \"eric_enodeb_cell_20190102_start_time_check\" CHECK (start_time >= '2019-01-02 00:00:00-05'::timestamp with time zone AND start_time < '2019-01-03 00:00:00-05'::timestamp with time zone)\nTablespace: \"zfs\"\n\nAnd:\npg_get_partition_constraintdef | ((start_time IS NOT NULL) AND (start_time >= '2019-01-02 00:00:00-05'::timestamp with time zone) AND (start_time < '2019-01-03 00:00:00-05'::timestamp with time zone))\n\nts=# SELECT version();\nversion | PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\n\nts=# SHOW constraint_exclusion ;\nconstraint_exclusion | partition\n\nts=# SHOW enable_partition_pruning;\nenable_partition_pruning | on\n\nThanks in advance.\n\nJustin\n\n\n",
"msg_date": "Mon, 24 Jun 2019 12:31:46 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "scans on table fail to be excluded by partition bounds"
},
{
"msg_contents": "> ts=# explain SELECT * FROM eric_enodeb_cell_metrics WHERE start_time\n> BETWEEN '2019-01-01 04:00' AND '2019-01-01 05:00' OR start_time BETWEEN\n> '2019-01-02 04:00' AND '2019-01-02 05:00' \n\nMaybe it's because of the implicit usage of the local timezone when the strings are cast to (timestamp with time zone) in the values you give for start_time here?\nWhat happens if you specify it using \"TIMESTAMP WITH TIME ZONE '2019-01-01 04:00-05'\", etc.?\n\nSteve.\n\n\n\n",
"msg_date": "Tue, 25 Jun 2019 10:48:01 +0000",
"msg_from": "Steven Winfield <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: scans on table fail to be excluded by partition bounds"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 10:48:01AM +0000, Steven Winfield wrote:\n> > ts=# explain SELECT * FROM eric_enodeb_cell_metrics WHERE start_time\n> > BETWEEN '2019-01-01 04:00' AND '2019-01-01 05:00' OR start_time BETWEEN\n> > '2019-01-02 04:00' AND '2019-01-02 05:00' \n> \n> Maybe it's because of the implicit usage of the local timezone when the strings are cast to (timestamp with time zone) in the values you give for start_time here?\n> What happens if you specify it using \"TIMESTAMP WITH TIME ZONE '2019-01-01 04:00-05'\", etc.?\n\nIt's the same. The timezone in the constraints is the default timezone so the\nthat's correct.\n\nJustin\n\n\n",
"msg_date": "Wed, 26 Jun 2019 13:03:20 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scans on table fail to be excluded by partition bounds"
},
{
"msg_contents": "On Tue, 25 Jun 2019 at 05:31, Justin Pryzby <[email protected]> wrote:\n> ts=# explain SELECT * FROM eric_enodeb_cell_metrics WHERE start_time BETWEEN '2019-01-01 04:00' AND '2019-01-01 05:00' OR start_time BETWEEN '2019-01-02 04:00' AND '2019-01-02 05:00';\n> Append (cost=36.04..39668.56 rows=12817 width=2730)\n> -> Bitmap Heap Scan on eric_enodeb_cell_20190101 (cost=36.04..19504.14 rows=6398 width=2730)\n> Recheck Cond: (((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone)) OR ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone)))\n> -> BitmapOr (cost=36.04..36.04 rows=6723 width=0)\n> -> Bitmap Index Scan on eric_enodeb_cell_20190101_idx (cost=0.00..16.81 rows=6465 width=0)\n> Index Cond: ((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone))\n> -> Bitmap Index Scan on eric_enodeb_cell_20190101_idx (cost=0.00..16.03 rows=259 width=0)\n> Index Cond: ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone))\n> -> Bitmap Heap Scan on eric_enodeb_cell_20190102 (cost=36.08..20100.34 rows=6419 width=2730)\n> Recheck Cond: (((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone)) OR ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone)))\n> -> BitmapOr (cost=36.08..36.08 rows=6982 width=0)\n> -> Bitmap Index Scan on eric_enodeb_cell_20190102_idx (cost=0.00..16.03 rows=259 width=0)\n> Index Cond: ((start_time >= '2019-01-01 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-01 05:00:00-05'::timestamp with time zone))\n> -> Bitmap Index Scan on eric_enodeb_cell_20190102_idx (cost=0.00..16.84 rows=6723 width=0)\n> Index Cond: ((start_time >= '2019-01-02 04:00:00-05'::timestamp with time zone) AND (start_time <= '2019-01-02 05:00:00-05'::timestamp with time zone))\n>\n> Is there some reason why the partition constraints aren't excluding any of the\n> index scans ?\n\nYeah, we don't do anything to remove base quals that are redundant due\nto the partition constraint.\n\nThere was a patch [1] to try and fix this but it's not seen any recent activity.\n\n[1] https://commitfest.postgresql.org/19/1264/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 27 Jun 2019 10:09:23 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scans on table fail to be excluded by partition bounds"
}
] |
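A possible workaround, not proposed in the thread itself: splitting the OR into separate range scans combined with UNION ALL lets plan-time partition pruning discard the partitions that cannot match each arm, so each BRIN scan only carries its own day's range condition.

SELECT * FROM eric_enodeb_cell_metrics
WHERE  start_time BETWEEN '2019-01-01 04:00' AND '2019-01-01 05:00'
UNION ALL
SELECT * FROM eric_enodeb_cell_metrics
WHERE  start_time BETWEEN '2019-01-02 04:00' AND '2019-01-02 05:00';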
[
{
"msg_contents": "I'm hoping people can help me figure out where to look to solve an odd\nPostgreSQL performance problem.\n\nA bit of background: We have a client with a database of approximately 450\nGB, that has a couple of tables storing large amounts of text, including\nfull HTML pages from the Internet. Last fall, they began experiencing\ndramatic and exponentially decreasing performance. We track certain query\ntimes, so we know how much time is being spent in calls to the database for\nthese functions. When this began, the times went from about an average of\napproximate 200 ms to 400 ms, rapidly climbing each day before reaching 900\nms, figures we had never seen before, within 4 days, with no appreciable\nchange in usage. It was at this point that we restarted the database server\nand times returned to the 400 ms range, but never back to their\nlong-running original levels. From this point onward, we had to restart the\ndatabase (originally the server, but eventually just the database process)\nevery 3-4 days, otherwise the application became unusable.\n\nAs they were still on PostgreSQL 8.2, we persuaded them to finally\nundertake our long-standing recommendation to upgrade, as there was no\npossibility of support on that platform. That upgrade to 11.2 was completed\nsuccessfully in mid-May, and although times have not returned to their\noriginal levels (they now average approximately 250 ms), the application\noverall seems much more responsive and faster (application servers were not\nchanged, other than minor changes --full text search, explicit casts,\netc.-- to conform to PostgreSQL 11's requirements).\n\nWhat we continued to notice was a milder but still definite trend of\nincreased query times, during the course of each week, from the mid to high\n200 ms, to the high 300 ms to low 400 ms. Some years ago, someone had\nnoticed that as the number of \"raw_page\" columns in a particular table\ngrew, performance would decline. They wrote a script that once a week locks\nthe table, deletes the processed large columns (they are not needed after\nprocessing), copies the remaining data to a backup table, truncates the\noriginal table, then copies it back. When this script runs we see an\nimmediate change in performance, from 380 ms in the hour before the drop,\nto 250 ms in the hour of the drop. As rows with these populated columns are\nadded during the course of a week, the performance drops, steadily, until\nthe next week's cleaning operation. Each week the performance increase is\nclear and significant.\n\nWhat is perplexing is (and I have triple checked), that this table is *not*\nreferenced in any way in the queries that we time (it is referenced by\nongoing administrative and processing queries). The operation that cleans\nit frees up approximately 15-20 GB of space each week. Our system\nmonitoring shows this change in free disk space, but this is 20 GB out of\napproximately 300 GB of free space (free space is just under 40% of volume\nsize), so disk space does not seem to be an issue. The table in question is\nabout 21 GB in size, with about 20 GB in toast data, at its largest.\n\nEven odder, the queries we time *do* reference a much larger table, which\ncontains very similar data, and multiple columns of it. It is 355 GB in\nsize, with 318 GB in toast data. 
It grows continually, with no cleaning.\n\nIf anyone has any suggestions as to what sort of statistics to look at, or\nwhy this would be happening, they would be greatly appreciated.\n\nThanks in advance,\nHugh\n\n--\nHugh Ranalli\nPrincipal Consultant\nWhite Horse Technology Consulting\ne: [email protected]\nc: +01-416-994-7957\n\nI'm hoping people can help me figure out where to look to solve an odd PostgreSQL performance problem. A bit of background: We have a client with a database of approximately 450 GB, that has a couple of tables storing large amounts of text, including full HTML pages from the Internet. Last fall, they began experiencing dramatic and exponentially decreasing performance. We track certain query times, so we know how much time is being spent in calls to the database for these functions. When this began, the times went from about an average of approximate 200 ms to 400 ms, rapidly climbing each day before reaching 900 ms, figures we had never seen before, within 4 days, with no appreciable change in usage. It was at this point that we restarted the database server and times returned to the 400 ms range, but never back to their long-running original levels. From this point onward, we had to restart the database (originally the server, but eventually just the database process) every 3-4 days, otherwise the application became unusable.As they were still on PostgreSQL 8.2, we persuaded them to finally undertake our long-standing recommendation to upgrade, as there was no possibility of support on that platform. That upgrade to 11.2 was completed successfully in mid-May, and although times have not returned to their original levels (they now average approximately 250 ms), the application overall seems much more responsive and faster (application servers were not changed, other than minor changes --full text search, explicit casts, etc.-- to conform to PostgreSQL 11's requirements).What we continued to notice was a milder but still definite trend of increased query times, during the course of each week, from the mid to high 200 ms, to the high 300 ms to low 400 ms. Some years ago, someone had noticed that as the number of \"raw_page\" columns in a particular table grew, performance would decline. They wrote a script that once a week locks the table, deletes the processed large columns (they are not needed after processing), copies the remaining data to a backup table, truncates the original table, then copies it back. When this script runs we see an immediate change in performance, from 380 ms in the hour before the drop, to 250 ms in the hour of the drop. As rows with these populated columns are added during the course of a week, the performance drops, steadily, until the next week's cleaning operation. Each week the performance increase is clear and significant.What is perplexing is (and I have triple checked), that this table is *not* referenced in any way in the queries that we time (it is referenced by ongoing administrative and processing queries). The operation that cleans it frees up approximately 15-20 GB of space each week. Our system monitoring shows this change in free disk space, but this is 20 GB out of approximately 300 GB of free space (free space is just under 40% of volume size), so disk space does not seem to be an issue. The table in question is about 21 GB in size, with about 20 GB in toast data, at its largest.Even odder, the queries we time *do* reference a much larger table, which contains very similar data, and multiple columns of it. 
It is 355 GB in size, with 318 GB in toast data. It grows continually, with no cleaning.If anyone has any suggestions as to what sort of statistics to look at, or why this would be happening, they would be greatly appreciated.Thanks in advance,Hugh--Hugh Ranalli Principal ConsultantWhite Horse Technology Consultinge: [email protected]: +01-416-994-7957",
"msg_date": "Tue, 25 Jun 2019 11:49:03 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perplexing, regular decline in performance"
},
{
"msg_contents": "Have you done a VACUUM ANALYZE FULL on your database? This needs to be done\nperiodically to inform the server of the statistics of how the data and\nrelations are distributed across the database. Without this bad assumptions\nby the planner can cause degradation of performance. Also, if you are using\nthe default settings in postgres.conf then understand those are established\nto use the absolute minimum amount of resources possible which means not\ntaking advantage of available memory or CPUs that may be available in your\nenvironment that would make the database server more performant.\n\nPlease investigate these and then report back any details of what you've\ndone to try to improve performance.\n\n best regards,\n\n -- Ben Scherrey\n\nhttps://www.postgresql.org/docs/11/sql-vacuum.html\n\nOn Tue, Jun 25, 2019 at 10:49 PM Hugh Ranalli <[email protected]> wrote:\n\n> I'm hoping people can help me figure out where to look to solve an odd\n> PostgreSQL performance problem.\n>\n> A bit of background: We have a client with a database of approximately 450\n> GB, that has a couple of tables storing large amounts of text, including\n> full HTML pages from the Internet. Last fall, they began experiencing\n> dramatic and exponentially decreasing performance. We track certain query\n> times, so we know how much time is being spent in calls to the database for\n> these functions. When this began, the times went from about an average of\n> approximate 200 ms to 400 ms, rapidly climbing each day before reaching 900\n> ms, figures we had never seen before, within 4 days, with no appreciable\n> change in usage. It was at this point that we restarted the database server\n> and times returned to the 400 ms range, but never back to their\n> long-running original levels. From this point onward, we had to restart the\n> database (originally the server, but eventually just the database process)\n> every 3-4 days, otherwise the application became unusable.\n>\n> As they were still on PostgreSQL 8.2, we persuaded them to finally\n> undertake our long-standing recommendation to upgrade, as there was no\n> possibility of support on that platform. That upgrade to 11.2 was completed\n> successfully in mid-May, and although times have not returned to their\n> original levels (they now average approximately 250 ms), the application\n> overall seems much more responsive and faster (application servers were not\n> changed, other than minor changes --full text search, explicit casts,\n> etc.-- to conform to PostgreSQL 11's requirements).\n>\n> What we continued to notice was a milder but still definite trend of\n> increased query times, during the course of each week, from the mid to high\n> 200 ms, to the high 300 ms to low 400 ms. Some years ago, someone had\n> noticed that as the number of \"raw_page\" columns in a particular table\n> grew, performance would decline. They wrote a script that once a week locks\n> the table, deletes the processed large columns (they are not needed after\n> processing), copies the remaining data to a backup table, truncates the\n> original table, then copies it back. When this script runs we see an\n> immediate change in performance, from 380 ms in the hour before the drop,\n> to 250 ms in the hour of the drop. As rows with these populated columns are\n> added during the course of a week, the performance drops, steadily, until\n> the next week's cleaning operation. 
Each week the performance increase is\n> clear and significant.\n>\n> What is perplexing is (and I have triple checked), that this table is\n> *not* referenced in any way in the queries that we time (it is referenced\n> by ongoing administrative and processing queries). The operation that\n> cleans it frees up approximately 15-20 GB of space each week. Our system\n> monitoring shows this change in free disk space, but this is 20 GB out of\n> approximately 300 GB of free space (free space is just under 40% of volume\n> size), so disk space does not seem to be an issue. The table in question is\n> about 21 GB in size, with about 20 GB in toast data, at its largest.\n>\n> Even odder, the queries we time *do* reference a much larger table, which\n> contains very similar data, and multiple columns of it. It is 355 GB in\n> size, with 318 GB in toast data. It grows continually, with no cleaning.\n>\n> If anyone has any suggestions as to what sort of statistics to look at, or\n> why this would be happening, they would be greatly appreciated.\n>\n> Thanks in advance,\n> Hugh\n>\n> --\n> Hugh Ranalli\n> Principal Consultant\n> White Horse Technology Consulting\n> e: [email protected]\n> c: +01-416-994-7957\n>\n\nHave you done a VACUUM ANALYZE FULL on your database? This needs to be done periodically to inform the server of the statistics of how the data and relations are distributed across the database. Without this bad assumptions by the planner can cause degradation of performance. Also, if you are using the default settings in postgres.conf then understand those are established to use the absolute minimum amount of resources possible which means not taking advantage of available memory or CPUs that may be available in your environment that would make the database server more performant.Please investigate these and then report back any details of what you've done to try to improve performance. best regards, -- Ben Scherreyhttps://www.postgresql.org/docs/11/sql-vacuum.htmlOn Tue, Jun 25, 2019 at 10:49 PM Hugh Ranalli <[email protected]> wrote:I'm hoping people can help me figure out where to look to solve an odd PostgreSQL performance problem. A bit of background: We have a client with a database of approximately 450 GB, that has a couple of tables storing large amounts of text, including full HTML pages from the Internet. Last fall, they began experiencing dramatic and exponentially decreasing performance. We track certain query times, so we know how much time is being spent in calls to the database for these functions. When this began, the times went from about an average of approximate 200 ms to 400 ms, rapidly climbing each day before reaching 900 ms, figures we had never seen before, within 4 days, with no appreciable change in usage. It was at this point that we restarted the database server and times returned to the 400 ms range, but never back to their long-running original levels. From this point onward, we had to restart the database (originally the server, but eventually just the database process) every 3-4 days, otherwise the application became unusable.As they were still on PostgreSQL 8.2, we persuaded them to finally undertake our long-standing recommendation to upgrade, as there was no possibility of support on that platform. 
That upgrade to 11.2 was completed successfully in mid-May, and although times have not returned to their original levels (they now average approximately 250 ms), the application overall seems much more responsive and faster (application servers were not changed, other than minor changes --full text search, explicit casts, etc.-- to conform to PostgreSQL 11's requirements).What we continued to notice was a milder but still definite trend of increased query times, during the course of each week, from the mid to high 200 ms, to the high 300 ms to low 400 ms. Some years ago, someone had noticed that as the number of \"raw_page\" columns in a particular table grew, performance would decline. They wrote a script that once a week locks the table, deletes the processed large columns (they are not needed after processing), copies the remaining data to a backup table, truncates the original table, then copies it back. When this script runs we see an immediate change in performance, from 380 ms in the hour before the drop, to 250 ms in the hour of the drop. As rows with these populated columns are added during the course of a week, the performance drops, steadily, until the next week's cleaning operation. Each week the performance increase is clear and significant.What is perplexing is (and I have triple checked), that this table is *not* referenced in any way in the queries that we time (it is referenced by ongoing administrative and processing queries). The operation that cleans it frees up approximately 15-20 GB of space each week. Our system monitoring shows this change in free disk space, but this is 20 GB out of approximately 300 GB of free space (free space is just under 40% of volume size), so disk space does not seem to be an issue. The table in question is about 21 GB in size, with about 20 GB in toast data, at its largest.Even odder, the queries we time *do* reference a much larger table, which contains very similar data, and multiple columns of it. It is 355 GB in size, with 318 GB in toast data. It grows continually, with no cleaning.If anyone has any suggestions as to what sort of statistics to look at, or why this would be happening, they would be greatly appreciated.Thanks in advance,Hugh--Hugh Ranalli Principal ConsultantWhite Horse Technology Consultinge: [email protected]: +01-416-994-7957",
"msg_date": "Tue, 25 Jun 2019 22:55:22 +0700",
"msg_from": "Benjamin Scherrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 10:55:22PM +0700, Benjamin Scherrey wrote:\n> Have you done a VACUUM ANALYZE FULL on your database? This needs to be done\n> periodically to inform the server of the statistics of how the data and\n> relations are distributed across the database.\n\nI think this is wrong.\n\nVACUUM and ANALYZE are good, but normally happen automatically by autovacuum.\n\nVACUUM FULL takes an exclusive lock on the table, and rewrites its data and\nindices from scratch. It's not normally necessary at all. It's probably most\nuseful to recover from badly-bloated table if autovacuum didn't run often\nenough, in which case the permanent solution is to change autovacuum settings\nto be more aggressive.\n\nDon't confuse with VACUUM FREEZE, which doesn't require exclusive lock, and\nnormally not necessary if autovacuum is working properly.\n\nJustin\n\n\n",
"msg_date": "Tue, 25 Jun 2019 11:01:06 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
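A minimal sketch (using only the standard statistics views, with no table names from the thread assumed) of how one might check whether autovacuum is in fact keeping up, before considering anything as heavy as VACUUM FULL:

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
           last_autovacuum,
           last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC      -- tables with the most dead tuples first
    LIMIT 20;

A persistently high dead_pct on busy tables is the usual sign that autovacuum settings should be made more aggressive, as suggested above.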
{
"msg_contents": "I didn't say do it all the time, I said if he hasn't done it already he\nshould try that as a way of ensuring the database server's understanding of\nthe data as it stands is correct. Otherwise there isn't enough information\nto suggest other solutions as there is no description of the operating\nsystem or resources available to the database server itself. Regardless, if\nyou have better suggestions please serve them up. :-)\n\n -- Ben Scherrey\n\nOn Tue, Jun 25, 2019 at 11:01 PM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Jun 25, 2019 at 10:55:22PM +0700, Benjamin Scherrey wrote:\n> > Have you done a VACUUM ANALYZE FULL on your database? This needs to be\n> done\n> > periodically to inform the server of the statistics of how the data and\n> > relations are distributed across the database.\n>\n> I think this is wrong.\n>\n> VACUUM and ANALYZE are good, but normally happen automatically by\n> autovacuum.\n>\n> VACUUM FULL takes an exclusive lock on the table, and rewrites its data and\n> indices from scratch. It's not normally necessary at all. It's probably\n> most\n> useful to recover from badly-bloated table if autovacuum didn't run often\n> enough, in which case the permanent solution is to change autovacuum\n> settings\n> to be more aggressive.\n>\n> Don't confuse with VACUUM FREEZE, which doesn't require exclusive lock, and\n> normally not necessary if autovacuum is working properly.\n>\n> Justin\n>\n\nI didn't say do it all the time, I said if he hasn't done it already he should try that as a way of ensuring the database server's understanding of the data as it stands is correct. Otherwise there isn't enough information to suggest other solutions as there is no description of the operating system or resources available to the database server itself. Regardless, if you have better suggestions please serve them up. :-) -- Ben ScherreyOn Tue, Jun 25, 2019 at 11:01 PM Justin Pryzby <[email protected]> wrote:On Tue, Jun 25, 2019 at 10:55:22PM +0700, Benjamin Scherrey wrote:\n> Have you done a VACUUM ANALYZE FULL on your database? This needs to be done\n> periodically to inform the server of the statistics of how the data and\n> relations are distributed across the database.\n\nI think this is wrong.\n\nVACUUM and ANALYZE are good, but normally happen automatically by autovacuum.\n\nVACUUM FULL takes an exclusive lock on the table, and rewrites its data and\nindices from scratch. It's not normally necessary at all. It's probably most\nuseful to recover from badly-bloated table if autovacuum didn't run often\nenough, in which case the permanent solution is to change autovacuum settings\nto be more aggressive.\n\nDon't confuse with VACUUM FREEZE, which doesn't require exclusive lock, and\nnormally not necessary if autovacuum is working properly.\n\nJustin",
"msg_date": "Tue, 25 Jun 2019 23:06:17 +0700",
"msg_from": "Benjamin Scherrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 11:49:03AM -0400, Hugh Ranalli wrote:\n> I'm hoping people can help me figure out where to look to solve an odd\n> PostgreSQL performance problem.\n\nWhat kernel? Version? OS?\n\nIf Linux, I wonder if transparent hugepages or KSM are enabled ? It seems\npossible that truncating the table is clearing enough RAM to mitigate the\nissue, similar to restarting the DB.\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n11.2 would have parallel query, and enabled by default. Are there other\nsettings you've changed (or not changed)?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nIt's possible that the \"administrative\" queries are using up lots of your\nshared_buffers, which are (also/more) needed by the customer-facing queries. I\nwould install pg_buffercache to investigate. Or, just pause the admin queries\nand see if that the issue goes away during that interval ?\n\nSELECT 1.0*COUNT(1)/sum(count(1))OVER(), COUNT(1), COUNT(nullif(isdirty,'f')), datname, COALESCE(c.relname, b.relfilenode::text), d.relname TOAST, 1.0*COUNT(nullif(isdirty,'f'))/count(1) dirtyfrac, avg(usagecount) FROM pg_buffercache b JOIN pg_database db ON b.reldatabase=db.oid LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) LEFT JOIN pg_class d ON c.oid=d.reltoastrelid GROUP BY 4,5,6 ORDER BY 1 DESC LIMIT 9; \n\nCould you send query plan for the slow (customer-facing) queries?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\n\n> A bit of background: We have a client with a database of approximately 450\n> GB, that has a couple of tables storing large amounts of text, including\n> full HTML pages from the Internet. Last fall, they began experiencing\n> dramatic and exponentially decreasing performance. We track certain query\n> times, so we know how much time is being spent in calls to the database for\n> these functions. When this began, the times went from about an average of\n> approximate 200 ms to 400 ms, rapidly climbing each day before reaching 900\n> ms, figures we had never seen before, within 4 days, with no appreciable\n> change in usage. It was at this point that we restarted the database server\n> and times returned to the 400 ms range, but never back to their\n> long-running original levels. From this point onward, we had to restart the\n> database (originally the server, but eventually just the database process)\n> every 3-4 days, otherwise the application became unusable.\n> \n> As they were still on PostgreSQL 8.2, we persuaded them to finally\n> undertake our long-standing recommendation to upgrade, as there was no\n> possibility of support on that platform. That upgrade to 11.2 was completed\n> successfully in mid-May, and although times have not returned to their\n> original levels (they now average approximately 250 ms), the application\n> overall seems much more responsive and faster (application servers were not\n> changed, other than minor changes --full text search, explicit casts,\n> etc.-- to conform to PostgreSQL 11's requirements).\n> \n> What we continued to notice was a milder but still definite trend of\n> increased query times, during the course of each week, from the mid to high\n> 200 ms, to the high 300 ms to low 400 ms. 
Some years ago, someone had\n> noticed that as the number of \"raw_page\" columns in a particular table\n> grew, performance would decline. They wrote a script that once a week locks\n> the table, deletes the processed large columns (they are not needed after\n> processing), copies the remaining data to a backup table, truncates the\n> original table, then copies it back. When this script runs we see an\n> immediate change in performance, from 380 ms in the hour before the drop,\n> to 250 ms in the hour of the drop. As rows with these populated columns are\n> added during the course of a week, the performance drops, steadily, until\n> the next week's cleaning operation. Each week the performance increase is\n> clear and significant.\n> \n> What is perplexing is (and I have triple checked), that this table is *not*\n> referenced in any way in the queries that we time (it is referenced by\n> ongoing administrative and processing queries). The operation that cleans\n> it frees up approximately 15-20 GB of space each week. Our system\n> monitoring shows this change in free disk space, but this is 20 GB out of\n> approximately 300 GB of free space (free space is just under 40% of volume\n> size), so disk space does not seem to be an issue. The table in question is\n> about 21 GB in size, with about 20 GB in toast data, at its largest.\n> \n> Even odder, the queries we time *do* reference a much larger table, which\n> contains very similar data, and multiple columns of it. It is 355 GB in\n> size, with 318 GB in toast data. It grows continually, with no cleaning.\n> \n> If anyone has any suggestions as to what sort of statistics to look at, or\n> why this would be happening, they would be greatly appreciated.\n\n\n",
"msg_date": "Tue, 25 Jun 2019 11:23:38 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
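As a rough sketch of how to break down heap versus TOAST versus index size for the tables discussed in this thread (the names 'harvested_job' and 'page' come from later messages; substitute your own):

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid))                      AS heap,
           pg_size_pretty(pg_table_size(oid) - pg_relation_size(oid)) AS toast_fsm_vm,
           pg_size_pretty(pg_indexes_size(oid))                       AS indexes,
           pg_size_pretty(pg_total_relation_size(oid))                AS total
    FROM pg_class
    WHERE relname IN ('harvested_job', 'page');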
{
"msg_contents": "On Tue, 25 Jun 2019 at 11:55, Benjamin Scherrey <[email protected]>\nwrote:\n\n> Have you done a VACUUM ANALYZE FULL on your database? This needs to be\n> done periodically to inform the server of the statistics of how the data\n> and relations are distributed across the database. Without this bad\n> assumptions by the planner can cause degradation of performance.\n>\n\nAutovacuum is enabled. As well, we had problems with autovacum running\nreliably in 8.2, so we are still running a nightly script that runs VACUUM\nANALYZE on the complete database. As for VACUUM ANALYZE FULL, the database\nunderwent a full dump and reload, which, as I understand it, would have\nrebuilt the indexes, followed by an ANALYZE to update the planner. So I'm\nnot sure a VACUUM ANALYZE FULL would have much effect. I'm also not sure\nhow it bears on the problem stated here, where the planner shouldn't even\nbe looking at this table in the queries we are timing.\n\nAlso, if you are using the default settings in postgres.conf then\n> understand those are established to use the absolute minimum amount of\n> resources possible which means not taking advantage of available memory or\n> CPUs that may be available in your environment that would make the database\n> server more performant.\n>\n\nNo, we attempted to tune these, using https://pgtune.leopard.in.ua. The\nfollowing values are from our install script (hence why they don't look\nexactly like their .conf versions). And, as someone else asked, transparent\nhuge pages are enabled:\n\n# DB Version: 11\n# OS Type: linux\n# DB Type: web\n# Total Memory (RAM): 128 GB\n# CPUs = threads per core * cores per socket * sockets\n# CPUs num: 256\n# Connections num: 250\n# Data Storage: ssd\n\n\n# Set via sysctl\n# 64 GB in 4096 byte pages on our 128GB production system\nshmall = 15777216\n# 48 GB on our 128GB production system\nshmmax = 51,539,607,552\n\n# Huge Pages\n# Set via sysctl\nhuge-pages-alloc = 0\n\nshared-buffers = 32GB\nwork-mem = 1024kB\nmaintenance-work-mem = 2GB\nmax-stack-depth = 4MB\neffective-io-concurrency = 200\nmax-parallel-workers-per-gather = 128\nmax-parallel-workers = 256\n\n#\n# postgresql-conf-archive\n#\nwal-buffers = 16MB\nmin-wal-size = 1GB\nmax-wal-size = 2GB\ncheckpoint-completion-target = 0.7\narchive-mode = on\narchive-timeout = 900\n\n#\n# postgresql-conf-query\n#\n# 75% of production memory\neffective-cache-size = 96GB\n# SSD drives\nrandom-page-cost = 1.1\ndefault-statistics-target = 100\n\n\nI'll be providing further details in reply to another message in the thread.\n\nThanks!\n\nOn Tue, 25 Jun 2019 at 11:55, Benjamin Scherrey <[email protected]> wrote:Have you done a VACUUM ANALYZE FULL on your database? This needs to be done periodically to inform the server of the statistics of how the data and relations are distributed across the database. Without this bad assumptions by the planner can cause degradation of performance. Autovacuum is enabled. As well, we had problems with autovacum running reliably in 8.2, so we are still running a nightly script that runs VACUUM ANALYZE on the complete database. As for VACUUM ANALYZE FULL, the database underwent a full dump and reload, which, as I understand it, would have rebuilt the indexes, followed by an ANALYZE to update the planner. So I'm not sure a VACUUM ANALYZE FULL would have much effect. 
I'm also not sure how it bears on the problem stated here, where the planner shouldn't even be looking at this table in the queries we are timing.Also, if you are using the default settings in postgres.conf then understand those are established to use the absolute minimum amount of resources possible which means not taking advantage of available memory or CPUs that may be available in your environment that would make the database server more performant. No, we attempted to tune these, using https://pgtune.leopard.in.ua. The following values are from our install script (hence why they don't look exactly like their .conf versions). And, as someone else asked, transparent huge pages are enabled:# DB Version: 11# OS Type: linux# DB Type: web# Total Memory (RAM): 128 GB# CPUs = threads per core * cores per socket * sockets# CPUs num: 256# Connections num: 250# Data Storage: ssd# Set via sysctl# 64 GB in 4096 byte pages on our 128GB production systemshmall = 15777216# 48 GB on our 128GB production systemshmmax = 51,539,607,552# Huge Pages# Set via sysctlhuge-pages-alloc = 0shared-buffers = 32GBwork-mem = 1024kBmaintenance-work-mem = 2GBmax-stack-depth = 4MBeffective-io-concurrency = 200max-parallel-workers-per-gather = 128max-parallel-workers = 256## postgresql-conf-archive#wal-buffers = 16MBmin-wal-size = 1GBmax-wal-size = 2GBcheckpoint-completion-target = 0.7archive-mode = onarchive-timeout = 900 ## postgresql-conf-query## 75% of production memoryeffective-cache-size = 96GB# SSD drivesrandom-page-cost = 1.1default-statistics-target = 100I'll be providing further details in reply to another message in the thread.Thanks!",
"msg_date": "Wed, 26 Jun 2019 14:06:38 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
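Since the install script above uses hyphenated parameter names, a simple sanity check (generic; nothing beyond pg_settings assumed) is to confirm what the running server actually reports for the values that were intended:

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                   'effective_cache_size', 'effective_io_concurrency',
                   'random_page_cost', 'max_parallel_workers',
                   'max_parallel_workers_per_gather',
                   'checkpoint_completion_target', 'huge_pages')
    ORDER BY name;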
{
"msg_contents": "On Tue, Jun 25, 2019 at 8:49 AM Hugh Ranalli <[email protected]> wrote:\n> What we continued to notice was a milder but still definite trend of increased query times, during the course of each week, from the mid to high 200 ms, to the high 300 ms to low 400 ms. Some years ago, someone had noticed that as the number of \"raw_page\" columns in a particular table grew, performance would decline. They wrote a script that once a week locks the table, deletes the processed large columns (they are not needed after processing), copies the remaining data to a backup table, truncates the original table, then copies it back. When this script runs we see an immediate change in performance, from 380 ms in the hour before the drop, to 250 ms in the hour of the drop. As rows with these populated columns are added during the course of a week, the performance drops, steadily, until the next week's cleaning operation. Each week the performance increase is clear and significant.\n\nCan you show us the definition of the table, including its indexes?\nCan you describe the data and distribution of values within the\ncolumns, particularly where they're indexed?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 26 Jun 2019 11:52:46 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Tue, 25 Jun 2019 at 12:23, Justin Pryzby <[email protected]> wrote:\n\n> What kernel? Version? OS?\n>\nUbuntu 18.04; current kernel is 4.15.0-51-generic4\n\nIf Linux, I wonder if transparent hugepages or KSM are enabled ? It seems\n> possible that truncating the table is clearing enough RAM to mitigate the\n> issue, similar to restarting the DB.\n> tail /sys/kernel/mm/ksm/run\n> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n> /sys/kernel/mm/transparent_hugepage/enabled\n> /sys/kernel/mm/transparent_hugepage/defrag\n>\n> https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n==> /sys/kernel/mm/ksm/run <==\n0\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n1\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\nalways [madvise] never\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\nalways defer defer+madvise [madvise] never\n\n From my research in preparing for the upgrade, I understood transparent\nhuge pages were a good thing, and should be enabled. Is this not correct?\n\n\n11.2 would have parallel query, and enabled by default. Are there other\n> settings you've changed (or not changed)?\n> https://wiki.postgresql.org/wiki/Server_Configuration\n\n\nI've just posted the parameters we are changing from the default in a\nprevious reply, so I won't repeat them here unless you want me to.\n\n\n> It's possible that the \"administrative\" queries are using up lots of your\n> shared_buffers, which are (also/more) needed by the customer-facing\n> queries. I\n> would install pg_buffercache to investigate. Or, just pause the admin\n> queries\n> and see if that the issue goes away during that interval ?\n>\n\nPausing the admin queries isn't an option in our environment, especially as\nthe issue reveals itself over the course of days, not minutes or hours.\n ?column? | count | count | datname | coalesce\n | toast | dirtyfrac | avg\n------------------------+---------+-------+-----------+-------------------------+----------------+----------------------------+--------------------\n 0.24904101286779650995 | 1044545 | 0 | mydb | position\n | | 0.000000000000000000000000 | 4.8035517857057379\n 0.16701241622795295199 | 700495 | 0 | mydb | stat_position_click\n | | 0.000000000000000000000000 | 1.9870234619804567\n 0.09935032779251879171 | 416702 | 6964 | mydb | pg_toast_19788\n | harvested_job | 0.01671218280689797505 | 1.9346079452462431\n 0.06979762146872315533 | 292750 | 0 | mydb | url\n | | 0.000000000000000000000000 | 4.9627873612297182\n 0.03795774662998486745 | 159205 | 0 | mydb |\nstat_sponsored_position | | 0.000000000000000000000000 |\n1.8412361420809648\n 0.02923155381784048663 | 122605 | 0 | mydb | pg_toast_20174\n | page | 0.000000000000000000000000 | 3.0259532645487541\n 0.02755283459406156353 | 115564 | 0 | mydb | location\n | | 0.000000000000000000000000 | 4.9953532241874632\n 0.02015273698468076320 | 84526 | 1122 | mydb | harvested_job\n | | 0.01327402219435439983 | 4.9922154130090150\n 0.01913348905375406298 | 80251 | 0 | mydb | pg_toast_20257\n | position_index | 0.000000000000000000000000 | 4.9880001495308470\n\nharvested_job is the rapidly growing \"problematic\" table I am talking\nabout. page is the 355 GB table that gets referenced on the public\nsearches. I'll google, but is there a place I should look to understand\nwhat I am seeing here? 
Also, Should pg_buffercache perhaps be run at the\nbeginning and end of the week, to see if there is a significant difference?\n\n\n> Could you send query plan for the slow (customer-facing) queries?\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\n>\n\nI can, but can I ask why this would matter? I'm not looking to optimise the\nquery (although I'm sure it could be; this is a legacy system with lots of\nbarnacles). The problem is that the same query performs increasingly slowly\nover the course of a week, seemingly in sync with the rows with a large\ntoast column added to one particular table (which, as I mentioned, isn't\nreferenced by the query in question). Wouldn't the plan be the same at both\nthe start of the week (when the problematic table is essentially empty) and\nat the end (when it is much larger)?\n\nThanks!\nHugh\n\nOn Tue, 25 Jun 2019 at 12:23, Justin Pryzby <[email protected]> wrote:What kernel? Version? OS?Ubuntu 18.04; current kernel is 4.15.0-51-generic4If Linux, I wonder if transparent hugepages or KSM are enabled ? It seems\npossible that truncating the table is clearing enough RAM to mitigate the\nissue, similar to restarting the DB.\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com==> /sys/kernel/mm/ksm/run <==0==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==1==> /sys/kernel/mm/transparent_hugepage/enabled <==always [madvise] never==> /sys/kernel/mm/transparent_hugepage/defrag <==always defer defer+madvise [madvise] never From my research in preparing for the upgrade, I understood transparent huge pages were a good thing, and should be enabled. Is this not correct?\n11.2 would have parallel query, and enabled by default. Are there other\nsettings you've changed (or not changed)?\nhttps://wiki.postgresql.org/wiki/Server_ConfigurationI've just posted the parameters we are changing from the default in a previous reply, so I won't repeat them here unless you want me to. \nIt's possible that the \"administrative\" queries are using up lots of your\nshared_buffers, which are (also/more) needed by the customer-facing queries. I\nwould install pg_buffercache to investigate. Or, just pause the admin queries\nand see if that the issue goes away during that interval ?Pausing the admin queries isn't an option in our environment, especially as the issue reveals itself over the course of days, not minutes or hours. ?column? 
| count | count | datname | coalesce | toast | dirtyfrac | avg ------------------------+---------+-------+-----------+-------------------------+----------------+----------------------------+-------------------- 0.24904101286779650995 | 1044545 | 0 | mydb | position | | 0.000000000000000000000000 | 4.8035517857057379 0.16701241622795295199 | 700495 | 0 | mydb | stat_position_click | | 0.000000000000000000000000 | 1.9870234619804567 0.09935032779251879171 | 416702 | 6964 | mydb | pg_toast_19788 | harvested_job | 0.01671218280689797505 | 1.9346079452462431 0.06979762146872315533 | 292750 | 0 | mydb | url | | 0.000000000000000000000000 | 4.9627873612297182 0.03795774662998486745 | 159205 | 0 | mydb | stat_sponsored_position | | 0.000000000000000000000000 | 1.8412361420809648 0.02923155381784048663 | 122605 | 0 | mydb | pg_toast_20174 | page | 0.000000000000000000000000 | 3.0259532645487541 0.02755283459406156353 | 115564 | 0 | mydb | location | | 0.000000000000000000000000 | 4.9953532241874632 0.02015273698468076320 | 84526 | 1122 | mydb | harvested_job | | 0.01327402219435439983 | 4.9922154130090150 0.01913348905375406298 | 80251 | 0 | mydb | pg_toast_20257 | position_index | 0.000000000000000000000000 | 4.9880001495308470harvested_job is the rapidly growing \"problematic\" table I am talking about. page is the 355 GB table that gets referenced on the public searches. I'll google, but is there a place I should look to understand what I am seeing here? Also, Should pg_buffercache perhaps be run at the beginning and end of the week, to see if there is a significant difference? \nCould you send query plan for the slow (customer-facing) queries?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAINI can, but can I ask why this would matter? I'm not looking to optimise the query (although I'm sure it could be; this is a legacy system with lots of barnacles). The problem is that the same query performs increasingly slowly over the course of a week, seemingly in sync with the rows with a large toast column added to one particular table (which, as I mentioned, isn't referenced by the query in question). Wouldn't the plan be the same at both the start of the week (when the problematic table is essentially empty) and at the end (when it is much larger)? Thanks!Hugh",
"msg_date": "Wed, 26 Jun 2019 15:00:43 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Wed, 26 Jun 2019 at 14:52, Peter Geoghegan <[email protected]> wrote:\n\n> Can you show us the definition of the table, including its indexes?\n> Can you describe the data and distribution of values within the\n> columns, particularly where they're indexed?\n>\n\nI'm sorry, but I'm not sure what you mean by the \"distribution of values\nwithin the columns.\" Can you clarify or provide an link to an example?\n\nThanks,\nHugh\n\nOn Wed, 26 Jun 2019 at 14:52, Peter Geoghegan <[email protected]> wrote:Can you show us the definition of the table, including its indexes?\nCan you describe the data and distribution of values within the\ncolumns, particularly where they're indexed?I'm sorry, but I'm not sure what you mean by the \"distribution of values within the columns.\" Can you clarify or provide an link to an example?Thanks,Hugh",
"msg_date": "Wed, 26 Jun 2019 15:02:15 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Wed, Jun 26, 2019 at 12:02 PM Hugh Ranalli <[email protected]> wrote:\n> I'm sorry, but I'm not sure what you mean by the \"distribution of values within the columns.\" Can you clarify or provide an link to an example?\n\nI would mostly just like to see the schema of the table in question,\nincluding indexes, and a high-level description of the nature of the\ndata in the table. Ideally, you would also include pg_stats.*\ninformation for all columns in the table. That will actually let us\nsee a summary of the data. Though you should be careful about leaking\nsensitive information that happens to be contained in the statistics,\nsuch as the most common values.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 26 Jun 2019 12:05:56 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
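A minimal sketch of pulling the per-column statistics summary requested above, leaving out most_common_vals/histogram_bounds to avoid leaking sensitive values (the table name is a placeholder taken from the thread):

    SELECT attname,
           null_frac,
           avg_width,
           n_distinct,
           correlation
    FROM pg_stats
    WHERE schemaname = 'public'
      AND tablename  = 'harvested_job'   -- placeholder; use the table in question
    ORDER BY attname;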
{
"msg_contents": "On 2019-Jun-26, Hugh Ranalli wrote:\n\n> From my research in preparing for the upgrade, I understood transparent\n> huge pages were a good thing, and should be enabled. Is this not correct?\n\nIt is not.\n\n> Wouldn't the plan be the same at both\n> the start of the week (when the problematic table is essentially empty) and\n> at the end (when it is much larger)?\n\nNot necessarily. Though, if a plan change was the culprit you would\nprobably see a sudden change in performance characteristics rather than\ngradual. Worth making sure, anyway.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Jun 2019 15:07:43 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2019-Jun-26, Hugh Ranalli wrote:\n>> From my research in preparing for the upgrade, I understood transparent\n>> huge pages were a good thing, and should be enabled. Is this not correct?\n\n> It is not.\n\nYeah ... they would be a good thing perhaps if the quality of the kernel\nimplementation were better. But there are way too many nasty corner\ncases, at least with the kernel versions people around here have\nexperimented with. You're best off to disable THP and instead manually\narrange for Postgres' shared memory to use huge pages. I forget where\nto look for docs about doing that, but I think we have some.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Jun 2019 15:18:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Wed, Jun 26, 2019 at 03:00:43PM -0400, Hugh Ranalli wrote:\n> Pausing the admin queries isn't an option in our environment, especially as\n> the issue reveals itself over the course of days, not minutes or hours.\n\nPerhaps you can pause it for a short while at EOW and see if there's a dramatic\nimprovement ?\n\n> ?column? | count | count | datname | coalesce | toast | dirtyfrac | avg\n> ------------------------+---------+-------+-----------+-------------------------+----------------+----------------------------+--------------------\n> 0.24904101286779650995 | 1044545 | 0 | mydb | position | | 0.000000000000000000000000 | 4.8035517857057379\n> 0.16701241622795295199 | 700495 | 0 | mydb | stat_position_click | | 0.000000000000000000000000 | 1.9870234619804567\n> 0.09935032779251879171 | 416702 | 6964 | mydb | pg_toast_19788 | harvested_job | 0.01671218280689797505 | 1.9346079452462431\n> 0.06979762146872315533 | 292750 | 0 | mydb | url | | 0.000000000000000000000000 | 4.9627873612297182\n> 0.03795774662998486745 | 159205 | 0 | mydb | stat_sponsored_position | | 0.000000000000000000000000 | 1.8412361420809648\n> 0.02923155381784048663 | 122605 | 0 | mydb | pg_toast_20174 | page | 0.000000000000000000000000 | 3.0259532645487541\n> 0.02755283459406156353 | 115564 | 0 | mydb | location | | 0.000000000000000000000000 | 4.9953532241874632\n> 0.02015273698468076320 | 84526 | 1122 | mydb | harvested_job | | 0.01327402219435439983 | 4.9922154130090150\n> 0.01913348905375406298 | 80251 | 0 | mydb | pg_toast_20257 | position_index | 0.000000000000000000000000 | 4.9880001495308470\n> \n> harvested_job is the rapidly growing \"problematic\" table I am talking\n> about. page is the 355 GB table that gets referenced on the public\n> searches. I'll google, but is there a place I should look to understand\n> what I am seeing here?\n\nI should label the columns:\n|buffer_fraction | nbuffers| ndirty| datname | relname | toast | dirtyfrac | avgusage\n\nIt looks like possibly harvested job is being index scanned, and its toast\ntable is using up many buffers. At the EOW, maybe that number is at the\nexpense of more important data. You could check pg_stat_user_tables/indexes\nfor stats on that. Possibly you could make use of index-only scans using\ncovering indexes (pg11 supports INCLUDE). Or maybe it's just too big (maybe it\nshould be partitioned or maybe index should be repacked?)\n\n> Also, Should pg_buffercache perhaps be run at the beginning and end of the\n> week, to see if there is a significant difference?\n\nYes; buffercache can be pretty volatile, so I'd save it numerous times each at\nbeginning and end of week.\n\n> > Could you send query plan for the slow (customer-facing) queries?\n> >\n> > https://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\n> \n> I can, but can I ask why this would matter?\n\nMy very tentative guess is that harvested_job itself isn't the issue, but some\nother, 3rd thing is the issue, which also increases (at least roughly) with\ntime, same as that table. It'd help to see the buffer cache hit rate for that\nquery (and its different query plan nodes), at beginning and EOW.\n\nJustin\n\n\n",
"msg_date": "Wed, 26 Jun 2019 19:24:42 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
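The pg_stat_user_tables/indexes check suggested above could look roughly like this (table name taken from the thread; adjust as needed). Comparing these counters at the beginning and end of the week would show whether scan patterns on this table change as it grows:

    SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
           n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    WHERE relname = 'harvested_job';

    SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
    FROM pg_stat_user_indexes
    WHERE relname = 'harvested_job';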
{
"msg_contents": "On Wed, 26 Jun 2019 at 15:18, Tom Lane <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > On 2019-Jun-26, Hugh Ranalli wrote:\n> >> From my research in preparing for the upgrade, I understood transparent\n> >> huge pages were a good thing, and should be enabled. Is this not\n> correct?\n>\n> > It is not.\n>\n> Yeah ... they would be a good thing perhaps if the quality of the kernel\n> implementation were better. But there are way too many nasty corner\n> cases, at least with the kernel versions people around here have\n> experimented with. You're best off to disable THP and instead manually\n> arrange for Postgres' shared memory to use huge pages. I forget where\n> to look for docs about doing that, but I think we have some.\n>\n\nWe've been dealing with some other production issues, so my apologies for\nnot replying sooner. I'm seeing now that I have confused huge pages with\n*transparent* huge pages. We have a maintenance window coming up this\nweekend, so we'll disable transparent huge pages and configure huge pages\nmanually. I found the docs here:\nhttps://www.postgresql.org/docs/11/kernel-resources.html#LINUX-HUGE-PAGES\n\nThank you very much!\n\nOn Wed, 26 Jun 2019 at 15:18, Tom Lane <[email protected]> wrote:Alvaro Herrera <[email protected]> writes:\n> On 2019-Jun-26, Hugh Ranalli wrote:\n>> From my research in preparing for the upgrade, I understood transparent\n>> huge pages were a good thing, and should be enabled. Is this not correct?\n\n> It is not.\n\nYeah ... they would be a good thing perhaps if the quality of the kernel\nimplementation were better. But there are way too many nasty corner\ncases, at least with the kernel versions people around here have\nexperimented with. You're best off to disable THP and instead manually\narrange for Postgres' shared memory to use huge pages. I forget where\nto look for docs about doing that, but I think we have some.We've been dealing with some other production issues, so my apologies for not replying sooner. I'm seeing now that I have confused huge pages with transparent huge pages. We have a maintenance window coming up this weekend, so we'll disable transparent huge pages and configure huge pages manually. I found the docs here: https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-HUGE-PAGESThank you very much!",
"msg_date": "Wed, 17 Jul 2019 10:29:28 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
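A back-of-the-envelope sketch of sizing the huge page reservation from SQL; note that the kernel-resources documentation linked above recommends measuring the postmaster's VmPeak instead, since the real requirement is somewhat larger than shared_buffers alone:

    SHOW huge_pages;   -- 'try' by default; 'on' refuses to start if pages can't be reserved

    SELECT current_setting('shared_buffers') AS shared_buffers,
           ceil(pg_size_bytes(current_setting('shared_buffers'))
                / (2 * 1024 * 1024)::numeric) AS approx_2mb_pages_needed;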
{
"msg_contents": "On 2019-Jun-26, Justin Pryzby wrote:\n\n> > Also, Should pg_buffercache perhaps be run at the beginning and end of the\n> > week, to see if there is a significant difference?\n> \n> Yes; buffercache can be pretty volatile, so I'd save it numerous times each at\n> beginning and end of week.\n\nBe careful with pg_buffercache though, as it can cause a hiccup in\noperation.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 13:55:51 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "Hi\n\nOn 2019-07-17 13:55:51 -0400, Alvaro Herrera wrote:\n> Be careful with pg_buffercache though, as it can cause a hiccup in\n> operation.\n\nI think that's been fixed a few years back:\n\ncommit 6e654546fb61f62cc982d0c8f62241b3b30e7ef8\nAuthor: Heikki Linnakangas <[email protected]>\nDate: 2016-09-29 13:16:30 +0300\n\n Don't bother to lock bufmgr partitions in pg_buffercache.\n\n That makes the view a lot less disruptive to use on a production system.\n Without the locks, you don't get a consistent snapshot across all buffers,\n but that's OK. It wasn't a very useful guarantee in practice.\n\n Ivan Kartyshov, reviewed by Tomas Vondra and Robert Haas.\n\n Discusssion: <[email protected]>\n\nso everything from 10 onwards ought to be fine.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:52:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "On Tue, 25 Jun 2019 at 12:23, Justin Pryzby <[email protected]> wrote:\n\n> It's possible that the \"administrative\" queries are using up lots of your\n> shared_buffers, which are (also/more) needed by the customer-facing\n> queries. I\n> would install pg_buffercache to investigate. Or, just pause the admin\n> queries\n> and see if that the issue goes away during that interval ?\n>\n> SELECT 1.0*COUNT(1)/sum(count(1))OVER(), COUNT(1),\n> COUNT(nullif(isdirty,'f')), datname, COALESCE(c.relname,\n> b.relfilenode::text), d.relname TOAST,\n> 1.0*COUNT(nullif(isdirty,'f'))/count(1) dirtyfrac, avg(usagecount) FROM\n> pg_buffercache b JOIN pg_database db ON b.reldatabase=db.oid LEFT JOIN\n> pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) LEFT JOIN pg_class\n> d ON c.oid=d.reltoastrelid GROUP BY 4,5,6 ORDER BY 1 DESC LIMIT 9;\n>\n\nI've been going by a couple of articles I found about interpreting\npg_buffercache (\nhttps://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers),\nand so far shared buffers look okay. Our database is 486 GB, with shared\nbuffers set to 32 GB. The article suggests a query that can provide a\nguideline for what shared buffers should be:\n\nSELECT\n pg_size_pretty(count(*) * 8192) as ideal_shared_buffers\nFROM\n pg_class c\nINNER JOIN\npg_buffercache b ON b.relfilenode = c.relfilenode\nINNER JOIN\n pg_database d ON (b.reldatabase = d.oid AND d.datname =\ncurrent_database())\nWHERE\n usagecount >= 3;\n\n\nThis comes out to 25 GB, and even dropping the usage count to 1 only raises\nit to 30 GB. I realise this is only a guideline, and I may bump it to 36\nGB, to give a bit more space.\n\nI did run some further queries to look at usage (based on the same\narticle), and most of the tables that have very high usage on all the\nbuffered data are 100% buffered, so, if I understand it correctly, there\nshould be little churn there. 
The others seem to have sufficient\nless-accessed space to make room for data that they need to buffer:\n\n\n\n relname | buffered | buffers_percent | percent_of_relation\n-------------------------+----------+-----------------+---------------------\n position | 8301 MB | 25.3 | 99.2\n stat_position_click | 7359 MB | 22.5 | 76.5\n url | 2309 MB | 7.0 | 100.0\n pg_toast_19788 | 1954 MB | 6.0 | 49.3\n (harvested_job)\n stat_sponsored_position | 1585 MB | 4.8 | 92.3\n location | 927 MB | 2.8 | 98.7\n pg_toast_20174 | 866 MB | 2.6 | 0.3\n (page)\n pg_toast_20257 | 678 MB | 2.1 | 92.9\n (position_index)\n harvested_job | 656 MB | 2.0 | 100.0\n stat_employer_click | 605 MB | 1.8 | 100.0\n\nusagecount >= 5\n relname | pg_size_pretty\n-------------------------+----------------\n harvested_job | 655 MB\n location | 924 MB\n pg_toast_19788 | 502 MB\n pg_toast_20174 | 215 MB\n pg_toast_20257 | 677 MB\n position | 8203 MB\n stat_employer_click | 605 MB\n stat_position_click | 79 MB\n stat_sponsored_position | 304 kB\n url | 2307 MB\n\nusagecount >= 3\n relname | pg_size_pretty\n-------------------------+----------------\n harvested_job | 656 MB\n location | 927 MB\n pg_toast_19788 | 1809 MB\n pg_toast_20174 | 589 MB\n pg_toast_20257 | 679 MB\n position | 8258 MB\n stat_employer_click | 605 MB\n stat_position_click | 716 MB\n stat_sponsored_position | 2608 kB\n url | 2309 MB\n\nusagecount >= 1\n relname | pg_size_pretty\n-------------------------+----------------\n harvested_job | 656 MB\n location | 928 MB\n pg_toast_19788 | 3439 MB\n pg_toast_20174 | 842 MB\n pg_toast_20257 | 680 MB\n position | 8344 MB\n stat_employer_click | 605 MB\n stat_position_click | 4557 MB\n stat_sponsored_position | 86 MB\n url | 2309 MB\n\n\nIf I'm misreading this, please let me know. I know people also asked about\nquery plans and schema, which I'm going to look at next; I've just been\nknocking off one thing at at time.\n\nThanks,\nHugh\n\nOn Tue, 25 Jun 2019 at 12:23, Justin Pryzby <[email protected]> wrote:It's possible that the \"administrative\" queries are using up lots of your\nshared_buffers, which are (also/more) needed by the customer-facing queries. I\nwould install pg_buffercache to investigate. Or, just pause the admin queries\nand see if that the issue goes away during that interval ?\n\nSELECT 1.0*COUNT(1)/sum(count(1))OVER(), COUNT(1), COUNT(nullif(isdirty,'f')), datname, COALESCE(c.relname, b.relfilenode::text), d.relname TOAST, 1.0*COUNT(nullif(isdirty,'f'))/count(1) dirtyfrac, avg(usagecount) FROM pg_buffercache b JOIN pg_database db ON b.reldatabase=db.oid LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) LEFT JOIN pg_class d ON c.oid=d.reltoastrelid GROUP BY 4,5,6 ORDER BY 1 DESC LIMIT 9; I've been going by a couple of articles I found about interpreting pg_buffercache (https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers), and so far shared buffers look okay. Our database is 486 GB, with shared buffers set to 32 GB. The article suggests a query that can provide a guideline for what shared buffers should be:SELECT pg_size_pretty(count(*) * 8192) as ideal_shared_buffersFROM pg_class cINNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenodeINNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())WHERE usagecount >= 3;This comes out to 25 GB, and even dropping the usage count to 1 only raises it to 30 GB. 
I realise this is only a guideline, and I may bump it to 36 GB, to give a bit more space.I did run some further queries to look at usage (based on the same article), and most of the tables that have very high usage on all the buffered data are 100% buffered, so, if I understand it correctly, there should be little churn there. The others seem to have sufficient less-accessed space to make room for data that they need to buffer: relname | buffered | buffers_percent | percent_of_relation -------------------------+----------+-----------------+--------------------- position | 8301 MB | 25.3 | 99.2 stat_position_click | 7359 MB | 22.5 | 76.5 url | 2309 MB | 7.0 | 100.0 pg_toast_19788 | 1954 MB | 6.0 | 49.3 (harvested_job) stat_sponsored_position | 1585 MB | 4.8 | 92.3 location | 927 MB | 2.8 | 98.7 pg_toast_20174 | 866 MB | 2.6 | 0.3 (page) pg_toast_20257 | 678 MB | 2.1 | 92.9 (position_index) harvested_job | 656 MB | 2.0 | 100.0 stat_employer_click | 605 MB | 1.8 | 100.0 usagecount >= 5 relname | pg_size_pretty -------------------------+---------------- harvested_job | 655 MB location | 924 MB pg_toast_19788 | 502 MB pg_toast_20174 | 215 MB pg_toast_20257 | 677 MB position | 8203 MB stat_employer_click | 605 MB stat_position_click | 79 MB stat_sponsored_position | 304 kB url | 2307 MB usagecount >= 3 relname | pg_size_pretty -------------------------+---------------- harvested_job | 656 MB location | 927 MB pg_toast_19788 | 1809 MB pg_toast_20174 | 589 MB pg_toast_20257 | 679 MB position | 8258 MB stat_employer_click | 605 MB stat_position_click | 716 MB stat_sponsored_position | 2608 kB url | 2309 MBusagecount >= 1 relname | pg_size_pretty -------------------------+---------------- harvested_job | 656 MB location | 928 MB pg_toast_19788 | 3439 MB pg_toast_20174 | 842 MB pg_toast_20257 | 680 MB position | 8344 MB stat_employer_click | 605 MB stat_position_click | 4557 MB stat_sponsored_position | 86 MB url | 2309 MBIf I'm misreading this, please let me know. I know people also asked about query plans and schema, which I'm going to look at next; I've just been knocking off one thing at at time.Thanks,Hugh",
"msg_date": "Thu, 18 Jul 2019 16:01:46 -0400",
"msg_from": "Hugh Ranalli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perplexing, regular decline in performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 16:01:46 -0400, Hugh Ranalli wrote:\n> I've been going by a couple of articles I found about interpreting\n> pg_buffercache (\n> https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers),\n> and so far shared buffers look okay. Our database is 486 GB, with shared\n> buffers set to 32 GB. The article suggests a query that can provide a\n> guideline for what shared buffers should be:\n> \n> SELECT\n> pg_size_pretty(count(*) * 8192) as ideal_shared_buffers\n> FROM\n> pg_class c\n> INNER JOIN\n> pg_buffercache b ON b.relfilenode = c.relfilenode\n> INNER JOIN\n> pg_database d ON (b.reldatabase = d.oid AND d.datname =\n> current_database())\n> WHERE\n> usagecount >= 3;\n\nIMO that's not a meaningful way to determine the ideal size of shared\nbuffers. Except for the case where shared buffers is bigger than the\nentire working set (not just the hot working set), it's going to give\nyou completely bogus results.\n\nPretty much by definition it cannot give you a shared buffers size\nbigger than what it's currently set to, given that it starts with the\nnumber of shared buffers.\n\nAnd there's plenty scenarios where you'll commonly see many frequently\n(but not most frequently) used buffers with a usagecount < 3 even =\n0. If you e.g. have a shared_buffers size that's just a few megabytes\ntoo small, you'll need to throw some buffers out of shared buffers -\nthat means the buffer replacement search will go through all shared\nbuffers and decrement the usagecount by one, until it finds a buffer\nwith a count of 0 (before it has decremented the count). Which means\nit's extremely likely that there's moments where a substantial number of\nfrequently used buffers have a lowered usagecount (perhaps even 0).\n\nTherefore, the above query will commonly give you a lower number than\nshared buffers, if your working set size is *bigger* than shared memory.\n\n\nI think you can assume that shared buffers is too big if a substantial\nportion of buffers have relfilenode IS NOT NULL (i.e. are unused); at\nleast if you don't continually also DROP/TRUNCATE relations.\n\nIf there's a large fluctuation about which parts of buffercache has a\nhigh usagecount, then that's a good indication that very frequently new\nbuffers are needed (because that lowers a good portion of buffers to\nusagecount 0).\n\nI've had decent success in the past getting insights with a query like:\n\nSELECT\n ceil(bufferid/(nr_buffers/subdivisions::float))::int AS part,\n to_char(SUM((relfilenode IS NOT NULL)::int) / count(*)::float * 100, '999D99') AS pct_used,\n to_char(AVG(usagecount), '9D9') AS avg_usagecount,\n to_char(SUM((usagecount=0)::int) / SUM((relfilenode IS NOT NULL)::int)::float8 * 100, '999D99') AS pct_0\nFROM\n pg_buffercache,\n (SELECT 10) AS x(subdivisions),\n (SELECT setting::int nr_buffers FROM pg_settings WHERE name = 'shared_buffers') s\nGROUP BY 1 ORDER BY 1;\n\nwhich basically subdivides pg_buffercache's output into 10 parts (or use\nas much as fit comfortable in one screen / terminal).\n\nHere's e.g. 
the output of a benchmark (pgbench) running against a\ndatabase that's considerably smaller than shared memory (15GB database,\n1.5GB shared_buffers):\n\n┌──────┬──────────┬────────────────┬─────────┐\n│ part │ pct_used │ avg_usagecount │ pct_0 │\n├──────┼──────────┼────────────────┼─────────┤\n│ 1 │ 100.00 │ 1.0 │ 42.75 │\n│ 2 │ 100.00 │ .6 │ 47.85 │\n│ 3 │ 100.00 │ .6 │ 47.25 │\n│ 4 │ 100.00 │ .6 │ 47.52 │\n│ 5 │ 100.00 │ .6 │ 47.18 │\n│ 6 │ 100.00 │ .5 │ 48.47 │\n│ 7 │ 100.00 │ .5 │ 49.00 │\n│ 8 │ 100.00 │ .5 │ 48.52 │\n│ 9 │ 100.00 │ .5 │ 49.27 │\n│ 10 │ 100.00 │ .5 │ 49.58 │\n│ 11 │ 99.98 │ .6 │ 46.88 │\n│ 12 │ 100.00 │ .6 │ 45.23 │\n│ 13 │ 100.00 │ .6 │ 45.03 │\n│ 14 │ 100.00 │ .6 │ 44.90 │\n│ 15 │ 100.00 │ .6 │ 46.08 │\n│ 16 │ 100.00 │ .6 │ 44.84 │\n│ 17 │ 100.00 │ .6 │ 45.88 │\n│ 18 │ 100.00 │ .6 │ 46.46 │\n│ 19 │ 100.00 │ .6 │ 46.64 │\n│ 20 │ 100.00 │ .6 │ 47.05 │\n└──────┴──────────┴────────────────┴─────────┘\n\nAs you can see usagecounts are pretty low overall. That's because the\nbuffer replacement rate is so high, that the usagecount is very\nfrequently reduced to 0 (to get new buffers).\n\nYou can infer from that, that unless you add a lot of shared buffers,\nyou're not likely going to make a huge difference (but if you set it\n16GB, it'd obviously look much better).\n\n\nIn contrast to that, here's pgbench running on a smaller database, that\nnearly fits into shared buffers (2GB DB, 1.5GB shared_buffers):\n\n┌──────┬──────────┬────────────────┬─────────┐\n│ part │ pct_used │ avg_usagecount │ pct_0 │\n├──────┼──────────┼────────────────┼─────────┤\n│ 1 │ 100.00 │ 3.9 │ 1.45 │\n│ 2 │ 100.00 │ 3.8 │ 1.34 │\n│ 3 │ 100.00 │ 3.8 │ 1.69 │\n│ 4 │ 100.00 │ 3.7 │ 1.96 │\n│ 5 │ 100.00 │ 3.7 │ 2.01 │\n│ 6 │ 100.00 │ 3.6 │ 2.23 │\n│ 7 │ 100.00 │ 3.5 │ 2.60 │\n│ 8 │ 100.00 │ 3.5 │ 2.27 │\n│ 9 │ 100.00 │ 3.4 │ 2.82 │\n│ 10 │ 100.00 │ 3.3 │ 2.92 │\n│ 11 │ 100.00 │ 3.2 │ 3.43 │\n│ 12 │ 100.00 │ 3.1 │ 3.41 │\n│ 13 │ 100.00 │ 3.7 │ 1.91 │\n│ 14 │ 100.00 │ 4.0 │ 1.09 │\n│ 15 │ 100.00 │ 3.9 │ 1.39 │\n│ 16 │ 100.00 │ 4.0 │ 1.22 │\n│ 17 │ 100.00 │ 4.1 │ 1.16 │\n│ 18 │ 100.00 │ 4.0 │ 1.19 │\n│ 19 │ 100.00 │ 4.0 │ 1.29 │\n│ 20 │ 100.00 │ 4.0 │ 1.42 │\n└──────┴──────────┴────────────────┴─────────┘\n\nAs you can see, there's many fewer buffers that have a usagecount of 0 -\nthat's because the buffer replacement rate is much lower (as most\nbuffers are in shared buffers), and thus the usagecount has time to\n\"increase\" regularly.\n\nHere you can guess that even just increasing shared buffers slightly,\nwould increase the cache hit ratio substantially. E.g. 
the same\nworkload, but with shared_buffes increased to 1.6GB:\n┌──────┬──────────┬────────────────┬─────────┐\n│ part │ pct_used │ avg_usagecount │ pct_0 │\n├──────┼──────────┼────────────────┼─────────┤\n│ 1 │ 100.00 │ 5.0 │ .00 │\n│ 2 │ 100.00 │ 5.0 │ .00 │\n│ 3 │ 100.00 │ 5.0 │ .00 │\n│ 4 │ 100.00 │ 5.0 │ .00 │\n│ 5 │ 100.00 │ 5.0 │ .00 │\n│ 6 │ 100.00 │ 5.0 │ .00 │\n│ 7 │ 100.00 │ 5.0 │ .00 │\n│ 8 │ 100.00 │ 5.0 │ .00 │\n│ 9 │ 100.00 │ 5.0 │ .00 │\n│ 10 │ 100.00 │ 5.0 │ .00 │\n│ 11 │ 100.00 │ 5.0 │ .00 │\n│ 12 │ 100.00 │ 5.0 │ .00 │\n│ 13 │ 100.00 │ 5.0 │ .00 │\n│ 14 │ 100.00 │ 5.0 │ .00 │\n│ 15 │ 100.00 │ 5.0 │ .00 │\n│ 16 │ 100.00 │ 5.0 │ .00 │\n│ 17 │ 100.00 │ 5.0 │ .00 │\n│ 18 │ 100.00 │ 5.0 │ .00 │\n│ 19 │ 93.27 │ 5.0 │ .00 │\n│ 20 │ .00 │ (null) │ (null) │\n└──────┴──────────┴────────────────┴─────────┘\n\n\nNow, in reality things are rarely quite this neat - pgbench has a\nuniform access pattern, which isn't that common in the real world.\n\n\nI also suggest to monitor how the buffer hit ratio develops over\ntime. E.g. by doing a query like\n\nSELECT datname, (blks_hit - blks_read)::float/NULLIF(blks_hit, 0)::float FROM pg_stat_database;\n\nalthough that's not perfect, because it gives you the ratio since the\nlast time the stats have been reset, making it hard to see more recent\nchanges. So you either need to reset the stats, or just compute the\ndifference to what the values where when you wanted to start observing.\n\nE.g.\n\nDROP TABLE IF EXISTS pg_stat_database_snap;CREATE TEMPORARY TABLE pg_stat_database_snap AS SELECT * FROM pg_stat_database;\n\nSELECT datname,\n (blks_hit - blks_read)::float/NULLIF(blks_hit, 0)::float\nFROM (\n SELECT datname,\n pd.blks_read - ps.blks_read AS blks_read,\n pd.blks_hit - ps.blks_hit AS blks_hit\n FROM pg_stat_database pd JOIN pg_stat_database_snap ps USING (datname) ) pd_diff;\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jul 2019 15:23:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perplexing, regular decline in performance"
}
] |
[
{
"msg_contents": "Hello team,\n\nWe have migrated our database from Oracle 12c to Postgres 11. I need your suggestions , we have sessions limit in Oracle = 3024 . Do we need to set the same connection limit in Postgres as well. How we can decide the max_connections limit for postgres. Are there any differences in managing connections in Oracle and postgres.\n\nSQL> show parameter sessions;\n\nNAME TYPE VALUE\n------------------------------------ ----------- ------------------------------\njava_max_sessionspace_size integer 0\njava_soft_sessionspace_limit integer 0\nlicense_max_sessions integer 0\nlicense_sessions_warning integer 0\nsessions integer 3024\nshared_server_sessions integer\nSQL>\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\nHello team,\n \nWe have migrated our database from Oracle 12c to Postgres 11. I need your suggestions , we have sessions limit in Oracle = 3024 .\nDo we need to set the same connection limit in Postgres as well. How we can decide the max_connections limit for postgres. Are there any differences in managing connections in Oracle and postgres.\n \nSQL> show parameter sessions;\n \nNAME TYPE VALUE\n------------------------------------ ----------- ------------------------------\njava_max_sessionspace_size integer 0\njava_soft_sessionspace_limit integer 0\nlicense_max_sessions integer 0\nlicense_sessions_warning integer 0\nsessions integer 3024\nshared_server_sessions integer\nSQL>\n \nRegards,\nDaulat",
"msg_date": "Wed, 26 Jun 2019 07:13:56 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Max_connections limit"
},
{
"msg_contents": "Daulat Ram wrote:\n> We have migrated our database from Oracle 12c to Postgres 11. I need your suggestions ,\n> we have sessions limit in Oracle = 3024 . Do we need to set the same connection limit\n> in Postgres as well. How we can decide the max_connections limit for postgres.\n> Are there any differences in managing connections in Oracle and postgres.\n\nI'd say that is way too high in both Oracle and PostgreSQL.\n\nSet the value to 50 or 100 and get a connection pooler if the\napplication cannot do that itself.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 26 Jun 2019 11:05:11 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Max_connections limit"
},
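One way to judge how far max_connections can safely be lowered is to measure how many backends are actually doing work at any given moment; a minimal sketch using only pg_stat_activity (PostgreSQL 10 or later for backend_type):

    SELECT current_setting('max_connections')::int               AS max_connections,
           count(*)                                              AS client_backends,
           count(*) FILTER (WHERE state = 'active')              AS active,
           count(*) FILTER (WHERE state = 'idle')                AS idle,
           count(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_xact
    FROM pg_stat_activity
    WHERE backend_type = 'client backend';

If active is consistently far below the configured limit, a pooler sized near the active count, as recommended above, is usually the better arrangement.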
{
"msg_contents": "You now that Postgres don’t have any shared_pool as Oracle, and the session information ( execution plan, etc..) are only available for the current session. Therefore I also highly recommend to us a connection poll as Laurent wrote, in order to have higher chance that some stuff is already cached in the shared session available. \r\n\r\nRegards\r\nHerve \r\n\r\n\r\n\r\nEnvoyé de mon iPhone\r\n\r\n> Le 26 juin 2019 à 11:05, Laurenz Albe <[email protected]> a écrit :\r\n> \r\n> Daulat Ram wrote:\r\n>> We have migrated our database from Oracle 12c to Postgres 11. I need your suggestions ,\r\n>> we have sessions limit in Oracle = 3024 . Do we need to set the same connection limit\r\n>> in Postgres as well. How we can decide the max_connections limit for postgres.\r\n>> Are there any differences in managing connections in Oracle and postgres.\r\n> \r\n> I'd say that is way too high in both Oracle and PostgreSQL.\r\n> \r\n> Set the value to 50 or 100 and get a connection pooler if the\r\n> application cannot do that itself.\r\n> \r\n> Yours,\r\n> Laurenz Albe\r\n> -- \r\n> Cybertec | https://www.cybertec-postgresql.com\r\n> \r\n> \r\n> \r\n",
"msg_date": "Wed, 26 Jun 2019 09:15:45 +0000",
"msg_from": "=?utf-8?B?SGVydsOpIFNjaHdlaXR6ZXIgKEhFUik=?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Max_connections limit"
},
{
"msg_contents": "On Wed, Jun 26, 2019 at 5:16 AM Hervé Schweitzer (HER) <\[email protected]> wrote:\n\n> You now that Postgres don’t have any shared_pool as Oracle, and the\n> session information ( execution plan, etc..) are only available for the\n> current session. Therefore I also highly recommend to us a connection poll\n> as Laurent wrote, in order to have higher chance that some stuff is already\n> cached in the shared session available.\n>\n> Regards\n> Herve\n>\n>\nThe most popular stand-alone connection pooler for PostgreSQL is the oddly\nnamed \"pgbouncer\": https://wiki.postgresql.org/wiki/PgBouncer\nThere are others, of course.\n\nPgPool is also very popular:\nhttps://www.pgpool.net/mediawiki/index.php/Main_Page\n\nSome applications can also manage a connection pool efficiently entirely\nwithin the application itself.\n\nConfiguring the maximum number of concurrent connections your database\nsupports incurs significant overhead in the running database. New\nconnections and disconnections also have a high overhead as they occur. By\nmoving the connecting/disconnecting logic to a connection pooler you remove\na lot of overhead and load from the database - letting it focus on the\nimportant stuff -- your queries.\n\nIt is amazing how many fewer actual connections you need to the database\nwhen you configure a pooler. Most connections from applications and users\nare idle most of the time. Even on busy web servers. They just keep that\npathway open in case they need to run a query to save on the overhead of\nhaving to open a new one every time. By using a pooler you only need to\nconfigure connections for the number of concurrent _queries_ rather than\nconcurrent application and user open but idle connections.\n\nOn Wed, Jun 26, 2019 at 5:16 AM Hervé Schweitzer (HER) <[email protected]> wrote:You now that Postgres don’t have any shared_pool as Oracle, and the session information ( execution plan, etc..) are only available for the current session. Therefore I also highly recommend to us a connection poll as Laurent wrote, in order to have higher chance that some stuff is already cached in the shared session available. \n\nRegards\nHerve The most popular stand-alone connection pooler for PostgreSQL is the oddly named \"pgbouncer\": https://wiki.postgresql.org/wiki/PgBouncerThere are others, of course. PgPool is also very popular: https://www.pgpool.net/mediawiki/index.php/Main_PageSome applications can also manage a connection pool efficiently entirely within the application itself.Configuring the maximum number of concurrent connections your database supports incurs significant overhead in the running database. New connections and disconnections also have a high overhead as they occur. By moving the connecting/disconnecting logic to a connection pooler you remove a lot of overhead and load from the database - letting it focus on the important stuff -- your queries.It is amazing how many fewer actual connections you need to the database when you configure a pooler. Most connections from applications and users are idle most of the time. Even on busy web servers. They just keep that pathway open in case they need to run a query to save on the overhead of having to open a new one every time. By using a pooler you only need to configure connections for the number of concurrent _queries_ rather than concurrent application and user open but idle connections.",
"msg_date": "Wed, 26 Jun 2019 08:38:46 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Max_connections limit"
},
{
"msg_contents": "From: Daulat Ram [mailto:[email protected]]\nSent: Wednesday, June 26, 2019 3:14 AM\nTo: [email protected]\nSubject: Max_connections limit\n\nHello team,\n\nWe have migrated our database from Oracle 12c to Postgres 11. I need your suggestions , we have sessions limit in Oracle = 3024 . Do we need to set the same connection limit in Postgres as well. How we can decide the max_connections limit for postgres. Are there any differences in managing connections in Oracle and postgres.\n\nSQL> show parameter sessions;\n\nNAME TYPE VALUE\n------------------------------------ ----------- ------------------------------\njava_max_sessionspace_size integer 0\njava_soft_sessionspace_limit integer 0\nlicense_max_sessions integer 0\nlicense_sessions_warning integer 0\nsessions integer 3024\nshared_server_sessions integer\nSQL>\n\nRegards,\nDaulat\n\n\nThe difference between Oracle and PG is that Oracle has \"built-in\" connection pooler, and PG does not.\nYou should use external pooler (i.e. PgBouncer) and reduce number of allowed connections in PG config to about 50, while allowing thousands client connection when configuring PgBouncer.\n\nRegards,\nIgor Neyman\n\n\n\n\n\n\n\n\n\nFrom: Daulat Ram [mailto:[email protected]] \nSent: Wednesday, June 26, 2019 3:14 AM\nTo: [email protected]\nSubject: Max_connections limit\n\n \nHello team,\n \nWe have migrated our database from Oracle 12c to Postgres 11. I need your suggestions , we have sessions limit in Oracle = 3024 .\nDo we need to set the same connection limit in Postgres as well. How we can decide the max_connections limit for postgres. Are there any differences in managing connections in Oracle and postgres.\n \nSQL> show parameter sessions;\n \nNAME TYPE VALUE\n------------------------------------ ----------- ------------------------------\njava_max_sessionspace_size integer 0\njava_soft_sessionspace_limit integer 0\nlicense_max_sessions integer 0\nlicense_sessions_warning integer 0\nsessions integer 3024\nshared_server_sessions integer\nSQL>\n \nRegards,\nDaulat\n\n \n\n \nThe difference between Oracle and PG is that Oracle has “built-in” connection pooler, and PG does not.\nYou should use external pooler (i.e. PgBouncer) and reduce number of allowed connections in PG config to about 50, while allowing thousands client connection when configuring PgBouncer.\n \nRegards,\nIgor Neyman",
"msg_date": "Wed, 26 Jun 2019 14:29:14 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Max_connections limit"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been wondering whether it is possible somehow to have the standard\ncolumn statistics to respect a certain operator class?\n\nThe reason why I am asking for this is that I have a UUID column with a\nunique index at it using a custom operator class which implies a\ndifferent sort order than for the default UUID operator class.\n\nThis results into planner mistakes when determining whether to use the\nindex for row selection or not. Too often it falls back into sequential\nscan due to this.\n\n\nThanx and Cheers,\n\n\tAncoron\n\n\n",
"msg_date": "Sat, 6 Jul 2019 11:02:27 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Custom opclass for column statistics?"
},
{
"msg_contents": "On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:\n>Hi,\n>\n>I've been wondering whether it is possible somehow to have the standard\n>column statistics to respect a certain operator class?\n>\n>The reason why I am asking for this is that I have a UUID column with a\n>unique index at it using a custom operator class which implies a\n>different sort order than for the default UUID operator class.\n>\n>This results into planner mistakes when determining whether to use the\n>index for row selection or not. Too often it falls back into sequential\n>scan due to this.\n>\n\nCan you share an example demonstrating the issue?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 6 Jul 2019 15:38:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Custom opclass for column statistics?"
},
{
"msg_contents": "Ancoron Luciferis <[email protected]> writes:\n> I've been wondering whether it is possible somehow to have the standard\n> column statistics to respect a certain operator class?\n\nIn principle, pg_statistic can represent stats for a non-default opclass.\nTeaching ANALYZE to collect such stats when appropriate, and then teaching\nthe planner to use them when appropriate, is left as an exercise for the\nreader.\n\nI think the \"when appropriate\" bit is actually the hardest part of that.\nPossibly, if you were satisfied with a relatively manual approach,\nyou could proceed by using CREATE STATISTICS to declare interest in\nkeeping standard stats for a non-default sort order. Not sure what to\ndo if you want it to be automatic, because I don't think people would\nhold still for having ANALYZE collect stats for any random non-default\nopclass automatically. Maybe a new opclass property?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jul 2019 09:58:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Custom opclass for column statistics?"
},
{
"msg_contents": "On 06/07/2019 15:38, Tomas Vondra wrote:\n> On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:\n>> Hi,\n>>\n>> I've been wondering whether it is possible somehow to have the standard\n>> column statistics to respect a certain operator class?\n>>\n>> The reason why I am asking for this is that I have a UUID column with a\n>> unique index at it using a custom operator class which implies a\n>> different sort order than for the default UUID operator class.\n>>\n>> This results into planner mistakes when determining whether to use the\n>> index for row selection or not. Too often it falls back into sequential\n>> scan due to this.\n>>\n> \n> Can you share an example demonstrating the issue?\n> \n> \n> regards\n> \n\nYes, I have an opclass as follows:\n\nCREATE OPERATOR CLASS uuid_timestamp_ops FOR TYPE uuid\n USING btree AS\n OPERATOR 1 <*,\n OPERATOR 1 <~ (uuid, timestamp with time zone),\n OPERATOR 2 <=*,\n OPERATOR 2 <=~ (uuid, timestamp with time zone),\n OPERATOR 3 =,\n OPERATOR 3 =~ (uuid, timestamp with time zone),\n OPERATOR 4 >=*,\n OPERATOR 4 >=~ (uuid, timestamp with time zone),\n OPERATOR 5 >*,\n OPERATOR 5 >~ (uuid, timestamp with time zone),\n FUNCTION 1 uuid_timestamp_cmp(uuid, uuid),\n FUNCTION 1 uuid_timestamp_only_cmp(uuid, timestamp\nwith time zone),\n FUNCTION 2 uuid_timestamp_sortsupport(internal)\n;\n\n...and e.g. operator \"<*\" is defined as:\n\nCREATE FUNCTION uuid_timestamp_lt(uuid, uuid)\nRETURNS bool\nAS 'MODULE_PATHNAME', 'uuid_timestamp_lt'\nLANGUAGE C\nIMMUTABLE\nLEAKPROOF\nSTRICT\nPARALLEL SAFE;\n\nCOMMENT ON FUNCTION uuid_timestamp_lt(uuid, uuid) IS 'lower than';\n\nCREATE OPERATOR <* (\n LEFTARG = uuid,\n RIGHTARG = uuid,\n PROCEDURE = uuid_timestamp_lt,\n COMMUTATOR = '>*',\n NEGATOR = '>=*',\n RESTRICT = scalarltsel,\n JOIN = scalarltjoinsel\n);\n\n\nThe function \"uuid_timestamp_lt\" is basically defined as follows:\n1. if not version 1 UUID fallback to standard uuid compare\n2. extract timestamp values and compare\n3. if equal timestamps fallback to standard uuid compare\n\n...so that a chronological order is established.\n\n\nThe test table is created as follows:\n\nCREATE TABLE uuid_v1_ext (id uuid);\nCREATE UNIQUE INDEX idx_uuid_v1_ext ON uuid_v1_ext (id uuid_timestamp_ops);\n\n\nThe values for \"histogram_bounds\" of the test table look like this (due\nto the default sort order for standard type UUID):\n\n00003789-97bf-11e9-b6bb-e03f49f7f733\n008b88f8-6deb-11e9-901a-e03f4947f477\n010a8b22-586a-11e9-8258-e03f49ce78f3\n...\n6f682e68-978d-11e9-901a-e03f4947f477\n6ff412ee-926f-11e9-901a-e03f4947f477\n7079ffe2-642f-11e9-b0cc-e03f49e7fd3b\n70ffaeca-4645-11e9-adf9-e03f494677fb\n...\nfef26b41-9b9d-11e9-b0cc-e03f49e7fd3b\nff779ce8-9e52-11e9-8258-e03f49ce78f3\nffff6bfc-4de4-11e9-b0d4-e03f49d6f6bf\n\n...and I think that's where the planner gets the decision for a query\nsuch as:\n\nDELETE FROM uuid_v1_ext WHERE id <* '4bdf6f81-56ad-11e9-8258-e03f49ce78f3';\n\n...which then get's executed as sequential scan instead of an index scan.\n\nI was also thinking about changing the selectivity function used by the\ncustom operator, but I didn't find any hints how to implement that\nwithout duplicating a lot of internal code.\n\n\nCheers,\n\n\tAncoron\n\n\n\n",
"msg_date": "Sat, 6 Jul 2019 17:35:33 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Custom opclass for column statistics?"
},
{
"msg_contents": "On Sat, Jul 06, 2019 at 05:35:33PM +0200, Ancoron Luciferis wrote:\n>On 06/07/2019 15:38, Tomas Vondra wrote:\n>> On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:\n>>> Hi,\n>>>\n>>> I've been wondering whether it is possible somehow to have the standard\n>>> column statistics to respect a certain operator class?\n>>>\n>>> The reason why I am asking for this is that I have a UUID column with a\n>>> unique index at it using a custom operator class which implies a\n>>> different sort order than for the default UUID operator class.\n>>>\n>>> This results into planner mistakes when determining whether to use the\n>>> index for row selection or not. Too often it falls back into sequential\n>>> scan due to this.\n>>>\n>>\n>> Can you share an example demonstrating the issue?\n>>\n>>\n>> regards\n>>\n>\n>Yes, I have an opclass as follows:\n>\n>CREATE OPERATOR CLASS uuid_timestamp_ops FOR TYPE uuid\n> USING btree AS\n> OPERATOR 1 <*,\n> OPERATOR 1 <~ (uuid, timestamp with time zone),\n> OPERATOR 2 <=*,\n> OPERATOR 2 <=~ (uuid, timestamp with time zone),\n> OPERATOR 3 =,\n> OPERATOR 3 =~ (uuid, timestamp with time zone),\n> OPERATOR 4 >=*,\n> OPERATOR 4 >=~ (uuid, timestamp with time zone),\n> OPERATOR 5 >*,\n> OPERATOR 5 >~ (uuid, timestamp with time zone),\n> FUNCTION 1 uuid_timestamp_cmp(uuid, uuid),\n> FUNCTION 1 uuid_timestamp_only_cmp(uuid, timestamp\n>with time zone),\n> FUNCTION 2 uuid_timestamp_sortsupport(internal)\n>;\n>\n>...and e.g. operator \"<*\" is defined as:\n>\n>CREATE FUNCTION uuid_timestamp_lt(uuid, uuid)\n>RETURNS bool\n>AS 'MODULE_PATHNAME', 'uuid_timestamp_lt'\n>LANGUAGE C\n>IMMUTABLE\n>LEAKPROOF\n>STRICT\n>PARALLEL SAFE;\n>\n>COMMENT ON FUNCTION uuid_timestamp_lt(uuid, uuid) IS 'lower than';\n>\n>CREATE OPERATOR <* (\n> LEFTARG = uuid,\n> RIGHTARG = uuid,\n> PROCEDURE = uuid_timestamp_lt,\n> COMMUTATOR = '>*',\n> NEGATOR = '>=*',\n> RESTRICT = scalarltsel,\n> JOIN = scalarltjoinsel\n>);\n>\n>\n>The function \"uuid_timestamp_lt\" is basically defined as follows:\n>1. if not version 1 UUID fallback to standard uuid compare\n>2. extract timestamp values and compare\n>3. if equal timestamps fallback to standard uuid compare\n>\n>...so that a chronological order is established.\n>\n>\n>The test table is created as follows:\n>\n>CREATE TABLE uuid_v1_ext (id uuid);\n>CREATE UNIQUE INDEX idx_uuid_v1_ext ON uuid_v1_ext (id uuid_timestamp_ops);\n>\n>\n>The values for \"histogram_bounds\" of the test table look like this (due\n>to the default sort order for standard type UUID):\n>\n>00003789-97bf-11e9-b6bb-e03f49f7f733\n>008b88f8-6deb-11e9-901a-e03f4947f477\n>010a8b22-586a-11e9-8258-e03f49ce78f3\n>...\n>6f682e68-978d-11e9-901a-e03f4947f477\n>6ff412ee-926f-11e9-901a-e03f4947f477\n>7079ffe2-642f-11e9-b0cc-e03f49e7fd3b\n>70ffaeca-4645-11e9-adf9-e03f494677fb\n>...\n>fef26b41-9b9d-11e9-b0cc-e03f49e7fd3b\n>ff779ce8-9e52-11e9-8258-e03f49ce78f3\n>ffff6bfc-4de4-11e9-b0d4-e03f49d6f6bf\n>\n>...and I think that's where the planner gets the decision for a query\n>such as:\n>\n>DELETE FROM uuid_v1_ext WHERE id <* '4bdf6f81-56ad-11e9-8258-e03f49ce78f3';\n>\n>...which then get's executed as sequential scan instead of an index scan.\n>\n>I was also thinking about changing the selectivity function used by the\n>custom operator, but I didn't find any hints how to implement that\n>without duplicating a lot of internal code.\n>\n\nNot sure, I'm not very familiar with this code, so I'd have to play with\nit and try things. But that's hard when I don't have any code. 
Would it\nbe possible to share a small self-contained test case?\n\nI wonder what does uuid_timestamp_cmp do? I suppose it first compares by\na timestamp extracted from the UUID, right?\n\nIt'd be interesting to see\n\n(a) statistics for the column from pg_stats, both for the table and\nindex (which should have been built using the custom opclass, I think).\n\n(b) EXPLAIN ANALYZE for queries with your opclass, and perhaps with the\ndefault one (that can't use the timestamp condition, but it should be\npossible to generate smallers/largest uuid for a timestamp).\n\nBTW which PostgreSQL version is this?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 6 Jul 2019 17:57:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Custom opclass for column statistics?"
},
{
"msg_contents": "On 06/07/2019 17:57, Tomas Vondra wrote:\n> On Sat, Jul 06, 2019 at 05:35:33PM +0200, Ancoron Luciferis wrote:\n>> On 06/07/2019 15:38, Tomas Vondra wrote:\n>>> On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:\n>>>> Hi,\n>>>>\n>>>> I've been wondering whether it is possible somehow to have the standard\n>>>> column statistics to respect a certain operator class?\n>>>>\n>>>> The reason why I am asking for this is that I have a UUID column with a\n>>>> unique index at it using a custom operator class which implies a\n>>>> different sort order than for the default UUID operator class.\n>>>>\n>>>> This results into planner mistakes when determining whether to use the\n>>>> index for row selection or not. Too often it falls back into sequential\n>>>> scan due to this.\n>>>>\n>>>\n>>> Can you share an example demonstrating the issue?\n>>>\n>>>\n>>> regards\n>>>\n>>\n>> Yes, I have an opclass as follows:\n>>\n>> CREATE OPERATOR CLASS uuid_timestamp_ops FOR TYPE uuid\n>> USING btree AS\n>> OPERATOR 1 <*,\n>> OPERATOR 1 <~ (uuid, timestamp with time zone),\n>> OPERATOR 2 <=*,\n>> OPERATOR 2 <=~ (uuid, timestamp with time zone),\n>> OPERATOR 3 =,\n>> OPERATOR 3 =~ (uuid, timestamp with time zone),\n>> OPERATOR 4 >=*,\n>> OPERATOR 4 >=~ (uuid, timestamp with time zone),\n>> OPERATOR 5 >*,\n>> OPERATOR 5 >~ (uuid, timestamp with time zone),\n>> FUNCTION 1 uuid_timestamp_cmp(uuid, uuid),\n>> FUNCTION 1 uuid_timestamp_only_cmp(uuid, timestamp\n>> with time zone),\n>> FUNCTION 2 uuid_timestamp_sortsupport(internal)\n>> ;\n>>\n>> ...and e.g. operator \"<*\" is defined as:\n>>\n>> CREATE FUNCTION uuid_timestamp_lt(uuid, uuid)\n>> RETURNS bool\n>> AS 'MODULE_PATHNAME', 'uuid_timestamp_lt'\n>> LANGUAGE C\n>> IMMUTABLE\n>> LEAKPROOF\n>> STRICT\n>> PARALLEL SAFE;\n>>\n>> COMMENT ON FUNCTION uuid_timestamp_lt(uuid, uuid) IS 'lower than';\n>>\n>> CREATE OPERATOR <* (\n>> LEFTARG = uuid,\n>> RIGHTARG = uuid,\n>> PROCEDURE = uuid_timestamp_lt,\n>> COMMUTATOR = '>*',\n>> NEGATOR = '>=*',\n>> RESTRICT = scalarltsel,\n>> JOIN = scalarltjoinsel\n>> );\n>>\n>>\n>> The function \"uuid_timestamp_lt\" is basically defined as follows:\n>> 1. if not version 1 UUID fallback to standard uuid compare\n>> 2. extract timestamp values and compare\n>> 3. 
if equal timestamps fallback to standard uuid compare\n>>\n>> ...so that a chronological order is established.\n>>\n>>\n>> The test table is created as follows:\n>>\n>> CREATE TABLE uuid_v1_ext (id uuid);\n>> CREATE UNIQUE INDEX idx_uuid_v1_ext ON uuid_v1_ext (id\n>> uuid_timestamp_ops);\n>>\n>>\n>> The values for \"histogram_bounds\" of the test table look like this (due\n>> to the default sort order for standard type UUID):\n>>\n>> 00003789-97bf-11e9-b6bb-e03f49f7f733\n>> 008b88f8-6deb-11e9-901a-e03f4947f477\n>> 010a8b22-586a-11e9-8258-e03f49ce78f3\n>> ...\n>> 6f682e68-978d-11e9-901a-e03f4947f477\n>> 6ff412ee-926f-11e9-901a-e03f4947f477\n>> 7079ffe2-642f-11e9-b0cc-e03f49e7fd3b\n>> 70ffaeca-4645-11e9-adf9-e03f494677fb\n>> ...\n>> fef26b41-9b9d-11e9-b0cc-e03f49e7fd3b\n>> ff779ce8-9e52-11e9-8258-e03f49ce78f3\n>> ffff6bfc-4de4-11e9-b0d4-e03f49d6f6bf\n>>\n>> ...and I think that's where the planner gets the decision for a query\n>> such as:\n>>\n>> DELETE FROM uuid_v1_ext WHERE id <*\n>> '4bdf6f81-56ad-11e9-8258-e03f49ce78f3';\n>>\n>> ...which then get's executed as sequential scan instead of an index scan.\n>>\n>> I was also thinking about changing the selectivity function used by the\n>> custom operator, but I didn't find any hints how to implement that\n>> without duplicating a lot of internal code.\n>>\n> \n> Not sure, I'm not very familiar with this code, so I'd have to play with\n> it and try things. But that's hard when I don't have any code. Would it\n> be possible to share a small self-contained test case?\n\nMy test uses historically generated UUID's and is currently not\nautomated (generates ~120M UUID's over a period of ~4 months). The tool\nI am using to generate the UUID's is a Java tool, so not very automation\nfriendly (using Maven, which downloads a lot at first build time).\n\nThe resulting data is ~4 GiB in size, but here is a guide I use for my\nmanual testing:\n\nhttps://gist.github.com/ancoron/b08ac4b1ceafda2a38ff12030c011385\n\nPlease note that my testing involves 4 SSD's (to avoid too much mixed\nI/O but not for functionality):\n1.) system + WAL\n2.) generated UUID files (for reading)\n3.) table data (tablespace \"fast\")\n4.) index data (tablespace \"faster\")\n\n\n> \n> I wonder what does uuid_timestamp_cmp do? 
I suppose it first compares by\n> a timestamp extracted from the UUID, right?\n\nexactly:\n\nhttps://github.com/ancoron/pg-uuid-ext/blob/master/uuid_ext.c#L234\n\n> \n> It'd be interesting to see\n> \n> (a) statistics for the column from pg_stats, both for the table and\n> index (which should have been built using the custom opclass, I think).\n\nFor the table column:\n\nschemaname | public\ntablename | uuid_v1_ext\nattname | id\ninherited | f\nnull_frac | 0\navg_width | 16\nn_distinct | -1\nmost_common_vals |\nmost_common_freqs |\nhistogram_bounds | {00003789-97bf-11e9-b6bb-e03f49f7f733, ...\ncorrelation | 0.00448091\nmost_common_elems |\nmost_common_elem_freqs |\nelem_count_histogram |\n\n\nThere is no statistic for the index (or I don't know how to retrieve it).\n\n> \n> (b) EXPLAIN ANALYZE for queries with your opclass, and perhaps with the\n> default one (that can't use the timestamp condition, but it should be\n> possible to generate smallers/largest uuid for a timestamp).\n\nAfter some further testing it seems that I am comparing apples with\nbananas to a certain extend as I also have another index using a uuid\ntimestamp extraction function and then using a timestamp for selecting\nthe entries to delete.\n\nFor uuid 4bdf6f81-56ad-11e9-8258-e03f49ce78f3, the query then looks as\nfollows:\n\nDELETE FROM uuid_v1_timestamp WHERE uuid_v1_timestamp(id) < '2019-04-04\n07:43:11.3776';\n\n...resulting in a plan like:\n\n Delete on uuid_v1_timestamp (cost=0.57..779651.49 rows=30077367 width=6)\n -> Index Scan using idx_uuid_v1_timestamp on uuid_v1_timestamp\n(cost=0.57..779651.49 rows=30077367 width=6)\n Index Cond: (uuid_v1_timestamp(id) < '2019-04-04\n07:43:11.3776+00'::timestamp with time zone)\n\nSo, as my opclass basically does the same internally, I was expecting\nthe same behavior but it turned out that the statistics are just off a\nbit. When changing the timestamp to a lower value (resulting in less\nrows selected), the planner correctly uses an index scan, e.g.:\n\n Delete on uuid_v1_ext (cost=0.57..767334.76 rows=1829994 width=6)\n -> Index Scan using idx_uuid_v1_ext on uuid_v1_ext\n(cost=0.57..767334.76 rows=1829994 width=6)\n Index Cond: (id <* '4e91eb0a-4e91-11e9-83fd-e03f49ef76f7'::uuid)\n\n...despite the fact that the bare uuid value is pretty high (the\ntimestamp value is 2019-03-25 00:00:28.846626+00).\n\nWhat I can see is that the row estimates are really off here:\n\n Seq Scan on uuid_v1_ext (cost=0.00..2184455.80 rows=46969870 width=16)\n(actual time=0.029..9372.892 rows=30000000 loops=1)\n Filter: (id <* '4bdf6f81-56ad-11e9-8258-e03f49ce78f3'::uuid)\n Rows Removed by Filter: 92000000\n Planning Time: 0.152 ms\n Execution Time: 10139.718 ms\n\nvs.\n\n Index Only Scan using idx_uuid_v1_ext on uuid_v1_ext\n(cost=0.57..767334.76 rows=1829994 width=16) (actual\ntime=0.042..3255.211 rows=19709001 loops=1)\n Index Cond: (id <* '4e91eb0a-4e91-11e9-83fd-e03f49ef76f7'::uuid)\n Heap Fetches: 19709001\n Planning Time: 0.182 ms\n Execution Time: 3763.380 ms\n\n^^ last case off by a factor of 10?\n\nHowever, I think the lower rows range for switching to an index scan may\nalso be influenced by the index width (16 byte uuid vs. 8 byte\ntimestamp). Or am I wrong here?\n\n> \n> BTW which PostgreSQL version is this?\n\nI am testing mainly on 11.4. I did do any testing on 12 beta so far.\n\n> \n> regards\n> \n\n\n\n",
"msg_date": "Sun, 7 Jul 2019 01:17:09 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Custom opclass for column statistics?"
},
{
"msg_contents": "On 06/07/2019 15:58, Tom Lane wrote:\n> Ancoron Luciferis <[email protected]> writes:\n>> I've been wondering whether it is possible somehow to have the standard\n>> column statistics to respect a certain operator class?\n> \n> In principle, pg_statistic can represent stats for a non-default opclass.\n> Teaching ANALYZE to collect such stats when appropriate, and then teaching\n> the planner to use them when appropriate, is left as an exercise for the\n> reader.\n\nHehe, now that you are saying it, I realize what I was actually asking\nfor with this... ;)\n\n> \n> I think the \"when appropriate\" bit is actually the hardest part of that.\n> Possibly, if you were satisfied with a relatively manual approach,\n> you could proceed by using CREATE STATISTICS to declare interest in\n> keeping standard stats for a non-default sort order. Not sure what to\n> do if you want it to be automatic, because I don't think people would\n> hold still for having ANALYZE collect stats for any random non-default\n> opclass automatically. Maybe a new opclass property?\n\nI totally agree with the complications around all that.\n\nNow I think if I want better statistics and better plans for my new\ntime-sorted index, I will need a new data type for which I can set the\nopclass as default, which also would provide users the guarantee that\nthey'll get what they expect.\n\nThanx and cheers,\n\n\tAncoron\n\n\n",
"msg_date": "Wed, 10 Jul 2019 00:16:38 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Custom opclass for column statistics?"
}
] |
[
{
"msg_contents": "Hi!\n\nI am attempting to replicate YouTube's subscription feed. Each user has a list \nof channel IDs (as text[]) that they are subscribed to. The `users` table \nlooks like:\n\n```\n=# \\d users\n Table \"public.users\"\n Column | Type | Collation | Nullable | Default\n-------------------+--------------------------+-----------+----------\n+---------\n email | text | | not null |\n subscriptions | text[] | | |\n feed_needs_update | boolean | | |\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (email)\n \"email_unique_idx\" UNIQUE, btree (lower(email))\n \"users_email_key\" UNIQUE CONSTRAINT, btree (email)\n```\n\nFor storing videos, I have a table `channel_videos` from which I want to \nselect all videos where the video's `ucid` (channel ID) is in the user's \nsubscriptions. `channel_videos` looks like:\n\n```\n=# \\d channel_videos\n Table \"public.channel_videos\"\n Column | Type | Collation | Nullable | \nDefault\n--------------------+--------------------------+-----------+----------\n+---------\n id | text | | not null |\n title | text | | |\n published | timestamp with time zone | | |\n updated | timestamp with time zone | | |\n ucid | text | | |\n author | text | | |\n views | bigint | | |\nIndexes:\n \"channel_videos_id_key\" UNIQUE CONSTRAINT, btree (id)\n \"channel_videos_published_idx\" btree (published)\n \"channel_videos_ucid_idx\" btree (ucid)\n```\n\nIn case it might help with indexing, a UCID always begins with `UC`, then 22 \nrandom characters in [a-zA-Z0-9_-] (Base64-urlsafe).\n\nCurrently, the query I'm using to generate a user's feed is:\n\n```\nSELECT * FROM channel_videos WHERE ucid IN (SELECT unnest(subscriptions) FROM \nusers WHERE email = $1) ORDER BY published DESC;\n```\n\nThis works great when `subscriptions` and `channel_videos` are both fairly \nsmall. Unfortunately, at this point `channel_videos` contains roughly 28M \nrows. 
Attempting to generate a feed for a user with 177 subscriptions takes \naround 40-70s, here's the relevant `EXPLAIN (ANALYZE, BUFFERS)`:\n\n```\n=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM channel_videos WHERE ucid IN \n(SELECT unnest(subscriptions) FROM users WHERE email = 'omarroth') ORDER BY \npublished DESC;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=478207.59..478506.73 rows=119656 width=142) (actual \ntime=68599.562..68613.824 rows=46456 loops=1)\n Sort Key: channel_videos.published DESC\n Sort Method: external merge Disk: 6352kB\n Buffers: shared hit=456 read=13454 dirtied=13 written=456, temp read=794 \nwritten=797\n -> Nested Loop (cost=55.94..459526.49 rows=119656 width=142) (actual \ntime=0.555..68531.550 rows=46456 loops=1)\n Buffers: shared hit=453 read=13454 dirtied=13 written=456\n -> HashAggregate (cost=10.18..11.18 rows=100 width=32) (actual \ntime=0.417..0.850 rows=177 loops=1)\n Group Key: unnest(users.subscriptions)\n Buffers: shared hit=39 read=7\n -> ProjectSet (cost=0.41..8.93 rows=100 width=32) (actual \ntime=0.305..0.363 rows=177 loops=1)\n Buffers: shared hit=39 read=7\n -> Index Scan using users_email_key on users \n(cost=0.41..8.43 rows=1 width=127) (actual time=0.049..0.087 rows=1 loops=1)\n Index Cond: (email = 'omarroth'::text)\n Buffers: shared hit=5 read=4\n -> Bitmap Heap Scan on channel_videos (cost=45.76..4583.18 \nrows=1197 width=142) (actual time=28.835..387.068 rows=262 loops=177)\n Recheck Cond: (ucid = (unnest(users.subscriptions)))\n Heap Blocks: exact=12808\n Buffers: shared hit=414 read=13447 dirtied=13 written=456\n -> Bitmap Index Scan on channel_videos_ucid_idx \n(cost=0.00..45.46 rows=1197 width=0) (actual time=22.255..22.255 rows=263 \nloops=177)\n Index Cond: (ucid = (unnest(users.subscriptions)))\n Buffers: shared hit=291 read=762 written=27\n Planning Time: 1.463 ms\n Execution Time: 68619.316 ms\n(23 rows)\n\n```\n\nSince a feed is expected to be fairly consistent to what would appear on \nYouTube, `channel_videos` receives fairly frequent UPDATEs and INSERTs and is \nvacuumed fairly frequently:\n\n```\n=# SELECT * FROM pg_stat_user_tables WHERE relname = 'channel_videos';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | \nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup \n| n_dead_tup | n_mod_since_analyze | last_vacuum | \nlast_autovacuum | last_analyze | last_autoanalyze \n| vacuum_count | autovacuum_count | analyze_count | autoanalyze_count\n-------+------------+----------------+----------+--------------+----------\n+---------------+-----------+-----------+-----------+---------------\n+------------+------------+---------------------\n+------------------------------+-----------------\n+-------------------------------+-------------------------------\n+--------------+------------------+---------------+-------------------\n 16444 | public | channel_videos | 3 | 6346042 | 28294649 | \n6605201260 | 218485 | 13280741 | 2413 | 11241541 | 24601363 | \n585451 | 763200 | 2019-07-05 17:57:21.34215+00 | \n| 2019-07-05 17:59:21.013308+00 | 2019-07-04 14:54:02.845591+00 | 4 \n| 0 | 4 | 2\n(1 row)\nThe machine running Postgres has 4 vCPUs, 8GB of RAM (which I expect is the \nproblem), and 160GB SSD. 
`uname -srv`:\n\n# uname -srv\nLinux 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019\n```\nI am running Postgres 11.4, `SELECT version()`:\n\n PostgreSQL 11.4 (Ubuntu 11.4-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled \nby gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, 64-bit\n(1 row)\n\n\n\n\n",
"msg_date": "Sun, 7 Jul 2019 13:43:14 +0000",
"msg_from": "Omar Roth <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing `WHERE x IN` query"
},
{
"msg_contents": "Omar Roth schrieb am 07.07.2019 um 15:43:\n> Currently, the query I'm using to generate a user's feed is:\n>\n> ```\n> SELECT * FROM channel_videos WHERE ucid IN (SELECT unnest(subscriptions) FROM\n> users WHERE email = $1) ORDER BY published DESC;\n> ```\n\nYou could try an EXISTS query without unnest:\n\nselect cv.*\nfrom channel_videos cv\nwhere exists ucid (select *\n from users u\n where cv.ucid = any(u.subscriptions)\n and u.email = $1);\n\nDid you try if a properly normalized model performs better?\n\n\n\n",
"msg_date": "Sun, 7 Jul 2019 16:33:00 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing `WHERE x IN` query"
},
{
"msg_contents": "The suggested query indeed appears to be faster. Thank you.\n\n> Did you try if a properly normalized model performs better?\n\nI've tested the below schema, which doesn't appear to perform much better but \nhas a couple other advantages for my application:\n\n```\ncreate table subscriptions (\n email text references users(email),\n ucid text,\n primary key (email, ucid)\n);\n\nexplain (analyze, buffers) select cv.* from channel_videos cv, subscriptions s \nwhere cv.ucid = s.ucid and s.email = $1;\n\n```\n\nIs there something else here I'm missing?\n\n\n\n\n",
"msg_date": "Tue, 9 Jul 2019 14:44:37 +0000",
"msg_from": "Omar Roth <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing `WHERE x IN` query"
},
{
"msg_contents": "Le 07/07/2019 à 16:33, Thomas Kellerer a écrit :\n> Omar Roth schrieb am 07.07.2019 um 15:43:\n>> Currently, the query I'm using to generate a user's feed is:\n>>\n>> ```\n>> SELECT * FROM channel_videos WHERE ucid IN (SELECT \n>> unnest(subscriptions) FROM\n>> users WHERE email = $1) ORDER BY published DESC;\n>> ```\n>\n> You could try an EXISTS query without unnest:\n>\n> select cv.*\n> from channel_videos cv\n> where exists ucid (select *\n> from users u\n> where cv.ucid = any(u.subscriptions)\n> and u.email = $1);\n>\n> Did you try if a properly normalized model performs better?\n>\n>\nHi\n\n\nWe had big performance issues with queries like that, and we modified \nthem to use && (see \nhttps://www.postgresql.org/docs/current/functions-array.html ), \nresulting in a big perf boost\n\nso, with your model, the query could be\n\n```\nselect cv.*\nfrom channel_videos cv\n\ninner join user u on cv.ucid && u.subscription\n\nwhere u.email = $1;\n```\n\nor\n\n```\nselect cv.*\nfrom channel_videos cv\n\ninner join ( select subscription from user where email = $1) as u on \ncv.ucid && u.subscription ;\n\n```\n\n(disclaimer, I didn't try this queries, they may contain typos)\n\n\nRegards\n\nNicolas\n\n\n\n",
"msg_date": "Tue, 9 Jul 2019 17:21:34 +0200",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing `WHERE x IN` query"
},
{
"msg_contents": "> We had big performance issues with queries like that, and we modified \n> them to use && (see \n> https://www.postgresql.org/docs/current/functions-array.html ), \n> resulting in a big perf boost\n\nMuch appreciated! Unfortunately I'm having trouble turning your suggestions \ninto a working query.\n\n`cv.ucid && u.subscriptions` doesn't appear to work in my case since the \ncomparison is `text && text[]`. Something like:\n\n```\nexplain (analyze, buffers) select cv.* from channel_videos cv inner join \n(select subscriptions from users where email = $1) as u on array[cv.ucid] && \nu.subscriptions;\n```\n\ndoes work but unfortunately doesn't appear to be faster than the current \nquery.\n\nCheers\n\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 22:22:03 +0000",
"msg_from": "Omar Roth <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing `WHERE x IN` query"
},
{
"msg_contents": "Did you create a GIN index on subscriptions column to support the &&\noperator?\n\nDid you create a GIN index on subscriptions column to support the && operator?",
"msg_date": "Thu, 11 Jul 2019 16:24:33 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing `WHERE x IN` query"
}
] |
[
{
"msg_contents": "I am querying a remote server through a foreign table definition.\n\nCREATE TABLE example (id integer, product product_enum, status status_enum)\n\nWhere\n\nCREATE TYPE status AS ENUM ('active', 'testing', 'inactive', ...);\nCREATE TYPE product AS ENUM ('a', 'b', 'c', ...);\n\nI re-created enums on my server and created a foreign table as follows:\n\nCREATE FOREIGN TABLE example (id integer, product product_enum, status\nstatus_enum)\nSERVER remote;\n\nWhen I am querying the foreign table on enum predicate like\n\nselect * from example where product = 'a' and status = 'active'\n\nI see that filtering happens on my server which can be seen in the plan and\ncan be felt from the query performance (indices are not used of course).\n\nI tried to cheat this thing by defining the enum fields as text in the\nforeign table but then the remote query fails with\n\nERROR: operator does not exist: public.product = text HINT: No operator\nmatches the given name and argument type(s). You might need to add explicit\ntype casts.\n\nThis is ridiculous. Is there a way to workaround this and force it execute\nthe remote query as is?\n\nRegards,\nVlad\n\nI am querying a remote server through a foreign table definition.CREATE TABLE example (id integer, product product_enum, status status_enum)WhereCREATE TYPE status AS ENUM ('active', 'testing', 'inactive', ...);CREATE TYPE product AS ENUM ('a', 'b', 'c', ...);I re-created enums on my server and created a foreign table as follows:CREATE FOREIGN TABLE example (id integer, product product_enum, status status_enum)SERVER remote;When I am querying the foreign table on enum predicate likeselect * from example where product = 'a' and status = 'active'I see that filtering happens on my server which can be seen in the plan and can be felt from the query performance (indices are not used of course).I tried to cheat this thing by defining the enum fields as text in the foreign table but then the remote query fails withERROR: operator does not exist: public.product = text HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.This is ridiculous. Is there a way to workaround this and force it execute the remote query as is?Regards,Vlad",
"msg_date": "Tue, 16 Jul 2019 16:00:10 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Filtering on an enum field in a foreign table"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]>\nwrote:\n\n> I am querying a remote server through a foreign table definition.\n>\n> CREATE TABLE example (id integer, product product_enum, status status_enum)\n>\n...\n\n> When I am querying the foreign table on enum predicate like\n>\n> select * from example where product = 'a' and status = 'active'\n>\n> I see that filtering happens on my server which can be seen in the plan\n> and can be felt from the query performance (indices are not used of course).\n>\n\nWhat Postgres version do you use?\n\nAny changes in plans if you collect stats on the FDW table (\"analyze\nexample;\")?\n\nHave you considered changing the option \"use_remote_estimate\" (see\nhttps://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?\n\nHi,On Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]> wrote:I am querying a remote server through a foreign table definition.CREATE TABLE example (id integer, product product_enum, status status_enum)... When I am querying the foreign table on enum predicate likeselect * from example where product = 'a' and status = 'active'I see that filtering happens on my server which can be seen in the plan and can be felt from the query performance (indices are not used of course).What Postgres version do you use?Any changes in plans if you collect stats on the FDW table (\"analyze example;\")?Have you considered changing the option \"use_remote_estimate\" (see https://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?",
"msg_date": "Tue, 16 Jul 2019 17:45:17 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filtering on an enum field in a foreign table"
},
{
"msg_contents": "Sorry, the version() is\n\n\"PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit\"\n\nI gave use_remote_estimate a try but unfortunately it is the same.\n\nAdditionally I see on your page (in \"Remote Execution Options\"):\n\n\"By default, only WHERE clauses using built-in operators and functions will\nbe considered for execution on the remote server. Clauses involving\nnon-built-in functions are checked locally after rows are fetched.\"\n\nI think enum types somehow fall into the same category and they are\nfiltered only locally, which is seen in the plan (Filter clause).\nIf I use only columns of built-in type in the predicate everything works as\nexpected (with filtering on the remote server).\n\nI need a workaround to make this query execute remotely. One option may be\nusing a materialized view with these enum values converted to text but then\nI will need to refresh this view periodically on the remote server.\nAnd actually it looks like a performance bug in the DBMS...\n\n\nвт, 16 июл. 2019 г. в 17:45, Nikolay Samokhvalov <[email protected]>:\n\n> Hi,\n>\n> On Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]>\n> wrote:\n>\n>> I am querying a remote server through a foreign table definition.\n>>\n>> CREATE TABLE example (id integer, product product_enum, status\n>> status_enum)\n>>\n> ...\n>\n>> When I am querying the foreign table on enum predicate like\n>>\n>> select * from example where product = 'a' and status = 'active'\n>>\n>> I see that filtering happens on my server which can be seen in the plan\n>> and can be felt from the query performance (indices are not used of course).\n>>\n>\n> What Postgres version do you use?\n>\n> Any changes in plans if you collect stats on the FDW table (\"analyze\n> example;\")?\n>\n> Have you considered changing the option \"use_remote_estimate\" (see\n> https://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?\n>\n>\n>\n\nSorry, the version() is\"PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit\"I gave use_remote_estimate a try but unfortunately it is the same.Additionally I see on your page (in \"Remote Execution Options\"):\"By default, only WHERE clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched.\"I think enum types somehow fall into the same category and they are filtered only locally, which is seen in the plan (Filter clause).If I use only columns of built-in type in the predicate everything works as expected (with filtering on the remote server).I need a workaround to make this query execute remotely. One option may be using a materialized view with these enum values converted to text but then I will need to refresh this view periodically on the remote server.And actually it looks like a performance bug in the DBMS...вт, 16 июл. 2019 г. в 17:45, Nikolay Samokhvalov <[email protected]>:Hi,On Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]> wrote:I am querying a remote server through a foreign table definition.CREATE TABLE example (id integer, product product_enum, status status_enum)... 
When I am querying the foreign table on enum predicate likeselect * from example where product = 'a' and status = 'active'I see that filtering happens on my server which can be seen in the plan and can be felt from the query performance (indices are not used of course).What Postgres version do you use?Any changes in plans if you collect stats on the FDW table (\"analyze example;\")?Have you considered changing the option \"use_remote_estimate\" (see https://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?",
"msg_date": "Tue, 16 Jul 2019 18:06:27 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filtering on an enum field in a foreign table"
},
{
"msg_contents": "Wait folks,\n\nI realized that if I create a basic view with enum fields converted to\ntext, it does the trick! I query the view with text predicate and the view\nimplementation (I think) converts it into enums when querying the\nunderlying table.\n\nBut I still think it should work without such workarounds...\n\nвт, 16 июл. 2019 г. в 18:06, Vladimir Ryabtsev <[email protected]>:\n\n> Sorry, the version() is\n>\n> \"PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu,\n> compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit\"\n>\n> I gave use_remote_estimate a try but unfortunately it is the same.\n>\n> Additionally I see on your page (in \"Remote Execution Options\"):\n>\n> \"By default, only WHERE clauses using built-in operators and functions\n> will be considered for execution on the remote server. Clauses involving\n> non-built-in functions are checked locally after rows are fetched.\"\n>\n> I think enum types somehow fall into the same category and they are\n> filtered only locally, which is seen in the plan (Filter clause).\n> If I use only columns of built-in type in the predicate everything works\n> as expected (with filtering on the remote server).\n>\n> I need a workaround to make this query execute remotely. One option may be\n> using a materialized view with these enum values converted to text but then\n> I will need to refresh this view periodically on the remote server.\n> And actually it looks like a performance bug in the DBMS...\n>\n>\n> вт, 16 июл. 2019 г. в 17:45, Nikolay Samokhvalov <[email protected]>:\n>\n>> Hi,\n>>\n>> On Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]>\n>> wrote:\n>>\n>>> I am querying a remote server through a foreign table definition.\n>>>\n>>> CREATE TABLE example (id integer, product product_enum, status\n>>> status_enum)\n>>>\n>> ...\n>>\n>>> When I am querying the foreign table on enum predicate like\n>>>\n>>> select * from example where product = 'a' and status = 'active'\n>>>\n>>> I see that filtering happens on my server which can be seen in the plan\n>>> and can be felt from the query performance (indices are not used of course).\n>>>\n>>\n>> What Postgres version do you use?\n>>\n>> Any changes in plans if you collect stats on the FDW table (\"analyze\n>> example;\")?\n>>\n>> Have you considered changing the option \"use_remote_estimate\" (see\n>> https://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?\n>>\n>>\n>>\n>\n\nWait folks,I realized that if I create a basic view with enum fields converted to text, it does the trick! I query the view with text predicate and the view implementation (I think) converts it into enums when querying the underlying table.But I still think it should work without such workarounds...вт, 16 июл. 2019 г. в 18:06, Vladimir Ryabtsev <[email protected]>:Sorry, the version() is\"PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit\"I gave use_remote_estimate a try but unfortunately it is the same.Additionally I see on your page (in \"Remote Execution Options\"):\"By default, only WHERE clauses using built-in operators and functions will be considered for execution on the remote server. 
Clauses involving non-built-in functions are checked locally after rows are fetched.\"I think enum types somehow fall into the same category and they are filtered only locally, which is seen in the plan (Filter clause).If I use only columns of built-in type in the predicate everything works as expected (with filtering on the remote server).I need a workaround to make this query execute remotely. One option may be using a materialized view with these enum values converted to text but then I will need to refresh this view periodically on the remote server.And actually it looks like a performance bug in the DBMS...вт, 16 июл. 2019 г. в 17:45, Nikolay Samokhvalov <[email protected]>:Hi,On Tue, Jul 16, 2019 at 4:00 PM Vladimir Ryabtsev <[email protected]> wrote:I am querying a remote server through a foreign table definition.CREATE TABLE example (id integer, product product_enum, status status_enum)... When I am querying the foreign table on enum predicate likeselect * from example where product = 'a' and status = 'active'I see that filtering happens on my server which can be seen in the plan and can be felt from the query performance (indices are not used of course).What Postgres version do you use?Any changes in plans if you collect stats on the FDW table (\"analyze example;\")?Have you considered changing the option \"use_remote_estimate\" (see https://www.postgresql.org/docs/current/postgres-fdw.html#id-1.11.7.42.10)?",
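To be concrete, the kind of view I mean is roughly this (sketch only, reusing the example/product/status names from above):\n\n-- expose the enum columns as plain text\nCREATE VIEW example_text AS\nSELECT id,\n product::text AS product,\n status::text AS status\nFROM example;\n\nQuerying example_text with plain text predicates then behaves the way I described above.",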
"msg_date": "Tue, 16 Jul 2019 18:52:08 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filtering on an enum field in a foreign table"
}
] |
[
{
"msg_contents": "My table is having data like below with 100M records (contains all dummy\ndata). I am having btree index on column (\"field\").\n*While searching for any text from that column takes longer (more than 1\nminute).*\n\nuser Id field\nd848f466-5e12-46e7-acf4-e12aff592241 Northern Arkansas College\n24c32757-e6a8-4dbd-aac7-1efd867156ce female\n6e225c57-c1d1-48a5-b9aa-513223efc81b 1.0, 3.67, 3.67, 4.67, 7.0, 3.0\n088c6342-a240-45a7-9d12-e0e707292031 Weber\nb05088cf-cba6-4bd7-8f8f-1469226874d0 addd#[email protected]\n\n\nTable and index are created using following query.\n\ncreate table fields(user_id varchar(64), field varchar(64));\nCREATE INDEX index_field ON public.fields USING btree (field);\n\nSearch Query:\nEXPLAIN (ANALYZE, BUFFERS) select * from fields where field='Mueller';\n\nBitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=55)\n(actual time=88.017..65358.548 rows=31882 loops=1)\n Recheck Cond: ((field)::text = 'Mueller'::text)\n Heap Blocks: exact=31403\n Buffers: shared hit=2 read=31492\n -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586\nwidth=0) (actual time=55.960..55.960 rows=31882 loops=1)\n Index Cond: ((field)::text = 'Mueller'::text)\n Buffers: shared read=91\nPlanning Time: 0.331 ms\nExecution Time: 65399.314 ms\n\n\nAny suggestions for improvement?\n\nBest Regards,\nMayank\n\nMy table is having data like below with 100M records (contains all dummy data). I am having btree index on column (\"field\").While searching for any text from that column takes longer (more than 1 minute).user Id fieldd848f466-5e12-46e7-acf4-e12aff592241Northern Arkansas College24c32757-e6a8-4dbd-aac7-1efd867156cefemale6e225c57-c1d1-48a5-b9aa-513223efc81b1.0, 3.67, 3.67, 4.67, 7.0, 3.0088c6342-a240-45a7-9d12-e0e707292031Weberb05088cf-cba6-4bd7-8f8f-1469226874d0addd#[email protected] and index are created using following query.create table fields(user_id varchar(64), field varchar(64));CREATE INDEX index_field ON public.fields USING btree (field);Search Query: EXPLAIN (ANALYZE, BUFFERS) select * from fields where field='Mueller';Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=55) (actual time=88.017..65358.548 rows=31882 loops=1) Recheck Cond: ((field)::text = 'Mueller'::text) Heap Blocks: exact=31403 Buffers: shared hit=2 read=31492 -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586 width=0) (actual time=55.960..55.960 rows=31882 loops=1) Index Cond: ((field)::text = 'Mueller'::text) Buffers: shared read=91Planning Time: 0.331 msExecution Time: 65399.314 msAny suggestions for improvement?Best Regards,Mayank",
"msg_date": "Wed, 17 Jul 2019 16:33:41 +0530",
"msg_from": "mayank rupareliya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Searching in varchar column having 100M records"
},
{
"msg_contents": "Hello\n\nPlease recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how many time we spend during i/o\n\n> Buffers: shared hit=2 read=31492\n\n31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD\n\nYour query reads table data from disks (well, or from OS cache). You need more RAM for shared_buffers or disks with better performance.\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 17 Jul 2019 14:53:20 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote:\n>Hello\n>\n>Please recheck with track_io_timing = on in configuration. explain\n>(analyze,buffers) with this option will report how many time we spend\n>during i/o\n>\n>> Buffers: shared hit=2 read=31492\n>\n>31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD\n>\n>Your query reads table data from disks (well, or from OS cache). You need\n>more RAM for shared_buffers or disks with better performance.\n>\n\nEither that, or try creating a covering index, so that the query can do an\nindex-only scan. That might reduce the amount of IO against the table, and\nin the index the data should be located close to each other (same page or\npages close to each other).\n\nSo try something like\n\n CREATE INDEX ios_idx ON table (field, user_id);\n\nand make sure the table is vacuumed often enough (so that the visibility\nmap is up to date).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 17 Jul 2019 14:48:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "\n\nAm 17.07.19 um 14:48 schrieb Tomas Vondra:\n> Either that, or try creating a covering index, so that the query can \n> do an\n> index-only scan. That might reduce the amount of IO against the table, \n> and\n> in the index the data should be located close to each other (same page or\n> pages close to each other).\n>\n> So try something like\n>\n> CREATE INDEX ios_idx ON table (field, user_id);\n>\n> and make sure the table is vacuumed often enough (so that the visibility\n> map is up to date). \n\nyeah, and please don't use varchar(64), but instead UUID for the user_id \n- field to save space on disk and for faster comparison.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:00:38 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya <[email protected]>\nwrote:\n\n> create table fields(user_id varchar(64), field varchar(64));\n> CREATE INDEX index_field ON public.fields USING btree (field);\n>\n> Any suggestions for improvement?\n>\n\nReduce the number of rows by constructing a relationally normalized data\nmodel.\n\nDavid J.\n\nOn Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya <[email protected]> wrote:create table fields(user_id varchar(64), field varchar(64));CREATE INDEX index_field ON public.fields USING btree (field);Any suggestions for improvement?Reduce the number of rows by constructing a relationally normalized data model.David J.",
"msg_date": "Wed, 17 Jul 2019 06:57:27 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On 17/07/2019 23:03, mayank rupareliya wrote:\n[...]\n> Table and index are created using following query.\n>\n> create table fields(user_id varchar(64), field varchar(64));\n> CREATE INDEX index_field ON public.fields USING btree (field);\n\n[...]\n\nAny particular reason for using varchar instead of text, for field?\n\nAlso, as Andreas pointed out, use UUID for the user_id.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 10:54:44 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "*Please recheck with track_io_timing = on in configuration. explain\n(analyze,buffers) with this option will report how many time we spend\nduring i/o*\n\n*> Buffers: shared hit=2 read=31492*\n\n*31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD*\n\n*Your query reads table data from disks (well, or from OS cache). You need\nmore RAM for shared_buffers or disks with better performance.*\n\n\nThanks Sergei..\n*track_io_timing = on helps.. Following is the result after changing that\nconfig.*\n\nAggregate (cost=10075.78..10075.79 rows=1 width=8) (actual\ntime=63088.198..63088.199 rows=1 loops=1)\n Buffers: shared read=31089\n I/O Timings: read=61334.252\n -> Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=0)\n(actual time=69.509..63021.448 rows=31414 loops=1)\n Recheck Cond: ((field)::text = 'Klein'::text)\n Heap Blocks: exact=30999\n Buffers: shared read=31089\n I/O Timings: read=61334.252\n -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586\nwidth=0) (actual time=58.671..58.671 rows=31414 loops=1)\n Index Cond: ((field)::text = 'Klein'::text)\n Buffers: shared read=90\n I/O Timings: read=45.316\nPlanning Time: 66.435 ms\nExecution Time: 63088.774 ms\n\n\n*So try something like*\n\n* CREATE INDEX ios_idx ON table (field, user_id);*\n\n*and make sure the table is vacuumed often enough (so that the visibility*\n*map is up to date).*\n\nThanks Tomas... I tried this and result improved but not much.\n\nThanks Andreas, David, Gavin\n\n*Any particular reason for using varchar instead of text, for field?* No\n\nuse UUID for the user_id. Agreed\n\n\n*Regards,Mayank*\n\nOn Thu, Jul 18, 2019 at 4:25 AM Gavin Flower <[email protected]>\nwrote:\n\n> On 17/07/2019 23:03, mayank rupareliya wrote:\n> [...]\n> > Table and index are created using following query.\n> >\n> > create table fields(user_id varchar(64), field varchar(64));\n> > CREATE INDEX index_field ON public.fields USING btree (field);\n>\n> [...]\n>\n> Any particular reason for using varchar instead of text, for field?\n>\n> Also, as Andreas pointed out, use UUID for the user_id.\n>\n>\n> Cheers,\n> Gavin\n>\n>\n>\n>\n\nPlease recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how many time we spend during i/o> Buffers: shared hit=2 read=3149231492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDDYour query reads table data from disks (well, or from OS cache). You need more RAM for shared_buffers or disks with better performance.Thanks Sergei.. track_io_timing = on helps.. Following is the result after changing that config.Aggregate (cost=10075.78..10075.79 rows=1 width=8) (actual time=63088.198..63088.199 rows=1 loops=1) Buffers: shared read=31089 I/O Timings: read=61334.252 -> Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=0) (actual time=69.509..63021.448 rows=31414 loops=1) Recheck Cond: ((field)::text = 'Klein'::text) Heap Blocks: exact=30999 Buffers: shared read=31089 I/O Timings: read=61334.252 -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586 width=0) (actual time=58.671..58.671 rows=31414 loops=1) Index Cond: ((field)::text = 'Klein'::text) Buffers: shared read=90 I/O Timings: read=45.316Planning Time: 66.435 msExecution Time: 63088.774 msSo try something like CREATE INDEX ios_idx ON table (field, user_id);and make sure the table is vacuumed often enough (so that the visibilitymap is up to date).Thanks Tomas... 
I tried this and result improved but not much.Thanks Andreas, David, GavinAny particular reason for using varchar instead of text, for field? Nouse UUID for the user_id. AgreedRegards,MayankOn Thu, Jul 18, 2019 at 4:25 AM Gavin Flower <[email protected]> wrote:On 17/07/2019 23:03, mayank rupareliya wrote:\n[...]\n> Table and index are created using following query.\n>\n> create table fields(user_id varchar(64), field varchar(64));\n> CREATE INDEX index_field ON public.fields USING btree (field);\n\n[...]\n\nAny particular reason for using varchar instead of text, for field?\n\nAlso, as Andreas pointed out, use UUID for the user_id.\n\n\nCheers,\nGavin",
"msg_date": "Thu, 18 Jul 2019 17:21:49 +0530",
"msg_from": "mayank rupareliya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 05:21:49PM +0530, mayank rupareliya wrote:\n>*Please recheck with track_io_timing = on in configuration. explain\n>(analyze,buffers) with this option will report how many time we spend\n>during i/o*\n>\n>*> Buffers: shared hit=2 read=31492*\n>\n>*31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD*\n>\n>*Your query reads table data from disks (well, or from OS cache). You need\n>more RAM for shared_buffers or disks with better performance.*\n>\n>\n>Thanks Sergei..\n>*track_io_timing = on helps.. Following is the result after changing that\n>config.*\n>\n>Aggregate (cost=10075.78..10075.79 rows=1 width=8) (actual\n>time=63088.198..63088.199 rows=1 loops=1)\n> Buffers: shared read=31089\n> I/O Timings: read=61334.252\n> -> Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=0)\n>(actual time=69.509..63021.448 rows=31414 loops=1)\n> Recheck Cond: ((field)::text = 'Klein'::text)\n> Heap Blocks: exact=30999\n> Buffers: shared read=31089\n> I/O Timings: read=61334.252\n> -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586\n>width=0) (actual time=58.671..58.671 rows=31414 loops=1)\n> Index Cond: ((field)::text = 'Klein'::text)\n> Buffers: shared read=90\n> I/O Timings: read=45.316\n>Planning Time: 66.435 ms\n>Execution Time: 63088.774 ms\n>\n\nHow did that help? It only gives you insight that it's really the I/O that\ntakes time. You need to reduce that, somehow.\n\n>\n>*So try something like*\n>\n>* CREATE INDEX ios_idx ON table (field, user_id);*\n>\n>*and make sure the table is vacuumed often enough (so that the visibility*\n>*map is up to date).*\n>\n>Thanks Tomas... I tried this and result improved but not much.\n>\n\nWell, you haven't shown us the execution plan, so it's hard to check why\nit did not help much and give you further advice.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 14:41:09 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On 18/07/2019 23:51, mayank rupareliya wrote:\n[...]\n> Thanks Andreas, David, Gavin\n>\n> /Any particular reason for using varchar instead of text, for field?/ No\n>\n> use UUID for the user_id.Agreed\n\n/[...]/\n\n/Use of text is preferred, but I can't see it making any significant \ndifference to performance -- but I could be wrong!/\n\n/Cheers,\nGavin\n/\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 10:03:57 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "Well, you haven't shown us the execution plan, so it's hard to check why\nit did not help much and give you further advice.\n\n\nThis is the latest query execution with explain after adding indexing on\nboth columns.\n\nAggregate (cost=174173.57..174173.58 rows=1 width=8) (actual\ntime=65087.657..65087.658 rows=1 loops=1)\n -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386\nwidth=0) (actual time=160.340..65024.533 rows=31857 loops=1)\n Recheck Cond: ((field)::text = 'Champlin'::text)\n Heap Blocks: exact=31433\n -> Bitmap Index Scan on index_field (cost=0.00..1369.46\nrows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)\n Index Cond: ((field)::text = 'Champlin'::text)\nPlanning Time: 8.595 ms\nExecution Time: 65093.508 ms\n\nOn Thu, Jul 18, 2019 at 6:11 PM Tomas Vondra <[email protected]>\nwrote:\n\n> On Thu, Jul 18, 2019 at 05:21:49PM +0530, mayank rupareliya wrote:\n> >*Please recheck with track_io_timing = on in configuration. explain\n> >(analyze,buffers) with this option will report how many time we spend\n> >during i/o*\n> >\n> >*> Buffers: shared hit=2 read=31492*\n> >\n> >*31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD*\n> >\n> >*Your query reads table data from disks (well, or from OS cache). You need\n> >more RAM for shared_buffers or disks with better performance.*\n> >\n> >\n> >Thanks Sergei..\n> >*track_io_timing = on helps.. Following is the result after changing that\n> >config.*\n> >\n> >Aggregate (cost=10075.78..10075.79 rows=1 width=8) (actual\n> >time=63088.198..63088.199 rows=1 loops=1)\n> > Buffers: shared read=31089\n> > I/O Timings: read=61334.252\n> > -> Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=0)\n> >(actual time=69.509..63021.448 rows=31414 loops=1)\n> > Recheck Cond: ((field)::text = 'Klein'::text)\n> > Heap Blocks: exact=30999\n> > Buffers: shared read=31089\n> > I/O Timings: read=61334.252\n> > -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586\n> >width=0) (actual time=58.671..58.671 rows=31414 loops=1)\n> > Index Cond: ((field)::text = 'Klein'::text)\n> > Buffers: shared read=90\n> > I/O Timings: read=45.316\n> >Planning Time: 66.435 ms\n> >Execution Time: 63088.774 ms\n> >\n>\n> How did that help? It only gives you insight that it's really the I/O that\n> takes time. You need to reduce that, somehow.\n>\n> >\n> >*So try something like*\n> >\n> >* CREATE INDEX ios_idx ON table (field, user_id);*\n> >\n> >*and make sure the table is vacuumed often enough (so that the visibility*\n> >*map is up to date).*\n> >\n> >Thanks Tomas... 
I tried this and result improved but not much.\n> >\n>\n> Well, you haven't shown us the execution plan, so it's hard to check why\n> it did not help much and give you further advice.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\nWell, you haven't shown us the execution plan, so it's hard to check whyit did not help much and give you further advice.This is the latest query execution with explain after adding indexing on both columns.Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual time=65087.657..65087.658 rows=1 loops=1) -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386 width=0) (actual time=160.340..65024.533 rows=31857 loops=1) Recheck Cond: ((field)::text = 'Champlin'::text) Heap Blocks: exact=31433 -> Bitmap Index Scan on index_field (cost=0.00..1369.46 rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1) Index Cond: ((field)::text = 'Champlin'::text)Planning Time: 8.595 msExecution Time: 65093.508 msOn Thu, Jul 18, 2019 at 6:11 PM Tomas Vondra <[email protected]> wrote:On Thu, Jul 18, 2019 at 05:21:49PM +0530, mayank rupareliya wrote:\n>*Please recheck with track_io_timing = on in configuration. explain\n>(analyze,buffers) with this option will report how many time we spend\n>during i/o*\n>\n>*> Buffers: shared hit=2 read=31492*\n>\n>*31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD*\n>\n>*Your query reads table data from disks (well, or from OS cache). You need\n>more RAM for shared_buffers or disks with better performance.*\n>\n>\n>Thanks Sergei..\n>*track_io_timing = on helps.. Following is the result after changing that\n>config.*\n>\n>Aggregate (cost=10075.78..10075.79 rows=1 width=8) (actual\n>time=63088.198..63088.199 rows=1 loops=1)\n> Buffers: shared read=31089\n> I/O Timings: read=61334.252\n> -> Bitmap Heap Scan on fields (cost=72.61..10069.32 rows=2586 width=0)\n>(actual time=69.509..63021.448 rows=31414 loops=1)\n> Recheck Cond: ((field)::text = 'Klein'::text)\n> Heap Blocks: exact=30999\n> Buffers: shared read=31089\n> I/O Timings: read=61334.252\n> -> Bitmap Index Scan on index_field (cost=0.00..71.96 rows=2586\n>width=0) (actual time=58.671..58.671 rows=31414 loops=1)\n> Index Cond: ((field)::text = 'Klein'::text)\n> Buffers: shared read=90\n> I/O Timings: read=45.316\n>Planning Time: 66.435 ms\n>Execution Time: 63088.774 ms\n>\n\nHow did that help? It only gives you insight that it's really the I/O that\ntakes time. You need to reduce that, somehow.\n\n>\n>*So try something like*\n>\n>* CREATE INDEX ios_idx ON table (field, user_id);*\n>\n>*and make sure the table is vacuumed often enough (so that the visibility*\n>*map is up to date).*\n>\n>Thanks Tomas... I tried this and result improved but not much.\n>\n\nWell, you haven't shown us the execution plan, so it's hard to check why\nit did not help much and give you further advice.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 19 Jul 2019 19:43:26 +0530",
"msg_from": "mayank rupareliya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 8:13 AM mayank rupareliya <[email protected]>\nwrote:\n\n> Well, you haven't shown us the execution plan, so it's hard to check why\n> it did not help much and give you further advice.\n>\n>\n> This is the latest query execution with explain after adding indexing on\n> both columns.\n>\n> Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual\n> time=65087.657..65087.658 rows=1 loops=1)\n> -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386\n> width=0) (actual time=160.340..65024.533 rows=31857 loops=1)\n> Recheck Cond: ((field)::text = 'Champlin'::text)\n> Heap Blocks: exact=31433\n> -> Bitmap Index Scan on index_field (cost=0.00..1369.46\n> rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)\n> Index Cond: ((field)::text = 'Champlin'::text)\n> Planning Time: 8.595 ms\n> Execution Time: 65093.508 ms\n>\n>>\n>>\n\nAre you on a solid state drive? If so, have you tried setting\neffective_io_concurrency to 200 or 300 and checking performance? Given\nnearly all of the execution time is doing a bitmap heap scan, I wonder\nabout adjusting this.\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\neffective_io_concurrency\n\"The allowed range is 1 to 1000, or zero to disable issuance of\nasynchronous I/O requests. Currently, this setting only affects bitmap heap\nscans.\"\n\"The default is 1 on supported systems, otherwise 0. \"\n\nOn Fri, Jul 19, 2019 at 8:13 AM mayank rupareliya <[email protected]> wrote:Well, you haven't shown us the execution plan, so it's hard to check whyit did not help much and give you further advice.This is the latest query execution with explain after adding indexing on both columns.Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual time=65087.657..65087.658 rows=1 loops=1) -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386 width=0) (actual time=160.340..65024.533 rows=31857 loops=1) Recheck Cond: ((field)::text = 'Champlin'::text) Heap Blocks: exact=31433 -> Bitmap Index Scan on index_field (cost=0.00..1369.46 rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1) Index Cond: ((field)::text = 'Champlin'::text)Planning Time: 8.595 msExecution Time: 65093.508 msAre you on a solid state drive? If so, have you tried setting effective_io_concurrency to 200 or 300 and checking performance? Given nearly all of the execution time is doing a bitmap heap scan, I wonder about adjusting this.https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOReffective_io_concurrency\"The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans.\"\"The default is 1 on supported systems, otherwise 0. \"",
"msg_date": "Fri, 19 Jul 2019 11:33:41 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
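Because effective_io_concurrency can be changed per session, Michael's idea can be tried without touching the server configuration first; the query below just mirrors the shape of the plans quoted in the thread:

  SET effective_io_concurrency = 200;   -- currently only affects bitmap heap scans
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(user_id) FROM fields WHERE field = 'Champlin';
  RESET effective_io_concurrency;
  -- if it clearly helps on this storage, make it permanent:
  ALTER SYSTEM SET effective_io_concurrency = 200;
  SELECT pg_reload_conf();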
{
"msg_contents": "On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:\n>Well, you haven't shown us the execution plan, so it's hard to check why\n>it did not help much and give you further advice.\n>\n>\n>This is the latest query execution with explain after adding indexing on\n>both columns.\n>\n>Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual\n>time=65087.657..65087.658 rows=1 loops=1)\n> -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386\n>width=0) (actual time=160.340..65024.533 rows=31857 loops=1)\n> Recheck Cond: ((field)::text = 'Champlin'::text)\n> Heap Blocks: exact=31433\n> -> Bitmap Index Scan on index_field (cost=0.00..1369.46\n>rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)\n> Index Cond: ((field)::text = 'Champlin'::text)\n>Planning Time: 8.595 ms\n>Execution Time: 65093.508 ms\n>\n\nThat very clearly does not use the index-only scan, so it's not\nsurprising it's not any faster. You need to find out why the planner\nmakes that decision.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 19 Jul 2019 20:39:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
},
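A few quick checks along the lines Tomas suggests: whether the visibility map is fresh enough for an index-only scan to be attractive, and what the planner chooses once the bitmap scan is taken off the table (diagnostic only; ios_idx is the covering index proposed earlier):

  VACUUM (ANALYZE) fields;              -- updates the visibility map
  SELECT relpages, relallvisible FROM pg_class WHERE relname = 'fields';
  SET enable_bitmapscan = off;          -- for diagnosis in a test session only
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(user_id) FROM fields WHERE field = 'Champlin';
  RESET enable_bitmapscan;

If relallvisible is far below relpages, the planner still expects many heap fetches and an index-only scan loses much of its appeal.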
{
"msg_contents": "Another suggestion, try to cluster the table using the index for the\n\"field\" column, then analyze. If you're on a spinning disk it will help if\nyou sort your search \"field\" during bulk insert.\n--\n\nregards\n\nmarie g. bacuno ii\n\n\nOn Fri, Jul 19, 2019 at 11:39 AM Tomas Vondra <[email protected]>\nwrote:\n\n> On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:\n> >Well, you haven't shown us the execution plan, so it's hard to check why\n> >it did not help much and give you further advice.\n> >\n> >\n> >This is the latest query execution with explain after adding indexing on\n> >both columns.\n> >\n> >Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual\n> >time=65087.657..65087.658 rows=1 loops=1)\n> > -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386\n> >width=0) (actual time=160.340..65024.533 rows=31857 loops=1)\n> > Recheck Cond: ((field)::text = 'Champlin'::text)\n> > Heap Blocks: exact=31433\n> > -> Bitmap Index Scan on index_field (cost=0.00..1369.46\n> >rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)\n> > Index Cond: ((field)::text = 'Champlin'::text)\n> >Planning Time: 8.595 ms\n> >Execution Time: 65093.508 ms\n> >\n>\n> That very clearly does not use the index-only scan, so it's not\n> surprising it's not any faster. You need to find out why the planner\n> makes that decision.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\nAnother suggestion, try to cluster the table using the index for the \"field\" column, then analyze. If you're on a spinning disk it will help if you sort your search \"field\" during bulk insert. --regardsmarie g. bacuno iiOn Fri, Jul 19, 2019 at 11:39 AM Tomas Vondra <[email protected]> wrote:On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:\n>Well, you haven't shown us the execution plan, so it's hard to check why\n>it did not help much and give you further advice.\n>\n>\n>This is the latest query execution with explain after adding indexing on\n>both columns.\n>\n>Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual\n>time=65087.657..65087.658 rows=1 loops=1)\n> -> Bitmap Heap Scan on fields (cost=1382.56..174042.61 rows=52386\n>width=0) (actual time=160.340..65024.533 rows=31857 loops=1)\n> Recheck Cond: ((field)::text = 'Champlin'::text)\n> Heap Blocks: exact=31433\n> -> Bitmap Index Scan on index_field (cost=0.00..1369.46\n>rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)\n> Index Cond: ((field)::text = 'Champlin'::text)\n>Planning Time: 8.595 ms\n>Execution Time: 65093.508 ms\n>\n\nThat very clearly does not use the index-only scan, so it's not\nsurprising it's not any faster. You need to find out why the planner\nmakes that decision.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 19 Jul 2019 13:51:36 -0700",
"msg_from": "mgbii bax <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Searching in varchar column having 100M records"
}
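A sketch of that clustering suggestion; note that CLUSTER rewrites the table under an exclusive lock, so on a 100M-row table it needs a maintenance window, and the physical ordering is not maintained for rows inserted afterwards. The index name assumes the covering index proposed earlier in the thread:

  CLUSTER fields USING ios_idx;   -- physically order the heap by field
  ANALYZE fields;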
] |
[
{
"msg_contents": "Hi. I've got an app that queries pg_catalog to find any table columns that\nhave comments. After setting up PgBadger, it was #2 on my list of time\nconsuming queries, with min/max/avg duration of 199/2351/385 ms (across\n~12,000 executions logged).\n\nI'm wondering if there are any ways to speed this query up, including if\nthere are better options for what to query.\n\nI'm running on 9.6.14 on CentOS 7.\n\nI've copied the EXPLAIN below. Let me know if additional info would be\nhelpful. Thanks in advance!\n\nKen\n\nag_reach=> EXPLAIN (ANALYZE, VERBOSE,BUFFERS,TIMING, COSTS)\nSELECT c.relname AS table,a.attname AS\ncolumn,pg_catalog.col_description(a.attrelid, a.attnum) AS comment\nFROM pg_catalog.pg_attribute a, pg_class c\nWHERE a.attrelid = c.oid\nAND pg_catalog.col_description(a.attrelid, a.attnum) IS NOT NULL;\n\n QUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=197.09..22533.42 rows=39858 width=160) (actual\ntime=92.538..386.047 rows=8 loops=1)\n Output: c.relname, a.attname, col_description(a.attrelid,\n(a.attnum)::integer)\n Hash Cond: (a.attrelid = c.oid)\n Buffers: shared hit=81066\n -> Seq Scan on pg_catalog.pg_attribute a (cost=0.00..11718.81\nrows=41278 width=70) (actual time=76.069..369.410 rows=8 loops=1)\n Output: a.attrelid, a.attname, a.atttypid, a.attstattarget,\na.attlen, a.attnum, a.attndims, a.attcacheoff, a.atttypmod, a.attbyval,\na.attstorage, a.attalign, a.attnotnull, a.atthasdef, a.attisdropped,\na.attislocal, a.attinhcount, a.attcollation, a.attacl, a.attoptions,\na.attfdwoptions\n Filter: (col_description(a.attrelid, (a.attnum)::integer) IS NOT\nNULL)\n Rows Removed by Filter: 40043\n Buffers: shared hit=80939\n -> Hash (cost=144.82..144.82 rows=4182 width=68) (actual\ntime=15.932..15.934 rows=4183 loops=1)\n Output: c.relname, c.oid\n Buckets: 8192 Batches: 1 Memory Usage: 473kB\n Buffers: shared hit=103\n -> Seq Scan on pg_catalog.pg_class c (cost=0.00..144.82\nrows=4182 width=68) (actual time=0.015..7.667 rows=4183 loops=1)\n Output: c.relname, c.oid\n Buffers: shared hit=103\n Planning time: 0.408 ms\n Execution time: 386.148 ms\n(18 rows)\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nHi. I've got an app that queries pg_catalog to find any table columns that have comments. After setting up PgBadger, it was #2 on my list of time consuming queries, with min/max/avg duration of 199/2351/385 ms (across ~12,000 executions logged).I'm wondering if there are any ways to speed this query up, including if there are better options for what to query.I'm running on 9.6.14 on CentOS 7.I've copied the EXPLAIN below. Let me know if additional info would be helpful. 
Thanks in advance!Kenag_reach=> EXPLAIN (ANALYZE, VERBOSE,BUFFERS,TIMING, COSTS)SELECT c.relname AS table,a.attname AS column,pg_catalog.col_description(a.attrelid, a.attnum) AS commentFROM pg_catalog.pg_attribute a, pg_class cWHERE a.attrelid = c.oidAND pg_catalog.col_description(a.attrelid, a.attnum) IS NOT NULL; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=197.09..22533.42 rows=39858 width=160) (actual time=92.538..386.047 rows=8 loops=1) Output: c.relname, a.attname, col_description(a.attrelid, (a.attnum)::integer) Hash Cond: (a.attrelid = c.oid) Buffers: shared hit=81066 -> Seq Scan on pg_catalog.pg_attribute a (cost=0.00..11718.81 rows=41278 width=70) (actual time=76.069..369.410 rows=8 loops=1) Output: a.attrelid, a.attname, a.atttypid, a.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff, a.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull, a.atthasdef, a.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl, a.attoptions, a.attfdwoptions Filter: (col_description(a.attrelid, (a.attnum)::integer) IS NOT NULL) Rows Removed by Filter: 40043 Buffers: shared hit=80939 -> Hash (cost=144.82..144.82 rows=4182 width=68) (actual time=15.932..15.934 rows=4183 loops=1) Output: c.relname, c.oid Buckets: 8192 Batches: 1 Memory Usage: 473kB Buffers: shared hit=103 -> Seq Scan on pg_catalog.pg_class c (cost=0.00..144.82 rows=4182 width=68) (actual time=0.015..7.667 rows=4183 loops=1) Output: c.relname, c.oid Buffers: shared hit=103 Planning time: 0.408 ms Execution time: 386.148 ms(18 rows)-- AGENCY Software A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.",
"msg_date": "Fri, 19 Jul 2019 16:03:27 -0700",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding up query pulling comments from pg_catalog"
},
{
"msg_contents": "Ken Tanzer <[email protected]> writes:\n> Hi. I've got an app that queries pg_catalog to find any table columns that\n> have comments. After setting up PgBadger, it was #2 on my list of time\n> consuming queries, with min/max/avg duration of 199/2351/385 ms (across\n> ~12,000 executions logged).\n> I'm wondering if there are any ways to speed this query up, including if\n> there are better options for what to query.\n\n> ag_reach=> EXPLAIN (ANALYZE, VERBOSE,BUFFERS,TIMING, COSTS)\n> SELECT c.relname AS table,a.attname AS\n> column,pg_catalog.col_description(a.attrelid, a.attnum) AS comment\n> FROM pg_catalog.pg_attribute a, pg_class c\n> WHERE a.attrelid = c.oid\n> AND pg_catalog.col_description(a.attrelid, a.attnum) IS NOT NULL;\n\nUnfortunately, the planner isn't smart enough to inline the\ncol_description() function. But if you do so manually you'll end up\nwith something like\n\nSELECT c.relname AS table, a.attname AS column, d.description AS comment\nFROM\n pg_catalog.pg_attribute a JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n LEFT JOIN pg_catalog.pg_description d ON d.classoid = c.tableoid and d.objoid = c.oid and d.objsubid = a.attnum\nWHERE d.description IS NOT NULL;\n\nFor me, that formulation is quite a bit faster than the original ---\nwhat you wrote basically forces a nestloop join against pg_description,\nand then to add insult to injury, has to search pg_description a second\ntime for each hit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Jul 2019 10:46:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up query pulling comments from pg_catalog"
},
{
"msg_contents": "On Sat, Jul 20, 2019 at 7:46 AM Tom Lane <[email protected]> wrote:\n\n> But if you do so manually you'll end up with something like\n>\n> SELECT c.relname AS table, a.attname AS column, d.description AS comment\n> FROM\n> pg_catalog.pg_attribute a JOIN pg_catalog.pg_class c ON a.attrelid =\n> c.oid\n> LEFT JOIN pg_catalog.pg_description d ON d.classoid = c.tableoid and\n> d.objoid = c.oid and d.objsubid = a.attnum\n> WHERE d.description IS NOT NULL;\n>\n> For me, that formulation is quite a bit faster than the original ---\n>\n\nA lot faster for me too (~30-40 ms). Thanks!\n\n\nand then to add insult to injury, has to search pg_description a second\n> time for each hit.\n>\n\nNot sure if I'm understanding this correctly, but are you saying that\nbecause col_description() is specified in two places in the query, that it\nactually will get called twice? I was under the impression that a function\n(at least a non-volatile one) specified multiple times, but with the same\narguments, would only get called once. Is that just wishful thinking?\n\nCheers,\n\nKen\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nOn Sat, Jul 20, 2019 at 7:46 AM Tom Lane <[email protected]> wrote:But if you do so manually you'll end up with something like\n\nSELECT c.relname AS table, a.attname AS column, d.description AS comment\nFROM\n pg_catalog.pg_attribute a JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n LEFT JOIN pg_catalog.pg_description d ON d.classoid = c.tableoid and d.objoid = c.oid and d.objsubid = a.attnum\nWHERE d.description IS NOT NULL;\n\nFor me, that formulation is quite a bit faster than the original ---\nA lot faster for me too (~30-40 ms). Thanks! and then to add insult to injury, has to search pg_description a secondtime for each hit. Not sure if I'm understanding this correctly, but are you saying that because col_description() is specified in two places in the query, that it actually will get called twice? I was under the impression that a function (at least a non-volatile one) specified multiple times, but with the same arguments, would only get called once. Is that just wishful thinking?Cheers,Ken-- AGENCY Software A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.",
"msg_date": "Sat, 20 Jul 2019 12:08:00 -0700",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up query pulling comments from pg_catalog"
},
{
"msg_contents": "Ken Tanzer <[email protected]> writes:\n> On Sat, Jul 20, 2019 at 7:46 AM Tom Lane <[email protected]> wrote:\n>> and then to add insult to injury, has to search pg_description a second\n>> time for each hit.\n\n> Not sure if I'm understanding this correctly, but are you saying that\n> because col_description() is specified in two places in the query, that it\n> actually will get called twice?\n\nYes.\n\n> I was under the impression that a function\n> (at least a non-volatile one) specified multiple times, but with the same\n> arguments, would only get called once. Is that just wishful thinking?\n\nAfraid so. There's been assorted talk about various optimizations to\navoid unnecessary duplicate function calls, but I don't think that\nmerging textually-separate calls has even been on the radar. The\ndiscussions I can recall have been more about not calling stable functions\n(with fixed arguments) more than once per query.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Jul 2019 15:25:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up query pulling comments from pg_catalog"
},
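One workaround, not suggested in the thread, is to evaluate col_description() once per row in a LATERAL subquery and reference its result; the OFFSET 0 keeps the planner from flattening the subquery and re-substituting the expression at each reference:

  SELECT c.relname AS table, a.attname AS column, d.comment
  FROM pg_catalog.pg_attribute a
       JOIN pg_catalog.pg_class c ON a.attrelid = c.oid,
       LATERAL (SELECT pg_catalog.col_description(a.attrelid, a.attnum) AS comment OFFSET 0) d
  WHERE d.comment IS NOT NULL;

The explicit join against pg_description shown earlier remains the faster option here, since it avoids the per-row function call altogether.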
{
"msg_contents": "On Sat, Jul 20, 2019 at 12:25 PM Tom Lane <[email protected]> wrote:\n\n> Ken Tanzer <[email protected]> writes:\n> > On Sat, Jul 20, 2019 at 7:46 AM Tom Lane <[email protected]> wrote:\n> >> and then to add insult to injury, has to search pg_description a second\n> >> time for each hit.\n>\n> > Not sure if I'm understanding this correctly, but are you saying that\n> > because col_description() is specified in two places in the query, that\n> it\n> > actually will get called twice?\n>\n> Yes.\n>\n> > I was under the impression that a function\n> > (at least a non-volatile one) specified multiple times, but with the same\n> > arguments, would only get called once. Is that just wishful thinking?\n>\n> Afraid so.\n\n\nThat's good to know! Just to help me understand:\n\n\n\n> There's been assorted talk about various optimizations to\n> avoid unnecessary duplicate function calls,\n\n\nSo I had read the sentence below to mean my functions would only get called\nonce. But is that sentence only supposed to apply to index scans? Or does\nit mean the planner is allowed to optimize, but it just doesn't know how\nyet?\n\nA STABLE function cannot modify the database and is guaranteed to return\nthe same results given the same arguments for all rows within a single\nstatement. *This category allows the optimizer to optimize multiple calls\nof the function to a single call.* In particular, it is safe to use an\nexpression containing such a function in an index scan condition. (Since an\nindex scan will evaluate the comparison value only once, not once at each\nrow, it is not valid to use a VOLATILE function in an index scan condition.)\n(https://www.postgresql.org/docs/9.6/xfunc-volatility.html)\n\nCheers,\nKen\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nOn Sat, Jul 20, 2019 at 12:25 PM Tom Lane <[email protected]> wrote:Ken Tanzer <[email protected]> writes:\n> On Sat, Jul 20, 2019 at 7:46 AM Tom Lane <[email protected]> wrote:\n>> and then to add insult to injury, has to search pg_description a second\n>> time for each hit.\n\n> Not sure if I'm understanding this correctly, but are you saying that\n> because col_description() is specified in two places in the query, that it\n> actually will get called twice?\n\nYes.\n\n> I was under the impression that a function\n> (at least a non-volatile one) specified multiple times, but with the same\n> arguments, would only get called once. Is that just wishful thinking?\n\nAfraid so. That's good to know! Just to help me understand: There's been assorted talk about various optimizations to\navoid unnecessary duplicate function calls, So I had read the sentence below to mean my functions would only get called once. But is that sentence only supposed to apply to index scans? Or does it mean the planner is allowed to optimize, but it just doesn't know how yet?A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call. In particular, it is safe to use an expression containing such a function in an index scan condition. 
(Since an index scan will evaluate the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an index scan condition.)(https://www.postgresql.org/docs/9.6/xfunc-volatility.html) Cheers,Ken-- AGENCY Software A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.",
"msg_date": "Mon, 22 Jul 2019 10:53:09 -0700",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up query pulling comments from pg_catalog"
},
{
"msg_contents": "Ken Tanzer <[email protected]> writes:\n> On Sat, Jul 20, 2019 at 12:25 PM Tom Lane <[email protected]> wrote:\n>> There's been assorted talk about various optimizations to\n>> avoid unnecessary duplicate function calls,\n\n> So I had read the sentence below to mean my functions would only get called\n> once. But is that sentence only supposed to apply to index scans? Or does\n> it mean the planner is allowed to optimize, but it just doesn't know how\n> yet?\n\n> A STABLE function cannot modify the database and is guaranteed to return\n> the same results given the same arguments for all rows within a single\n> statement. *This category allows the optimizer to optimize multiple calls\n> of the function to a single call.*\n\nIt says \"allows\", not \"requires\". But in particular, we've interpreted\nthat to mean trying to call a stable function (with constant or at least\nstable arguments) once per query rather than once per row, as the naive\ninterpretation of SQL semantics would have us do. Matching up textually\ndistinct calls has not been on the radar --- it seems fairly expensive\nto do, with no return in typical queries, and relatively small return\neven if we find a match.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:57:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up query pulling comments from pg_catalog"
}
] |
[
{
"msg_contents": "Hello,\n\n\nI recently spent a bit of time benchmarking effective_io_concurrency on Postgres.\n\nI would like to share my findings with you:\n\nhttps://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/\n\nComments are welcome.\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Mon, 22 Jul 2019 08:41:59 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "benchmarking effective_io_concurrency"
},
{
"msg_contents": "On Mon, Jul 22, 2019 at 2:42 AM Fabio Pardi <[email protected]> wrote:\n\n> Hello,\n>\n>\n> I recently spent a bit of time benchmarking effective_io_concurrency on\n> Postgres.\n>\n> I would like to share my findings with you:\n>\n>\n> https://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/\n>\n> Comments are welcome.\n>\n> regards,\n>\n> fabio pardi\n>\n\nYou didn't mention what type of disk storage you are using, or if that\nmatters. The number of cores in your database could also matter.\n\nDoes the max_parallel_workers setting have any influence on how\neffective_io_concurrency works?\n\nBased on your data, one should set effective_io_concurrency at the highest\npossible setting with no ill effects with the possible exception that your\ndisk will get busier. Somehow I suspect that as you scale the number of\nconcurrent disk i/o tasks, other things may start to suffer. For example\ndoes CPU wait time start to increase as more and more threads are consumed\nwaiting for i/o instead of doing other processing? Do you run into lock\ncontention on the i/o subsystem? (Back in the day, lock contention for\n/dev/tcp was a major bottleneck for scaling busy webservers vertically. I\nhave no idea if modern linux kernels could run into the same issue waiting\nfor locks for /dev/sd0. Surely if anything was going to push that issue,\nit would be setting effective_io_concurrency really high and then demanding\na lot of concurrent disk accesses.)\n\nOn Mon, Jul 22, 2019 at 2:42 AM Fabio Pardi <[email protected]> wrote:Hello,\n\n\nI recently spent a bit of time benchmarking effective_io_concurrency on Postgres.\n\nI would like to share my findings with you:\n\nhttps://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/\n\nComments are welcome.\n\nregards,\n\nfabio pardiYou didn't mention what type of disk storage you are using, or if that matters. The number of cores in your database could also matter.Does the max_parallel_workers setting have any influence on how effective_io_concurrency works?Based on your data, one should set effective_io_concurrency at the highest possible setting with no ill effects with the possible exception that your disk will get busier. Somehow I suspect that as you scale the number of concurrent disk i/o tasks, other things may start to suffer. For example does CPU wait time start to increase as more and more threads are consumed waiting for i/o instead of doing other processing? Do you run into lock contention on the i/o subsystem? (Back in the day, lock contention for /dev/tcp was a major bottleneck for scaling busy webservers vertically. I have no idea if modern linux kernels could run into the same issue waiting for locks for /dev/sd0. Surely if anything was going to push that issue, it would be setting effective_io_concurrency really high and then demanding a lot of concurrent disk accesses.)",
"msg_date": "Mon, 22 Jul 2019 08:06:09 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking effective_io_concurrency"
},
{
"msg_contents": "Hi Rick, \n\nthanks for your inputs.\n\nOn 22/07/2019 14:06, Rick Otten wrote:\n> \n> \n> \n> You didn't mention what type of disk storage you are using, or if that matters. \n\nI actually mentioned I m using SSD, in RAID 10. Also is mentioned I tested in a no-RAID setup. Is that what you mean?\n\n The number of cores in your database could also matter.\n> \n\nTrue, when scaling I think it can actually bring up problems as you mention here below. (BTW, Tested on a VM with 6 cores and on HW with 32. I updated the blogpost, thanks)\n\n\n> Does the max_parallel_workers setting have any influence on how effective_io_concurrency works?\n> \n\nI m not sure about that one related to the tests I ran, because the query plan does not show parallelism. \n\n> Based on your data, one should set effective_io_concurrency at the highest possible setting with no ill effects with the possible exception that your disk will get busier. Somehow I suspect that as you scale the number of concurrent disk i/o tasks, other things may start to suffer. For example does CPU wait time start to increase as more and more threads are consumed waiting for i/o instead of doing other processing? Do you run into lock contention on the i/o subsystem? (Back in the day, lock contention for /dev/tcp was a major bottleneck for scaling busy webservers vertically. I have no idea if modern linux kernels could run into the same issue waiting for locks for /dev/sd0. Surely if anything was going to push that issue, it would be setting effective_io_concurrency really high and then demanding a lot of concurrent disk accesses.)\n> \n> \n> \n\nMy suggestion would be to try by your own and find out what works for you, maybe slowly increasing the value of effective_io_concurrency. \n\nEvery workload is peculiar, so I suspect there is no silver bullet here. Also the documentation gives you directions in that way...\n\n\n\nregards,\n\nfabio pardi\n\n\n",
"msg_date": "Mon, 22 Jul 2019 14:28:12 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: benchmarking effective_io_concurrency"
},
{
"msg_contents": "On Mon, Jul 22, 2019 at 1:42 AM Fabio Pardi <[email protected]> wrote:\n>\n> Hello,\n>\n>\n> I recently spent a bit of time benchmarking effective_io_concurrency on Postgres.\n>\n> I would like to share my findings with you:\n>\n> https://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/\n>\n> Comments are welcome.\n\nI did very similar test a few years back and came up with very similar results:\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\neffective_io_concurrency is an oft overlooked tuning parameter and I'm\ncurious if the underlying facility (posix_fadvise) can't be used for\nmore types of queries. For ssd storage, which is increasingly common\nthese days, it really pays of to crank it with few downsides from my\nmeasurement.\n\nmerlin\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:32:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking effective_io_concurrency"
}
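Since 9.6 the setting is also a tablespace-level storage parameter, so one way to crank it only where the storage really is SSD, leaving spinning-disk tablespaces alone, is the following (the tablespace name is illustrative):

  ALTER TABLESPACE fast_ssd SET (effective_io_concurrency = 200);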
] |